Enhancing Pixel Charging Efficiency by Optimizing Thin-Film Transistor Dimensions in Gate Driver Circuits for Active-Matrix Liquid Crystal Displays

Flat panel displays are thin, lightweight electronic displays, making them ideal for a wide range of applications, from televisions and computer monitors to mobile devices and digital signage. The Thin-Film Transistor (TFT) layer controls the amount of light that passes through each pixel and is located behind the liquid crystal layer, enabling precise image control and a high-quality display. As one of the important parameters for evaluating display performance, a faster response time provides more frames per second, which benefits many high-end applications, such as gaming and video playback. To further improve the response time, this paper investigates single-pixel charging efficiency by optimizing the TFT dimensions in the gate driver circuits of active-matrix liquid crystal displays. An accurate circuit simulation model is developed to minimize the output signal's fall time (Tf) by optimizing the TFT width-to-length ratio. Our results show that a driving TFT width of 6790 μm and a reset TFT width of 640 μm yield a minimum Tf of 2.6572 μs, corresponding to a maximum pixel charging ratio of 90.61275%. These findings demonstrate the effectiveness of our optimization strategy in enhancing pixel charging efficiency and improving display performance.

Introduction

Active-matrix (AM) [1] displays, including AM liquid crystal displays (AMLCDs) [2] and AM organic light-emitting diode displays (AMOLEDs) [3], have become the leading products in the flat panel display market [4]. Thin-Film Transistors (TFTs) are essential to these displays and are continuously improved for enhanced performance and reliability. However, as LCD resolution improves, a significant challenge arises: the diminished pixel charging ratio. The reduction in charging time is a critical bottleneck that affects the overall response time and display quality, especially in high-refresh-rate AMLCDs. The need to maintain high image quality at higher resolutions and refresh rates has driven extensive research and development in TFT technology. Innovations have focused on enhancing TFT materials, refining circuit layouts, and boosting the operational efficiency of pixel-driving mechanisms. In modern display applications, clarity, color accuracy, and refresh rate are critical for delivering an optimal viewing experience.

To address the challenge of improving pixel charging efficiency, the industry has advanced in various areas, including the enhancement of digital-to-analog converters (DACs), the development of new pixel-driving circuits, and advancements in TFT device design [5][6][7][8][9]. These innovations have been crucial in addressing the limitations of current display technologies. Notably, Seo et al. improved the DAC structure by increasing the timing controller's input data from 8 to 10 bits, thereby augmenting the charging ratio in high grayscale ranges through pixel overdrive methods [10]. Chen et al. proposed simplifying the amorphous silicon (a-Si) gate drive circuit by reducing the clock duty cycle, which saves layout space and boosts the charging ratio [11]. Furthermore, Chiang et al.
suggested enhancing the drive current by increasing the gate input signal voltage to improve gate drive efficiency [12]. Recent developments have shown promising results in using low-temperature polycrystalline silicon (LTPS) TFTs for their superior electron mobility, aiming to improve pixel charging in high-resolution displays [13]. Advancements in amorphous indium-gallium-zinc oxide (a-IGZO) TFTs have been pivotal in achieving stable pixel circuits for AMOLEDs, particularly in mobile devices [14]. These advancements collectively represent a significant leap in TFT technology, aiming to overcome the challenges posed by higher resolutions and refresh rates in modern displays.

In this paper, we introduce a novel approach to enhancing pixel charging efficiency in AMLCDs by modifying the device structure within the gate drive circuit while maintaining the integrity of the pixel drive circuit's original design. We detail the operational principles of the 11T1C (11-transistor, 1-capacitor) gate driver on array (GOA) circuit and analyze the gate drive circuit's output dynamics, focusing on how the fall time (Tf) affects pixel drive circuit performance. Using hydrogenated a-Si TFTs, our comprehensive simulations revealed a notable enhancement in pixel charging efficiency, primarily driven by the reduction of the gate drive circuit's output signal Tf. This discovery underscores the critical impact of Tf on display quality and offers a novel lens through which to view circuit optimization. Additionally, we determined an optimal device layout, balancing the demands of spatial efficiency and performance optimization. By maintaining the GOA circuit's unit layout area at 1200 µm × 199.8 µm, we not only adhered to standard design norms but also achieved a remarkable charging efficiency of up to 90.61275%.

Materials and Methods

In field-effect transistors, the drain-source current Ids is critically influenced by two voltages: the drain-source voltage VDS and the gate-source voltage VGS. This relationship is essential for understanding transistor operation and is described by the following equation [15]:

Ids = (W/L) µ Cg [(VGS − VT) VDS − VDS²/2]   (1)

In this equation, W and L denote the channel width and length, respectively, showing how the channel's physical dimensions affect the transistor's current. Cg is the gate capacitance, which governs charge accumulation at the gate. VT denotes the threshold voltage, the minimum gate voltage required to establish a conductive path between the source and drain. The mobility of the majority carriers within the TFT channel, µ, dictates how easily charge carriers traverse the channel. Setting β = (W/L) µ Cg, the expression for Ids simplifies to [16]:

Ids = β [(VGS − VT) VDS − VDS²/2]   (2)

This streamlined equation offers a simpler, if less detailed, way of calculating the drain-source current. It captures the impact of the crucial variables: gate-to-source voltage, drain-to-source voltage, channel dimensions, carrier mobility, and gate capacitance.
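The current expression is easy to exercise numerically. The sketch below evaluates the simplified model with the usual saturation clamp of the gradual-channel approximation; all parameter values (mobility, gate capacitance per unit area, threshold voltage) are illustrative assumptions, not values extracted in this study.

import numpy as np

def tft_ids(vgs, vds, w=10e-6, l=4e-6, mu=0.5e-4, cg=2e-4, vt=1.5):
    """Drain-source current of a TFT in the gradual-channel model.

    beta = (W/L) * mu * Cg; linear region:
    Ids = beta * ((Vgs - Vt) * Vds - Vds**2 / 2),
    clamped at the saturation value via the pinch-off condition.
    Units: W, L in m; mu in m^2/(V s); Cg in F/m^2 (all assumed).
    """
    if vgs <= vt:
        return 0.0  # below threshold: no conductive channel
    beta = (w / l) * mu * cg
    vds_eff = min(vds, vgs - vt)  # pinch-off limits the effective Vds
    return beta * ((vgs - vt) * vds_eff - vds_eff**2 / 2)

print(f"Ids = {tft_ids(vgs=20.0, vds=10.0):.3e} A")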
Pixel Circuit

Since the anisotropic liquid crystal molecules in LCDs are torqued in a specific direction by the alignment layer, applying an external electric field rotates the molecules either parallel or perpendicular to the field, depending on the dielectric polarity of the molecules. The reorientation of the liquid crystal molecules allows light to pass through or be blocked under crossed polarizers, creating full-color display information with individual sub-pixels carrying red, green, or blue color filters. Figure 1 illustrates the novel pixel structure, consisting of a TFT and two capacitors. The two main phases of the AMLCD pixel circuit are charging and holding [17]. During the high-level state of the gate signal G, the TFT activates and allows the data signal D to charge the liquid crystal capacitor CLC, raising the pixel voltage. Conversely, when G is at a low level, the TFT deactivates, and the storage capacitor CST maintains the pixel's voltage. This change in voltage alters the liquid crystal molecules' orientation, modulating light transmittance and creating the display information. Thus, during the row-on period, the pixel voltage Vpixel(t) can be given by [18]:

where VGH and VGL denote the row scanning signal's high and low levels, respectively, while VDH and VDL represent the data signal's high and low levels, respectively. Ron is the on-state resistance, and Cpixel is the total capacitance, combining the storage and liquid crystal capacitances. The charging ratio is defined as follows:

From Equations (3) and (4), we can find that during the row-on period, the charging ratio is directly proportional to the charging duration t.

To prevent erroneous signals in the pixel unit, it is necessary to apply delay compensation to the signal timing [11]. The preferred method involves prematurely closing the scan line, ensuring that the gate signal G falls before the corresponding data signal D. If G falls earlier than D by a margin exceeding the Tf time, the TFT is guaranteed to be off when D begins to fall, ensuring accurate circuit output. While this approach guarantees that the TFT shuts off before the signal line data switch to the next scan line, it also reduces the available charging time for the pixel during the charging interval, lowering the charging ratio. Therefore, decreasing Tf extends the circuit's actual charging time and enhances the charging ratio; the charging ratio and Tf are thus inversely related.
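The trade-off between Tf and the charging ratio can be made concrete with a first-order RC charging model. In the sketch below, the functional form Vpixel(t) = VDH + (V0 − VDH) exp(−t/(Ron Cpixel)) and the normalization of the charging ratio by the data swing VDH − VDL are standard textbook assumptions, and every component value (Ron, Cpixel, the row-on period) is illustrative rather than a parameter extracted in this study.

import numpy as np

def charging_ratio(t, r_on=2.0e6, c_pixel=0.5e-12, v_dh=10.0, v_dl=0.0, v0=0.0):
    """First-order RC model of the pixel charging phase (assumed values).

    V_pixel(t) = V_DH + (V0 - V_DH) * exp(-t / (R_on * C_pixel));
    the ratio is taken relative to the data swing V_DH - V_DL.
    """
    v_pixel = v_dh + (v0 - v_dh) * np.exp(-t / (r_on * c_pixel))
    return (v_pixel - v_dl) / (v_dh - v_dl)

row_time = 5.0e-6  # assumed row-on period, seconds
# A shorter fall time Tf leaves more of the row time for charging:
for tf in (4.0e-6, 2.6572e-6):
    ratio = charging_ratio(row_time - tf)
    print(f"Tf = {tf * 1e6:.4f} us -> charging ratio = {ratio:.2%}")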
Figure 2 presents the schematic of the gate driver used in this study, consisting of three distinct sections [19]: (a) the single-stage circuit schematic, (b) the timing diagram, and (c) the block diagram. The single-stage circuit features a capacitor C1 alongside 11 TFTs (M1-M11) [20], where M1 is the input TFT, M3 is the driving TFT, and M2 and M4-M10 are the reset TFTs. This circuit operates through four phases: the Pre-Charge Period, Pull-Up Period, Pull-Down Period, and Low-Level Holding Period [21,22]. Within this circuit, the PU node acts as the pull-up point controlling the OUTPUT signal, while the PD node serves as the pull-down point. This configuration enables the progressive scan-driving function of the LCD panel. The scanning drive circuit operates as a shift register, generating a shift pulse as the OUTPUT signal in response to external control signals. This OUTPUT signal simultaneously activates the TFTs in the current row, functions as the initiation signal for the next row, and serves as the termination signal for the previous row, establishing the driving principle of the novel liquid crystal panels.

From t1 to t2, with the INPUT signal at a high level and both RST and CLK at a low level, M1 activates. This transition raises VPU at node PU to VPU1, the gate level of M3. With CLK low, M3 remains off. Meanwhile, M6 and M8 activate, lowering the PD node's voltage, while M10 and M11 are off [11,23].
From t2 to t3, the CLK signal attains a high level, INPUT shifts from high to low, and RST remains at a low level. Capacitive coupling through the parasitic capacitance between the gate and source of M3 raises the voltage at the PU node to VPU2. This elevation enhances M3's conduction capability, and the voltage of the OUTPUT signal Vo rises as M3 charges it. It is important to note that the current GOA unit's OUTPUT signal serves as the input for the subsequent stage; during this transmission, the next stage undergoes pre-charging.

From t3 to t4, the RST signal shifts from low to high, while CLK shifts from high to low. M7 and M4 are then activated, pulling the voltages at the PU and OUTPUT nodes down to VGL. It is crucial to note that the PU node's voltage does not drop instantaneously, which keeps M3 partially on and expedites the OUTPUT signal's discharge. The primary discharge occurs across the terminals of capacitor C, between the OUTPUT signal and the VGL potential.

After t4, to maintain the OUTPUT's low state before the next ascending phase, M6 and M8 deactivate. M9 and M5 conduct, keeping PD high and M10 and M11 off, stabilizing the capacitor's voltage at VGL.

The Tf is predominantly influenced by the discharge path's resistance R and the output node's capacitance C. During the discharge phase, both M3 and M4 significantly affect the OUTPUT node's discharge [24] and are therefore integral to the OUTPUT signal's Tf. We posit that the high level of the CLK signal is VH, the low level is VL, the INPUT signal's voltage is Vin, and the RST signal's voltage is VR.

For transistor M3, the equivalent resistance RM3 can be approximated as in Equation (5) [25], where VGS3 denotes the gate-source voltage, which is pivotal for controlling the transistor's conductance, and VT3 is M3's threshold voltage, the minimum voltage required to activate the transistor. Furthermore, CPU denotes the overlap capacitance at the PU node, and RM7 denotes M7's equivalent resistance, which plays a crucial role in determining the discharge path resistance during the high-to-low transition of the CLK signal. During the CLK signal's high-to-low transition, the voltage at node PU changes from VPU2 to VPU3, which is calculated from Equations (5) and (6). From these, we derive the average value of RM3, where ta is a fitted parameter; a similar expression holds for M4. As M3 and M4 jointly provide a discharge path for OUTPUT, their parallel resistance determines the effective discharge resistance. Considering RL and CL as the resistor and capacitor connected to the OUTPUT node, respectively, Tf then simplifies accordingly. Equations (8), (9) and (11) show that Tf is inversely proportional to the channel widths of M3 and M4, indicating that increasing their widths can effectively reduce Tf.
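The inverse dependence of Tf on the channel widths can be illustrated with a generic RC discharge model. The sketch below is not a transcription of the paper's Equations (5)-(11): it approximates each transistor's on-resistance by the gradual-channel expression R ≈ L/(W µ Cg (VGS − VT)) and takes the 10%-90% fall time of the parallel RC discharge. The channel length matches the 3.5 µm used in the GOA layout, while all other numeric values are assumptions.

import math

def on_resistance(w, l=3.5e-6, mu=0.5e-4, cg=2e-4, v_ov=18.5):
    """Generic on-resistance of a TFT used as a discharge switch:
    R ~ L / (W * mu * Cg * (Vgs - Vt)); the overdrive v_ov is assumed."""
    return l / (w * mu * cg * v_ov)

def fall_time(w_m3, w_m4, c_load=50e-12):
    """10%-90% fall time of the OUTPUT node discharging through M3 and M4
    in parallel: Tf ~ ln(9) * (R_M3 || R_M4) * C_L (C_L assumed)."""
    r3, r4 = on_resistance(w_m3), on_resistance(w_m4)
    return math.log(9.0) * (r3 * r4 / (r3 + r4)) * c_load

print(f"Tf = {fall_time(6790e-6, 640e-6) * 1e6:.3f} us")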
Results and Discussions

To develop precise circuit models, we performed detailed feature extraction on the a-Si TFT devices used in the pixel-driving circuits. These extracted features are critical for the subsequent simulation and verification processes. Figure 3a,b display the parameter extraction results for these TFT device models. Specifically, Figure 3a shows the transfer characteristic curve of the a-Si TFT, and Figure 3b presents its current characteristic curve. The linear root mean square error (LinRMS) values in the linear and saturation regions are 0.92% and 1.59%, respectively. Moreover, Figure 3b demonstrates the electrical characteristics of a-Si TFTs in the off, transition, and on states, with LinRMS values of 2.27%, 2.81%, and 0.82%, respectively. The pixel-driving circuit investigated in this study incorporates a TFT device with a channel width of 10 µm and a channel length of 4 µm.

Figure 4 presents a U-shaped TFT design. This design efficiently increases the channel width W while only minimally enlarging the total TFT area. Due to its resemblance to the letter 'U', the channel is termed U-shaped. This design strategy balances control of both the area and the Cgs of the TFT. In the on state, the Cgs area, outlined by a blue dotted line, is determined by the gate insulating layer's thickness tox between the gate metal and the active layer. In contrast, during the off state, the Cgs area, marked by a red dotted line, predominantly overlaps the region between the drain and gate metals; the dielectric thickness of this area is the combined thickness of both layers, tox + tsi.
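The on/off difference in dielectric thickness translates directly into different overlap capacitances. The parallel-plate estimate below is a standard approximation, not a value reported in the paper; the relative permittivity, overlap area, and layer thicknesses are all assumed for illustration.

EPS0 = 8.854e-12  # vacuum permittivity, F/m

def overlap_capacitance(area_m2, thickness_m, eps_r=6.9):
    """Parallel-plate estimate C = eps0 * eps_r * A / d for the gate-source
    overlap; eps_r ~ 6.9 is a typical SiNx value (assumed)."""
    return EPS0 * eps_r * area_m2 / thickness_m

area = 10e-6 * 4e-6          # assumed overlap area (channel W x L)
t_ox, t_si = 300e-9, 200e-9  # assumed layer thicknesses
c_on = overlap_capacitance(area, t_ox)           # on state: across gate insulator
c_off = overlap_capacitance(area, t_ox + t_si)   # off state: across both layers
print(f"C_gs(on)  = {c_on * 1e15:.2f} fF")
print(f"C_gs(off) = {c_off * 1e15:.2f} fF")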
The channel width W of the U-shaped TFT is determined using Equation (12), where A and B represent direct layout measurement values, as shown in Figure 4. The design parameters of the pixel drive circuit, shown in Table 1, are derived from the extracted values of capacitance and resistance.

In this study, we aim to engineer a display with a resolution of 1680 × 320 pixels, focusing on optimizing the pixel charging ratio. We rigorously investigate the cascade effect in the gate drive circuit and assess the impact of scan electrode signal delay on display performance. This methodology underscores our dedication to improving display quality by balancing technical precision and efficiency through comprehensive circuit analysis and optimization. For this analysis, nine waveforms are selected, on pixels at Columns 1, 840, and 1680 of Rows 2, 180, and 360. In this selection, Tf denotes the maximum delay time among the nine waveforms, and the "charging ratio" denotes the lowest charging ratio among the chosen pixels. The layout area for the GOA is 1200 µm × 199.8 µm. The channel length L of all TFTs is standardized at 3.5 µm. Utilizing Equation (12), the actual channel width is calculated for each transistor, where n is the number of U-shaped structures. For M3 and M4, n varies from 7 to 39 (an integer); separate expressions give the channel widths of M1, of M2, and of the other transistors (M5 to M11). These calculations determine the dimensions corresponding to various channel widths, enabling further layout design and analysis.

The OUTPUT signal primarily originates from transistor M3, making it the TFT most directly associated with the OUTPUT signal. Simultaneously, the conduction of M4 significantly pulls down the OUTPUT signal, affecting the fall time Tf. The other TFTs in the circuit mainly function as switches or for noise reduction. Figure 5a,b illustrate the relationship between Tf and the channel widths of M3 and M4, respectively. The simulation results indicate an inverse relationship between Tf and WM3: Tf decreases as WM3 increases from 1430 µm to 7150 µm. Similarly, with WM3 fixed at 1430 µm, an increase in WM4 from 320 µm to approximately 1000 µm reduces Tf. This finding is consistent with Equation (12).
Increasing the channel widths of M3 and M4 raises their parasitic capacitance, resulting in greater capacitive reactance during signal transmission. However, a larger channel width substantially enhances their conduction ability while active. Thus, for M3 and M4, which initially have narrow channels, widening the channel improves conduction during operation and effectively shortens the fall time. Balancing WM3 and WM4 is crucial for optimizing efficiency and performance to minimize Tf.
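The monotone width dependence reported in Figure 5 can be reproduced with the toy RC model sketched in the previous section; all values remain assumptions for illustration.

# Sweep W_M3 with W_M4 held at 640 um, reusing fall_time() from the sketch
# in the previous section. The monotone decrease mirrors Figure 5a; the
# interior optimum of Figure 6a stems from the fixed layout area, which this
# toy model does not encode.
for w_m3_um in (1430, 2860, 4290, 5720, 6790, 7150):
    tf = fall_time(w_m3_um * 1e-6, 640e-6)
    print(f"W_M3 = {w_m3_um:4d} um -> Tf = {tf * 1e6:.3f} us")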
In our study, the optimal widths for M3 and M4 are established through a series of simulation experiments under fixed layout design parameters. These simulations demonstrate that a width of 6790 µm for M3 and 640 µm for M4 achieves the shortest OUTPUT signal Tf and the highest pixel charging ratio. Specifically, the Tf of the OUTPUT signal is recorded at 2.6572 µs, while the pixel charging ratio reaches 90.61275%. The configuration of the M3 and M4 transistors influences the Tf value and charging ratio, as presented in Figure 6a. The experiment is conducted within a constant layout area, so as the channel width of M3 increases, the channel width of M4 correspondingly decreases. However, it is not simply a case of "the larger, the better" for the M3 channel width. The optimal charging ratio occurs when M3's width is 6790 µm and M4's width is 640 µm; further increasing M3's channel width beyond this point reduces the charging ratio. Achieving the highest charging ratio lies not in maximizing dimensions but in striking a balance that ensures the most efficient charging process.

Figure 6b illustrates the time-dependent variation of the pixel voltage waveforms at the selected nine positions with the configuration WM3 = 6790 µm and WM4 = 640 µm. Examination of the figure shows that during the charging process the waveform exhibits three distinct peaks [26]. The initial two peaks correspond to the circuit's "warm-up" charging phase, designed to eliminate residual charge in the parasitic capacitance from the previous cycle, thus avoiding interference from past charges. The first pre-charge phase clears any remaining charge, while the second sets a clear threshold or reference for subsequent operations or display cycles. This "warm-up" phase thus ensures stable and precise operation. The presence of parasitic capacitances in the OUTPUT signal and the GOA unit causes attenuation of the CLK signal during transmission, resulting in insufficiently charged voltage for subsequent OUTPUT signals. To address this issue, multiple CLK designs are commonly implemented. The careful design and strategic implementation of these multiple CLK lines, together with consideration of the effects of parasitic capacitance, are critical in improving display performance. This holistic approach to circuit design guarantees efficient charging, stability, and consistency of the display output. Moreover, Figure 6b also shows the consistency of the normalized pixel voltage waveforms at the selected nine locations, proving the robustness of this practical design.
In the primary charging phase, the voltage rises rapidly, with readings at all observed points exceeding 9.2391 V and a charging ratio over 90.61275% under the simulation parameters listed in Table 2. This design also ensures that future signal charging or modulation starts from this established value, improving the consistency and stability of signal processing. Figure 6c displays the layout structure of a single GOA circuit, highlighting the organization of the TFTs within the circuit. To achieve a higher charging ratio within a constrained layout area, the strategic placement of each TFT and via is crucial to optimizing space utilization. This design prioritizes ample allocation of space for M3 and M4, while the other TFTs, vias, and traces are meticulously organized to comply with layout design principles and conserve space. Two approaches are adopted to minimize the area used for wiring between TFTs: first, grouping TFTs with dual-end connections, such as pairing M5 with M9, M10 with M11, and M6 with M8, along with M7 and M4; second, designing traces not involved in resistance-value optimization to be as narrow as possible. This systematic approach to layout maximizes space efficiency, thereby improving the charging ratio in the GOA layout. Increasing the linewidth of the CLK signal line positively influences the OUTPUT signal of the GOA unit, primarily by reducing its Tf. Therefore, in situations where limited layout space prevents further expansion of the TFT channel width, reallocating space to enlarge the CLK signal line's linewidth is an effective strategy. Such modifications ensure the judicious use of a confined space, contributing significantly to the overall optimization of the circuit design. Additionally, the layout ensures the even distribution of the VDD, GND, CLK, TTR, and VGL signals throughout the display panel, underscoring the thoroughness of the design.

Conclusions

In this study, we conduct a comprehensive analysis of the substantial impact that the driving and reset TFTs have on the Tf of GOA output signals in AMLCDs. Accurate extraction of model parameters and identification of the parasitic components of TFTs are crucial for precise circuit simulations. Our extensive simulations and experiments demonstrate that modifying the width-to-length ratio of TFTs within the constraints of the GOA cell layout can markedly reduce Tf. An optimized driving TFT width of 6790 µm and reset TFT width of 640 µm achieve a minimum Tf of 2.6572 µs and a maximum charging ratio of 90.61275% in our simulations, confirming the significance of TFT dimension optimization for enhancing the pixel charging ratio and pointing to improved TFT driving circuits for high-frame-rate display devices.
Figure 2. The schematic of the gate driver used in this study, consisting of three distinct sections [19]: (a) the single-stage circuit schematic, (b) the timing diagram, and (c) the block diagram.

Figure 3. (a) The output characteristics and (b) the transfer characteristics of the a-Si TFTs.

Figure 5. The relationship between Tf and the channel widths of (a) M3 and (b) M4, respectively.

Figure 6. (a) The charging ratio and the signal's fall time Tf curves versus the channel width of M3, with a fixed GOA layout of 1200 µm × 199.8 µm; (b) normalized pixel voltage waveforms at the selected nine locations, with WM3 = 6790 µm and WM4 = 640 µm; (c) layout structure of a single GOA circuit.

Table 1. Design parameters of the pixel circuit.

Table 2. Design parameters of the GOA circuit.
Tail index estimation, concentration and adaptivity

This paper presents an adaptive version of the Hill estimator based on Lepski's model selection method. This simple data-driven index selection method is shown to satisfy an oracle inequality and is checked to achieve the lower bound recently derived by Carpentier and Kim. In order to establish the oracle inequality, we derive non-asymptotic variance bounds and concentration inequalities for Hill estimators. These concentration inequalities are derived from Talagrand's concentration inequality for smooth functions of independent exponentially distributed random variables, combined with three tools of Extreme Value Theory: the quantile transform, Karamata's representation of slowly varying functions, and Rényi's characterisation of the order statistics of exponential samples. The performance of this computationally and conceptually simple method is illustrated using Monte-Carlo simulations.

Introduction

The basic questions faced by Extreme Value Analysis consist in estimating the probability of exceeding a threshold that is larger than the sample maximum and estimating a quantile of an order that is larger than 1 minus the reciprocal of the sample size, that is, making inferences on regions that lie outside the support of the empirical distribution. In order to face these challenges in a sensible framework, Extreme Value Theory (EVT) assumes that the sampling distribution F satisfies a regularity condition. Indeed, in heavy-tail analysis, the tail function F̄ = 1 − F is supposed to be regularly varying, that is, lim_{τ→∞} F̄(τx)/F̄(τ) exists for all x > 0. This amounts to assuming the existence of some γ > 0 such that the limit is x^{−1/γ} for all x. In other words, if we define the excess distribution above the threshold τ by its survival function x ↦ F̄_τ(x) = F̄(x)/F̄(τ) for x ≥ τ, then F̄ is regularly varying if and only if F_τ converges weakly towards a Pareto distribution. The sampling distribution F is then said to belong to the max-domain of attraction of a Fréchet distribution with index γ > 0 (abbreviated as F ∈ MDA(γ)) and γ is called the extreme value index.

The main impediment to the large exceedance and large quantile estimation problems alluded to above turns out to be the estimation of the extreme value index. Since the inception of Extreme Value Analysis, many estimators have been defined, analysed and implemented into software. Hill [1975] introduced a simple, yet remarkable, collection of estimators: for k < n,

γ̂(k) = (1/k) Σ_{i=1}^{k} ln(X_(i)/X_(k+1)),

where X_(1) ≥ … ≥ X_(n) are the order statistics of the sample X_1, …, X_n (the non-increasing rearrangement of the sample).
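As a quick illustration, here is a minimal NumPy implementation of the whole collection of Hill estimators. It is a sketch for experimentation, not the code behind the paper's simulations.

import numpy as np

def hill_estimates(sample):
    """All Hill estimators gamma_hat(k), k = 1, ..., n-1, where
    gamma_hat(k) = (1/k) * sum_{i<=k} log(X_(i) / X_(k+1))."""
    x = np.sort(sample)[::-1]         # order statistics X_(1) >= ... >= X_(n)
    logs = np.log(x)
    cumsum = np.cumsum(logs[:-1])     # partial sums of log X_(i)
    ks = np.arange(1, len(x))
    return cumsum / ks - logs[1:]     # mean of top-k logs minus log X_(k+1)

rng = np.random.default_rng(0)
pareto = rng.pareto(1.0, size=10_000) + 1.0  # pure Pareto sample with gamma = 1
gammas = hill_estimates(pareto)
print(gammas[99])                             # gamma_hat(100), close to 1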
An integer sequence (k_n)_n is said to be intermediate if lim_{n→∞} k_n = ∞ while lim_{n→∞} k_n/n = 0. It is well known that F belongs to MDA(γ) for some γ > 0 if and only if, for all intermediate sequences (k_n)_n, γ̂(k_n) converges in probability towards γ [de Haan and Ferreira, 2006, Mason, 1982]. Under mildly stronger conditions, it can be shown that √k_n (γ̂(k_n) − γ) is asymptotically Gaussian with variance γ². This suggests that, in order to minimise the quadratic risk E[(γ̂(k_n) − γ)²] or the absolute risk E|γ̂(k_n) − γ|, an appropriate choice for k_n has to be made. If k_n is too large, the Hill estimator γ̂(k_n) suffers a large bias and, if k_n is too small, γ̂(k_n) suffers erratic fluctuations. As all estimators of the extreme value index face this dilemma [see Beirlant et al., 2004, de Haan and Ferreira, 2006, Resnick, 2007, and references therein], during the last three decades a variety of data-driven selection methods for k_n has been proposed in the literature (see Hall and Weissman [1997], Hall and Welsh [1985], Danielsson et al. [2001], Draisma et al. [1999], Drees and Kaufmann [1998], Drees et al. [2000], Grama and Spokoiny [2008], Carpentier and Kim [2014a], to name a few). A related but distinct problem is considered by Carpentier and Kim [2014b]: constructing uniform and adaptive confidence intervals for the extreme value index.

The rationale for investigating adaptive Hill estimation stems from the computational simplicity and variance optimality of properly chosen Hill estimators [Beirlant et al., 2006].

In this paper, we combine Talagrand's concentration inequality for smooth functions of independent exponentially distributed random variables (Theorem 2.15) with three traditional tools of EVT: the quantile transform, Karamata's representation for slowly varying functions, and Rényi's characterisation of the joint distribution of order statistics of exponential samples. This allows us to establish concentration inequalities for the Hill process (√k(γ̂(k) − Eγ̂(k)))_k (Theorem 3.3, Propositions 3.9, 3.10 and Corollary 3.13) in Section 3.2. Then, in Section 3.3, we build on these concentration inequalities to analyse the performance of a variant of Lepski's rule defined in Sections 2.3 and 3.3: Theorem 3.14 describes an oracle inequality and Corollary 3.18 assesses the performance of this simple selection rule under a polynomial bound on the bias, parametrised by some ρ < 0 and constants C, C′ > 0. It reveals that the performance of Hill estimators selected by Lepski's method matches known lower bounds. Proofs are given in Section 4. Finally, in Section 5, we examine the performance of the resulting adaptive Hill estimator for finite sample sizes using Monte-Carlo simulations.

2. Background, notations and tools

2.1. The Hill estimator as a smooth tail statistic

Even though it is possible and natural to characterise the fact that a distribution function F belongs to the max-domain of attraction of a Fréchet distribution with index γ > 0 by the regular variation property of F̄ (lim_{τ→∞} F̄(τx)/F̄(τ) = x^{−1/γ}), we will repeatedly use an equivalent characterisation based on the regular variation property of the associated quantile function. We first recall some classical definitions.
If f is a non-decreasing function from (a, b) (where a and b may be infinite) to (c, d), its generalised inverse f^← : (c, d) → (a, b) is defined by f^←(y) = inf{x : a < x < b, f(x) ≥ y}. The quantile function F^← is the generalised inverse of the distribution function F. The tail quantile function of F is the non-decreasing function defined on (1, ∞) by U = (1/(1 − F))^←, that is, U(t) = F^←(1 − 1/t).

The quantile function plays a prominent role in stochastic analysis thanks to the fact that if Z is uniformly distributed over [0, 1], then F^←(Z) is distributed according to F. In this text, we use a variation of the quantile transform that fits EVT: if E is exponentially distributed, then U(exp(E)) is distributed according to F. Moreover, by the same argument, the order statistics X_(1) ≥ … ≥ X_(n) are distributed as a monotone transformation of the order statistics Y_(1) ≥ … ≥ Y_(n) of a sample of n independent standard exponential random variables:

(X_(1), …, X_(n)) =_d (U(e^{Y_(1)}), …, U(e^{Y_(n)})).

The quantile transform and Rényi's representation are complemented by Karamata's representation for slowly varying functions. Recall that a function L is slowly varying at infinity if for all x > 0, lim_{t→∞} L(tx)/L(t) = x^0 = 1. The von Mises condition specifies the form of Karamata's representation [see Resnick, 2007, Corollary 2.1] of the slowly varying component of U(t), that is, of t^{−γ} U(t).

Definition 2.1 (von Mises condition). A distribution function F belonging to MDA(γ), γ > 0, satisfies the von Mises condition if there exist a constant t_0 ≥ 1 and a measurable function η with lim_{s→∞} η(s) = 0 such that, for t ≥ t_0,

U(t) = c t^γ exp(∫_{t_0}^{t} (η(s)/s) ds)

for some constant c > 0. The function η is called the von Mises function.

In the sequel, we assume that the sampling distribution F ∈ MDA(γ), γ > 0, satisfies the von Mises condition with t_0 = 1 and von Mises function η, and we define the non-increasing function η̄(t) = sup_{s≥t} |η(s)|.

Combining the quantile transformation and Rényi's and Karamata's representations, it is straightforward that, under the von Mises condition, the sequence of Hill estimators satisfies a distributional identity: it is distributed as a function of the largest order statistics of a standard exponential sample. The next proposition follows easily from the definition of the Hill estimator as a weighted sum of log-spacings, as advocated in [Beirlant et al., 2004].

Proposition 2.2. The vector of Hill estimators (γ̂(k))_{k<n} is distributed as the random vector

((1/k) Σ_{i=1}^{k} ln(U(e^{Y_(i)})/U(e^{Y_(k+1)})))_{k<n},

where (E_1, …, E_n) are independent standard exponential random variables while, for i ≤ n, Y_(i) = Σ_{j=i}^{n} E_j/j is distributed like the ith order statistic of an n-sample of the exponential distribution.

For a fixed k < n, a second distributional representation is available, where (E_1, …, E_k) and Y_(k+1) are defined as in Proposition 2.2. This second, simpler distributional representation stresses the fact that, conditionally on Y_(k+1), γ̂(k) is distributed as a mixture of sums of independent identically distributed random variables. Moreover, these independent random variables are close to exponential random variables with scale γ. This distributional identity suggests that the variance of γ̂(k) scales like γ²/k, an intuition that is corroborated by analysis, see Section 3.1.
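Rényi's representation and the quantile transform together give a convenient recipe for simulating order statistics directly. The short sketch below uses the simplest case, the pure Pareto tail quantile function U(t) = t^γ; it is an illustration, not part of the paper's proofs.

import numpy as np

rng = np.random.default_rng(1)
n = 10_000
e = rng.exponential(size=n)
# Renyi's representation: Y_(i) = sum_{j=i}^{n} E_j / j are distributed as
# the order statistics of an exponential sample (Y_(1) >= ... >= Y_(n)).
y = np.cumsum((e / np.arange(1, n + 1))[::-1])[::-1]
# Quantile transform: U(exp(Y_(i))) is distributed as X_(i); here the
# pure Pareto case U(t) = t**gamma is assumed for simplicity.
gamma = 0.7
x_order = np.exp(gamma * y)
print(x_order[:3])  # the three largest order statistics of the Pareto sample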
The bias of γ̂(k) is connected with the von Mises function η as follows. Henceforth, let b(t) be defined for t > 1 as the bias of the Hill estimator γ̂(k) given F̄(X_(k+1)) = 1/t; it is driven by the integral of η(s)/s beyond t. The second expression for b shows that b is differentiable with respect to t (even though η might be nowhere differentiable). The von Mises function governs both the rate of convergence of U(tx)/U(t) towards x^γ, or equivalently of F̄(tx)/F̄(t) towards x^{−1/γ}, and the rate of convergence of |Eγ̂(k) − γ| towards 0. In the sequel, we work on the probability space where the independent standard exponential random variables E_i, 1 ≤ i ≤ n, are all defined, and therefore consider the Hill estimators defined by Representation (2.3).

Frameworks

The difficulty in extreme value index estimation stems from the fact that, for any collection of estimators, for any intermediate sequence (k_n)_n, and for any γ > 0, there is a distribution function F ∈ MDA(γ) such that the bias |Eγ̂(k_n) − γ| decays at an arbitrarily slow rate. This has led authors to put conditions on the rate of convergence of U(tx)/U(t) towards x^γ as t tends to infinity for x > 0, or equivalently on the rate of convergence of F̄(tx)/F̄(t) towards x^{−1/γ}. These conditions then have to be translated into conditions on the rate of decay of the bias of estimators. As we focus on Hill estimators, the connection between the rate of convergence of U(tx)/U(t) towards x^γ and the rate of decay of the bias is transparent and well understood [Segers, 2002]: the theory of O-regular variation provides an adequate setting for describing both rates of convergence [Bingham et al., 1987]. In words, if a positive function g defined over [1, ∞) is such that, for some α ∈ R and all Λ > 1, lim sup_t sup_{x∈[1,Λ]} g(tx)/g(t) < ∞, then g is said to have bounded increase. If g has bounded increase, the class OΠ_g is the class of measurable functions f on some interval [a, ∞), a > 0, such that, as t → ∞, f(tx) − f(t) = O(g(t)) for all x ≥ 1.

For example, the analysis carried out by Carpentier and Kim [2014a] rests on the condition that, if F ∈ MDA(γ), the tail satisfies a polynomial second-order bound for some C > 0, D ≠ 0 and ρ < 0 (Condition (2.6)). This condition implies that ln(t^{−γ}U(t)) ∈ OΠ_g with g(t) = t^ρ [Segers, 2002, p. 473]. Thus, under the von Mises condition, Condition (2.6) implies that the function t ↦ ∫_t^∞ (η(s)/s) ds belongs to OΠ_g with g(t) = t^ρ; the Abelian and Tauberian Theorems from [Segers, 2002] connect these two viewpoints.

In this text, we are ready to assume that if F ∈ MDA(γ), then for some C > 0 and ρ < 0, η̄(t) ≤ C t^ρ. However, we do not want to assume that U (or equivalently F) satisfies a so-called second-order regular variation property (that t ↦ |x^{−γ}U(tx)/U(t) − 1| is asymptotically equivalent to a ρ-regularly varying function with ρ < 0). By [Segers, 2002], this would be equivalent to assuming that t ↦ |b(t)| is ρ-regularly varying. Indeed, assuming as in [Hall and Welsh, 1985] and several subsequent papers that F̄ satisfies

F̄(x) = C x^{−1/γ} (1 + D x^{ρ/γ} + o(x^{ρ/γ})),   (2.7)

where C > 0, D ≠ 0 are constants and ρ < 0, or equivalently [Csörgő, Deheuvels, and Mason, 1985, Drees and Kaufmann, 1998] that U satisfies a matching expansion, makes the problem of extreme value index estimation easier (but not easy). These conditions entail a precise description of the bias along any intermediate sequence (k_n) [Beirlant et al., 2004, de Haan and Ferreira, 2006, Segers, 2002]; this makes the estimation of the second-order parameter a very natural intermediate objective [see for example Drees and Kaufmann, 1998].
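Under a Hall-Welsh-type condition the bias-variance trade-off becomes explicit: the variance of γ̂(k) scales like γ²/k while the squared bias scales like (k/n)^{2|ρ|}. The sketch below minimises this standard proxy for the asymptotic mean squared error numerically; the bias constant and the value of ρ are assumptions chosen for illustration.

import numpy as np

def amse(k, n, gamma=1.0, b_const=1.0, rho=-0.5):
    """Proxy for the asymptotic MSE of the Hill estimator:
    variance ~ gamma^2 / k and bias ~ b_const * (k/n)^{|rho|} (assumed form)."""
    return gamma**2 / k + (b_const * (k / n) ** abs(rho)) ** 2

n = 10_000
ks = np.arange(2, n // 2)
k_star = ks[np.argmin(amse(ks, n))]
expo = 2 * 0.5 / (1 + 2 * 0.5)  # 2|rho| / (1 + 2|rho|) with rho = -1/2
print(f"empirical argmin k* = {k_star}, theoretical order n^{expo:.2f} = {n**expo:.0f}")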
Lepski's method and adaptive tail index estimation

The necessity of developing data-driven index selection methods is illustrated in Figure 1, which displays the estimated standardised root mean squared error (rmse) of Hill estimators as a function of k for four related sampling distributions which all satisfy the second-order condition (2.7). Under this second-order condition, Hall and Welsh proved that the asymptotic mean squared error of the Hill estimator is minimal for sequences k_n* of order n^{2|ρ|/(1+2|ρ|)}. Since C > 0, D ≠ 0 and the second-order parameter ρ < 0 are usually unknown, many authors have been interested in the construction of data-driven selection procedures for k_n under conditions such as (2.7), and a great deal of ingenuity has been dedicated to the estimation of the second-order parameters and to the use of such estimates when estimating first-order parameters. As we do not want to assume a second-order condition such as (2.7), we resort to Lepski's method, which is a general attempt to balance bias and variance.

Since its introduction [Lepski, 1991], this general method for model selection has been proved to achieve adaptivity and provide oracle inequalities in a variety of inferential contexts ranging from density estimation to inverse problems and classification [Lepski, 1990, 1991, 1992, Lepski and Tsybakov, 2000]. Very readable introductions to Lepski's method and its connections with penalised contrast methods can be found in [Birgé, 2001, Mathé, 2006]. In Extreme Value Theory, we are aware of three papers that explicitly rely on this methodology: [Drees and Kaufmann, 1998], [Grama and Spokoiny, 2008] and [Carpentier and Kim, 2014a].

The selection rule analysed in the present paper (see Section 3.3 for a precise definition) is a variant of the preliminary selection rule introduced in [Drees and Kaufmann, 1998],

κ̄_n(r_n) = min{k ∈ {2, …, n} : max_{2≤i≤k} √i |γ̂(i) − γ̂(k)| > r_n},

where (r_n)_n is a sequence of thresholds such that √(ln ln n) = o(r_n) and r_n = o(√n), and γ̂(i) is the Hill estimator computed from the (i + 1) largest order statistics. The definition of this "stopping time" is motivated by Lemma 1 from [Drees and Kaufmann, 1998], which asserts that, under the von Mises condition, the maximal standardised fluctuation max_{2≤i≤k_n} √i |γ̂(i) − Eγ̂(i)| is of the order of √(ln ln n). In words, this selection rule almost picks out the largest index k such that, for all i smaller than k, γ̂(k) differs from γ̂(i) by a quantity that is not much larger than the typical fluctuations of γ̂(i). This index selection rule can be performed graphically by interpreting an alternative Hill plot, as shown in Figure 2 [see Drees et al., 2000, Resnick, 2007, for a discussion of the merits of alt-Hill plots].
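The graphical rule of Figure 2 translates into a few lines of code. The sketch below implements the eligibility criterion read off the ribbon (|γ̂(k) − γ̂(i)| ≤ r_n γ̂(i)/√i for all i ≤ k) and picks the largest eligible index; it reuses hill_estimates from the sketch in the Introduction, and it is an illustration of the idea rather than the precise rule analysed in Section 3.3.

import numpy as np

def select_k(gammas, r_n):
    """Largest index k such that, for all i <= k,
    |gamma_hat(k) - gamma_hat(i)| <= r_n * gamma_hat(i) / sqrt(i)."""
    best = 1
    for k in range(1, len(gammas) + 1):
        i = np.arange(1, k + 1)
        if np.all(np.abs(gammas[k - 1] - gammas[i - 1]) <= r_n * gammas[i - 1] / np.sqrt(i)):
            best = k
    return best

rng = np.random.default_rng(2)
sample = rng.pareto(1.0, size=2_000) + 1.0     # pure Pareto, gamma = 1
gammas = hill_estimates(sample)
r_n = np.sqrt(2.1 * np.log(np.log(len(sample))))  # threshold from Figure 2
k_hat = select_k(gammas, r_n)
print(k_hat, gammas[k_hat - 1])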
Under mild conditions on the sampling distribution, κ̄_n(r_n) should be asymptotically equivalent to a deterministic sequence.

Fig 2. Lepski's method illustrated on an alt-Hill plot. The plain line describes the sequence of Hill estimates as a function of the index k, computed on a pseudo-random sample of size n = 10000 from the Student distribution with 1 degree of freedom (Cauchy distribution). Hill estimators are computed from the positive order statistics. The grey ribbon around the plain line provides a graphic illustration of Lepski's method. For a given value of i, the width of the ribbon is 2 r_n γ̂(i)/√i. A point (k, γ̂(k)) on the plain line corresponds to an eligible index if the horizontal segment between this point and the vertical axis lies inside the ribbon, that is, if for all i ≤ k, |γ̂(k) − γ̂(i)| ≤ r_n γ̂(i)/√i. If r_n were replaced by an appropriate quantile of the Gaussian distribution, the grey ribbon would just represent the confidence tube that is usually added on Hill plots. The triangle represents the selected index with r_n = √(2.1 ln ln n). The cross represents the oracle index estimated from Monte-Carlo simulations, see Table 2.

The intuition behind the definition of κ̄_n(r_n) is the following: if the bias is increasing with the index i, and if the bias suffered by γ̂(k) is smaller than the typical fluctuations of γ̂(k), then the index k should be eligible, that is, should pass all the pairwise tests with high probability.

The goal of Drees and Kaufmann [1998] was not to investigate the performance of the preliminary selection rule defined in Display (2.8) but to design a selection rule κ̂_n(r_n), based on κ̄_n(r_n), that would, under second-order conditions, asymptotically mimic the optimal selection rule k_n*. Some of their intermediate results shed light on the behaviour of κ̄_n(r_n) for a wide variety of choices of r_n. As they are relevant to our work, we briefly review them. Drees and Kaufmann [1998] characterise the asymptotic behaviours of κ̄_n(r_n) and κ̂_n(r_n) when (r_n) grows sufficiently fast. This suggests that using κ̄_n(r_n) instead of k_n* has a price of the order r_n^{2/(1+2|ρ|)}. Not too surprisingly, Corollary 1 from [Drees and Kaufmann, 1998] implies that the preliminary selection rule tends to favour smaller variance over reduced bias.

Our goal, as in [Carpentier and Kim, 2014a, Grama and Spokoiny, 2008], is to derive non-asymptotic risk bounds. We briefly review their approaches. Both papers consider sequences of estimators γ̂(1), …, γ̂(k), … defined by thresholds τ_1 ≤ … ≤ τ_k ≤ …. For each i, the estimator γ̂(i) is computed from the sample points that exceed τ_i, if there are any. An index i is deemed eligible if, for all smaller indices j, |γ̂(j) − γ̂(i)| is smaller than a random quantity that is supposed to bound the typical fluctuations of γ̂(i); the selected index k̂ is the largest eligible index. In both papers, the rationale for working with some special collection of estimators seems to be the ability to derive non-asymptotic deviation inequalities for γ̂(k̂), either from exponential inequalities for log-likelihood ratio statistics or from simple binomial tail inequalities such as Bernstein's inequality [see Boucheron et al., 2013, Section 2.8].
We aim at achieving optimal risk bounds under Condition (2.6), using a simple estimation method requiring almost no calibration effort and based on mainstream extreme value index estimators. Before describing the keystone of our approach in Section 2.5, we recall the recent lower risk bound for adaptive extreme value index estimation.

Lower bound

One of the key results in [Carpentier and Kim, 2014a] is a lower bound on the accuracy of adaptive tail index estimation. This lower bound reveals that, just as for estimating a density at a point [Lepski, 1991, 1992], as far as tail index estimation is concerned, adaptivity has a price. Using Fano's Lemma, and a Bayesian game that extends cleanly to the frameworks of [Grama and Spokoiny, 2008] and [Novak, 2014], Carpentier and Kim were able to prove the next minimax lower bound.

Theorem 2.9. Let ρ_0 < −1 and 0 ≤ v ≤ e/(1 + 2e). Then, for any tail index estimator γ̂ and any sample size n such that M = ln n > e/v, there exists a probability distribution P such that i) P ∈ MDA(γ) with γ > 0, ii) P meets the von Mises condition with a von Mises function η whose decay is governed by ρ_0, and iii) the risk of γ̂ under P is bounded from below accordingly.

Using Birgé's Lemma instead of Fano's Lemma, we provide a simpler, shorter proof of this theorem (Appendix D).

The lower rate of convergence provided by Theorem 2.9 is another incentive to revisit the preliminary tail index estimator from [Drees and Kaufmann, 1998]; but, instead of using a sequence (r_n)_n of order larger than √(ln ln n) in order to calibrate pairwise tests and ultimately to design estimators of the second-order parameter (if there are any), it is worth investigating a minimal sequence where r_n is of order √(ln ln n), and checking whether the corresponding adaptive estimator achieves the Carpentier-Kim lower bound (Theorem 2.9).

In this paper, we focus on r_n of the order √(ln ln n). The rationale for imposing this order can be understood as follows: if lim sup r_n/(γ√(2 ln ln n)) < 1, then even if the sampling distribution is a pure Pareto distribution with shape parameter γ (F̄(x) = (x/τ)^{−1/γ} for x ≥ τ > 0), the preliminary selection rule will, with high probability, select a small value of k and thus pick out a suboptimal estimator. This can be justified using results from [Darling and Erdös, 1956] (see Appendix A for details).

Such an endeavour requires sharp probabilistic tools. They are the topic of the next section.

Talagrand's concentration phenomenon for products of exponential distributions

Before introducing Talagrand's Theorem, which will be the key tool of our investigation, we comment on and motivate the use of concentration arguments in Extreme Value Theory. Talagrand's concentration phenomenon for products of exponential distributions is one instance of a general phenomenon: concentration of measure in product spaces [Ledoux, 2001, Ledoux and Talagrand, 1991].

The phenomenon may be summarised in a simple quote: functions of independent random variables that do not depend too much on any of them are almost constant [Talagrand, 1996a]. This quote raises a first question: in which way are tail functionals (as used in Extreme Value Theory) smooth functions of independent random variables? We do not attempt here to revisit the asymptotic approach described by [Drees, 1998b], which equates smoothness with Hadamard differentiability. Our approach is non-asymptotic, and our conception of smoothness is somewhat circular: smooth functionals are those functionals for which we can obtain good concentration inequalities.
The concentration approach helps to split the investigation into two steps: the first step consists in bounding the fluctuations of the random variable under concern around its median or its expectation, while the second step focuses on the expectation. This approach has seriously simplified the investigation of suprema of empirical processes and made the life of many statisticians easier [Koltchinskii, 2008, Massart, 2007, Talagrand, 1996b, 2005]. To the best of our knowledge, the impact of the concentration of measure phenomenon on Extreme Value Theory has received little attention. Pointing out the potential uses of concentration inequalities in the field of Extreme Value Theory is one purpose of this paper. In statistics, concentration inequalities have proved very useful when dealing with adaptivity issues: sharp, non-asymptotic tail bounds can be combined with simple union bounds in order to obtain uniform guarantees on the risk of a collection of estimators. Using concentration inequalities to investigate the adaptive choice of the number of order statistics to be used in tail index estimation is therefore a natural thing to do.

Deriving authentic concentration inequalities for Hill estimators is not straightforward. Fortunately, the construction of such inequalities turns out to be possible thanks to general functional inequalities that hold for functions of independent exponentially distributed random variables. We recall these inequalities (Proposition 2.10 and Theorem 2.15), which have been largely overlooked in statistics. A thorough and readable presentation can be found in [Ledoux, 2001]. We start with the easiest result, a variance bound that pertains to the family of Poincaré inequalities.

Proposition 2.10 (Poincaré inequality for exponentials, [Bobkov and Ledoux, 1997]). If f is a differentiable function over R^n, and Z = f(E_1, …, E_n) where E_1, …, E_n are independent standard exponential random variables, then

Var(Z) ≤ 4 E[‖∇f(E_1, …, E_n)‖²].

Remark 2.11. The constant 4 cannot be improved.

The next corollary is stated in order to point out the relevance of this Poincaré inequality to the analysis of general order statistics and their functionals. Recall that the hazard rate of an absolutely continuous probability distribution with distribution function F is h = f/F̄, where f and F̄ = 1 − F are the density and the survival function associated with F, respectively.

Corollary 2.12. Assume the distribution of X has a positive density; then the kth order statistic X_(k) satisfies

Var(X_(k)) ≤ (C/k) E[1/h(X_(k))²],

where C can be chosen as 4.

Remark 2.13. By Smirnov's Lemma [de Haan and Ferreira, 2006], C cannot be smaller than 1. If the distribution of X has a non-decreasing hazard rate, the factor of 4 can be improved to a factor of 2 [Boucheron and Thomas, 2012].

Bobkov and Ledoux [1997], Maurey [1991] and Talagrand [1991] show that smooth functions of independent exponential random variables satisfy Bernstein-type concentration inequalities. The next result is extracted from the derivation of Talagrand's concentration phenomenon for products of exponential random variables in [Bobkov and Ledoux, 1997]. The definition of sub-gamma random variables will be used in the formulation of the theorem and in many arguments.

Definition 2.14. A real-valued centered random variable X is said to be sub-gamma on the right tail with variance factor v and scale parameter c if

ln E[exp(λX)] ≤ λ² v / (2(1 − cλ)) for every λ such that 0 < λ < 1/c.
We denote the collection of such random variables by Γ_+(v, c). Similarly, X is said to be sub-gamma on the left tail with variance factor v and scale parameter c if −X is sub-gamma on the right tail with variance factor v and scale parameter c. We denote the collection of such random variables by Γ_−(v, c).

Theorem 2.15 ([Bobkov and Ledoux, 1997]). Let Z = f(E_1, ..., E_n), where f is differentiable and E_1, ..., E_n are independent standard exponential random variables. Let v be the essential supremum of ‖∇f‖²; then Z is sub-gamma on both tails with variance factor 4v and scale factor the essential supremum of max_i |∂_i f|. In particular, for all δ ∈ (0, 1), with probability larger than 1 − 2δ, |Z − EZ| does not exceed the corresponding sub-gamma tail bound.

Again, we illustrate the relevance of these versatile tools to the analysis of general order statistics. This general theorem implies that if the sampling distribution has a non-decreasing hazard rate, then the order statistics X_(k) satisfy Bernstein-type inequalities [see Boucheron et al., 2013, Section 2.8] with variance factor (4/k) E[1/h(X_(k))²] (the Poincaré estimate of the variance), and scale parameter (sup_x 1/h(x))/k. Starting back from the Efron–Stein–Steele inequality, the authors derived a somewhat sharper inequality [Boucheron and Thomas, 2012].

Corollary 2.16. Assume the distribution function F has a non-decreasing hazard rate h, and let X_(k) be distributed as the kth order statistic of a sample distributed according to F. Then X_(k) is sub-gamma on both tails, with variance factor (2/k) E[1/h(X_(k))²].

This corollary describes in which way central, intermediate and extreme order statistics can be portrayed as smooth functions of independent exponential random variables. This possibility should not be taken for granted, as it is non-trivial to capture in a non-asymptotic way the tail behaviour of maxima of independent Gaussians [Boucheron and Thomas, 2012, Chatterjee, 2014, Ledoux, 2001]. In the next section, we show in which way the Hill estimator can fit into this picture.

Main results

In this section, the sampling distribution F is assumed to belong to MDA(γ) with γ > 0 and to satisfy the von Mises condition (Definition 2.1) with von Mises function η [see Beirlant et al., 2004, de Haan and Ferreira, 2006, Geluk et al., 1997, Resnick, 2007]. It is well known that representation (2.4) holds under the von Mises condition; we will use it throughout this subsection. Proposition 3.1 provides us with a handy upper bound on Var[γ̂(k)] − γ²/k in terms of the von Mises function.

Proposition 3.1. Let γ̂(k) be the Hill estimator computed from the (k + 1) largest order statistics of an n-sample from F. Then Var[γ̂(k)] − γ²/k is controlled by the von Mises function evaluated at intermediate order statistics.

The next Abelian result might help in appreciating these variance bounds.

Proposition 3.2. Assume that η is ρ-regularly varying with ρ < 0; then, for any intermediate sequence (k_n)_n, the variance bounds can be made explicit.

We may now move to genuine concentration inequalities for the Hill estimator.

Concentration inequalities for the Hill estimators

The exponential representation (2.3) suggests that the Hill estimator γ̂(k) should be approximately distributed according to a gamma(k, γ) distribution, where k is the shape parameter and γ the scale parameter. We expect the Hill estimators to satisfy Bernstein-type concentration inequalities, that is, to be sub-gamma on both tails with variance factors connected to the tail index γ and to the von Mises function. Representation (2.3) actually suggests more. Following [Drees and Kaufmann, 1998], we actually expect the sequence √k(γ̂(k) − Eγ̂(k)) to behave like normalized partial sums of independent square-integrable random variables, that is, we believe max_{2≤k≤n} √k(γ̂(k) − Eγ̂(k)) to scale like √(ln ln n) and to be sub-gamma on both tails. The purpose of this section is to meet these expectations in a non-asymptotic way.
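For concreteness, here is a minimal numerical sketch of the Hill estimator and of the gamma heuristic just described. It assumes the standard definition of the estimator from the k + 1 largest (descending) order statistics; NumPy and the variable names are illustrative choices, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def hill(x, k):
    """Hill estimator from the k+1 largest order statistics:
    mean of ln X_(i) - ln X_(k+1) over the k largest values."""
    xs = np.sort(x)[::-1]                 # descending order statistics
    return np.mean(np.log(xs[:k]) - np.log(xs[k]))

# Pure Pareto with tail index gamma: survival x^(-1/gamma) for x >= 1.
gamma, n, k = 0.5, 10_000, 200
x = rng.pareto(1.0 / gamma, size=n) + 1.0   # numpy's pareto is shifted by 1

est = hill(x, k)
# Heuristic from the text: k * hill / gamma is gamma(k, 1)-distributed for
# pure Pareto data, so hill has mean gamma and std about gamma / sqrt(k).
print(est, gamma / np.sqrt(k))
```

On a pure Pareto sample, the summands ln X_(i) − ln X_(k+1) are i.i.d. exponential with mean γ (by Rényi's representation), so the printout illustrates fluctuations of γ̂(k) around γ at scale γ/√k, consistent with the gamma(k, γ) picture invoked above.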
Proofs use the Markov property of order statistics: conditionally on the (J + 1)th order statistic, the J largest order statistics are distributed as the order statistics of a sample of size J of the excess distribution. They consist of appropriate invocations of Talagrand's concentration inequality (Theorem 2.15). However, this theorem generally requires a uniform bound on the gradient of the relevant function. When Hill estimators are analysed as functions of independent exponential random variables, the partial derivatives depend on the points at which the von Mises function is evaluated. In order to get interesting bounds, it is worth conditioning on an intermediate order statistic.

Throughout this subsection, let ℓ be an integer larger than ln log₂ n and J an integer not larger than n. As we use the exponential representation of order statistics, besides Hill estimators, the random variables that appear in the main statements are order statistics of exponential samples; Y_(k) will denote the kth order statistic of a standard exponential sample of size n (we agree on Y_(n+1) = 0). Theorem 3.3 and Propositions 3.9 and 3.10 complement each other in the following way. Theorem 3.3 is concerned with the supremum of the Hill process, whose components are the variables √i(γ̂(i) − E[γ̂(i) | Y_(k+1)]) for ℓ ≤ i ≤ k; note the use of random centering. The components of this process are shown to be sub-gamma using Talagrand's inequality, and chaining is then used to control the maximum of the process. Propositions 3.9 and 3.10 are concerned with conditional bias fluctuations: they state that the fluctuations of the conditional expectations E[γ̂(i) | Y_(k+1)] are small, and even negligible with respect to the fluctuations of γ̂(i).

The first theorem provides an exponential refinement of the variance bound stated in Proposition 3.1. However, as announced, there is a price to pay: the statements hold conditionally on some order statistic. In the sequel, let T = exp Y_(J+1), where c₁ may be chosen not larger than 4 and c₂ not larger than 16. ii) The random variable Z is sub-gamma on both tails with variance factor 4(γ + 2η(T))² and scale factor (γ + 2η(T))/ℓ.

Remark 3.5. If F is a pure Pareto distribution with shape parameter γ > 0, then kγ̂(k)/γ is distributed according to a gamma distribution with shape parameter k and scale parameter 1. Tight and well-known tail bounds for gamma-distributed random variables then apply.

Remark 3.6. If we choose J = n, all three statements hold unconditionally, but the variance factor may substantially exceed the upper bounds described in Proposition 3.1. The second and third statements in Theorem 3.3 provide a non-asymptotic counterpart to Lemma 1 from [Drees and Kaufmann, 1998]: the random variable appearing in the expectation there is sub-gamma.

Remark 3.7. Thanks to the Markov property, Statement i) can be restated conditionally for any 0 < δ < 1/2; combining Statements ii) and iii), a corresponding unconditional bound follows.

Remark 3.8. The reader may wonder whether resorting to the exponential representation and the usual Chernoff bounding would not provide a simpler argument. The straightforward approach leads to a conditional bound on the logarithmic moment generating function, given Y_(k+1). A similar statement holds for the lower tail. This leads to exponential bounds for deviations of the Hill estimator above its expectation plus a term that may be of the order of magnitude of the bias. Attempts to proceed in this way for 1 ≤ i ≤ k and to exhibit an exponential supermartingale met the same impediments.
At the expense of inflating the variance factor, Theorem 2.15 provides a genuine (conditional) concentration inequality for Hill estimators. As we will deal with values of k for which the bias exceeds the typical order of magnitude of the fluctuations, this is relevant to our purpose.

The next propositions are concerned with the fluctuations of the conditional bias of Hill estimators. In both propositions, J satisfies ℓ ≤ k ≤ J ≤ n, and again T = exp Y_(J+1).

Proposition 3.9. For all 1 ≤ i ≤ k, conditionally on T, the centred conditional bias E[γ̂(i) | Y_(k+1)] − E[γ̂(i)] is sub-gamma on both tails with variance factor at most 16η(T)²/k and scale factor 2η(T)/k.

The last proposition deals with the maximum of the centered conditional biases. The collection of centered conditional biases does not behave like partial sums.

Proposition 3.10. Let the random variable Z be the maximum of the centered conditional biases. Then, i) conditionally on T, Z is sub-gamma with variance factor 16η(T)²/k and scale factor 2η(T)/k; ii) E[Z | T] admits a matching upper bound.

Remark 3.11. Statements i) and ii) in Proposition 3.10 can be summarized by Inequality (3.12), valid for any 0 < δ < 1/2. Combining Theorem 3.3 and Propositions 3.9 and 3.10 leads to another non-asymptotic perspective on Lemma 1 from [Drees and Kaufmann, 1998].

Adaptive Hill estimation

We are now able to investigate the variant of the selection rule defined by (2.8) in [Drees and Kaufmann, 1998] with r_n = c₂√(ln ln n), where c₂ is a constant not smaller than √2. The deterministic sequence of indices (k̄_n(r_n))_n and the sequence (k̄_n(1))_n are defined accordingly. Let 0 < δ < 1/2. The index k̂_n is selected according to a rule in which c₃ is a constant larger than 60 and r_n(δ) = 8√(2 ln((2/δ) log₂ n)). The tail index estimator is γ̂(k̂_n).

As tail adaptivity has a price (see Theorem 2.9), the ratio between the risk of the would-be adaptive estimator γ̂(k̂_n) and the risk of γ̂(k̄_n(1)) cannot be upper bounded by a constant factor, let alone by a factor close to 1. This is why, in the next theorem, we compare the risk of γ̂(k̂_n) with the risk of γ̂(k̄_n(r_n)). In the sequel, k̄_n stands for k̄_n(r_n); if the context is not clear, we specify k̄_n(1) or k̄_n(r_n).

The next theorem describes a non-asymptotic risk bound for γ̂(k̂_n).

Theorem 3.14. Assume that n is large enough so that the conditions of the preceding sections hold. Then, with probability larger than 1 − 4δ, the risk of γ̂(k̂_n) is bounded as stated.

Remark 3.16. The bound holds for any 0 < δ < 1/2.

Remark 3.17. If we assume that the bias b is ρ-regularly varying, then, elaborating on Proposition 1 from [Drees and Kaufmann, 1998], the oracle index sequence (k*_n)_n and the sequence (k̄_n(1))_n are connected, and their quadratic risks are related accordingly. Thus, if the bias is ρ-regularly varying, Theorem 3.14 provides us with a connection between the performance of the simple selection rule and the performance of the (asymptotically) optimal choice. The next corollary upper bounds the risk of the preliminary estimator when we just have an upper bound on the bias.

Corollary 3.18. Assume that for some C > 0 and ρ < 0, the bias satisfies |b(n, k)| ≤ C(k/n)^(−ρ) for all n, k. Then there exists a constant κ_{δ,ρ}, depending on δ and ρ, such that, with probability larger than 1 − 4δ, the risk of γ̂(k̂_n) is bounded accordingly. Under the assumption that the bias of the Hill estimators is upper bounded by a power function, the performance of the data-driven estimator γ̂(k̂_n) meets the information-theoretic lower bound of Theorem 2.9.

Proof of Proposition 2.2. This proposition is a straightforward consequence of Rényi's representation of order statistics of standard exponential samples.
As F belongs to MDA(γ) and meets the von Mises condition, there exists a function η on (1, ∞) with lim_{x→∞} η(x) = 0 such that the exponential representation holds; the claimed identity then follows.

Proof of Proposition 3.1. Let Z = kγ̂(k). By the Pythagorean relation (the law of total variance), the variance of Z splits into the expectation of its conditional variance and the variance of its conditional expectation, both given Y_(k+1). Representation (2.4) asserts that, conditionally on Y_(k+1), Z is distributed as a sum of independent, exponentially distributed random variables. Let E be an exponentially distributed random variable; the conditional variance is bounded using the Cauchy–Schwarz inequality and Var[∫₀^E η(e^{y+u}) du] ≤ η(e^y)². Taking expectation with respect to Y_(k+1) leads to the first bound. The last term in the Pythagorean decomposition is also handled using elementary arguments. As Y_(k+1) is a function of independent exponential random variables, Var[E[Z | Y_(k+1)]] may be upper bounded using the Poincaré inequality (Proposition 2.10). In order to derive the lower bound, we first bound the conditional variance from below.

Proof of Theorem 3.3. In the proofs of Theorem 3.3 and Proposition 3.10, we will use the next maximal inequality (Proposition 4.1). Proofs follow a common pattern: in order to check that some random variable is sub-gamma, we rely on its representation as a function of independent exponential variables, compute partial derivatives, derive convenient upper bounds on the squared Euclidean norm and the supremum norm of the gradient, and then invoke Theorem 2.15. At some point, we will use the next corollary of Theorem 2.15.

Corollary 4.2. If f is an almost everywhere differentiable function on R with uniformly bounded derivative f′, then f(Y_(k+1)) is sub-gamma with variance factor 4‖f′‖²_∞/k and scale factor ‖f′‖_∞/k.

Proof of Theorem 3.3. We start from the exponential representation of Hill estimators (Proposition 2.2) and represent all γ̂(i) as functions of independent random variables E_1, ..., E_k, Y_(k+1), where the E_j, 1 ≤ j ≤ k, are standard exponentially distributed and Y_(k+1) is distributed like the (k + 1)th largest order statistic of an n-sample of the standard exponential distribution. Let i′ be such that 0 ≤ i′ < i, and agree on γ̂(0) = 0. Then, a few lines of computation lead to bounds on the partial derivatives of iγ̂(i) − i′γ̂(i′) with respect to E_j for i′ < j ≤ k, which entails a bound on the gradient. Recalling that T = exp Y_(J+1), this can be summarised as follows: Theorem 2.15 now allows us to establish that, conditionally on T, the centered version of iγ̂(i) − i′γ̂(i′) is sub-gamma on both tails with variance factor 4|i − i′|(γ + 2η(T))² and scale factor (γ + η(T)). Using Theorem 2.15 conditionally on T, and choosing i′ = 0, Statement i) follows; taking expectation on both sides, this implies the corresponding unconditional bound.

The proof of the upper bound on E[Z | T] in Statement ii) of Theorem 3.3 relies on standard chaining techniques from the theory of empirical processes, and uses repeatedly the concentration theorem for smooth functions of independent exponential random variables (Theorem 2.15) together with the maximal inequality for sub-gamma random variables (Proposition 4.1). As is commonplace in the analysis of normalized empirical processes [see Giné and Koltchinskii, 2006, Massart, 2007, van de Geer, 2000, and references therein], we peel the index set over which the maximum is computed. Let L_n = {⌊log₂(ℓ)⌋, ..., ⌊log₂(k)⌋}. For all j ∈ L_n, let S_j = {ℓ ∨ 2^j, ..., k ∧ 2^{j+1} − 1} and define Z_j as the maximum of the process over S_j; then Z is bounded by the maximum of the Z_j over j ∈ L_n.

We now derive upper bounds on both summands by resorting to the maximal inequality for sub-gamma random variables (Proposition 4.1). We first bound the conditional expectations E[Z_j | T]. In order to alleviate notation, fix the binary expansion of i.
Then, for h ∈ {0, ..., j}, let π_h(i) be defined by truncating the binary expansion of i after its h most significant bits. Using that W(π_0(i)) does not depend on i, for each h ∈ {0, ..., j − 1} the maximum is taken over 2^h random variables which are sub-gamma with variance factor 4 × 2^{j−h−1}(γ + 2η(T))² and scale factor (γ + η(T)). By Proposition 4.1, the expected maximum at each level h is controlled, so that a bound holds for all j ∈ L_n, and the bound on E[Z | T] follows.

In order to prove Statement iii), we check that for each j ∈ L_n, Z_j is sub-gamma on the right tail with variance factor at most 4ℓ(γ + 2η(T))² and scale factor not larger than (γ + 2η(T))/ℓ. Under the von Mises Condition (2.1), the sampling distribution is absolutely continuous with respect to Lebesgue measure; for almost every sample, the maximum defining Z_j is attained at a single index i ∈ S_j. Starting again from the exponential representation, and repeating the computation of partial derivatives, we obtain the desired bounds. By Proposition 4.1, combining the different bounds leads to Inequality (3.4).

4.4. Proof of Proposition 3.9. Adopting again the exponential representation, write the conditional bias as f(Y_(k+1)) for a suitable function f. Its derivative with respect to y is readily computed and, after integration by parts and handling a telescoping sum, admits a closed form. A conservative upper bound on |f′(y)| is 2η(e^y), which is upper bounded by 2η(T). The statement of the proposition then follows from Corollary 4.2. A byproduct of the proof is a variance bound for the conditional bias, valid for i ≤ k.

Proof of Proposition 3.10. In the proof, Δ_i denotes the spacing Y_(i) − Y_(k+1), E_{Δ_i} the expectation with respect to Δ_i, Y′_(k+1) an independent copy of Y_(k+1), and E′ the expectation with respect to Y′_(k+1). We will also use the next lemma.

Lemma 4.3. Let X be a non-negative random variable; its expectation can be bounded by integrating its tail probabilities.

The expectation of Z is thus upper bounded by the corresponding sum. Finally, thanks to Inequality (4.4), the conclusion follows, where the constant C can be chosen not larger than 3.

We check that under E_1 ∩ E_2 ∩ E_3, the selected index is not smaller than k̄_n. This amounts to checking that, for all k ≤ k̄_n − 1 and for all i ∈ {⌈c₃ ln n⌉, ..., k}, the defining inequality of the selection rule is not triggered. Meanwhile, for all k ≤ k̄_n − 1 and for all i ∈ {⌈c₃ ln n⌉, ..., k}, the relevant quantities can be bounded. Using again (4.5), under E_1, and, under E_2, thanks to Assumption i) in the theorem statement, the terms (i), (ii) and (iii) are controlled. Plugging the upper bounds on (i), (ii) and (iii), it follows that under E_δ, for all k ≤ k̄_n − 1 and for all i ∈ {⌈c₃ ln n⌉, ..., k}, the selection rule does not stop. In order to warrant this, it suffices that the threshold condition holds, which it does by definition of r_n(δ). We then check that the risk of γ̂(k̂_n) is not much larger than the risk of γ̂(k̄_n).

Simulations

Risk bounds like Theorem 3.14 and Corollary 3.18 are conservative. For all practical purposes, they are just meant to be reassuring guidelines. In this numerical section, we intend to shed some light on the following issues (a schematic implementation of the rule under discussion follows this list):

1. Is there a reasonable way to calibrate the threshold r_n(δ) used in the definition of k̂_n? How does the method perform if we choose r_n(δ) close to √(2 ln ln n)?
2. How large is the ratio between the risk of γ̂(k̂_n) and the risk of γ̂(k*_n) for moderate sample sizes?
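To make question 1 concrete, the sketch below implements a stopping rule of the kind analysed above, with the minimal threshold r_n = √(c ln ln n). The pairwise-comparison form and the use of the running Hill estimate inside the threshold are assumptions patterned on the description of rule (2.8), not a verbatim transcription of the paper's rule; `hill_path` reuses the estimator from the earlier sketch.

```python
import numpy as np

def hill_path(x):
    """All Hill estimates hill(k) for k = 1 .. n-1 (see earlier sketch)."""
    logs = np.log(np.sort(x)[::-1])      # descending log order statistics
    csum = np.cumsum(logs[:-1])
    ks = np.arange(1, len(x))
    return csum / ks - logs[1:]

def select_k(x, c=2.1):
    """Schematic Drees-Kaufmann-style stopping rule: stop at the first k
    where max over i <= k of sqrt(i)*|hill(i) - hill(k)| exceeds
    r_n * hill(k), with r_n = sqrt(c * ln ln n) as in the text."""
    n = len(x)
    r_n = np.sqrt(c * np.log(np.log(n)))
    g = hill_path(x)                      # g[k-1] is hill(k)
    for k in range(2, n - 1):
        i = np.arange(1, k + 1)
        if np.max(np.sqrt(i) * np.abs(g[i - 1] - g[k - 1])) > r_n * g[k - 1]:
            return k
    return n - 1
```

Here hill(k) stands in for the unknown γ in the threshold, which is one plausible way to make the rule scale-free; the constant c = 2.1 mirrors the simulation setting reported below.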
The finite-sample performance of the data-driven index selection method described and analysed in Section 3.3 has been assessed by Monte-Carlo simulations. Computations have been carried out in R using the packages ggplot2, knitr, foreach, iterators, xtable and dplyr [see Wickham, 2014, for a modern account of the R environment]. To get into the details, we investigated the performance of index selection methods on samples of sizes 10000, 20000 and 100000 from the collection of distributions listed in Table 1. The list comprises the following distributions: i) Fréchet distributions F_γ(x) = exp(−x^(−1/γ)) for x > 0 and γ ∈ {1, 0.5, 0.2}; ii) Student distributions t_ν with ν ∈ {1, 2, 4, 10} degrees of freedom; iii) a log-gamma distribution with density proportional to (ln x)^(2−1) x^(−3−1), which means γ = 1/3 and ρ = 0.

Table 1, which is complemented by Figure 3, describes the difficulty of tail index estimation from samples of the different distributions. Monte-Carlo estimates of the standardised root mean square error (rmse) of Hill estimators are represented as functions of the number of order statistics k for samples of size 10000 from the sampling distributions. All curves exhibit a common pattern: for small values of k, the rmse is dominated by the variance term and scales like 1/√k. Above a threshold that depends on the sampling distribution, but that is not completely characterised by the second-order regular variation index, the rmse grows at a rate that may reflect the second-order regular variation property of the distribution. Not too surprisingly, the three Fréchet distributions exhibit the same risk profile: the three curves are almost indistinguishable. The Student distributions illustrate the impact of the second-order parameter on the difficulty of the index selection problem. For sample size n = 10000, the optimal index for t_10 is smaller than 30; it is smaller than the usual recommendations, and for such moderate sample sizes t_10 seems as hard to handle as the log-gamma distribution, which usually fits in the Horror Hill Plot gallery. The 1/2-stable Lévy distribution and the H-distribution behave very differently: even though they both have second-order parameter ρ equal to −1, the H-distribution seems almost as challenging as the t_4 distribution, while the Lévy distribution looks much easier than the Fréchet distributions. The Pareto change point distributions exhibit an abrupt transition. Throughout, r_n = √(c ln ln n) with c = 2.1 unless otherwise specified.

The Fréchet, Student, H and stable distributions all fit into the framework considered by [Drees and Kaufmann, 1998]. They provide a favorable ground for comparing the performance of the optimal index selection method described by Drees and Kaufmann [1998], which attempts to take advantage of the second-order regular variation property, with the performance of the simple selection rule described in this paper. The index k̂_n^{dk} used in γ̂(k̂_n^{dk}) was computed following the recommendations from Theorem 1 and the discussion in [Drees and Kaufmann, 1998], where ρ̂ should belong to a consistent family of estimators of ρ (under a second-order regular variation assumption), γ̂ should be a preliminary estimator of γ such as γ̂(√n), ζ = 0.7, and r_n = 2n^{1/4}. Following the advice from [Drees and Kaufmann, 1998], we replaced |ρ̂| by 1. Note that the method for computing k̂_n^{dk} depends on a variety of tunable parameters.
Comparisons between the performances of γ̂(k̂_n(r_n)) and γ̂(k̂_n^{dk}) are reported in Tables 2 and 3. For each distribution from Table 1, for sample sizes n = 10000, 20000, and 100000, 5000 experiments were replicated. As pointed out in [Drees and Kaufmann, 1998], on the sampling distributions that satisfy a second-order regular variation property, a carefully tuned k̂_n^{dk} is able to take advantage of it. Despite its computational and conceptual simplicity, and despite the fact that it is almost parameter-free, the estimator γ̂(k̂_n(r_n)) only suffers a moderate loss with respect to the oracle. When |ρ| = 1, the observed ratios are of the same order as (2 ln ln n)^{1/3} ≈ 1.65. Moreover, whereas γ̂(k̂_n^{dk}) behaves erratically when facing Pareto change point distributions, γ̂(k̂_n(r_n)) behaves consistently.

Figure 4 concisely describes the behaviour of the two index selection methods on samples from the Pareto change point distribution with parameters γ = 1.5, γ′ = 1 and threshold τ corresponding to the 1 − 1/15 quantile. The plain line represents the standardised rmse of Hill estimators as a function of the selected index. This figure contains the superposition of two density plots corresponding to k̂_n^{dk} and k̂(r_n). The density plots were generated from 5000 points with coordinates (k̂(r_n), |γ̂(k̂(r_n))/γ − 1|) and 5000 points with coordinates (k̂_n^{dk}, |γ̂(k̂_n^{dk})/γ − 1|). The contoured and well-concentrated density plot corresponds to the performance of γ̂(k̂_n); the diffuse, tiled density plot corresponds to the performance of k̂_n^{dk}. Facing Pareto change point samples, the two selection methods behave differently. Lepski's rule correctly detects an abrupt change at some point and selects an index slightly above that point; as the conditional bias varies sharply around the change point, this slight overestimation of the correct index still results in a significant loss as far as rmse is concerned. The Drees–Kaufmann rule, fed with an a priori estimate of the second-order parameter, picks out a much smaller index and suffers a larger excess risk.

The last inequality follows from Chebyshev's negative association inequality. Hence, the resulting differential inequality is readily solved and leads to the corollary. The second summand can be further decomposed using (2.4). We check that (i) and (ii) tend to 0, and then that (iii) converges towards a finite limit. Fix ε, δ > 0 and define M = sup{η(t) : t ≤ t₀}. Let A_n denote the event {Y_(k_n+1) > ln t₀(ε, δ)}. For n such that ln(n/k_n) ≤ 2 ln t₀, as Y_(k_n+1) is sub-gamma with variance factor 1/k_n, the probability of A_n^c is exponentially small.

We first check that (ii) tends to 0. Let n be such that n/k_n ≥ t₀, and let W_n denote the random variable Y_(k_n+1) − ln(n/k_n). Note that for 0 ≤ λ ≤ k_n/2, E[e^{λ|W_n|}] is uniformly bounded. The first summand has a finite limit thanks to Lemma C.1; the second summand converges to 0, as E[1_{A_n^c}] tends to 0 exponentially fast while 1/η(n/k_n)² tends to infinity algebraically fast. Bounds on (i) are easily obtained using Jensen's inequality and the Poincaré inequality; using the same line of argument as for handling the limit of (ii), we establish that (i) converges to 0. We now check that (iii) converges towards a finite limit. The expected value of the last random variable is 1/(1 − ρ)², and we check that the same holds in the limit for sufficiently large n. In order to take advantage of Lemma D.1, we use the Bayesian game designed in [Carpentier and Kim, 2014a].
Theorem D.2. Let γ > 0, ρ < −1, and 0 ≤ v ≤ e/(1 + 2e). Then, for any tail index estimator γ̂ and any sample size n such that M = ln n > e/v, there exists a collection (P_i)_{i≤M} of probability distributions such that i) P_i ∈ MDA(γ_i) with γ_i > γ, ii) P_i meets the von Mises condition with von Mises function η_i satisfying η_i(t) ≤ γt^{ρ_i}, where ρ_i = ρ + i/M < 0, and iii) the risk of γ̂ is bounded from below as stated.

Proof of Theorem D.2. Choose v so that 0 ≤ v ≤ e/(1 + 2e). The number of alternative hypotheses M is chosen in such a way that ln(n/(v ln M)) ≤ M; if ln n ≥ e/v, M = ln n will do.

Fig 3. Monte-Carlo estimates of the standardised root mean square error (rmse) of Hill estimators as a function of the number of order statistics k, for samples of size 10000 from the sampling distributions.

Table 3. Ratios between median rmse and median optimal rmse.
A novel state space reduction algorithm for team formation in social networks

Team formation (TF) in social networks exploits graphs (i.e., vertices = experts and edges = skills) to represent possible collaborations between experts. These networks lead us towards building cost-effective research teams irrespective of the geolocation of the experts and the size of the dataset. Previously, large datasets were not closely inspected for the large-scale distributions and relationships among researchers, resulting in algorithms that failed to scale well on the data. Therefore, this paper presents a novel TF algorithm for expert team formation, called SSR-TF, based on two metrics, communication cost and graph reduction, that will become a basis for future TF algorithms. In SSR-TF, the communication cost measures the possibility of collaboration between researchers. The graph reduction scales the large data down to only the appropriate skills and experts, resulting in real-time extraction of experts for collaboration. This approach is tested on five organic and benchmark datasets, i.e., UMP, DBLP, ACM, IMDB, and Bibsonomy. The SSR-TF algorithm is able to build cost-effective teams with the most appropriate experts, resulting in the formation of more communicative teams with high expertise levels.

Introduction

Since the beginning of time, the human race has collaborated and coordinated on activities that are deemed impossible for one human to execute independently. The collaboration on these activities has always been highly influenced by geography and location constraints. In the past, teams were created based on the individuals present in the same vicinity. This practice resulted in teams of individuals who lacked the necessary skills to execute the project successfully [1]. Being an operations research problem, team formation (TF) effectively selects qualified members for software project management, community collaboration, social networks, etc. More recently, TF is used for selecting team members in a social network graph, in which each individual is represented by a node, has some skills, and can connect with others. The main contributions of this work are as follows:

• The one-hot encoding machine learning scheme is applied for the first time to the team formation problem, in the SSR algorithm, during the binarization process of skills. One-hot encoding is used to label the skills as present (1) or absent (0) [6]. This leads to faster execution of the algorithm over binary data.

• One-hot encoding helps in realizing the edges with or without weights. The removal of zero-weighted edges results in a reduced graph with only the required skills or features.

• The SSR algorithm has shown polynomial-time convergence when tested on organic and benchmark datasets against state-of-the-art metaheuristic algorithms.

The following section explains the team formation problem in social networks, followed by the related work on team formation; the performance of the improved algorithm on a real dataset from the Association for Computing Machinery (ACM) is discussed along with a case study; then the proposed methodology along with the simulation results is presented; and finally the paper is concluded with discussions on the simulation results generated by the proposed SSR-TF and the comparison algorithms.

Team formation in social networks

The Team Formation (TF) problem is defined as the minimization of two objectives, the communication cost and the search space, to form an effective team that can perform all the required tasks.
The terms and mathematical notations are given in Table 1.

Problem 1. Team formation can be considered on a graph G(X, S) consisting of m experts, X = {x_1, ..., x_m}, and n skills, S = {s_1, ..., s_n}. Each expert x_i has a set of skills s(x_i) ⊆ S, and the set of experts with skill s_k is denoted by SP(s_k) ⊆ X. The task T tries to find all the experts x_i that cover all/some of the skills belonging to set S [7].

Communication Cost (CC) is the measure of how closely related two experts are in the given social network, based on their common skills. The CC between two adjacent experts (x_i, x_j) in graph G(X, S) is calculated with the Jaccard distance as given in Eq (1):

CC(x_i, x_j) = 1 − |s(x_i) ∩ s(x_j)| / |s(x_i) ∪ s(x_j)|.   (1)

Meanwhile, the CC between non-adjacent experts (x_i, x_j) is the sum of the costs along the shortest path between them. The Total Cost (TC) is the measure of the total distance within a Team of Experts (TE) with skills from graph G(X, S), and is defined as [8]:

TC(TE) = Σ_{x_i, x_j ∈ TE} CC(x_i, x_j).   (2)

Fig 1 shows the possible team formations of three experts X = {x_1, x_2, x_3} with respect to the connection cost, based on five skills S = {s_1, s_2, s_3, s_4, s_5}. For example, a TE can be formed for the required skills {s_1, s_2, s_4}. The goal of forming a team in the social network is to reduce the communication cost between all the experts. Here, some of the possible teams are T_1 = {x_1, x_3} and T_2 = {x_2, x_3}. The goal of the proposed heuristic algorithm is to find the least communication cost among all team members.

Search Space Reduction: To reduce the search space, a subset of the original data is obtained that is still able to represent the original set of data. The data is reduced both horizontally (selecting only the skills required in the given task) and vertically (discarding experts not having any required skills for the given task) to obtain the sub-search space [9,10]. Ultimately, a reduced graph G′ is generated, which contains the reduced experts X′ and reduced skills S′. The optimal solutions are then searched in the reduced graph G′(X′, S′).

Related work

Since its inception, team formation was considered solely dependent on the communication cost. With the passage of time, attributes like personnel cost [1], workload balancing [11], unique experts [8,12], and team reliability [13,14] were also added by researchers to create teams according to their needs. Team formation attributes are given in Fig 2, and the TF algorithms are discussed based on all or some of these attributes subsequently.

Extensive work has been done by the operations research community on Team Formation (TF), in which they have considered it as a linear integer problem (LIP) and focused entirely on finding links between people and the required functional skills [4,15]. In 2009, Lappas et al. [4] introduced TF as a graph problem to the data mining community and considered the minimum cost of communication within the social network of experts. They utilized search heuristic functions to approximate the communication costs of the team. The radius function used by them finds the longest shortest path between two experts, and the Enhanced-Steiner method uses the minimum-spanning-tree (MST) cost of the sub-graph. Nevertheless, both methods were sensitive to adding or deleting a connection in the graph, which brings a radical change to the solution [3]. The same year, Abdelsalam suggested using a multi-objective particle swarm optimization (MBPSO) algorithm for efficient team formation in integrated product development (IPD) for complex environments.
The problem was broken into three parts: (1) team formation by collecting individuals with specific skills; (2) ensuring team efficiency by using the Myers-Briggs Type Indicator (MBTI) for an individual's personality profiling, which helped create teams of people with compatible personalities, thus helping to increase the company's profits; and (3) managing each person's time so that he can be made available for multiple project assignments. MBPSO was applied to maximize team effectiveness and team efficiency, and the results of the algorithm were satisfactory [13]. However, MBPSO lost its significance when all objectives were merged into a single objective using a utility function to search for global minima [16].

In 2011, Kargar et al. [3] proposed a system for finding the team of experts, with or without a leader, with polynomial delay time. They considered different cost models, in which a person participates with different skills to perform a task, while the contribution to the cost is independent for each skill. Also, their model avoids the set-covering aspect and thus simplifies the problem [11]. In 2012, Anagnostopoulos et al. [11] considered the Lappas task assignment method inefficient because it only paid attention to coordination costs and ignored workload balancing among team members. Therefore, a new method of online team formation was used to find a delicate balance between workload and coordination costs, so that an expert can finish multiple projects without overloading his schedule. The same year, Kargar et al. tried to answer the team formation problem by introducing the personnel cost of an expert, based on the number of skills he possessed. Besides personnel cost, the minimum edge connection between the experts was also considered [12,17]. This approach created large teams and could not easily incorporate the minimization of team size [8]. Zhang et al. [16] argued that in order to form effective product development teams, a multi-objective particle swarm optimization (MOPSO) is required that considers all comprehensive capabilities and interpersonal relationships of team members. An improved fuzzy Analytic Hierarchy Process based on fuzzy linguistic preference relations is applied to ensure the accuracy and correctness of a member's skills, and MBTI is used to model interpersonal relationships based on personality. The results of MOPSO showed that the proposed optimization model is efficient for TF.

In early 2014, Teng et al. reported the ineffectiveness of a single team leader in controlling an ever-growing team and suggested the use of multiple team leaders. They applied a constrained communication load to limit leader communication to team members and used a minimum communication cost function to create effective teams [18]. Seeing the wide possibility of creating teams in social networks, Ashenagar et al. [17] discussed two issues of team formation, i.e., the combined minimum cost of the team and the minimum time spent on team formation. The algorithm proposed in that paper finds experts based on their closeness and eigenvector centralities: central experts that can reach the other nodes with minimum cost are selected based on the required skills. Central experts always select important neighbors to cover the remaining skills. If an expert's neighbors can cover the remaining skills, the algorithm selects the one with minimum cost.
If they do not have the required skills, the algorithm selects from the neighbors' neighbors of the central expert. The neighborhood search continues until an expert with the required skills is found. Ultimately, the algorithm finds the team with minimum cost from all candidate teams. This approach was tested on the DBLP dataset, and it required less CPU time than the previous methods. Habibur Rahman et al. [19] termed TF a crowdsourcing problem in which larger groups hinder successful collaborations between members. They suggested using two-factor optimization, i.e., high affinity and upper critical mass, to overcome unsuccessful collaborations in teams. The concept of high affinity was borrowed from Lappas [4]: the experts must be comfortable with, or in other words at a close distance from, each other. The use of upper critical mass was relatively novel; it effectively constrains the size of groups by splitting them into sub-groups, thus diminishing unsuccessful collaborations. Bahareh et al. [20] also tried to answer the team formation problem by minimizing the team's personnel and communication costs; to an extent, their algorithm was able to reduce the overall team formation cost.

For the first time in the data mining community, in 2016, Wang et al. [7,21] introduced a framework consisting of all the previously proposed methods to form effective teams on a single platform. They effectively implemented the following TF algorithms, i.e., Rarest-First, EnSteiner, MinSD, and MinLD, in the C++ language. The same year, Wu et al. proposed a reasonable human resource allocation through a multiple team formation mechanism. Following this mechanism, a task is based on working strength and sorted according to the contribution of agents/members in descending order. Ultimately, the agents who have greater contributions than others are chosen to fulfill the task [22]. Niveditha et al. proposed a Non-Dominated Sorting Genetic Algorithm (NSGA-II) based tri-objective team formation framework to minimize communication cost, personnel cost, and the cardinality of teams. Team formation in social networks was defined so as to produce compact, cooperative, and low-cost teams. Instead of using decade-old scalarization techniques for multi-objective problems, the NSGA-II algorithm was proposed to solve the three objectives directly. The TF framework was tested on the DBLP skill and co-author dataset to obtain Pareto-optimal solutions. The precision and recall of the obtained Pareto front with respect to the true Pareto front, generated using exhaustive search, were evaluated. It was shown in the results that NSGA-II gives compact teams that converge to the Pareto-optimal front in less time [8]. Li et al. addressed maintaining and optimizing team performance in a larger social network against certain changes made to the team. The proposed TeamOPT worked interactively with the users to form teams with special requirements, respond to changes, and optimize teams. TeamOPT was effective in finding the best candidates and provided an interactive user experience [23]. Salami et al. tried to answer the team formation problem with the age-old metaheuristic Genetic Algorithm (GA). Instead of using social networks of experts to answer a specific problem, experts' (i.e., supervisors') interaction with non-experts (i.e., students) for student-supervisor project allocation was presented. GA effectively allocated supervisors to students based on the fittest chromosomes.
Besides keeping workload balance in mind, GA compared well with optimal integer programming due to its inherent advantage of producing multiple fit solutions [14]. Staden et al. also applied team formation in digital forensics to detect the most suitable group of persons that could have committed a digital crime. This helped reduce the number of suspect groups at the start of the investigation, narrowing the search down to the real suspects [24]. Until 2017, all TF models tried to account for people's skills, costs of communication, personality, and other traits, but nobody tried to find reliable teams. Fathian et al. not only found better teams but also calculated the reliability/unreliability of a person present in a team. The team performance was further augmented by introducing backup persons in case an unreliable person leaves without notice [1]. Yashar et al. redefined scientific social networks in which they defined two objectives, i.e., chemistry level (to measure the scale of communication) and expertise level (to measure the overall skills of experts filtered by chemistry level). They called their approach Chemistry Oriented Team Formation (ChemoTF) and tested it on a large expertise corpus of 472,365 individual authors. The ChemoTF algorithm built more communicative and cost-effective teams with higher expertise levels [25,26]. Taghiyareh et al. also proposed a swarm-intelligent Brain Drain Optimization (BRADO) to find a team of experts in the DBLP and IMDB datasets. Their results were more effective than the PSO, GA, RarestFirst, and EnhancedSteiner algorithms [5].

The year 2018 saw several metaheuristic approaches applied in the field of TF. Baghel et al. used a genetic algorithm for creating multiple teams for different projects and a sociometric matrix for finding positive social relationships in a TF [27]. Bagherina et al. presented a novel cat swarm-based algorithm to minimize the team's communication cost and cardinality. In the proposed algorithm, each cat represents a team in the social network graph. All cats are either in seeking or tracing mode throughout the iterations until the final fit team with the minimum communication cost is found [28]. More recently, El-Ashmawi et al. proposed an improved African Buffalo Optimization (IABO) algorithm for team formation in social networks. The IABO algorithm combines a discrete crossover operator with a swap sequence to generate better teams that cover all the skills. For minimum-cost calculation among the experts, the Jaccard distance formula is used. IABO successfully generated teams for a maximum of 10 skills on the DBLP and Stack Overflow datasets [29]. Although IABO was quite efficient in finding teams for ten skills, large enterprises require larger skill sets and teams, so it would have been better if IABO had been tested on more skill sets. In early 2019, El-Ashmawi again tried to answer the problem of TF with particle swarm optimization (PSO) and the same swap operator [2]. This time the skill sets were increased to answer large enterprise requirements, but no heed was paid to enhancing team performance beyond the minimum-cost calculation.

The year 2020 brought several advancements in the field of team formation algorithms. Early in 2020, Kouvatis et al. proposed a team formation signed network (TFSN) algorithm for effective communication among many individuals in a social network. They tackled the team formation problem differently from previous research by assuming that not all connections in a social network are effective.
Two people can be foes or friends depending on the kind of communication they have (i.e., positive or negative). This led them to build a signed network in which two compatible individuals can perform a task with the least communication cost. The TFSN algorithm was effective on medium-sized datasets, but it was not tested on several datasets [30]. The primary goal of team formation is to utilize collective team efforts to achieve any task. Alqahtani tried to find bias against minorities in a team formation algorithm that incorporates the demographic information of an individual. The proposed diversity ranking algorithm considers race or gender during the formation of teams with minimum cost. The proposed algorithms were tested on a real dataset and produced teams with more diversity [31]. Although their work was commendable, big organizations primarily do not consider demographics when hiring a skilled individual. In early 2020, Abdulkader et al. adopted the Jaya algorithm for team formation problems in expert collaborative networks. Jaya offers intrinsic non-parametric tuning, and it always avoids the worst solutions, thus moving towards globally best solutions. Jaya was tested against the state-of-the-art Sine-Cosine algorithm on an ACM dataset containing experts and their skills. The results indicate that Jaya is a more reliable team formation algorithm than the Sine-Cosine algorithm [32]. The same year, Walaa H. El-Ashmawi minimized the communication cost among skilled individuals in a team with an improved Jaya optimization algorithm. The improved Jaya algorithm used a single-point crossover swap operator to speed up the search process while minimizing the team formation objective. The proposed algorithm was tested on two real datasets and compared with genetic and other algorithms; the results show that it found effective teams with minimum communication cost [33]. Seeing the unreliable nature of individuals leaving teams and causing recurrent losses to the organization, the multiple team formation problem (MTFP) was proposed by Campelo. MTFP utilizes integer linear programming to group individuals into a social network of teams. For individuals, time fractions were created to allow them to work on different teams. MTFP was highly reliable in finding multiple teams when tested on real-world social networks [34]. The major contributions to team formation (TF) in the literature are given in Table 2.

Despite providing several optimized solutions to TF problems, previous researchers did not try to overcome the problems associated with the datasets being utilized or the CPU time offered by the algorithms. TF is deemed an NP-hard problem; this paper tries to overcome both of these problems and to converge in polynomial time. The proposed SSR-TF algorithm is discussed in the next section.

The proposed SSR-TF algorithm

Search Space Reduction-Team Formation (SSR-TF) is an entirely different approach to solving the TF problem from the previous algorithms. Instead of relying entirely on communication-cost calculation first, this algorithm tries to reduce the features in the graph to only the appropriate ones, so that nothing insignificant is left in the data. This starts with the extraction of the skills in the given task and the selection of experts related to those specific skills from the dataset, after which the sub-graph is formed. This step leads us towards the formation of teams with significantly lower communication costs and fewer team members, in real time; a compact sketch of these two ingredients follows.
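The following Python sketch is an illustration only: the data layout, the toy expert-skill values, and the function names are assumptions, not taken from the paper. It shows the two ingredients just described, the Jaccard-distance communication cost of Eq (1) and the horizontal/vertical search-space reduction over a one-hot expert-skill matrix.

```python
import numpy as np

# Toy expert-skill matrix: rows = experts, columns = skills (one-hot encoded,
# 1 = expert has the skill, 0 = otherwise), as in the SSR binarization step.
skills = ["s1", "s2", "s3", "s4", "s5"]
experts = {"x1": [1, 1, 0, 1, 0],
           "x2": [0, 1, 1, 0, 1],
           "x3": [1, 0, 1, 1, 0]}
M = np.array(list(experts.values()))

def jaccard_cc(a, b):
    """Communication cost of Eq (1): 1 - |A ∩ B| / |A ∪ B| on skill sets."""
    inter = np.sum(a & b)
    union = np.sum(a | b)
    return 1.0 - inter / union if union else 1.0

def reduce_space(M, task_cols):
    """Search-space reduction: keep only the task's skill columns
    (horizontal cut) and drop experts with none of them (vertical cut)."""
    sub = M[:, task_cols]
    keep = sub.sum(axis=1) > 0
    return sub[keep], np.flatnonzero(keep)

task = [0, 1, 3]                       # required skills {s1, s2, s4}
sub, kept = reduce_space(M, task)
cost = jaccard_cc(M[0], M[2])          # CC between x1 and x3 -> 0.5
print(kept, cost)
```

Note that reduce_space returns both the reduced one-hot matrix and the indices of the surviving experts, so candidate teams are subsequently searched only within the reduced graph G′, as in the methodology described next.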
The SSR-TF methodology is illustrated in Fig 3. Using a social network graph G and a task T, SSR-TF builds a network in which each expert has at least one skill. Then, all the expert data is converted into binary form for faster execution, and a HashMap is used for linking experts with their skills. One-hot encoding is then applied to filter out those skills/experts which are not required, resulting in a sub-graph G′. SSR-TF then starts on G′ and continues to find all successful combinations of experts with skills. The team's fitness is checked at each iteration with Eqs (1) and (2). SSR-TF continues to create/drop teams until the threshold level is reached or the team with the best fitness value and the required skills is found. Fig 4 shows the SSR-TF algorithm for finding the best team.

Time complexity of the SSR-TF algorithm

The time complexity of the proposed algorithm characterizes its execution time regardless of the hardware, programming language, and compiler used for implementation. This time complexity analysis evaluates how the execution time of the proposed algorithm varies with the input data size. Typically, the time complexity of such an algorithm is denoted by asymptotic notation (O). The proposed algorithm has two main searching criteria, the vertical and horizontal searches. Each search has a complexity of n log n, where n is the number of individuals in the dataset during the vertical search, while n represents the number of searched skills during the horizontal search. In such a case, the searching complexity of the proposed team formation algorithm is 2n log(n). After searching for the required individuals and their skills, individuals are added to a team, which has complexity equal to array insertion, that is, O(n). Finally, the overall time complexity of the proposed team formation algorithm becomes O(2n log(n²)), which is comparatively less than that of the other approaches.

Preliminaries

In order to demonstrate its efficiency, the proposed SSR-TF algorithm was tested on five datasets, i.e., UMP, DBLP, ACM, IMDB, and Bibsonomy. The simulation experiments were performed on an Intel Core i5 processor with 8 GB of RAM, using Java Eclipse software and Microsoft Windows 10. The proposed SSR-TF algorithm was compared with the most recent state-of-the-art metaheuristic Hill-Climbing TF, Jaya-TF [33], and Sine-Cosine-TF [32] algorithms. The selected performance parameters for team formation are Total Communication Cost (TC), CPU time in milliseconds, and the number of experts in a team. For all experiments, the best tuning parameters are used. The datasets and their statistics are given in Table 3, and all the algorithms' parameter settings are given in Table 4.

Universiti Malaysia Pahang (UMP) dataset (D01). The Universiti Malaysia Pahang (UMP) dataset (D01) is a medium-size dataset that contains comprehensive information about 96 academicians with 164 skills related to the computer science field. It was collected by Kamal et al. [32] to find successful collaborating teams within the Faculty of Computing, UMP, to run cost-effective projects. This team formation dataset is one of the cleanest available online [35]. A single instance of the dataset is available in the following manner: "<EMAIL_ADDRESS> = Combinatorial Testing, Computational Intelligence, Artificial Intelligence", normalized using one-hot encoding in SSR as "<EMAIL_ADDRESS> = 1 1 1 0 0 0 0".

Database Systems & Logic Programming (DBLP) dataset (D02).
The DBLP dataset has the largest number of experts, drawn from the Database, Theory, Data-mining, and Artificial Intelligence fields. In this dataset, people having more than one paper indexed on DBLP are selected as experts. The skills of each expert are based on the title of the authored paper, broken down into meaningful words. The dataset is available online [36].

Association for Computing Machinery (ACM) dataset (D03). This is another dataset, collected by Prof. Min-Yen Kan from the National University of Singapore. The dataset was extracted from papers published between 2003 and 2010. The authors of the papers are considered experts, and keywords are considered their unique skills. The dataset can be found online [37].

Internet Movie Database (IMDB) dataset (D04). The dataset (D04) extracted from the Internet Movie Database (IMDB) is denser than the other datasets and can test the scalability of the algorithm being tested [7]. The dataset covers the years 2000 to 2002, and only those actors are considered experts who appeared in at least eight movies during this period. The acting skills of an actor are represented by the genres he can perform. The communication cost of two experts is calculated with Eq (1). The dataset is normalized in the same manner as the other datasets, so that one algorithm can be tested on several datasets. The dataset can be downloaded here [38].

Bibsonomy dataset (D05). The dataset (D05) is a large dataset taken from Bibsonomy, which provides sharing and bookmarking of scientific publications online [21]. The authors of the bookmarked publications are considered experts, and bookmarks are considered their expertise.

Statistical evaluation of the SSR-TF algorithm

The experimental results of the proposed SSR-TF with the Hill-Climbing TF, Jaya-TF, and Sine-Cosine-TF algorithms for each skillset are discussed in the following sub-sections.

SSR-TF and parallel metaheuristics on the D01 dataset. The proposed SSR-TF's efficiency is tested on the organic UMP (D01) dataset against state-of-the-art metaheuristic algorithms, i.e., Hill-Climbing TF, Jaya-TF, and Sine-Cosine-TF. The results of SSR-TF for total communication cost, elapsed time, and experts, over a varying number of skills S = {5, 10, 15, 20}, are given in Tables 5-8. Minimum cost vs. skills, elapsed time vs. skills, and experts vs. skills are given in Figs 5-7 (D01), respectively. For five skills, SSR-TF was not able to find the minimum communication cost on D01; however, its elapsed time was relatively low, as given in Table 5, and the number of experts identified was the same as for Jaya-TF and Sine-Cosine-TF. Nevertheless, as the number of skills was increased, the proposed SSR-TF started showing the best communication cost and CPU time, as given in Tables 5-8.

Table 5. Algorithms' performance on datasets (D01, D02, D03, D04, & D05) for skillset (05). For SSR-TF, the total communication costs on D01-D05 are 2, 0, 1, 0, and 1.39; the total times (in milliseconds) are 4, 30, 43, 07, and 13; and the numbers of experts are 2, 1, 2, 1, and 2, respectively.

SSR-TF and parallel metaheuristics on the D02 dataset. The proposed SSR-TF's efficiency is evaluated on the benchmark DBLP (D02) dataset against state-of-the-art metaheuristic algorithms. The comparison results of SSR-TF are given in Tables 5-8. Minimum cost vs. skills, elapsed time vs. skills, and experts vs. skills are given in Figs 5-7 (D02), respectively. SSR-TF showed a similar communication cost to Jaya-TF, i.e., 0, but its CPU time was relatively low compared to Jaya-TF. However, both algorithms were able to identify a single expert for the same skills.
Again, as the number of skills was increased, SSR-TF started producing better results than the other algorithms.

SSR-TF and parallel metaheuristics on the D03 dataset. The proposed SSR-TF's efficiency is verified on the organic ACM (D03) dataset against state-of-the-art metaheuristic algorithms. As evident from Tables 5-8, the proposed algorithm performed similarly to Jaya-TF and Sine-Cosine-TF. However, as the number of skills increases, SSR-TF begins to generalize well, finding experts with less communication cost and time than the other comparison algorithms. Minimum cost vs. skills, elapsed time vs. skills, and experts vs. skills for SSR-TF and the comparison algorithms are given in Figs 5-7 (D03), respectively.

Non-parametric test analysis

In this paper, the Wilcoxon rank-sum test is used to determine the significance of the communication cost obtained by the proposed SSR-TF over the other algorithms [39]. Wilcoxon tests the hypothesis h₀, that all algorithms perform the same, versus the alternative hypothesis h₁, that at least one algorithm is significantly better than the others. The test is performed by considering the best communication cost obtained by the proposed SSR-TF and the parallel algorithms. The test is conducted on the best solution obtained by each algorithm on each dataset at a significance level of α = 0.05. In Table 9, the positive (+) sign specifies that the proposed algorithm is better than the parallel algorithm, and the negative (-) sign specifies that the proposed algorithm is inferior to the compared one. As shown in Table 9, the proposed SSR-TF algorithm obtains statistically significantly better performance than the other parallel algorithms most of the time.

Threats to validity

The proposed SSR-TF algorithm has been shown to achieve better results than the other considered approaches, but there are still a few drawbacks/threats that are worth attention and should be addressed in the near future. In team formation research, different threats are addressed during the experimentations and evaluations; normally, these threats are classified into internal and external. Depending on the type of research, this study is also not devoid of these threats.

External threats to validity occur when the algorithm's experimental results cannot be generalized to real-world problems. Often, the adopted benchmarks do not represent real-world applications with the same parameters, values, and interaction strength. This threat is mitigated by choosing the most commonly used experimental benchmarks in the literature; these benchmarks are commonly used for practical evaluations and are obtained from real configurable software.

Internal validity threats occur due to factors that directly or indirectly affect the experiments and are out of our control. Some of the threats to internal validity are population size, number of iterations, and the parameter settings of the algorithms. Besides the best results, mean results are used to ensure a robust performance evaluation of each algorithm. The generation time for each algorithm also threatens internal validity: running environments, data structures, implementation languages, and operating environments highly affect the generation time. This threat was eliminated by implementing all algorithms in the same language and operating environment.
The SSR algorithm is tested on clean and medium-sized datasets containing less complex, low-volume instances, which does not reveal the behavior of this approach on high-volume, complex datasets. The algorithm also contains string-to-binary and binary-to-string conversions, which are an additive process beyond the actual working of the algorithm; less complex data transformation methods could replace this dual conversion process.

Conclusions and future works

Team formation (TF) in social networks uses graph search to provide collaboration between experts. This leads us towards forming cost-effective research teams irrespective of the geolocation of the experts and the size of the dataset. Several TF algorithms were proposed in the past decade, but they failed to scale well on large datasets. Therefore, this paper presents a novel TF algorithm for expert team formation, called SSR-TF, based on two metrics, communication cost and graph reduction, that will become a basis for future TF algorithms. The decades-long effort to produce cost-effective teams in social networks with algorithms that converge in polynomial time is successfully advanced with SSR-TF. SSR-TF has efficaciously created social teams of experts and showed its prowess when tested against the state-of-the-art metaheuristic Hill-Climbing TF, Jaya-TF, and Sine-Cosine-TF algorithms. The reduced-graph feature of SSR-TF has enabled the selection of the most appropriate experts with the proper skills to finish a task. Besides offering benefits like appropriate person selection and polynomial time, SSR-TF has opened new future horizons for researchers towards creating teams in a number of ways:

• SSR-TF performance will be enhanced with the introduction of a personal cost for each expert based on years of experience, task/project leader selection based on the number of skills for leading a specific project team, and the identification of backup teams in case the leading team's personnel are missing or unable to finish the task.

• Sometimes, global collaborations require more skills to be handled by the team; therefore, in the future, SSR-TF will be tested on a large number of skills against other metaheuristic algorithms.

• The current COVID-19 pandemic and the death toll it caused lead us to believe that we should be prepared for any future pandemics. This preparedness can be ensured by creating an expert dataset of virology and other diseases, so that when an outbreak occurs, TF can be applied to gather brilliant minds from all over the globe and solve the problem effectively.
7,455
2021-12-02T00:00:00.000
[ "Computer Science" ]
Autonomous Critical Help by a Robotic Assistant in the Field of Cultural Heritage: A New Challenge for Evolving Human-Robot Interaction: Over the years, cultural heritage (CH) sites (e.g., museums) have increasingly focused on providing personalized services to different users, with the main goal of adapting those services to the visitors' personal traits, goals, and interests. In this work, we propose a computational cognitive model that provides an artificial agent (e.g., robot, virtual assistant) with the capability to personalize a museum visit to the goals and interests of the user who intends to visit the museum, while taking into account the goals and interests of the museum curators that have designed the exhibition. In particular, we introduce and analyze a special type of help (critical help) that leads to a substantial change in the user's request, with the objective of taking into account the needs that the same user cannot or has not been able to assess. The computational model has been implemented by exploiting the multi-agent oriented programming (MAOP) framework JaCaMo, which integrates three different multi-agent programming levels. We provide the results of a pilot study that we conducted in order to test the potential of the computational model. The experiment was conducted with 26 real participants who interacted with the humanoid robot Nao, widely used in Human-Robot interaction (HRI) scenarios.
Introduction
Information and communication technologies (ICT) have been well represented over the years, and today they are a fundamental means of support for cultural heritage (CH) documentation, interpretation, recreation, and dissemination. Digitization and ICT applications have been recognized as effective support for cultural heritage preservation, as well as for the production of a significant number of additional resources for the management of cultural heritage itself. For example, the Europeana project [1] has created new scientific and public access to cultural heritage resources. Archivists today have a suite of tools for creating digital copies, ranging from scanning and photography to 3D volumetric photogrammetry. Many of these tools have emerged as fundamental apparatuses for material-based CH archives. ArCo [2] is the Italian cultural heritage knowledge graph, consisting of a network of seven vocabularies and 169 million triples about 820 thousand cultural entities. It collects and validates the catalog records of (ideally) all Italian cultural heritage properties (excluding libraries and archives). A number of research projects have also made notable progress on intangible assets. For instance, the i-Treasures project [3] focuses on organizing intangible know-how. The aim of the project is to carry out a series of research activities to digitize CH assets, covering traditional dances, folk singing, craftsmanship, and contemporary music composition. Similarly, the Terpsichore project [4] aims to integrate ICT strategies with storytelling to advance the digitization of CH content related to traditional dances. Recent studies in cultural computing have triggered a growing number of algorithmic advancements to facilitate CH data usage [5,6]. In parallel to these approaches, innovative paradigms and technologies for cultural heritage exploitation have been proposed [7]. These technologies enable user-centred presentation and make cultural heritage digitally accessible by providing the possibility of a user experience when physical access is constrained.
For example, virtual reality (VR) is becoming an increasingly important tool for the research, communication, and popularization of cultural heritage [8]. A great number of 3D interactive reconstructions of artifacts, monuments, and entire sites have been realized, which meet the approval of both specialists and the public at large [9].
Related Work
The last two decades have seen many efforts to deploy social robots in museum settings [10][11][12][13][14]. Social robots expose a wide range of capabilities that allow them to interact with and assist humans in a natural manner. This makes them suitable for museum settings, as they can greet, educate, or provide guides to visitors. Early and remarkable work in this field includes the autonomous Rhino robot, a mobile tour guide robot able to navigate in a museum and play pre-recorded descriptions of the exhibitions. The robot has been deployed in real museums with the effect of increasing the overall attendance of the museum by at least 50%. Other important pioneering work [15] has led to the development of various autonomous and mobile robots with the purposes of greeting visitors, giving guides, and showing additional information (e.g., videos) that brings exhibitions to life. Lastly, a similar robot, Minerva [11], provided guides to visitors; unlike Rhino, Minerva was equipped with a face and could display emotions using changes in vocal tonality and facial expressions. Despite this pioneering work, these robots were much more focused on aspects related to motion and were designed to guide visitors inside the museum without investigating the real tastes of the visitors. In fact, speaking about Minerva, for example, when questioned, 36.9% of 63 people perceived Minerva as having an intelligence similar to humans. However, 69.8% did not perceive Minerva to be alive, suggesting that its social interactivity capabilities were still limited. Over the years, the purpose of several museums has shifted from providing static information about the resources they handle (e.g., collections of artworks) to providing a much wider user experience. From this perspective, different Human-Robot interaction (HRI) approaches have been proposed in order to design intelligent robots able to properly interact with users in museums [13,14,16]. For example, the CiceRobot project [13] aimed to develop a robotic tour guide whose behavior was based on a cognitive architecture integrating perception, self-perception, planning, and Human-Robot interaction. In addition to navigating properly inside a museum, the robot was able to explain the contents of display windows to visitors and enabled them to ask queries on topics related to the objects in the windows themselves. Vasquez and Matia [16] proposed a research project with the main goal of developing a smart social robot showing sufficient intelligence to work as a tour guide in different environments. A fuzzy emotion system that controls the face and the voice modules forms part of the architecture underlying the robot's behavior for assisting interactions. Other works leverage robot and user gaze in order to establish a much deeper interaction with visitors. Recently developed museum robots also try to achieve personalization [17] of the user experience. For example, Tamakasa et al. [18] developed an autonomous human-like guide robot for a science museum.
The robot identifies individuals, estimates the exhibits at which visitors are looking, and proactively approaches them to provide explanations with gaze autonomously, using their approach called speak-and-retreat interaction. The robot also performs relation-building behaviors such as greeting visitors by their names and expressing a friendlier attitude toward repeat visitors. However, although these approaches allow the development of natural and human-like interaction, they do not take into account the mental states of the interacting user. Therefore, they do not achieve real personalization [19], which could possibly be based on complex models that describe the user, starting from features that can be investigated by the robot during the interaction. Personalization of cultural heritage information requires a system that is able to model the user (e.g., interest, knowledge, and other personal characteristics) as well as contextual aspects, select the most appropriate content, and deliver it in the most suitable way. In addition to the ability to consider the user's needs, an intelligent system should also take into account the interests, goals, and plans of those who manage the cultural heritage and allow its usage. These plans, goals, and interests are, in general, implicit in the restrictions and mandatory choices that the museum makes available to the users. However, they can be adapted and personalized for each user. In practice, on the basis of the mental attitudes attributed to the user and the constraints or needs attributable to the museum curators, a mediation system between these two subjects (e.g., users and museum curators) can play a role in museum visit customization, in order to best satisfy both parties. For example, the mediation system should not only personalize a visit based on the user's artistic interests and other characteristics declared or attributable to them (e.g., time available, level of interest, and so on), but it should also consider all those features related to the interests, goals, and plans that the museum curators designed for a museum tour. Most of the time (this is the approach followed in this paper), the goals/interests of the museum curators are oriented toward the satisfaction of the user, not in contrast with it (e.g., the intent to guide the user to visit a really relevant collection that the user did not know about and cannot assess the value of). However, a negotiation process is necessary; in our case, this process takes place through the role of the mediator (e.g., a robot). In any case, it is the user who, at the end of the visit, declares their satisfaction with the mediation process realized by the robot. A museum potentially has a huge amount of digital information to present, in addition to the artworks that can be physically visited. Intelligent systems have to be able to handle this amount of information in order to adapt the level of detail in describing a specific artwork, not only with respect to the level of accuracy required by the user but also on the basis of the level of accuracy that the museum curators believe is necessary to understand an artwork.
Contribution
In this work, we propose a computational cognitive model that provides an artificial agent (e.g., robot, virtual assistant) with the capability of personalizing a museum tour with respect to the goals and interests of the users who intend to visit the museum, also taking into account the goals and interests of the museum curators that have designed the exhibition.
In particular, the computational model is able to:
• Investigate the artistic interests of the user and model the user with respect to those interests by attributing to them specific mental states (beliefs, goals, plans) and creating a complex user model;
• Model the beliefs, goals, and plans of the museum curators;
• Select the most suitable museum tour as the result of a negotiation, internal to the agent, between the represented mental states of the user and the represented mental states of the exhibition curators;
• Investigate different dimensions of the user's satisfaction with respect to the tour proposed by the intelligent agent.
We provide the results of a Human-Robot interaction pilot study that we designed in order to test the capabilities of the computational model. We recruited 26 real participants who interacted with the humanoid robot Nao [20], widely used in Human-Robot interaction scenarios. The robot plays the role of a museum assistant in a virtual museum, and it has the goal of providing a museum exhibition to the user. At the end of each interaction, the robot proposes a short survey to the user, with the aim of investigating different dimensions of their satisfaction with respect to the presented exhibition. The computational model has been implemented by exploiting the multi-agent oriented programming (MAOP) framework JaCaMo [21], which integrates three different multi-agent programming levels: agent-oriented (AOP), environment-oriented (EOP), and organization-oriented programming (OOP). In conclusion, the main contribution of our work consists of investigating the possibility of offering a kind of help (provided by an artificial system, such as a robot) that does not necessarily correspond to the explicit and declared request of the user. This kind of collaboration, which tries to offer an answer that protects the interests and goals of the user that the same user is not always able to perceive, represents a novelty in the panorama of Human-Robot collaboration. The novelty of this work lies in the fact that, in our model, the system solicits a request from the user but then analyzes it critically, also taking into account the actual collections of the museum and how these can best satisfy the profile that the system has built of the same user. The satisfactory results we have witnessed show how this experiment, albeit preliminary, goes in the right direction. The paper is organized as follows: Section 2 describes the background underlying our approach; Section 3 focuses on the description of the cognitive model; Sections 4 and 5 are dedicated to the experiment and its results; finally, Sections 6 and 7 are dedicated to conclusions and future works.
Background
The human capability to attribute mental representations and states to AI agents becomes crucial in the context of Human-Agent Cooperation [22], where it is desirable that the role of such agents is not that of a passive executor but that of an active collaborator. Let us consider a collaborative scenario in which a human X and an artificial agent Y share the same plan. In this context, X relies on Y to realize some part of their common plan or of X's plan (task delegation); on its side, Y decides to help X achieve some of their goals by taking over some role/action of X's plan and achieving some goal (task adoption). Now, in order to do something for X, Y has to understand X's goals and beliefs, for example, X's expectations about Y's behavior.
Cooperation and, consequently, task delegation/adoption imply more than simple obedience to orders or simple execution of a prescribed action [23]. From the artificial agent's point of view, delegation and adoption distinguish a collaborator from a simple tool and presuppose intelligence and autonomy [24]. In their complex sense, cooperation and help are not just order/task execution; they require more autonomy and even initiative. Let us focus on a deep level of cooperation, where agent Y can adopt a task delegated by X at different levels of effective help. The different levels of adoption can be distinguished, according to [22]:
• Sub help: agent Y satisfies a sub-part of the delegated world-state (satisfying just a sub-goal of agent X);
• Literal help: agent Y adopts exactly what has been delegated by agent X;
• Over help: agent Y goes beyond what has been delegated by agent X without changing X's plan (but including it within a hierarchically superior plan);
• Critical over help: agent Y realizes an over help and, in addition, modifies the original plan/action (included in the new meta-plan);
• Critical help: agent Y satisfies the relevant results of the requested plan/action (the goal), but modifies that plan/action;
• Critical-sub help: agent Y realizes a sub help and, in addition, modifies the (sub) plan/action.
The theory of delegation and adoption and the more general theory of adjustable social autonomy [24] represent the core theoretical background underlying the design of the computational cognitive model proposed in this work. Two additional theoretical tools applied in the field of HAI have supported the design process: theory of mind (ToM) and BDI agent modeling. Theory of mind [25] can be defined as the ability of an agent (human or artificial) to ascribe specific mental states to other agents, and to take them into account when making decisions. Modeling other agents is one of the most important abilities learned by humans when they cooperate with each other. Humans have a strong predisposition to anthropomorphize anything that surrounds them and to evaluate or predict the behaviors of other humans on the basis of a strong ToM of their interlocutors, with the result of fostering intelligent collaboration. However, the recent, though increasing, introduction of intelligent systems into society has not yet allowed people (mainly non-specialists) to have a ToM of such systems based on correct assumptions. Providing artificial agents with the capability to build complex models of the interlocutor's mental states, and to adapt their decisions on the basis of these models, represents a crucial point for promoting intelligent and trustworthy collaboration. BDI agent modeling [26] is one of the most popular models in agent theory [27]. Originally inspired by the theory of human practical reasoning developed by Michael Bratman [28], the BDI model focuses on the role of intentions in reasoning and allows the characterization of agents from a human-like point of view. Very briefly, in the BDI model, the agent has beliefs, i.e., information representing what it perceives in the environment and what other agents communicate, and desires, i.e., states of the world that the agent means to accomplish. The agent deliberates on its desires and decides to commit to one of them: committed desires become intentions. To satisfy its intentions, it executes plans in the form of a course of action or sub-goals to achieve.
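As a minimal illustration of the taxonomy and the BDI cycle just described, consider the following Python sketch; the structures are toy assumptions for exposition, not the JaCaMo API used later in the paper:

```python
# Toy sketch of the adoption levels and a BDI-style deliberation cycle
# described above; names and structures are hypothetical.
from enum import Enum, auto

class HelpLevel(Enum):
    SUB = auto()            # satisfy only a sub-goal of the delegated task
    LITERAL = auto()        # adopt exactly what was delegated
    OVER = auto()           # go beyond the request without changing the plan
    CRITICAL_OVER = auto()  # over help plus modification of the original plan
    CRITICAL = auto()       # satisfy the goal but modify the plan/action
    CRITICAL_SUB = auto()   # sub help plus modification of the (sub)plan

class BDIAgent:
    def __init__(self):
        self.beliefs = set()     # what the agent perceives or is told
        self.desires = set()     # world states it would like to achieve
        self.intentions = []     # desires it has committed to

    def deliberate(self):
        # Commit to any desire not yet adopted (toy commitment rule).
        for d in self.desires:
            if d not in self.intentions:
                self.intentions.append(d)

    def step(self, percepts):
        self.beliefs |= set(percepts)  # react to environment changes
        self.deliberate()
        return self.intentions         # plans would be executed here
```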
The behavior of the agent is thus described or predicted by what it has committed to carry out. An important feature of BDI agents is the ability to react to changes in their environment as soon as possible while keeping their proactive behavior.
An Overview of the Computational Cognitive Model
The proposed computational cognitive model (Figure 1) provides a cognitive artificial agent (with its own beliefs, goals, intentions, and so on) with the capability of personalizing a museum tour on the basis of the mental states of the user who intends to visit the museum, while also taking into account the mental states of the museum curators who have designed the exhibition. The final tour recommended is the result of the agent's internal process of negotiation between the mental states of the user and those of the museum curators. The agent's Beliefs Base includes:
• The mental states of the user; that is, the beliefs, goals, and plans that the agent attributes to the user thanks to its capability of having a ToM of the user;
• The mental states of other agents involved in the scenario; in this case, the museum curators, that is, those who designed, realized, and maintain the museum exhibition;
• General beliefs, which correspond to the agent's knowledge.
The computational model provides the agent with the tools to interact with the user in order to map, into its Beliefs Base, the information it considers relevant for adapting the museum visit to the user. The agent establishes an initial interaction with the user, with the goal of profiling them by investigating their artistic interests (Artistic User Profiling). Through voice interaction, supported by interactive tools (GUI), the agent is able to extract information and collect it into a user profile P_U = <p_F, P_D, Acc_u>, defined as a tuple of features encoding:
• p_F: the user's favorite artistic period;
• P_D: the artistic periods in which the user has no interest;
• Acc_u: the level of accuracy with which the user intends to view the material proposed during the visit to the museum.
In addition to the user, the cognitive model allows the agent to model the mental states of other agents involved in the scenario. In this case, the agent is able to model in its Beliefs Base some beliefs, goals, and plans ascribed to the museum curators who designed the entire exhibition. Unlike the user model, which is created at run-time based on the user's profile, the exhibition curators' model is described beforehand in the agent's Beliefs Base. While it represents a priori knowledge of the agent, this model can be modified by the agent itself through interaction with the museum curators themselves. After investigating the user's artistic interests, profiling them, and attributing mental states consistent with the profile created, the agent has to select a museum visit to propose to the user. The cognitive model defines multiple heuristics that can be exploited by the agent to identify the most suitable museum tour; these heuristics implement different internal negotiation processes that the agent triggers with the aim of mediating the choice of the museum visit, considering the mental states of the user and those of the curators of the exhibition (Negotiation strategies). The selection of the most suitable heuristic depends on the mental states that are modeled in the agent's Beliefs Base (Strategy selection). In Section 4, we will describe the heuristic exploited by the agent in the pilot study.
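A minimal sketch of the user profile tuple P_U described above, assuming a plain Python encoding (field names are hypothetical):

```python
# Hypothetical encoding of the user profile P_U = <p_F, P_D, Acc_u>
# built during the Artistic User Profiling phase.
from dataclasses import dataclass, field

@dataclass
class UserProfile:
    favorite_period: str                                       # p_F
    discarded_periods: set[str] = field(default_factory=set)   # P_D
    accuracy: str = "high"                                     # Acc_u

profile = UserProfile(
    favorite_period="Impressionism",
    discarded_periods={"Greek Art", "Baroque"},
    accuracy="high",
)
```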
The Pilot Study
This section describes a Human-Robot interaction (HRI) pilot study that we designed in order to test the capabilities of the computational model. We recruited 26 participants who interacted with the Nao robot. The robot plays the role of a museum assistant in a virtual museum, and it has the goal of providing a museum exhibition to the user. During the interaction, the robot collects information for profiling the user and then acts as an assistant during the museum visit, offering the possibility of listening to the descriptions of the artworks, which it reads aloud. At the end of each tour, the robot proposes a short survey to the user, with the aim of investigating different dimensions of their satisfaction with respect to the recommended exhibition. The robot helps users to visit the part of the museum that is most appropriate to their artistic interests and that represents a mediation between these interests and those of the museum curators. The museum tour resulting from the mediation process can be suited to the artistic interests explicitly declared by the user, or it could be slightly different from the declared user interest. In the first case, the robot provides literal help to the user; in the second case, it provides critical help. In the case where the robot provides critical help, it tries to satisfy the user by leveraging implicit assumptions based on the user's explicitly declared artistic interests.
Experimental Design
The museum that the user explores is organized into multiple thematic tours (Figure 2), each containing artworks (Figure 3) that belong to the same artistic period (e.g., Impressionism, Surrealism, Baroque, Greek Art, and so on). The museum is designed in such a way that it covers the entire body of the history of art. As a reference for the classification of the history of art into historical periods, we referred to the work of one of the most important art historians of the 20th century, Giulio Carlo Argan [29]. The categorization of the history of art periods follows the schema shown in Figure 4. This categorization allows us to establish potential assumptions believed by the users: artistic periods belonging to the same category are more homogeneous and, therefore, match a user's preferences better than artistic periods of other categories. For example, a user who indicates "Impressionism" as their preferred artistic period will probably be more inclined toward "modern art" than "ancient art". The final model that the agent attributes to the user will be a collection of beliefs, goals, etc., that the agent infers on the basis of the features perceived during the profiling phase. The museum is organized into thematic tours. Each thematic tour is described by three attributes: relevance, accuracy, and category.
• The relevance of an artistic period is defined on the basis of the originality of the artworks that compose it and the impact they had on the field of art history.
• The accuracy, on the other hand, specifies the detail in the description of each artwork present in a thematic room.
• Each thematic tour (artistic period) belongs to a category that collects different artistic periods; for example, the "Impressionism" tour belongs to the same category as the "Surrealism" and "Cubism" tours, which are in the more general class named "modern art". This is replicated for any artistic period.
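A minimal encoding of the three tour attributes just listed (names are illustrative; the high/medium/low levels match the experimental setup described next):

```python
# Hypothetical encoding of a thematic tour with the three attributes
# described above: relevance, accuracy, and category.
from dataclasses import dataclass

@dataclass(frozen=True)
class Tour:
    period: str      # artistic period, e.g. "Impressionism"
    relevance: str   # "high" | "medium" | "low" (curators' judgment)
    accuracy: str    # "high" | "low" (detail of artwork descriptions)
    category: str    # e.g. "modern art", "ancient art"

tours = [
    Tour("Impressionism", relevance="medium", accuracy="high", category="modern art"),
    Tour("Surrealism", relevance="high", accuracy="high", category="modern art"),
    Tour("Greek Art", relevance="high", accuracy="low", category="ancient art"),
]
```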
For the experiment, we defined three levels of relevance (high, medium, low) and two levels of accuracy (high, low). The user can explore the museum room by choosing the artworks they wish, and they can leave the museum at any time.
The Heuristic for the Tour Selection
Algorithm 1 describes the heuristic exploited by the agent in order to select the most appropriate section to visit. The algorithm takes as input the user's preferred artistic period, the periods of non-interest, and the level of accuracy chosen by the user. After obtaining the values of relevance, accuracy, and the category of the tour corresponding to the user's preferred artistic period, the algorithm checks multiple conditions. The first condition (Condition C_1) requires verifying whether the artistic period requested by the user has maximum relevance from the museum curators' point of view and, in that case, whether the accuracy of its description corresponds with that chosen by the user. If these two conditions are true, then the robot will recommend the visit of the corresponding tour. If only the accuracy condition is not satisfied, however, the algorithm chooses the period required by the user and presents it with a level of accuracy different from that indicated; the accuracy will be the one believed appropriate by the museum curators (Condition C_2). If condition C_2 is not verified either, then the algorithm investigates the tours corresponding to the artistic periods that the user has not discarded (P_M). If there is a tour with a high level of relevance, which belongs to the same category as the user's preferred artistic period and which requires a level of accuracy equal to that chosen by the user, then the robot will recommend the visit of the corresponding tour (Condition C_3). If this condition cannot be verified either, then the algorithm will try to select a tour with a high level of relevance, which belongs to the same category as the user's preferred artistic period, regardless of the level of accuracy it requires; the accuracy will be the one believed appropriate by the museum curators (Condition C_4). Condition C_5 instead applies when, having not found any tour to recommend in the class that contained the required artistic period, there is a tour that corresponds to an artistic period belonging to the category next to or previous to that of the user's preferred artistic period and that has a level of relevance immediately following that of the user's preferred artistic period. Finally, if even C_5 is not respected, then the algorithm selects a random tour among those corresponding to the artistic periods not discarded by the user (Condition C_6). A schematic code sketch of this selection logic is given below.
Algorithm 1. Artistic Period Selection Algorithm. Input: p_F, P_D, Acc_u. Procedure Heuristic for Selection (the listing implements conditions C_1-C_6 described above).
Experimental Procedure
A total of 26 participants were recruited for this pilot study. The sample was composed of 6 females and 20 males, aged between 25 and 75 years old. The subjects were not necessarily robotics experts and did not have to deal with robots in daily life. Each participant carried out an entire interaction with the robot (a trial), aiming to take part in a tour of the virtual museum corresponding to a specific artistic period, and aware of the fact that the tour would be chosen by the robot that manages the virtual museum, which selects the most suitable one. Each trial develops in the following phases:
1. Starting interaction: the robot introduces itself to the user, describing its role and the virtual museum it manages.
2. User artistic profiling: the robot proposes a series of questions to the user, which aim to investigate their artistic interests in terms of their favorite artistic periods and the artistic periods of no interest. In this phase, the interaction is supported by a GUI through which the user can express their artistic preferences, and the robot can collect useful data to profile the user. In addition to defining the artistic periods of interest and non-interest, the robot asks the user with what degree of accuracy they intend to visit the section.
3. Tour visit: once the user profile has been established, the robot exploits the heuristic defined in Section 4.2 to select the tour on behalf of the user. Once the selection has been made, the robot activates the corresponding tour in the virtual museum and leaves control to the user, who can visit the room, selecting the artworks inside.
4. End of museum tour: the user can leave the recommended tour and, therefore, the museum. Once this happens, the robot returns to interact with the user, asking them questions. These questions, which belong to a short survey, are used to investigate how satisfied the user is with the visit.
We decided to adopt a five-level scale to encode the user responses, where value 1 is the worst case and 5 is the best one. In particular, the survey questions that the user had to answer are the following:
• Q1: How satisfied were you with the duration of the visit?
• Q2: How satisfied were you with the quality of the artworks?
• Q3: How satisfied were you with the number of the artworks?
• Q4: How surprised were you by the artistic period recommended by the robot compared to the artistic period initially chosen by you?
• Q5: How satisfied are you with the robot's recommendation given the artistic period initially chosen by you?
Results
The pilot study has been designed with the goal of answering the following research questions (RQ):
• RQ1: How risky/acceptable is critical help compared to literal help? Does the proposed heuristic help to make this help much more acceptable?
• RQ2: Given the risks that critical help determines, in what situations and to what extent can critical help be useful?
Here we report the results obtained in the pilot study. We divided the results into users who received critical help from the agent (the preferred artistic period chosen by the user does not match the tour recommended by the robot), summarized in Table 1, and those who received literal help (the preferred artistic period selected by the user coincides with the tour recommended by the robot), summarized in Table 2. We can observe that 15 users received critical help, while 11 received literal help. We are interested in investigating the answers to questions Q4 and Q5; these questions are designed to help understand the impact of the robot's ability to propose to the user a tour different from what they expected. In this experiment, questions Q1, Q2, and Q3 are not deeply analyzed; they have been asked to contextualize the user and to ensure the user could focus on specific questions before answering Q4 and Q5. In this way, we try to draw the user's attention to the contents of the tour and, therefore, focus on these and not only on the quality of the interaction with the robot (and on the modes of critical or literal help).
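To make the tour-selection heuristic of Section 4.2 concrete, here is a schematic Python sketch of conditions C1-C6; the tour records and the simplification of condition C5 are hypothetical, and this is not the authors' implementation:

```python
# Schematic sketch of the tour-selection heuristic (conditions C1-C6);
# tour records and field names are hypothetical simplifications.
import random

def select_tour(tours, p_f, discarded, acc_u):
    by_period = {t["period"]: t for t in tours}
    fav = by_period[p_f]
    candidates = [t for t in tours if t["period"] not in discarded]
    same_cat = [t for t in candidates
                if t["category"] == fav["category"] and t["relevance"] == "high"]
    # C1: favorite period has maximum relevance and matching accuracy.
    if fav["relevance"] == "high" and fav["accuracy"] == acc_u:
        return fav
    # C2: favorite period, presented with the curators' accuracy.
    if fav["relevance"] == "high":
        return fav
    # C3: high-relevance tour in the same category, matching accuracy.
    for t in same_cat:
        if t["accuracy"] == acc_u:
            return t
    # C4: high-relevance tour in the same category, curators' accuracy.
    if same_cat:
        return same_cat[0]
    # C5: adjacent category with the next-best relevance (simplified here
    # to any non-discarded tour of at least medium relevance).
    near = [t for t in candidates if t["relevance"] in ("high", "medium")]
    if near:
        return near[0]
    # C6: random tour among the periods not discarded by the user.
    return random.choice(candidates) if candidates else None
```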
Table 1. This table reports the answers to the questions in the survey proposed by the robot after it provides critical help to the user. In these cases, the robot recommends a tour slightly different from the artistic period that the user indicated as their preference in the history of art.
In order to answer RQ1, we ran an independent-samples t-test. From the parametric analysis of the answers to question Q5, reported in Table 3, we observed that users who received a tour recommendation consistent with the initial choice of their preferred artistic period (literal help) show, on average, a higher level of satisfaction than the users to whom the robot proposed a tour referring to an artistic period different from the one initially chosen. This demonstrates how critical help implies the risk of leaving the user, at least partially, dissatisfied because the robot expressly violated their requests. However, although the difference between the averages in the two cases is significant (D = 1.36), the mean satisfaction value for critical help (M = 3) shows that this type of help does not produce a low level of satisfaction, especially considering that the robot provided no justification for contradicting the user's requests. Indeed, value 3 on the scale used for the survey corresponds to a medium level of satisfaction. We recall that the heuristic for the tour selection is designed so that if the robot does not find a tour that matches the artistic period chosen by the user, it tries to recommend a tour corresponding to an artistic period belonging to the same category as the one selected by the user (e.g., Impressionism, Surrealism, and Romanticism all belong to modern art).
Table 3. Independent-samples t-test conducted in order to answer RQ1. Note that there is a significant difference between the means of the two groups (p = 0.0103).
Much more interesting is the analysis regarding critical help. To answer RQ2, we focus only on the group of users who received critical help (Table 1). Figure 5 shows how, among the 15 participants who received critical help, only 4 evaluated the tour recommended by the robot as unsatisfactory, while 4 users evaluated it with a medium satisfaction value, 6 users evaluated it as satisfying, and 1 of them evaluated it as strongly satisfying. A total of 73.3% of the participants who received critical help evaluated it positively. This result is particularly relevant if we consider that the visitors did not have any prior notice of the possibility that their request could be changed by modifying the artistic period chosen by them. As we have seen, this change is justified by the will of the museum curators to offer visits to highly relevant artistic periods (given the collection owned by the museum, the artistic period offered has greater relevance than the one chosen by the visitor) and thus to favor the goals of the user. However, this information is not communicated to the visitor, and it is difficult for them to deduce it directly. Nevertheless, the user's satisfaction is particularly high. The surprise effect (encoded by the answer to Q4) confirms this unexpected choice of the robot in the face of a different explicit request. If this change could be explained, the number of those who gave a negative judgment about the robot's suggestion would probably drop further. Surely, a lot also depends on the collections owned by the museum and their value or presentation, as well as the artistic flair of the user.
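For reference, the independent-samples t-test reported above can be reproduced schematically with SciPy; the 1-5 score arrays below are hypothetical placeholders, not the study's data:

```python
# Independent-samples t-test comparing Q5 satisfaction for literal vs.
# critical help; the scores below are hypothetical placeholders.
from scipy.stats import ttest_ind

literal_q5 = [5, 4, 5, 4, 5, 4, 4, 5, 5, 4, 5]                 # 11 users
critical_q5 = [2, 3, 4, 3, 2, 4, 3, 4, 2, 3, 4, 2, 3, 4, 5]    # 15 users

stat, p_value = ttest_ind(literal_q5, critical_q5)
diff = sum(literal_q5) / len(literal_q5) - sum(critical_q5) / len(critical_q5)
print(f"mean difference D = {diff:.2f}, p = {p_value:.4f}")  # significant if p < 0.05
```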
In any case, the awareness that a choice made by the robot is closer to the user's artistic taste would certainly play a positive role in the final satisfaction of the user.
Experiment Limitations
Here we discuss some limitations of this pilot study. First of all, we can observe that the number of users considered in the pilot study is low, and this can be a limitation. In any case, despite the low number of users, some of the results obtained are statistically significant. Given this, in future work we will consider a larger sample than the one used in this pilot study. A further limitation relates to the fact that we did not consider the artistic habits and expertise of the users involved in the experiment. We have not investigated this variable, nor have we made the robot investigate it. This can be considered a confounding variable that represents a bias in our pilot study. Another possible bias present in the pilot study relates to the fact that we did not consider the participants' prior interaction with robots, their comfort with robots, or their willingness to interact with and accept the introduction of robots in society. These variables can have an impact on how participants perceive the robots they interact with. These influences are often hard to control and can be a source of contamination that influences the results. We tried to mitigate this bias by constructing a questionnaire made up of multiple questions, the first of which are not directly used to investigate the impact of the robot's behavior on user satisfaction. Indeed, as mentioned in Section 5, questions Q1, Q2, and Q3 are not deeply analyzed in the experiment; they have been asked to contextualize the user and to ensure the user could focus on specific questions before answering Q4 and Q5. In this way, we try to draw the user's attention to the contents of the tour and, therefore, focus on these and not only on the quality of the interaction with the robot. Because of the preliminary nature of the study, we built the questionnaire with ad-hoc questions, which do not refer to standardized tools. This can represent a limitation in the comparison of the results with other similar works. With this preliminary work, we wanted to investigate the impact of a particular type of help that a robot can provide to a user, one which can lead to a substantial change in the user's request with the objective of taking into account needs that the same user cannot or has not been able to assess. The results show some significance, and we will try to minimize the bias by following much more standard methodological approaches [30][31][32].
Conclusions
In this paper, we present a computational cognitive model that provides an artificial agent with the capability of personalizing a museum tour with respect to the goals and interests of the users who intend to visit the museum. The model not only considers the mental states of the user related to their artistic interests, but also takes into account the constraints and goals of the curators who designed the exhibitions hosted by the museum. In this way, the artificial agent assumes the role of a mediator between the user and the curators, with the goal of offering an experience that is as satisfying as possible for the user.
The negotiation process that emerges between the user's mental states and the constraints/goals of the exhibition curators can lead the artificial agent to suggest, in some cases, a museum tour that is very close to the user's artistic interests (literal help); in other cases, the agent can suggest a tour that diverges from the user's more explicit interests, but which always tries to satisfy the interests/goals that, although not explicitly declared, may be attributable to them (critical help). This form of help, based on the consolidated theory of Adjustable Social Autonomy [24], has the main goal of keeping the level of user satisfaction high, making a choice that is as suitable as possible for the user but which also takes into account constraints that could otherwise determine low levels of user satisfaction. Naturally, changing the user's requests without negotiation involves the risk of dissatisfying the user. However, it remains useful to evaluate how an alternative choice by the robot, made to protect the user's goals/interests, can be accepted by the user. We conducted a Human-Robot interaction pilot study with 26 real participants in order to investigate the potential of the cognitive model. The participants interacted with the humanoid robot Nao, which played the role of a museum assistant in a virtual museum and had the goal of providing a museum exhibition to the user. At the end of each interaction, the robot proposed a short survey to the user, with the aim of investigating different dimensions of their satisfaction with respect to the presented exhibition. The exploratory study has shown promising results. In fact, although literal help proved more satisfactory for users than critical help, in most cases where users received critical help they positively evaluated the museum tour recommended by the robot. This result is particularly relevant by virtue of the fact that users did not know the reasons that led to a choice different from the one they expected. Despite this, even though they were surprised by the recommended tour, they maintained high levels of satisfaction after the visit to the exhibition.
Future Works
First of all, our goal is to follow up on this pilot study in order to systematize the preliminary results obtained and to give consistency to the research questions we investigated. Another relevant line of future work will focus on explainability. In particular, we want to design other experiments in order to evaluate different dimensions of user satisfaction every time the robot provides an explanation of the reasons that led it to recommend that specific tour to the user. We are convinced that providing an explanation of the reasons that led the robot, for example, to suggest a museum tour different from the one the user expects has a decisive impact on the user's acceptance of critical help, which tries to satisfy the results requested by the user but adapts the request to a context that may be unfavorable compared to the initial request. Finally, we intend to extend the computational model by integrating other levels of help, as provided by the delegation and adoption theory, and to test their impact through other HRI experiments in real cultural heritage scenarios. Author Contributions: Conceptualization, methodology, validation, formal analysis, investigation, resources, data curation, writing-review and editing: F.C. and R.F.; software, writing-original draft preparation, visualization: F.C.
All authors have read and agreed to the published version of the manuscript.
9,588.6
2022-08-17T00:00:00.000
[ "Computer Science" ]
Viability of selected agro-waste as case hardening materials for cutting tools – A review. Machining is an indispensable part of production technology, with cutting tools playing key roles in its operations. The demand for more efficient cutting tools increases continuously with technology. Most of the cutting tools in use are imported, and the cost of replacement is high. This problem has resulted in a series of studies into the development of cutting tools from indigenous materials, particularly agro-wastes. This study reviews the various tool development approaches using various agro-waste types. The challenges and recorded successes, as well as gaps in knowledge, are also identified. In summary, the use of cutting tools developed from steel types that have been case-hardened with agro-wastes is a viable alternative that can be explored for use in machining technology.
Introduction
Multiple-point cutting tools are mechanical tools that are used to shape materials by eliminating extra material from the workpiece. The tool's cutting blades are positioned such that they make simultaneous contact with the workpiece, resulting in many chips in one pass. Multiple-point cutting tool design can vary based on the application, the material being machined, and the intended output [2]. Multiple-point cutting tools are a crucial part of the manufacturing sector and are used to shape, cut, and machine a wide range of materials. These tools have many cutting blades or points that operate in unison to create the required shape or surface quality. The value of multiple-point cutting tools in the manufacturing industry cannot be overstated. They are used to create parts and products with high precision, accuracy, and quality, which are crucial in many sectors such as aerospace, automotive, medical, and defense [3]. Multiple-point cutting tools play a vital part in modern machining operations, providing efficient and precise material removal from workpieces. These specialized tools are designed with multiple cutting edges arranged in a specific pattern, allowing for improved efficiency, enhanced cutting performance, and extended tool life [4]. The use of multiple cutting edges distributes the cutting load, reducing the strain on individual edges and enabling faster material removal rates. This in-depth look at multiple-point cutting tools will examine their various types, applications, and advantages [5]. Multiple-point cutting tools have been used to make complicated components with great precision and accuracy in sectors such as automotive, aerospace, and medicine. These tools are favored over single-point cutting tools because they remove material more quickly, resulting in higher productivity and reduced machining time. Furthermore, multiple-point cutting tools feature many edges that may be resharpened or changed, improving their longevity and minimizing the need for frequent tool replacements [6]. One essential advantage of multiple-point cutting tools is their capacity to improve productivity. By having multiple cutting edges, these tools can engage with the workpiece simultaneously, resulting in higher material removal rates and reduced machining times [7]. However, a few challenges with multiple-point cutting tools include heat production during machining (which can result in premature tool wear and failure), difficulty of chip evacuation when cutting materials that create lengthy and stringy chips
(resulting in chip accumulation and workpiece damage), and the cost of producing and maintaining multiple-point cutting tools, which can be higher than for single-point cutting tools [8][9][10]. This study therefore reviews the various attempts at producing multiple-point cutting tools from agro-wastes to mitigate these highlighted challenges.
Machinability of Materials & Cutting Tools
Machining is the process of removing excess material from parent stock to form a precise shape. The rating of the ease of machining a material is defined as its machinability. This is assessed using various parameters such as the rate of material removal, surface finish, the type of chip formed, etc. [8]. Many factors can influence the machinability of a material, ranging from the properties of the material and the cutting conditions to the cutting tool used. The cutting tool in particular can influence the surface finish, production cost, and ultimately the productivity of any manufacturing industry; hence, cutting tools play a vital role in the manufacturing industries.
Single-point and Multi-point Cutting Tools
A single-point cutting tool has a single primary cutting edge and can execute a material removal operation in just one motion. These tool types may possess more than one cutting point, but they can only execute one cutting activity at a time. They most often have a main cutting point referred to as the principal cutting edge, while other points are referred to as auxiliary cutting edges [10]. They are affordable and easy to design and use. However, they have a shorter lifespan because they are prone to high cutting temperatures and thermal damage. They therefore generally result in reduced productivity (Machining and Machine Tools). Multi-point cutting tools are, however, different, as they have the advantage of longer tool life while ensuring a higher material removal rate and an increase in productivity. They are therefore generally the preferred tool type, even though they cost more than the single-point types [10].
Agro-wastes and Their Importance
The importance of agricultural wastes as an enhancer for steel has recently received much more attention from researchers, with several studies carried out to ascertain the viability of various agro-waste types in serving this purpose. This is because the agricultural sectors generate a large amount of residue each year. If such waste is disposed of incorrectly and without following proper procedures, it can cause environmental contamination as well as harm to animal and human health [11][12]. Animal waste, farm waste, waste from food production, harmful waste, and hazardous waste are all types of agro-waste [13]. Although several desirable properties of agricultural wastes have already been established, high carbon content significantly stands out among them all [14].
Agro-waste Types Employed
A. Coconut Shell
Coconut (Cocos nucifera L.)
is an evergreen plant that bears fruit continuously for over 65 years [5][6]. Coconuts are grown on more than 10 million hectares in 92 countries throughout the world. Indonesia, the Philippines, and India account for around 75% of global coconut production, with India ranking second [7][8]. Ultimate analysis results show a high carbon content (52.6%) in the coconut shell. It is opined that this value may enhance cutting tool hardness when the shell is used for case hardening. However, the large amount of oxygen (53.1%) present may prove counter-productive, as it may increase the flammability of the cutting tool [8].
B. Eggshell
Eggshells are waste materials from agricultural products created by chick hatcheries, bakeries, and fast-food restaurants, among others, that can litter the environment, causing environmental difficulties or pollution that must be handled properly [15]. It has been scientifically shown that eggshell is mostly formed of calcium compounds, which are extremely similar to cement. According to the literature, eggshell ash predominantly comprises lime, calcium, and protein and may be utilized as an alternative raw material in the manufacturing of wall tile material, concrete, cement paste, and other products [16]. Eggshell also contributes to the building sector by lowering construction costs and landfill waste while providing high performance in concrete characteristics and durability. Eggshells account for approximately 11 percent of the entire weight of an egg. The shell is mostly composed of calcium carbonate (as calcite), with trace amounts of magnesium carbonate, calcium phosphate, organic compounds, and water [17]. The abundance of calcium in the eggshell (Table 3) provides added strength to any mix or compound to which the shells are added [17][18].
C. Rice Husk
According to the results of the investigation, uncooked rice husk contains a significant quantity of carbon (20.63%). This large presence of carbon could be harnessed for strengthening materials via case hardening (Table 4). Atomic absorption spectrometry findings reveal that rice husk ash includes a significant percentage of K2O and Na2O (Table 5). These contaminants, together with black particles, are linked to the carbon content of rice husk ash, as well as the raw rice husk's dark color.
Conclusion
The viability of coconut shell, eggshell, and rice husk as case hardening agents for cutting tools has been reviewed in this study. The strengths of each agro-waste have been discussed, and their possible contributions as case hardening materials have been highlighted. It is believed that the use of these materials for this purpose will not only encourage waste-to-wealth possibilities but will also facilitate stronger cutting tool production, thereby ensuring better productivity and improved surface finish for machined materials.
Table 3: Analysis of eggshell powder [17].
Over half of the world's population relies on rice (Oryza sativa) as a staple diet (Ayub and Changani, 2018). In 2018, China was the world's leading producer with 209.6 million tons, followed by India and Indonesia with 177.65 and 54.6 million tons, respectively (Afolalu, 2022; Statista, 2021). The majority of these rice hulls are thrown away or burned in open places, resulting in energy waste as well as land and environmental damage (Ghosh, 2013). As a result, attempts have been made to use rice hulls as an additive in a variety of materials and applications, including fuel for energy production, fillers in polymers and rubbers, catalytic supports, adsorbents, and the creation of silicates and silicon materials (Shiva, 2020).
2,102.8
2023-01-01T00:00:00.000
[ "Materials Science", "Business" ]
In Memory of Two Pioneers in the Complement Field—Sir Peter J. Lachmann, 1931–2020 & Robert B. Sim, 1951–2021
With deep sadness, we have to accept that two cornerstones, not just of British immunology, but also world-famous scientists in the field of complement research, passed away within the margin of a few weeks: Peter Julius Lachmann on 26 December 2020 (Figure 1), and Robert Braidwood Sim (known to everyone as Bob) on 6 February 2021 (Figure 2) [...].
With deep sadness, we have to accept that two cornerstones, not just of British immunology, but also world-famous scientists in the field of complement research, passed away within the margin of a few weeks: Peter Julius Lachmann on 26 December 2020 (Figure 1), and Robert Braidwood Sim (known to everyone as Bob) on 6 February 2021 (Figure 2). Both missed a big birthday: Peter his 90th and Bob his 70th. Both MRC Units were centres of excellence at national and international levels, and many complementologists had the opportunity to visit these places as invited seminar speakers, PhD students, post-docs, or visiting scientists, e.g., on sabbatical. Both institutions promoted complement research with pioneering contributions to the field and became safe havens and melting pots fostering collaborations on a world-wide stage. Due to the age gap, Peter acted as external examiner for Bob's Oxford D.Phil in 1976. Peter and Bob were very different personalities: Peter was an excellent scholar gifted with a very fast-working analytical mind, an outstanding memory, and the ability to brighten up any audience with his sharp and humorous comments, while Bob was a quiet and very forgiving thinker who found kind words for every student who lost their track in a seminar presentation or scientific discussion and provided a dignified escape route for those who had become bogged down in a difficult point of conjecture. How different they were as personalities, and how similar they were in advancing our field. They worked on different proteins in the complement system and, thus, there are only two publications in PubMed where both of them were co-authors on the same paper (both with senior author Wilhelm Schwaeble). Both made seminal contributions, especially related to complement, and both were awarded the gold medal for lifetime achievements by the European Complement Network, Peter on the very first occasion in 1997, and Bob in 2013. Peter was knighted, and Bob received an honorary doctorate from the University of the Republic of Uruguay, Montevideo. Peter was born in Berlin and his family moved in time to the UK. He was a medic by training and in the early days analysed conglutinin, leading to the identification and characterisation of conglutinogen activating factor, now known as Factor I, which was also the major subject of his last papers.
Other subjects were the dysregulation of C3 activation in disease and the "C3 tickover hypothesis," a seminal finding explaining the role of the important alternative pathway. He was also a pioneer of the terminal cascade, elucidating the reactive lysis mechanism with Ron Thompson in his early days and contributing to the discovery and description of CD59 decades later. He was Biological Secretary of the Royal Society and clearly had many other fields of engagement, beekeeping among them. Bob, a Scotsman, was a biochemist by training and graduated top of his year in 1973, coming with a B.Sc. from Edinburgh to Oxford to study for a D.Phil. He remained in Oxford throughout his career, apart from a two-year fellowship to work in Grenoble with Maurice Colomb and Gerard Arlaud. His main interests were the structure-function relationships of complement proteins and their interactions with viruses, bacteria, and parasites. His favourite molecule was clearly factor H, for which he and his group provided the full amino acid sequence, the assignment of the gene to chromosome 1q, and the characterisation of truncated forms, polymorphisms, and local synthesis. Of special note was the strong collaboration with researchers from Uruguay on the characterisation of the role of factor H in the immune evasion of Echinococcus. Bob was also very interested in the myriad relationships between the various homeostatic mechanisms in blood plasma. At work and before his official retirement (he never retired in practice), Peter was more the senior administrator and supervisor, but he never tired of disseminating extremely useful ideas when you met him in his office as his PhD student. One of these star hours is detailed elsewhere (https://pubmed.ncbi.nlm.nih.gov/33573029/ (accessed on 29 January 2021)). However, when, once a month or so, he fished an antique apparatus (not to be used by anybody else) from the uppermost shelf, everybody was smiling and everybody was taking cover, except for his brave technician, Rodney Oldroyd, who performed these experiments with him. Later, with his lab at the Vet School, northwest of Cambridge and very near to his house, he was freed from too much administrative work and really went back to the bench again, together with Barbara Fernie and David Seilly, until the last day. Bob, in contrast, never really left the bench. Separated from it only by a usually open door, he was always at the centre and heartbeat of his research group. When a visiting scientist arrived, he would not tell her/him to read the method in this or that book; he would sit next to them in the lab and, with his characteristic left-handed writing, hand out an individualized handwritten protocol, which usually started with "Make up 15 mL of broth and add . . . ". I still have these unique personal protocols. Wilhelm Schwaeble shares this memory: Bob standing at the chest freezer of his lab with a sheet of paper in his hand, providing anyone he deemed worthy with the most valuable samples and preparations, like Father Christmas readily fetching the nicest and most delicate presents out of his huge bag. Both had strong women at their side, Dr. Sylvia Lachmann and Prof. Edith Sim, both with their own research fields and both likely their spouses' most astute advisors. Our thoughts are with their families, but especially with Sylvia and Edith. Peter and Bob, you will be vividly remembered and greatly missed.
Reinhard Würzner
Guest Section Editor
President of the European Complement Network (ECN)
Visiting scientist in Oxford with Bob Sim (March–May 1990)
PhD student in Cambridge with Peter Lachmann (1990–1993)

Conflicts of Interest: The author declares no conflict of interest.
1,626.8
2021-03-01T00:00:00.000
[ "Physics" ]
CHY formula and MHV amplitudes

In this paper, we study the relation between the Cachazo-He-Yuan (CHY) formula and the maximal-helicity-violating (MHV) amplitudes of Yang-Mills and gravity in four dimensions. We prove that only one special rational solution of the scattering equations found by Weinzierl supports the MHV amplitudes. Namely, localized at this solution, the integrated CHY formula reproduces the Parke-Taylor formula for Yang-Mills amplitudes as well as the Hodges formula for gravitational amplitudes. This is achieved by developing techniques, in a manifestly M\"obius covariant formalism, to explicitly compute the relevant reduced Pfaffians/determinants. We observe and prove two interesting properties (or identities), which facilitate the computations. We also check that all the other $(n-3)!-1$ solutions to the scattering equations do not support the MHV amplitudes, and prove analytically that this is indeed true for the other special rational solution proposed by Weinzierl, which actually supports the anti-MHV amplitudes. However, although the CHY formalism is highly compact, it is hard to obtain explicit analytic results expressed in terms of Lorentz-invariant variables (for example, s_{ab}) for scattering amplitudes. In a sense, this is because neither the solutions to the scattering equations nor the relation between the CHY formula and Feynman diagrams is easily available. Among the efforts on solving the scattering equations, (as far as we know) solutions in four dimensions were first studied in [13]. Shortly after, Weinzierl proposed two special rational solutions in four dimensions in terms of spinor variables [15] (for details, see eq. (2.15) and eq. (2.16) and the discussion there), which were conjectured to correspond to MHV and anti-MHV amplitudes in [13] and [9]. There has been no explicit proof of the statement that the integrated CHY formula, localized at these two solutions, exactly reproduces the famous Parke-Taylor formula [37,38] for MHV (and anti-MHV) amplitudes. On the other hand, though the relationship between Feynman rules and CHY integrations was already established for scalar amplitudes [23,24], it has not been possible to derive the Parke-Taylor formula for generic MHV (and anti-MHV) tree-level Yang-Mills amplitudes following this line of thought. Moreover, it is not apparent at all that the Hodges formula [39] for gravity MHV (and anti-MHV) amplitudes at tree level is also supported by these two solutions. In this paper, we fill the gap by explicitly demonstrating that the special solution (1.1) supports the Parke-Taylor formula for MHV Yang-Mills amplitudes as well as the Hodges formula for MHV gravitational amplitudes, with an arbitrary number of external gluons/gravitons. Similarly, the solution (1.2) supports the anti-MHV amplitudes for Yang-Mills and gravity. To show this, our proof proceeds as follows: • The original integrated CHY formula expresses amplitudes by summing over terms localized at the different solutions to the scattering equations. We first consider only the term contributed by the special rational solution (1.1), and compute the reduced Pfaffian of Ψ for the fixed-helicity MHV configuration (1^−, 2^−, 3^+, . . . , n^+), from which we prove that the Parke-Taylor formula for the color-ordered MHV Yang-Mills amplitude, as well as the Hodges formula for the MHV gravitational amplitude, is reproduced.
Two interesting properties of the reduced Pfaffian make the proof tractable: -Property-1 The reduced Pfaffian of Ψ at the MHV configuration can be expanded in terms of determinants of reduced C matrices with three columns and three rows deleted. This property relies on the MHV configuration, but is independent of which solution we choose. -Property-2 Both the reduced Pfaffian of Ψ and the reduced determinant of Φ localized at the solution (1.1) can be expressed in terms of the Hodges formula for the gravitational amplitude. • We then extend our discussion to general color-ordered MHV amplitudes (with the two negative helicities at arbitrary positions). This is achieved by extending the two properties to more general cases, which can also be understood by considering the Kleiss-Kuijf (KK) relation [40]. • Finally, one needs to check that any solution other than eq. (1.1) leads to a zero reduced determinant of C (with three rows and columns deleted as in Property-2). For the other Weinzierl solution (1.2), this can be proved analytically. There are two interesting observations in this approach that deserve more attention: • In Property-2, the building blocks det′(Φ) and Pf′(Ψ) are expressed in terms of the gravitational amplitude M̄_n(12 . . . n), while only the pre-factors in front of it can be changed by an SL(2, C) transformation. Thus the SL(2, C) invariance of both the color-ordered Yang-Mills MHV amplitude and the gravity MHV amplitude becomes manifest. • The vanishing of the reduced determinant of C actually imposes constraints on solutions to the scattering equations. With these constraints, one can distinguish the solution (1.1) contributing to MHV amplitudes from the other (n − 3)! − 1 solutions. A similar statement is also true for the solution (1.2) that supports anti-MHV amplitudes. We hope that such a classification can be extended to other solutions. This paper is organized as follows. Section 2 presents a review of the CHY formula, the Parke-Taylor formula and the Hodges formula, and defines our notations. In section 3, we prove that the special rational solution given by Weinzierl reproduces the Parke-Taylor formula for MHV Yang-Mills amplitudes and the Hodges formula for MHV gravitational amplitudes. This is achieved by a Möbius covariant calculation. In section 4, we check that the other solutions do not contribute to the MHV configuration. In particular, this is analytically proved for the other Weinzierl rational solution (1.2). We then propose a set of complex polynomial equations that distinguishes the special solution (1.1) that supports MHV amplitudes from the others. Finally, we devote section 5 to a summary of our results and discussions of possible extensions. Some useful properties of the spinor helicity formalism and details of the proof are given in appendices A and B, respectively. Preparation: a review of the CHY, Parke-Taylor and Hodges formulas In this section, we present a warm-up review of some useful details of the general CHY formula (2.1) for scattering amplitudes, the Parke-Taylor formula (2.18) for MHV Yang-Mills amplitudes, as well as the Hodges formula (2.19) for MHV gravitational amplitudes, all in four dimensions. CHY formula CHY proposed in a series of papers [1][2][3] that any n-point tree amplitude A_n(1, 2, . . . , n) in arbitrary dimensions can be expressed by eq. (2.1). The building blocks of eq. (2.1) are discussed in the following: • Scattering equations The scattering equations for n massless particles, which are imposed by delta functions in eq. (2.1), read

∑_{b≠a} s_{ab} / (z_a − z_b) = 0 ,  a = 1, 2, . . . , n ,  (2.2)

where s_{ab} = (k_a + k_b)^2. These equations are invariant under the SL(2, C) Möbius transformations: if {z_a} is a solution, then the transformed set {(α z_a + β)/(γ z_a + δ)} (with αδ − βγ = 1)
is also a solution. We can thus use this freedom to fix three arbitrarily chosen z's to three arbitrary positions on the Riemann sphere, say, (z_p, z_q, z_r) = (σ_p, σ_q, σ_r). The first consequence is that there are only n − 3 independent equations in (2.2). It can be proved by a semi-analytical inductive method that the number of solutions is (n − 3)! in any dimension [1]. The second consequence is that the integration over z_p, z_q and z_r actually encodes the SL(2, C) redundancy. Using a Faddeev-Popov-like trick, we can divide out the volume of the SL(2, C) group in eq. (2.1). After integrating the z variables against the permutation-invariant delta functions in eq. (2.1), the scattering amplitude can be expressed in the form of eq. (2.6), where the factor perm(pqr) is the signature of the permutation that moves the standard ordering (1, 2, . . . , n) to the ordering (p, q, r, . . .), with (. . .) always keeping the ascending order. If both (ijk) and (pqr) are in ascending order, we have perm(ijk) perm(pqr) = (−1)^{i+j+k+p+q+r}. Φ is an n × n matrix, given in eq. (2.7), and Φ^{i,j,k}_{p,q,r} is the matrix obtained by removing the (i, j, k)-th rows and the (p, q, r)-th columns from Φ. Its determinant, det Φ^{i,j,k}_{p,q,r}, is nothing but the Jacobian associated with the delta functions in eq. (2.5). If we define the reduced determinant det′(Φ) by dividing det Φ^{i,j,k}_{p,q,r} by the corresponding factors σ_{ij} σ_{jk} σ_{ki} σ_{pq} σ_{qr} σ_{rp} together with the permutation signatures, the amplitude (2.6) can then be expressed simply in terms of det′(Φ). This form suggests that to compute the amplitudes, we need to know all the solutions to the scattering equations and sum over the contributions from all of them. • The integrand for gauge theory and gravity Finally, the integrand I_n for color-ordered gauge amplitudes is set to the product of a Parke-Taylor-like factor 1/(σ_{12} σ_{23} · · · σ_{n1}) and the reduced Pfaffian Pf′(Ψ), as in eq. (2.10), while for gravitational amplitudes we use the product of two reduced Pfaffians, as in eq. (2.11), where the two sets of polarization vectors ε and ε̃ together give the polarizations of the external gravitons. The reduced Pfaffian in eq. (2.10) and eq. (2.11) is proportional to the Pfaffian of Ψ with both the (i, j)-th rows and the (i, j)-th columns removed,

Pf′(Ψ) = ((−1)^{i+j} / σ_{ij}) Pf(Ψ^{i,j}_{i,j}) ,

where 1 ≤ i < j ≤ n. Here the 2n × 2n antisymmetric matrix Ψ is given in terms of the blocks A, B and C (eq. (2.13)), where k and ε denote the momenta and polarization vectors of the external particles. The matrices A, B and C are defined in eq. (2.14). • Two rational solutions in four dimensions In four dimensions, one can express light-like 4-vectors in terms of spinors. Two of the solutions to the scattering equations have been found to be rational functions of the spinor variables, given in eq. (2.15) and eq. (2.16) [15]. The other solutions are expected to be more complicated algebraic functions of the spinor variables. The spinor conventions we have adopted in this work are given in appendix A. When writing down these solutions, we have implicitly fixed part of the SL(2, C) freedom by making the same choice for all the solutions. The arbitrary spinor |χ⟩ represents the remaining SL(2, C) freedom, and we are going to use a formalism that is manifestly covariant under this freedom. It is not difficult to generalize to a formalism that is totally Möbius covariant, which is mentioned in section 3.3. In the following sections, we will show that the relevant quantities, like the reduced Pfaffian/determinant, are of a factorized form in which the χ-dependent and χ-independent factors can be separately identified. It turns out that in the final expressions for physical MHV amplitudes, the χ-dependent factors, which represent part of the SL(2, C) freedom, all cancel, making the invariance under this freedom manifest. Thus, despite the appearance of the χ-dependence in the intermediate steps, our calculation is actually SL(2, C) covariant, once properly generalized.
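For reference, in the standard CHY conventions the matrix Ψ and its blocks take the following form; this is a sketch of what eq. (2.13) and eq. (2.14) display, and overall normalizations and signs may differ between references:

\Psi = \begin{pmatrix} A & -C^{\mathsf{T}} \\ C & B \end{pmatrix}, \qquad
A_{ab} = \frac{s_{ab}}{\sigma_{ab}}, \qquad
B_{ab} = \frac{\epsilon_a \cdot \epsilon_b}{\sigma_{ab}}, \qquad
C_{ab} = \frac{\epsilon_a \cdot k_b}{\sigma_{ab}} \quad (a \neq b),

with A_{aa} = B_{aa} = 0, \qquad C_{aa} = -\sum_{c \neq a} \frac{\epsilon_a \cdot k_c}{\sigma_{ac}}, \qquad \sigma_{ab} = \sigma_a - \sigma_b .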
If we set |χ⟩ = |n⟩, we will return to the original form presented in [15] and have σ_n = ∞. For the two solutions in eq. (2.15) and eq. (2.16), we have simple explicit expressions for σ_{ab} = σ_a − σ_b, which will be used frequently later. Parke-Taylor formula The color-ordered tree-level Yang-Mills MHV amplitude A^{MHV}_n with two negative-helicity gluons x and y (1 ≤ x < y ≤ n) is given by the Parke-Taylor formula (2.18) [37,38]. Hodges formula The gravitational reduced MHV superamplitude M̄_n(12 . . . n) for the N = 7 formulation¹ of N = 8 supergravity can be expressed by the Hodges formula (2.19) [39], where the c symbol is

c_{abc} = c^{abc} = 1 / (⟨ab⟩⟨bc⟩⟨ca⟩) .  (2.20)

Here, we use the notation of [2], in which (i, j, k) denotes the deleted rows and (p, q, r) denotes the deleted columns, while [39] uses the opposite convention. The φ matrix is defined in eq. (2.21). We note that φ_{aa} is invariant if we change the spinor |1⟩ or |χ⟩ into any |θ⟩ that is not collinear with |a⟩. As an example, for a = n, we can multiply ⟨an⟩ into both the numerator and the denominator. It can also be shown that M̄_n(12 . . . n) is independent of the choice of (i, j, k) and (p, q, r) [39]. Using the Hodges formula, one can immediately write down the n-point MHV gravitational amplitude M_n, eq. (2.23), in which only the gravitons x and y have negative helicity. MHV Yang-Mills and gravity amplitudes from the CHY formula Having prepared the useful properties of the CHY formula for this paper, let us first consider the relation between the CHY formula and the Parke-Taylor formula of MHV amplitudes in four dimensions. Without loss of generality, we start with the color-ordered MHV amplitude A^{MHV}_n(1^−, 2^−, 3^+, . . . , n^+), where 1 and 2 are the two negative-helicity gluons. The Parke-Taylor formula for this amplitude is given by eq. (3.1). To relate the Parke-Taylor formula (3.1) with the CHY formula in four dimensions, we should write the external polarizations in the spinor-helicity formalism [38]. In appendix A, we have also included a short review of this formalism. It has been shown that the reduced Pfaffian is independent of the gauge choice (namely, the Ward identity holds). In the spinor-helicity formalism, one can choose a reference momentum for each external gluon to fix the gauge. For the MHV configuration, we can choose the momentum k_n of the gluon n as the reference momentum of the two negative-helicity gluons 1 and 2. The reference momentum of the positive-helicity gluons 3, . . . , n is chosen as k_1. The polarizations of our external gluons are thus written as in eq. (3.2). In this section, we are going to prove that, using only the rational solution given in eq. (2.15), we can derive both the Parke-Taylor formula and the Hodges formula. We first substitute the external polarizations (3.2) into the integrated CHY formula (2.6) and then show in detail (in appendix B) that the Parke-Taylor formula (3.1) really emerges with only the rational solution given in eq. (2.15). Then we sketch the calculation showing that generic MHV amplitudes, with the negative-helicity gluons at arbitrary positions, can also emerge from eq. (2.15). In both cases, we find that Pf′(Ψ) and det′(Φ) are proportional to the reduced gravitational amplitude M̄_n(12 . . . n) defined in eq. (2.19), which makes the derivation of the MHV gravity amplitude using the CHY formula very straightforward. Before we start, we need to clarify some terminology. If we say, for example, row-(i) of part C, we mean the i-th row of the original matrix C.
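For concreteness, the two target formulas reviewed above can be written out explicitly. The following is a sketch in the standard conventions, with coupling constants and momentum-conservation delta functions stripped off; the overall sign of the Hodges determinant and the choice of reference spinors (here |1⟩ and |χ⟩, matching the text) are convention dependent:

A^{MHV}_n(x^-, y^-) = \frac{\langle xy\rangle^4}{\langle 12\rangle\langle 23\rangle \cdots \langle n1\rangle}, \qquad
\bar{M}_n = \frac{\det\big(\phi^{\,i,j,k}_{\,p,q,r}\big)}{c_{ijk}\, c_{pqr}},

\phi_{ab} = \frac{[ab]}{\langle ab\rangle}\ \ (a\neq b), \qquad
\phi_{aa} = -\sum_{b\neq a} \frac{[ab]}{\langle ab\rangle}\,\frac{\langle b1\rangle\langle b\chi\rangle}{\langle a1\rangle\langle a\chi\rangle},

with c_{abc} as in eq. (2.20), and the full MHV gravity amplitude obtained as M_n = \langle xy\rangle^8 \bar{M}_n.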
This is convenient since we constantly delete rows and columns, and as a result it is cumbersome to track the position of a specific row in the new matrix after several such manipulations. Now let us prove that the Parke-Taylor formula (2.18) with x = 1, y = 2 is reproduced by the CHY integral (2.6) localized at the rational solution (2.15). Before presenting our proof, we first list two interesting intermediate results: Property-1 The reduced Pfaffian Pf′(Ψ) in the MHV configuration can be expressed by the expansion (3.3) in terms of the determinants of the matrices C^{1,2,m}_{1,n−1,n}, where C^{i,j,k}_{p,q,r} is the matrix C with the rows (i, j, k) and the columns (p, q, r) deleted, and the overall sign is controlled by the accompanying permutation signature. This property relies on the MHV configuration but is independent of the solutions of the scattering equations. Property-2 Once we substitute in eq. (2.15), Pf′(Ψ), det′(Φ) and the MHV-like factor σ_{12} · · · σ_{n1} take the compact forms given in eq. (3.4). Finally, the symbols F_χ and P_χ are defined in eq. (3.6) and eq. (3.7). Given eq. (3.4), we can see clearly that the CHY formula with the special solution (2.15) reproduces the correct Parke-Taylor formula (2.18) for the Yang-Mills MHV amplitude (with a trivial overall factor), eq. (3.8), and the Hodges formula (2.23) for the gravitational MHV amplitude. General MHV amplitudes In this part, we write down the generalizations of eq. (3.3) and eq. (3.4) for general MHV amplitudes, in which the negative-helicity particles can occupy arbitrary positions. If the particles at positions x and y (x < y, with 1 ≤ x ≤ n − 1) have negative helicity while all the others are positive, we have eq. (3.10), where j can be any number except for x and n, and the final result does not depend on this choice. In this calculation, we use the polarization vectors of eq. (3.11). If we plug the special solution (2.15) in, we get eq. (3.12). On the other hand, det′(Φ) and [σ_{12} · · · σ_{n1}] remain the same as in eq. (3.4), since they depend only on the kinematics but not on the helicity configuration. It is thus straightforward to see that the CHY formula gives the desired general MHV Yang-Mills and gravitational amplitudes. The derivation of eq. (3.10) and eq. (3.12) closely follows those elaborated in sections B.1 and B.2, except that one needs to be more careful about the order of the indices involved. The fact that the special rational solution (2.15) also supports general MHV Yang-Mills amplitudes, with the two negative-helicity gluons at arbitrary positions, can also be seen directly from the KK relations for color-ordered Yang-Mills amplitudes [40] and the Parke-Taylor-like factors under a given solution [42]. Similarly, by using only eq. (2.16), we can get general anti-MHV amplitudes. All we need to do is to exchange all the angular spinor brackets with the corresponding square spinor brackets, and vice versa. Here the arbitrary choice of the spinors |θ⟩, |η⟩ and |χ⟩ represents the full SL(2, C) freedom. Thus our calculation explicitly verifies that the MHV amplitudes resulting from the integrated CHY formula are invariant under the Möbius transformations acting on the solution (2.15). Moreover, we have shown that the SL(2, C)-dependent pieces factorize out of the gauge-invariant building blocks of the physical amplitudes, and they cancel against each other if we use the CHY recipe for both gauge and gravity amplitudes. Summary Now we summarize what we have done in this section. The main conclusion is that the special solution (2.15) supports general MHV gauge and gravitational amplitudes. If we permute the negative-helicity particles around, eq.
(2.15) will always return the correct amplitudes, without the help of the other solutions. An immediate question one may ask is what role the other (n − 3)! − 1 solutions play in the MHV case. Actually, it has been proposed in [43] that all the other solutions do not contribute to the MHV amplitudes. Using the machinery worked out in this section, we can give an algebraic characterization that distinguishes eq. (2.15) from the other solutions. This is the main subject of the next section. Other solutions at MHV and Non-MHV First, we note that a similar result to that in Sec. 3 can also be proved for anti-MHV amplitudes using the other special solution (2.16). All we need to do is to exchange angular and square brackets. As for the other solutions at MHV, it is not difficult to check numerically (we have checked up to 9-point) that if we plug any solution other than eq. (2.15) into eq. (3.10), we get Pf′(Ψ) = 0, due to the fact that

det C^{x,y,m}_{x,j,n} = 0 .  (4.1)

However, we can also study this problem from another direction; namely, we can solve, from an independent set of equations of the form (4.1), all the solutions to the scattering equations except for the special one (2.15). In other words, if we put together eq. (4.1) and the scattering equations (2.2), the solution set will consist of those solutions of eq. (2.2) that do not contribute to the MHV amplitudes. We hope that this is a first step towards understanding and classifying the Eulerian number pattern of the solution set [43]. In the following, we call the solution (2.15) the MHV solution and (2.16) the anti-MHV solution. All the others are thus called non-MHV solutions, since they only contribute to certain non-MHV amplitudes. Independent set of characteristic equations Once we change x, y, m and j in eq. (4.1), we get a new equation that should be satisfied by the non-MHV solutions. However, such a system of equations is redundant, and out of it we need to extract a complete and independent set. For a given x, which is the position of the first negative-helicity particle, and the gauge choice (3.11), the entries of the matrix C can be written in terms of a matrix D (eq. (4.2)); the range of the indices in D is the same as that of C. We find that the quantity D_x defined in eq. (4.5) depends only on x. The proof is very similar to the proof that Pf′(Ψ) is independent of i and j, given in [2]. In terms of this new quantity, eq. (3.10) can be rewritten accordingly. Next, we show that there is no σ_x contained in D_x. Indeed, since both the row-(x) and the column-(x) are deleted, the only place σ_x can appear is in the diagonal entries D_aa, as 1/σ_{ax}. However, because of the factor ⟨x l⟩ in the numerator, the coefficient of 1/σ_{ax} is actually zero. Consequently, the set of equations D_x = 0, eq. (4.7), forms a complete and independent system of polynomial equations for our n − 3 unknown σ's. For n = 5, there is only one solution to eq. (4.7), which is exactly eq. (2.16). We have numerically studied a number of cases up to n = 9, and find that the (n − 3)! − 1 non-MHV solutions of the scattering equations all satisfy eq. (4.7). On the other hand, eq. (4.7) contains additional solutions other than those of the scattering equations. We have confirmed this fact numerically at n = 6. As an example, it is easy to show analytically that the anti-MHV solution (2.16) indeed satisfies D_x = 0 for arbitrary n. In this case, the matrix takes an explicit factorized form. After we delete two more rows and columns, and extract [aχ] from row-(a) and [bχ] from row-(b), we obtain a matrix whose entries are identical, and it must have zero determinant. Thus we have proved that the solution (2.16) leads to D_x = 0 and makes no contribution to MHV amplitudes.
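To make the numerical statements above concrete, the following is a minimal, self-contained sketch (our own illustration, not the authors' code) of how such checks can be set up: it builds generic four-dimensional massless kinematics from random spinors, solves the gauge-fixed scattering equations by Newton iteration from many random starting points, and counts the distinct solutions, which should number (n − 3)! (so 2 for n = 5). All names, the gauge choice, and the tolerances are our own assumptions.

import numpy as np

rng = np.random.default_rng(7)
n = 5

# Random spinors; momentum conservation is enforced by solving for the last
# two anti-holomorphic spinors (a standard trick for complex kinematics).
lam = rng.normal(size=(n, 2)) + 1j * rng.normal(size=(n, 2))
lamt = rng.normal(size=(n, 2)) + 1j * rng.normal(size=(n, 2))

P = sum(np.outer(lam[a], lamt[a]) for a in range(n - 2))
M = np.stack([lam[n - 2], lam[n - 1]], axis=1)      # columns lam_{n-2}, lam_{n-1}
lamt[n - 2], lamt[n - 1] = np.linalg.solve(M, -P)   # rows of the 2x2 solution

def ang(a, b):   # <ab>
    return lam[a, 0] * lam[b, 1] - lam[a, 1] * lam[b, 0]

def sq(a, b):    # [ab]
    return lamt[a, 0] * lamt[b, 1] - lamt[a, 1] * lamt[b, 0]

s = np.array([[ang(a, b) * sq(b, a) for b in range(n)] for a in range(n)])
assert np.allclose(sum(np.outer(lam[a], lamt[a]) for a in range(n)), 0)

fixed = {0: 0.0 + 0j, 1: 1.0 + 0j, 2: 2.0 + 0j}     # gauge-fix three punctures
free = [a for a in range(n) if a not in fixed]      # the n - 3 unknown sigmas

def newton(x, steps=100, tol=1e-10):
    """Newton iteration on the n-3 scattering equations for the free sigmas."""
    for _ in range(steps):
        sig = np.zeros(n, dtype=complex)
        for a, v in fixed.items():
            sig[a] = v
        sig[free] = x
        E = np.zeros(len(free), dtype=complex)
        J = np.zeros((len(free), len(free)), dtype=complex)
        for i, a in enumerate(free):
            for b in range(n):
                if b == a:
                    continue
                d = sig[a] - sig[b]
                E[i] += s[a, b] / d
                J[i, i] -= s[a, b] / d**2            # dE_a / d sigma_a
                if b in free:
                    J[i, free.index(b)] += s[a, b] / d**2  # dE_a / d sigma_b
        if np.max(np.abs(E)) < tol:
            return x
        x = x - np.linalg.solve(J, E)
    return None                                      # did not converge

solutions = []
for _ in range(200):
    x0 = rng.normal(size=len(free)) + 1j * rng.normal(size=len(free))
    x = newton(x0)
    if x is not None and not any(np.allclose(x, y, atol=1e-6) for y in solutions):
        solutions.append(x)

print(f"distinct solutions found: {len(solutions)} (expected (n-3)! = 2)")

On top of this, one could evaluate the reduced determinants of C at each solution to check numerically which solution supports the MHV configuration, as described in the text.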
For the other (n − 3)! − 2 non-MHV solutions, at this moment one can only use numerical methods, since no analytic expression for any of them is known in the literature. Geometrically, the scattering equations (2.2) represent a set of n − 3 hyper-surfaces in the space of n − 3 complex variables (locally C^{n−3}), while the (n − 3)! solutions are just the intersection points of these hyper-surfaces. Meanwhile, eq. (4.7) defines another set of n − 3 hyper-surfaces, and their intersection points always have (n − 3)! − 1 points in common with the ones given by eq. (2.2). The algebraic-geometric properties of these two sets of equations need to be studied further. Non-MHV solutions and Non-MHV amplitudes It has been indicated in [43] that there is an Eulerian number partition pattern in the (n − 3)! solutions. Namely, N^k MHV amplitudes are only supported by A(n − 3, k) solutions, where A(n − 3, k) is the k-th Eulerian number of index n − 3. We have also numerically checked this fact up to n = 9. Our approach described above in Sec. 3 may potentially be generalized to non-MHV amplitudes, despite the fact that there is no compact analytic expression known for any of the other non-MHV solutions; working out similar characteristic equations for the non-MHV solutions still seems promising. We leave this to future work. Conclusions and discussions In this paper, we have proved in the CHY formalism that the special rational solution (2.15) of the scattering equations leads to the Parke-Taylor formula for MHV Yang-Mills amplitudes with an arbitrary number of external gluons, as well as the Hodges formula for MHV gravity amplitudes. This is achieved by developing techniques to compute the relevant reduced Pfaffians/determinants in a manifestly Möbius covariant formalism. Two useful properties have been introduced and proved, which make the Möbius invariance of the formalism manifest. The fact that the other known special solution (2.16) supports only the anti-MHV amplitudes then follows immediately. By numerical checks, we pointed out that all the other solutions of the scattering equations do not contribute to the MHV amplitudes at all. Moreover, algebraic conditions satisfied by the (n − 3)! − 1 non-MHV solutions, which do not contribute to the MHV amplitudes, have been established. We leave further study of amplitudes beyond MHV and anti-MHV to future work. The correspondence we have established in this paper between the MHV solutions of the scattering equations and the MHV Yang-Mills/gravity amplitudes has a profound physical implication: namely, in the CHY formalism for Yang-Mills and gravity, the solutions of the scattering equations, involving only the external momenta, mysteriously know about the external helicity/polarization configurations of the scattering amplitudes. Recently, Lam and Yao developed a systematic method [44,45] to evaluate CHY integrations for n-point amplitudes. Their method can be applied to any fixed helicity configuration with manifest Möbius invariance. Many examples with small values of n were explicitly calculated in [44], with the results in the MHV cases consistent with the Parke-Taylor formula. However, it is not easy to see the correspondence between the solutions of the scattering equations and the helicity configurations in their approach. Nevertheless, their work initiates a new approach to studying the CHY formula, especially for non-MHV or non-anti-MHV configurations, for which no analytic solution of the scattering equations is known.
The connection between the approach by Lam and Yao [44,45] and the current work is an interesting topic and deserves further study. A Spinor-helicity formalism In this appendix, we briefly introduce the spinor-helicity formalism [38] and fix the conventions we have employed in our calculation. The metric we use is g_{μν} = diag(1, −1, −1, −1). The usual Dirac u and v spinors, together with their Dirac conjugates, are built from the two-component Weyl spinors ξ and η in the standard way. The dotted and undotted indices are converted into each other through the conjugation †, while they are raised and lowered by the antisymmetric ε tensors. u and v satisfy the Dirac equation. In the massless limit, s labels the helicity, and we have an explicit special solution for the momentum p^μ = (E, E sin θ cos φ, E sin θ sin φ, E cos θ). The other nonzero two-spinors are related to these by conjugation. The normalization √(2E) agrees with the one used in [41]. We define the angular and square bracket notation ⟨ij⟩ and [ij] for the spinors. Using this notation, a light-like four-vector can be expressed in terms of the spinors as p_{αα̇} = λ_α λ̃_{α̇}, and the Mandelstam variable s_{ij} = (k_i + k_j)^2 can be expressed as

s_{ij} = ⟨ij⟩[ji] .  (A.4)

In the following subsections, we prove the two properties given in eq. (3.3) and eq. (3.4). In the derivation, if we say, for example, the B part of a matrix, we mean those entries that belong to the original B sub-matrix in Ψ. B.1 Proof of property-1 Now let us prove eq. (3.3) by recursively expanding the Pfaffian, using the standard expansion formula for a 2N × 2N antisymmetric matrix X = (x_{ij}),

Pf(X) = ∑_{j=2}^{2N} (−1)^j x_{1j} Pf(X^{1,j}_{1,j}) ,

where X^{1,j}_{1,j} denotes X with the rows and columns 1 and j removed. The proof consists of the following steps: (i) The structure of Ψ As our first step, we substitute the polarizations given by eq. (3.2) into the B and C matrices defined by eq. (2.14). Under this choice of reference momenta, the only nonzero ε_i · ε_j in the B matrix are ε_2^− · ε_b^+ with 3 ≤ b ≤ n − 1. Thus only the second row and the second column of B contain nonzero entries. Under our choice of gauge, eq. (3.2), we also have ε_{1,2}^− · k_n = 0 and ε_b^+ · k_1 = 0. Then the last n − 2 entries of the first column (row), as well as the first two entries of the last column (row), of the C (−C^T) matrix have to be zero. Hence the general structure of the matrix Ψ has the form shown in figure 1. (ii) The expansion of Pf′(Ψ) To calculate Pf′(Ψ), we choose to delete the (n − 1)-th and n-th rows and columns, which leads to

Pf′(Ψ) = (−1/σ_{n−1,n}) Pf(Ψ^{n−1,n}_{n−1,n}) ≡ (−1/σ_{n−1,n}) Pf(Ψ̃) .

We now expand Pf(Ψ̃) with respect to its n-th row, which is row-(2) of B and C. It is not difficult to see that all the sub-Pfaffians Pf(Ψ̃^{n,b}_{n,b}) with 1 ≤ b ≤ n − 2 are zero. In this case Ψ̃^{n,b}_{n,b} has a zero B part, which is still n × n dimensional, while the nonzero off-diagonal C part has dimension (n − 1) × (n − 3), as illustrated in Fig. 2a. Then, by elementary transformations, we can always make two rows of the C part zero, such that we get a matrix with two entire rows of zeros, which has zero determinant. Since elementary transformations do not change the determinant, these sub-Pfaffians must vanish. We can then re-express Pf(Ψ̃) as a sum over terms labeled by m. The summation starts from m = 3, since B_{21} ∼ ε_2^− · ε_1^− vanishes due to our choice of gauge. We apply the reduction process in (iii) on ψ_m and expand it with respect to its (n − 1)-th row, which is row-(1) of C. Provided that C^{1,2,m}_{1,n−1,n} has a nonzero determinant, we can always find an elementary transformation that makes the A part of ψ_m zero. For example, to make the first row of the A part zero, we need to find x_3, · · · , x_{m−1}, x_{m+1}, · · · , x_n from a set of linear equations. Now that we have n − 3 unknowns with n − 3 equations, we can always find a solution when C^{1,2,m}_{1,n−1,n} has a nonzero determinant.
Then, multiplying each row in C^{1,2,m}_{1,n−1,n} by the corresponding x and adding it to the first row of A, we can make the first row of the A part zero. Continuing this operation for all rows of the A part, we can thus make the entire block zero. Therefore, both situations can be captured in a single equation. Feynman diagram analysis.² Although eq. (3.3) has been derived from the properties of Pfaffian structures, it is worth pointing out that the reductions (i)-(v) can be understood more physically from Feynman diagrams. In the amplitudes calculated from the usual Feynman diagrams, the polarization of any external gluon must be contracted with either another polarization or an external momentum. In n-gluon tree diagrams, the number of polarizations is n, and the number of vertices in a Feynman diagram is at most n − 2. Thus we have at least one factor of ε_i · ε_j. As already mentioned in (i), the nonzero ε_i · ε_j can only be ε_2^− · ε_b^+. Since the polarization ε_2^− can appear only once in a diagram, we have only one nonzero factor of the type ε_i · ε_j for each Feynman diagram. Thus the MHV partial amplitude can be written as a summation of terms proportional to ε_2^− · ε_b^+, which agrees with eq. (B.7). Meanwhile, all the other n − 2 polarizations have to be contracted with n − 2 external momenta. Next, we study whether we can have k_i · k_j in the summand. The relevant diagrams are those constructed from cubic vertices only, each of which contributes a factor of the form k^μ η^{ρσ} to each summand. An n-gluon tree diagram contains at most n − 2 vertices, so the vertices contribute 3(n − 2) Lorentz indices. However, since the n − 3 propagators contract 2(n − 3) of these indices with the vertices, the total number of external Lorentz indices is n. Then the n external polarizations can contract either with the k^μ's or with the η^{ρσ}'s. • If all the n − 2 momenta k contract with external polarizations (i.e., there is no k_i · k_j), there must be two polarizations left over, which have to contract with each other. This case is allowed because we do have the nonzero contractions ε_2^− · ε_b^+ (3 ≤ b ≤ n − 1) available. • If there exists a factor k_i · k_j, at least two fewer k's contract with polarizations. Thus at least two more polarizations must contract with each other via η^{ρσ}, and such terms vanish since there are no further nonzero ε_i · ε_j. Thus, for the MHV case with our gauge choice (3.2), a Feynman diagram can only contribute terms that contain just one single factor ε_2^− · ε_b^+ (2 < b ≤ n − 1) and n − 2 factors of the type ε · k. In the CHY language, this means that Pf′(Ψ) should not contain any entries of the A matrix, which also agrees with eq. (B.8). B.2 Proof of property-2 We now turn to the proof of the three relations in eq. (3.4). • From eq. (B.13) and eq. (B.14), we know that each entry of the Φ matrix contains a factor F_χ^2, so all n − 3 rows contain this common factor. By extracting all of them out of the determinant, we get an overall factor

(F_χ)^{2(n−3)} .  (B.15)

• As shown by eq. (B.13), each Φ_{ab} (a ≠ b) contains a factor ⟨aχ⟩^2 ⟨bχ⟩^2. It is thus tempting to extract ⟨aχ⟩^2 out of each row and ⟨bχ⟩^2 out of each column, but the obstacle is Φ_{aa}, which seems to contain only ⟨aχ⟩^2 instead of ⟨aχ⟩^4, as in eq. (B.14). Now let us show that Φ_{aa} (a ≠ n) secretly contains one more factor of ⟨aχ⟩^2 if we apply the Schouten identity properly.
First, we define some quantities for convenience. The resulting identity is correct for 2 ≤ a ≤ n, which is adequate for our purpose, since the first row has already been deleted. Now we can extract one ⟨an⟩^2 from each row with 3 ≤ a ≤ n (a ≠ m), and one ⟨bn⟩^2 from each column with 2 ≤ b ≤ n − 2, producing another overall factor. The reduced determinant det′(Φ). It is straightforward to find that, after using eq. (2.15), the combination σ_{12} σ_{2m} σ_{m1} σ_{1,n−1} σ_{n−1,n} σ_{n1} takes a simple factorized form in F_χ. The MHV-like factor σ_{12} σ_{23} . . . σ_{n1}. Using eq. (2.15), eq. (3.6) and eq. (3.7), it is not difficult to find that

σ_{12} σ_{23} . . . σ_{n1} = 1 / (F_χ^n D_n (P_χ)^2) ,  (B.28)

which gives eq. (3.4b). The determinant of C^{1,2,m}_{1,n−1,n}. Now we study det C^{1,2,m}_{1,n−1,n}, where the rows (1, 2, m) and the columns (1, n − 1, n) of C have been removed. Inserting the polarizations (3.2) into the C matrix defined by eq. (2.14), we get an explicit expression valid for 3 ≤ a ≤ n and 2 ≤ b ≤ n. In C, we can extract a common factor 1/⟨1a⟩ from each row, and another common factor ⟨1b⟩ from each column. These two common factors almost cancel each other outside the determinant; the remnant is ⟨12⟩⟨1m⟩/(⟨1, n−1⟩⟨1n⟩), due to the mismatch between the ranges of a and b. After doing this, we find that C reduces to a form that is identical to the Hodges matrix φ (see eq. (2.21)). Therefore we have

det C^{1,2,m}_{1,n−1,n} = (−1)^{n−3} (⟨12⟩⟨1m⟩ / (⟨1, n−1⟩⟨1n⟩)) det φ^{1,2,m}_{1,n−1,n} .

Note that this equation is independent of the gauge choice of the polarizations. This quantity is nothing but the D_1 defined in eq. (4.5), with the special solution (2.15) plugged in.
8,265.6
2016-03-26T00:00:00.000
[ "Physics" ]
Methods to Classify Bainite in Wire Rod Steel

The microstructural description of bainitic steel is commonly ambiguous, and the interpretation of results originating from the applied methods is usually user dependent. In consequence, a manifold description of bainite makes it difficult to reveal structure–property relationships. This is why a novel classification and quantification routine for bainitic microstructures in wire rod steel is presented. The classification is based on electron probe microanalysis (EPMA), electron backscatter diffraction (EBSD), and nanohardness of the same local area. Microstructural constituents with different characteristics (carbon concentration, misorientation, and nanohardness) are classified into low-, intermediate-, and high-temperature morphologies. After the classification, quantification is conducted by combining scanning electron microscopy (SEM) analysis, dilatometry, and X-ray diffraction (XRD). The bainite quantification reveals a homogeneous microstructure for cooling along the lower bainite regime. Increasing the manganese content causes a lower sensitivity to changes in the cooling parameters. Combining nanohardness and EBSD is suitable to describe microscopic microstructural properties, whereas the quantification of bainite is suitable to explain differences in the macroscopic mechanical properties. The approach used avoids the overcomplexity and manifold terminologies of bainitic microstructures that are commonly found in the literature. For instance, continuously cooled wire rod steel is commonly exposed to external cooling effects (by air or forced-air cooling in a Stelmor conveyor [10]). This causes a temperature gradient in the radial direction from the core to the outer surface, depending on several factors, for example, the wire rod diameter and the ring density on the cooling conveyor. [11] Consequently, a microstructural and hardness gradient can be observed in the same direction. In this case, the experimental setup requires adjustments to deliver more representative results, instead of relying only on high-resolution techniques. In this work, different methods are combined to obtain a more representative analysis of bainite in wire rod steel. The applied alloying concept, referred to as carbide-free bainite, [12][13][14] yields an incomplete transformation for the applied cooling parameters, with retained austenite as the secondary phase. This alloying concept is known for an outstanding combination of strength and ductility. [5] The retained austenite occurs either as films separating bainitic laths or as blocks between sheaves of bainite. [15] At room temperature, the secondary phase is metastable. This means that external events can trigger a transformation of austenite to martensite, known as the transformation-induced plasticity (TRIP) effect. [16] Austenite blocks commonly inherit a carbon gradient toward the core. Cooling below the M_s temperature partly transforms these blocks into martensite, while the outer rim remains austenitic to some extent. [13] Therefore, the mixture of martensite and austenite is commonly referred to as a martensite-austenite (M-A) island. [17] 2. Experimental Section Laboratory Melts In total, three microalloyed steels were produced as laboratory melts with substantial differences in manganese content (Table 1). The steels contained a medium carbon content of 0.19-0.23 wt% C and >1 wt% Si to delay carbide precipitation in austenite.
[18] Manganese contents of 1.5-2.5 wt% were used to adjust the hardenability. [19] This concept is commonly applied to generate carbide-free bainite. A combined utilization of boron and titanium delays the ferrite/pearlite transformation to widen the process window for bainite. [20] Molybdenum has the same effect of delaying diffusion-based transformations. In addition, molybdenum impedes manganese segregation to the grain boundaries. [21] In consequence, grain boundary embrittlement can be avoided. Niobium was used to control grain coarsening during wire hot rolling. The samples originated from 140 × 140 mm² ingots forged at 1200 °C to billets with a cross-section of 60 × 60 mm². Samples were cut in the longitudinal direction of the billet with a rectangular geometry of 20 × 20 × 65 mm³, and a second batch of samples with a flat sample geometry of 4 × 7 × 1.3 mm³ was prepared for dilatometer experiments. The first batch with the larger samples was used in a thermomechanical treatment simulator (TTS) to apply hot deformation and cooling according to predefined parameters. Thermomechanical Treatment All the samples passed through the same austenitization treatment, with initial heating at 3 K s⁻¹ to 1200 °C. This temperature was held for 10 min until cooling at 1 K s⁻¹ to a hot deformation step at 900 °C that initiated compression of the sample (φ = 0.3, φ̇ = 10 s⁻¹). In the following, three different cooling regimes, according to the cooling conveyor in a wire rod mill, were defined to generate different mixtures of bainite. The cooling parameters in regime I consisted of fast initial cooling at 5 K s⁻¹ to 400 °C and subsequent cooling in the bainite phase field at 0.3 K s⁻¹. Slower cooling at 2 K s⁻¹ from 900 to 500 °C with simulated air cooling in the bainite phase field at 1 K s⁻¹ is denoted as regime II. Regime III was cooled as regime II, but the cooling rate in the bainite phase field was reduced to 0.3 K s⁻¹. The cooling rates of regimes I-III were confirmed for feasibility by prior tests on the cooling conveyor of the wire rod mill at ArcelorMittal Duisburg Long Products. A hot deformation step at 900 °C simulated the final rolling step, but it has to be noted that the actual area reduction and deformation rate during processing in a wire rod mill are beyond the limits of the TTS in the laboratory. For instance, a lower degree of deformation and a lower rate of deformation in the TTS are expected to promote a larger prior austenite grain size relative to industrial processing. Applied Methods for Bainite Classification The quantification of the phase fractions in the three steels was conducted in two steps: an identification (classification) step and a quantification step (Figure 1). In the first step, morphological features were described by a comprehensive analysis of different experiments to extract information from the same local area. The heat-treated samples were further processed to extract secondary samples for metallographic preparation and mechanical testing. Samples were ground and polished to a surface finish of 1 μm. Oxide polishing suspension (OPS) made of colloidal silica with a 0.25 μm particle size was used to obtain a surface with minimal roughness. This final step was chosen instead of electropolishing to satisfy the surface requirements of electron probe microanalysis (EPMA) and nanohardness measurements, despite a small risk of a possible TRIP effect initiated by the mechanical preparation.
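As a side note, the three thermal schedules above lend themselves to a compact machine-readable encoding; the following sketch (our own illustration, not the authors' script, and with an assumed room-temperature finish of 25 °C that the text does not specify) is one way to parameterize them for simulation or dilatometer control scripts:

# (cooling rate [K/s], target temperature [deg C]) segments, starting from 900 deg C
REGIMES = {
    "I":   [(5.0, 400.0), (0.3, 25.0)],   # fast to 400 C, slow bainite-field cooling
    "II":  [(2.0, 500.0), (1.0, 25.0)],   # 2 K/s to 500 C, simulated air cooling below
    "III": [(2.0, 500.0), (0.3, 25.0)],   # as II, but slower in the bainite phase field
}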
Microhardness indents were used as markers to locate the same spot for the different experiments. EPMA measurements in a Schottky field-emission gun electron microprobe, JEOL JXA-8530 F (JEOL Ltd., Tokyo, Japan), revealed the segregation of alloying elements during cooling with the predefined regimes. This method reveals, for example, carbides or stable retained austenite films in regions that incorporate high carbon concentrations. The instrument was operated at an accelerating voltage of 15 kV and a beam current of 100 nA. Map and line scans were recorded with step sizes of 100 and 300 nm, respectively. Cracked hydrocarbons adsorbed on the surface of the sample are known to cause carbon contamination of the measured surface area. This problem was minimized by conducting an acquisition procedure prior to the measurements to obtain reliable carbon concentrations. [22] Subsequent electron backscatter diffraction (EBSD) measurements provided crystallographic information on the morphological features. The experiment was conducted at 20 kV with a step size of 80 nm in a field-emission scanning electron microscope (type Zeiss Σigma) with an EBSD detector. Nanoindentation was performed with a Berkovich diamond tip in an iNano nanoindenter (Nanomechanics Inc., TN), with the area function of the tip calibrated prior to indentation on fused silica. [23] The area of interest was probed by a 10 × 10 grid (in total 100 indents) with 10 μm indentation spacing. The experiments were conducted to a target depth of 100 nm in all cases, such that the plastic zones of the individual indents did not overlap. The results of these three methods (EPMA, EBSD, and nanoindentation) were combined to elaborate a classification scheme. In the second step, the classification system was applied as a guideline for the scanning electron microscopy (SEM) analysis. Instead of the line-intercept method, the phase fractions were extracted from SEM micrographs using the "measure" tool in the ImageJ software (adjusted line-intercept method): instead of marking phases on a pattern of lines, phases were cut out by polygons to also account for the space that would otherwise be missed in between the lines of the line-intercept method. For the SEM analysis, samples were etched with a 2% nitric acid solution (nital) to reveal the microstructure. For all samples, a central spot on the cross-section of the TTS samples was probed under SEM. For each condition, the metallographic procedure was repeated, with electropolishing in A2 solution (40 V for 10 s) as the surface finish instead of OPS. These samples were analyzed by X-ray diffraction (XRD) under Co Kα1 radiation to obtain the fraction of retained austenite. The samples were rotated along both the ϕ and ψ axes to minimize texture effects on the phase quantification. In addition, the cooling regimes were applied in dilatometer experiments. The length change-temperature data were transformed via the lever rule into transformed fraction over temperature plots. This provided the transformation start and finish temperatures. The resulting data were combined with the phase fractions obtained from the SEM analysis. Ultimately, the phase fractions can be described by the temperature range of the phase transformation. Mechanical Testing The TTS samples were processed by electrical discharge machining (EDM) to obtain secondary samples for tensile and Charpy V-notch testing.
For the tensile test, two cylindrical specimens were machined per condition, with a cylindrical gauge section of 5 × 25 mm (diameter × length) between the screw threads. The test was conducted using a Zwick 4204 tensile testing machine at a constant speed of 0.4 mm min⁻¹ (strain rate of 0.00025 s⁻¹). From another TTS sample per condition, three subsized Charpy V-notch samples were machined with a rectangular cross-section of 2.5 × 10 mm² and a length of 55 mm. Both tests were conducted at room temperature. Results A combination of carbon maps (EPMA), force-depth behavior (nanohardness), and band contrast and misorientation boundaries (EBSD) was used for steel 1 in the different cooling regimes. Regime III produces a high degree of inhomogeneity. Therefore, a larger surface area was analyzed to attain more representative results (Figure 2). In the granular-type morphology, large areas of carbon depletion are visible, whereas a uniform carbon distribution can be observed in the shape of islands with a large scatter in size. Again, the band contrast reveals a link between highly distorted areas and high carbon concentration. Larger islands with high carbon concentration contain a carbon gradient from the border to the core of the island. The grain boundaries show only a few high-angle boundaries but dominantly low-angle boundaries. Regime I and regime II were analyzed accordingly (for the images, the reader is referred to the Supporting Information of the online version). In regime I, carbon-enriched areas were observed along carbon-depleted laths, while some grains showed a homogeneous carbon distribution. The carbon-enriched areas coincide with regions of high distortion (dark gray-scale values in the band contrast). Carbon depletion, in contrast, can be linked to light gray-scale values and, therefore, to microstructural constituents with low distortion. High-angle boundaries are prevalent in regime I. Consequently, the microstructure of regime I contains a homogeneous lath-type structure with a small fraction of highly distorted areas, typical of morphologies transformed at lower temperatures (i.e., fresh martensite). In regime II, the carbon map reveals areas of carbon separation by diffusion and regions of uniformly distributed carbon. The former correspond to areas of low distortion in the band contrast with embedded block-type retained austenite of high carbon concentration. Areas of uniform carbon distribution appear in dark gray-scale values as an indicator of transformation products that originated at low temperatures. The coarser carbon-depleted areas result in an increase in the number of low-angle boundaries at the cost of high-angle boundaries. In addition to the EPMA maps, line scans were obtained to monitor the elemental changes across phase boundaries for steel 1 in regimes I-III (Figure 3). In regime I, the carbon distribution corresponds either to moderate fluctuations or to regions of high carbon concentration with high distortions. Regime II contains, in addition, carbon-depleted areas next to high-carbon areas with up to 0.6 wt% C. Alternating carbon depletion and high carbon concentration can be observed in regime III, with maximum concentrations of up to 0.9 wt% C. For the other alloying elements, no diffusion can be identified from the EPMA results, as can be seen for regime II. The nanohardness in regime I reflects the homogeneous carbon distribution by a narrow hardness distribution, with a mean hardness and standard deviation of 4.44 ± 0.74 GPa (Figure 4).
For regimes II and III, peak broadening can be detected, with 4.69 ± 1.16 GPa in regime II and 4.55 ± 1.47 GPa in regime III. The right column of Figure 4 shows examples of the microstructural categories (with fcc austenite in green) overlapped with the corresponding nanohardness indents. The indents are classified into the different microstructural constituents for steel 1, separated by cooling regime. The lowest hardness was measured for bainitic ferrite in regimes II and III, with a corresponding mean value of 2.64 ± 0.81 GPa for regime III. The absence of granular bainite in regime I results in a higher hardness for the carbon-supersaturated laths of bainitic ferrite, with a mean value of 3.88 ± 0.04 GPa. Dark gray-scale values in the band contrast represent fresh martensite. These regions yield a high nanohardness above 5 GPa, as can be seen for regime III with 5.53 ± 0.52 GPa. Retained austenite, as blocks or films, contributes to an intermediate range of hardness. During loading, a "pop-in" in the early stage of loading is seen as the result of the homogeneous nucleation of dislocations under the indenter tip when no sources exist due to a damage-free surface (the interpretation of pop-ins will be further specified in the discussion). During unloading, an "elbow" behavior can be observed. In the literature, pop-out or elbow behavior can be correlated with an expansion of volume below the indenter tip due to a phase transition. [24] In the case of an extensive or very rapid expansion, a pop-out is commonly seen for high loading rates and high maximum loads, while for a lower degree of expansion, an elbow-like behavior is observed for low loading rates and lower maximum loads during indenter release. Pop-outs and elbows are both signs of an indentation-induced transformation. Pop-in behavior during loading was observed in the retained austenite of steel 1 in regime I (Figure 5). Another indent that coincided with retained austenite yielded a gradual change of slope during unloading (elbow behavior). Additional information can be extracted from the spatial correlation of the nanohardness with the observed morphology. It becomes clear that dark gray-scale values correspond to martensite and bright gray values to bainitic ferrite. (Figure caption: Green regions indicate fcc austenite; dark areas indicate highly distorted martensite. The scale bar applies to all images; red, blue, and black are used again in Figure 5 to assign the three microstructural constituents.) In addition to retained austenite, the former occasionally shows an elbow behavior as an indicator of martensite transformation. Only bainitic ferrite shows an absence of such discontinuities. Furthermore, cooling regimes II and III resulted in a significant scatter of the load-displacement curves in each microstructural class, whereas the curves almost overlap in regime I. The previous findings were used to elaborate a classification scheme. Afterward, the classification was applied to steel 1 in the different regimes and to the three steels with different hardenability levels. For the quantification of the microstructures, three classes are defined as follows: 1) high-temperature, 2) intermediate-temperature, and 3) low-temperature morphologies, according to differences in morphology, carbon distribution, distortion, and hardness (Table 2). In addition, dilatometer experiments were conducted to track the evolving phase transformation after interrupting the cooling at different temperatures.
In the SEM micrographs after interrupting the cooling of regime II (Figure 6), these regions can be differentiated by the etching response. No or low etching response corresponds to low-temperature morphologies, whereas coarse regions with a pronounced topographic effect relate to the first class. In the case of a clearly visible lath shape, the region is marked as an intermediate morphology. The results of the phase fraction analysis for the different cooling conditions and the three steel compositions are shown in Figure 7. Each pie chart contains the retained austenite fraction obtained from XRD. The fraction of retained austenite lies between 5.6% in regime II of steel 3 and 14.7% in regime III of steel 1. A change in cooling regime from regime I to regime III causes an increase in high-temperature morphologies (in brown) at the cost of intermediate morphologies. Regime III displays a mixture of low- and high-temperature morphologies with a low fraction of intermediate morphologies. The effect of manganese was analyzed in regimes I and II. In both regimes, the fraction of low-temperature morphologies (in blue) increases. This increase in regime I reduces the amount of intermediate-temperature morphologies. (Figure 8 caption: a-c) The effect of the cooling regime in steel 1 shows primarily intermediate-temperature morphologies in regime I (0.3 K s⁻¹ from 400 °C), whereas in regime II (1 K s⁻¹ from 500 °C) and regime III (0.3 K s⁻¹ from 500 °C) high-temperature morphologies dominate the microstructural composition. d-f) Increasing the manganese content from 1.5 to 2 and 2.5 wt% causes higher fractions of low-temperature morphologies in regime II, but at the cost of high-temperature morphologies. The same tendency was observed in regime I.) The phase fractions are assigned to the dilatometer data (transformed fraction over temperature) for cooling in regimes I-III, as shown in Figure 8a-c. The curves were obtained from the length change data via the lever rule. Cooling in regime I generates a homogeneous microstructure originating from a narrow temperature range. At higher transformation temperatures, the microstructure is composed of different morphologies formed over a broader temperature range. A steep transition from high- to low-temperature morphologies can be seen in regime III. It has to be noted that the phase fractions add up to 100%, as indicated by the y-axis, but the diagram refers to the transformation start (0%) and finish (100%) temperatures in the shown temperature range. Retained austenite is not considered in the curves, as it is assumed stable at room temperature. Thus, the overall phase fractions are reduced by the amount of retained austenite. The effect of hardenability by increased manganese content is plotted in Figure 8d-f. The fractions of high- and intermediate-temperature morphologies (in green and blue) reduce and diminish at 2.5 wt% Mn in steel 3. For this steel, regimes I and II visually almost overlap, with a high fraction of low-temperature morphologies. In regime I, increasing the manganese content correlates with increased fractions of low-temperature morphologies at the cost of intermediate-temperature morphologies. Table 3 provides an overview of the transformed volume fractions and the corresponding transformation start temperatures. The effect of manganese in regime I in steels 1, 2, and 3 shows a clear decrease of the intermediate start temperatures, while the low-temperature morphologies begin to transform in the same range of ≈370 °C.
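The lever-rule conversion mentioned above is simple enough to sketch in code. The following is a minimal illustration (our own, not the authors' script; the window boundaries and function names are assumed for the example): two linear segments are fitted to the purely austenitic branch and to the fully transformed branch of the dilatometer curve, and the transformed fraction at each temperature is read off as the lever position between the two extrapolations.

import numpy as np

def lever_rule(T, dL, aust_window, prod_window):
    """T: temperature [deg C], dL: measured length change; the windows are
    (Tmin, Tmax) tuples marking the purely austenitic and the fully
    transformed parts of the cooling curve."""
    def linfit(window):
        m = (T >= window[0]) & (T <= window[1])
        return np.polyfit(T[m], dL[m], 1)          # slope and intercept
    aust = np.polyval(linfit(aust_window), T)      # austenite extrapolation
    prod = np.polyval(linfit(prod_window), T)      # product extrapolation
    f = (dL - aust) / (prod - aust)                # lever position: 0 = austenite
    return np.clip(f, 0.0, 1.0)

# Transformation start/finish can then be read off, e.g. at 1% and 99%:
# f = lever_rule(T, dL, aust_window=(550.0, 650.0), prod_window=(50.0, 150.0))
# Ts = T[np.argmax(f >= 0.01)]   # assumes T is sorted from high to low
# Tf = T[np.argmax(f >= 0.99)]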
The effect of manganese is more severe in regime II, decreasing the transformation temperatures significantly and shifting the transformation curve to lower temperatures. The results of the mechanical testing in the different cooling regimes of steel 1 are summarized in Table 4. Regime I stands out with a high strength of 953 MPa [yield stress (YS)] and 1274 MPa [ultimate tensile strength (UTS)]. This causes a high yield ratio of 0.75. In addition, regime I yields the highest impact energy among the tested conditions. Regimes II and III cause an early onset of plastic deformation, a lower tensile strength, and a deterioration of the impact energy. The ductility can be increased by regime III, with 11.6% uniform elongation (UEl) and 14.7% total elongation (TEl). The mechanical properties of steels 2 and 3 with different manganese contents are separated into regimes I and II. In the former, the increase of manganese causes higher strength but lower impact energy. Uniform elongation and total elongation are increased in steel 2, while a manganese content of 2.5 wt% causes a deterioration of the ductility. In regime II, a broader range in yield stress (YS: 785-1035 MPa) and tensile strength (UTS: 1157-1552 MPa) can be observed. The elongation values appear again higher in steel 2, whereas the impact energy increases slightly with increasing manganese content. Microstructure-Property Relationship The impact of the phase fractions on the mechanical properties at room temperature depends on the cooling parameters and the chemical composition. (Table 3 caption: Bainite fractions in steel 1 (regimes I-III), steel 2 (regimes I-II), and steel 3 (regimes I-II) with the transformation start temperatures of the high-, intermediate-, and low-temperature morphologies (last column). In regime I, two classes can be observed, while high-temperature morphologies occur in regime II. An increase of manganese causes lower transformation temperatures, with a more severe effect in regime II than in regime I.) Effect of Cooling Regime Slow cooling in the lower bainite phase field beginning from 400 °C (regime I) produces primarily one type of bainite, transformed in a narrow temperature range, with 4% low-temperature morphologies. The low amount of brittle morphologies surrounded by a lath-shaped morphology with high-angle misorientations causes a high yield point during tensile testing. The absence of granular morphologies in this regime provides a relatively high tensile strength. In this context, regime I showed the highest impact energy among the tested steels, and the EBSD results revealed a high degree of high-angle misorientations as effective barriers against crack propagation. [6] At higher temperatures, a simulated air-cooling route beginning from 500 °C (regime II) produces a broader temperature window and subsequently a mixture of morphologies dominated by coarse high-temperature morphologies. This type of bainite contains coarse bainitic ferrite with low-angle boundaries. The low strength of this particular morphology causes a deterioration of the yield stress. Together with a decrease of the tensile strength, a lower yield ratio of 0.68 was observed, as an indicator of an increasingly inhomogeneous microstructure. The mixture of morphologies seems disadvantageous for ductility. Compared to regime I, this regime retains slightly more austenite, but the increase of low-temperature morphologies with uniform carbon distribution provides less deformability during tensile testing.
The high degree of coarse morphologies with low-angle boundaries can be linked to low resistance against impact loads. A further decrease of the cooling rate in the upper bainite phase field at 500 °C (regime III) introduces more retained austenite (in agreement with the retained austenite measurements in Table 5) and high-temperature morphologies at the cost of the lath-type bainite from intermediate temperatures. With a high amount of granular bainitic ferrite, yielding already occurs at 709 MPa. This microstructure represents the highest degree of observed microstructural inhomogeneity, which manifests in a low yield ratio of 0.63. The broad scatter of nanohardness confirms this tendency. The lack of intermediate-temperature morphologies produces a hardness gradient from low-temperature to high-temperature constituents. Under loading, this heterogeneity appears critical for crack nucleation. Moreover, the observed low-angle misorientations are responsible for the deterioration of impact energy. On the other hand, regime III benefits from a high fraction of soft bainitic ferrite and retained austenite, which provide better ductility.

Effect of Manganese Content

The hardenability level can be controlled by the manganese content. In regime I, the increase of manganese from 1.5 to 2.5 wt% produces a microstructure dominated by low-temperature morphologies. The high nanohardness of this microstructural constituent correlates with a higher overall strength (UTS = 1486 MPa). In contrast, the microstructure is not suitable to provide high ductility and impact toughness. The former can be explained by a lack of softer phases, with reduced fractions of retained austenite and bainitic ferrite. Low-temperature morphologies such as fresh martensite can be regarded as brittle, and therefore the impact toughness deteriorates. With increasing manganese content, the difference in properties between regimes I and II becomes less significant. For instance, the difference in yield strength between regimes I and II is 168 MPa for steel 1 but only 33 MPa for steel 3. This makes steel 3 less sensitive to changes in the cooling parameters and thus more suitable for larger wire diameters, which are usually prone to axial temperature gradients and microstructural heterogeneities.

Comparison of Characterization Methods

Different methods were used to characterize the microstructure of the wire rod steels, and each contributes differently to the final phase quantification. For high-temperature morphologies, carbon maps by EPMA provide a clear picture of the carbon distribution on the sample surface, in addition to precise measurements of carbon concentration via line scans. In contrast, transformation at lower temperatures produces lath-type morphologies with lath widths on the submicron level, which is below the resolution of EPMA, so fine differences in carbon distribution cannot be resolved. On the other hand, EPMA line scans can be correlated with the misorientation analysis by EBSD. Although the absolute carbon concentration in lath-type regions is likely underestimated owing to the overlap of bainitic ferrite and retained austenite films, EPMA is a powerful tool that complements the averaged carbon concentration provided by XRD. Nanoindentation can probe the mechanical properties of a single microstructural morphology. TEM observations in silicon steels have shown that the elbows and pop-outs occurring during unloading can be linked to a phase transformation.
[25] The volume expansion accompanying the transformation causes an uplift of the indenter tip. In the case of a pop-out, this uplift takes place abruptly, promoted by higher maximum loads and increased (un)loading rates. Otherwise, an elbow-like gradual change in the unloading curve results from a phase transformation with a lower volume expansion, for example at lower maximum loads and loading rates. And in fact, the discontinuities were observed in the vicinity of martensite and retained austenite, i.e., where a TRIP effect would be expected, and not near the bainitic ferrite.

Table 5. Retained austenite (RA) fraction, lattice constant (a_γ) and carbon concentration (C_γ) in steel 1 (regimes I-III), steel 2 (regimes I-II), and steel 3 (regimes I-II), obtained by XRD measurements. Cooling in regime III yields a high fraction of retained austenite with lower stability (low C_γ), while an increase of manganese decreases both the fraction of retained austenite and C_γ. The carbon concentration was calculated according to the equation after Dyson and Holmes (a minimal sketch of this conversion is given after the conclusions below).

Discontinuities are seen in the indentation load-displacement curves both in the loading data (as pop-ins) and in the unloading data (as elbows). In the literature, pop-ins during the testing of similar steels have been linked to a phase change via a TRIP effect, [26-28] and in some cases this has also been confirmed by TEM. [27,28] However, such pop-ins are also commonly associated with the homogeneous nucleation of dislocations underneath the sharp indenter tip, [29-31] and therefore it cannot be unambiguously stated in this work to which mechanism these features belong. The additional "elbowing" in the unloading curve, in contrast, has been shown to be associated with a phase transformation. [29,32] This, in conjunction with the fact that these discontinuities were observed in the vicinity of martensite and retained austenite (where a TRIP effect would be expected) and not near the bainitic ferrite, strongly implies that they result from a TRIP effect occurring here as well. A more pronounced elbow was observed in the martensitic morphology of regime III compared to the retained austenite. It can therefore be assumed that retained austenite facilitates dislocation movement during the phase transformation, visible as a damped elbow, in contrast to an indentation-induced transformation adjacent to martensite. In summary, nanoindentation is a powerful tool to gain further insight into microstructural homogeneity, given a sufficient number of indents, but its full potential is reached by coupling the technique with EBSD and EPMA. The local information on carbon concentration, nanohardness, and misorientation provides complementary views of the microstructural constituents.

Conclusion

The carbide-free bainite concept was tested on three steels with different manganese contents and three different cooling regimes, according to the process window of a wire rod cooling conveyor. Different characterization methods were combined to reveal the effect of the varied cooling parameters and hardenability level on the microstructure. In summary, the findings are as follows.

1) The change of cooling regime causes significant differences in the bainite morphology. The comprehensive use of EBSD, EPMA, and nanohardness is suitable to monitor and categorize these differences. Merging the results of the adjusted line-intercept method, XRD, and dilatometry yields a quantification of the microstructural features.
2) A classification based on EPMA or nanohardness alone is not sufficient. Only a combination of these techniques with EBSD provides enough information for classifying bainite in wire rod steel. The results were used to classify the microstructure into three classes: low-, intermediate-, and high-temperature morphologies.

3) The cooling rate in the bainite phase field has an impact on the homogeneity of the final microstructure. From 400 °C, a relatively low cooling rate of 0.3 K s⁻¹ produces a homogeneous microstructure primarily composed of intermediate-temperature morphologies. In contrast, simulated air cooling at 1 K s⁻¹ from 500 °C produces a mixture of all three microstructural classes. Cooling from 500 °C at 0.3 K s⁻¹ develops a granular microstructure with primarily low- and high-temperature morphologies and a steep hardness gradient.

4) Nanohardness measurements are suitable to reveal the microscopic properties of microstructural features. Retained austenite shows linkages to elbows in the load-displacement curves during unloading, an indicator of martensite transformation beneath the indenter tip, whereas pop-ins during loading cannot be unambiguously interpreted.

5) Regarding the macroscopic mechanical properties, high-temperature morphologies contain relatively high amounts of ferrite and retained austenite, which have a beneficial effect on ductility. A steep hardness gradient in a mixture of low- and high-temperature morphologies facilitates crack propagation and thus yields low impact energies.

6) An increase of the manganese content from 1.5 to 2.5 wt% affects the resulting microstructure and properties. At 2.5 wt% Mn, changes in the cooling regime have only a minor effect; manganese thus lowers the sensitivity to the cooling regime. This makes higher manganese contents attractive for larger wire diameters, which are prone to axial temperature gradients during cooling.
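As promised at Table 5, here is a minimal sketch of the Dyson-Holmes-type conversion from the XRD lattice parameter of retained austenite to its carbon concentration. The coefficients below are commonly quoted literature values, stated here as assumptions to be checked against the original reference; terms for alloying elements other than Mn are omitted, and the example input is invented.

```python
def austenite_carbon(a_gamma, w_mn=0.0):
    """Estimate the carbon content w_C [wt%] of retained austenite from its
    XRD lattice parameter a_gamma [Angstrom], by inverting a linear
    Dyson-Holmes-type relation a_gamma = a0 + kC*wC + kMn*wMn.

    The coefficients below are commonly quoted values and should be checked
    against the original reference; terms for other alloying elements are
    omitted for brevity.
    """
    a0, k_c, k_mn = 3.578, 0.033, 0.00095  # assumed [Angstrom, Angstrom per wt%]
    return (a_gamma - a0 - k_mn * w_mn) / k_c

# Hypothetical example: lattice parameter 3.612 Angstrom at 2 wt% Mn
print(round(austenite_carbon(3.612, w_mn=2.0), 2))  # -> 0.97 wt% C
```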
Human eccrine sweat gland epithelial cultures express ductal characteristics.

1. Isolated human eccrine sweat glands were cultured in vitro. Cells were harvested and plated onto permeable supports to form confluent cell sheets of area 0.2 cm². These were used to study the electrogenic transepithelial transport of ions by measurement of short-circuit current (SCC). Epithelial sheets had a basal SCC of 5.89 ± 0.62 μA cm⁻² (n = 33) and a transepithelial resistance of 74.1 ± 5.6 Ω cm² (n = 33). The transepithelial potential difference varied between −0.2 and −1.8 mV, with a mean value of −0.71 ± 0.09 mV (n = 33). 2. The basal current was abolished by addition of 10 μM amiloride to the apical bathing solution. The concentration of amiloride which inhibited basal SCC by 50% (EC50) was 0.4 μM. Cultures prepared from the secretory coil of sweat glands, rather than from whole glands, were similarly sensitive to amiloride (EC50 = 0.8 μM). 3. Lysylbradykinin (LBK), carbachol, isoprenaline, prostaglandin E2 (PGE2) and A23187 all increased SCC in cultures from whole glands. LBK responses were obtained with basolateral and not with apical application. Furthermore, LBK actions were not substantially altered by cyclo-oxygenase inhibition but showed marked desensitization upon repeated application. Sheet cultures prepared from sweat gland coils also showed SCC responses to both carbachol and LBK. Forskolin, an activator of adenylate cyclase, did not alter SCC in either type of preparation. 4. Replacement of chloride, or of chloride and bicarbonate, in the bathing solution did not attenuate the responses to LBK or carbachol in whole-gland sheet cultures. Furthermore, responses were unaffected by piretanide or acetazolamide. These results were taken to indicate that anion secretion was not the basis for the SCC responses. 5. Responses to LBK and carbachol were significantly and reversibly reduced by amiloride (10 μM). No responses to LBK or carbachol were seen when N-methyl-D-glucamine (NMDG) was used to replace sodium, whereas reintroduction of sodium ions restored responsiveness to these agents. 6. The SCC responses to the muscarinic agonist carbachol and to LBK appear to be due to stimulation of amiloride-sensitive, electrogenic sodium absorption in whole-gland sheet cultures. Further, it would appear that in culture the pluripotential capacity of the cells is revealed, since both whole-gland and secretory coil cultures exhibit some properties usually associated in vivo with duct cells. Many mammalian epithelia show electrogenic chloride secretion both in response to carbachol and LBK and in response to activation of adenylate cyclase with forskolin.

INTRODUCTION

The human eccrine sweat gland secretes a plasma-like precursor fluid, essentially an ultrafiltrate, from the secretory coil and then actively resorbs sodium and chloride as the primary secretion passes along the duct. In this way a hypotonic surface sweat is produced (Schultz, 1969). Several methods for the in vitro culture of sweat gland ducts have recently been reported (Pedersen, 1984; Collie, Buchwald, Harper & Riordan, 1985; Lee, Carpenter, Coaker & Kealey, 1986). In vitro culture offers the advantages of access to the apical surface and experimental manipulation in ways reserved for sheet-like epithelia. Our aim here was to subculture cells from explants of sweat glands onto permeable supports for continuous short-circuit current recording.
This technique would allow protocols for examining cellular physiological mechanisms of ion transport to be developed. Furthermore, the relative simplicity of the technique allows precise quantitative data to be collected, in contrast to the more difficult procedures necessary with perfused glands. We show that lysylbradykinin, a decapeptide produced by kallikrein from kininogen, stimulates electrogenic sodium absorption. We are unaware of another mammalian epithelium in which electrogenic sodium absorption is stimulated by a kinin; indeed, most mammalian epithelia secrete chloride in response to this peptide (Cuthbert & Margolius, 1982; Manning, Snyder, Kachur, Miller & Field, 1982). We also show that reabsorptive duct-like properties are expressed irrespective of whether the origin of the explant is the whole gland or the secretory coil. The culture of cells from whole glands therefore represents the simplest system available to study reabsorption in human sweat gland epithelia. Importantly, the new information given here forms a background against which the behaviour of cystic fibrosis epithelia can be compared, since sweat glands display the primary defect of the disease (Quinton, 1983; Sato & Sato, 1984; Welsh, McCann & Dearborn, 1987).

METHODS

To pick out the glands from the resulting suspension, ten drops of 0.05% Neutral Red were added to each dish. Within 2 min each gland became highlighted against the background of fat and collagen by the appearance of a diffuse red colour in the tubules of the secretory coil and a thin dark red line in the reabsorptive duct (Quinton & Tormey, 1976). The glands were visible at 20 times magnification and could be picked out from other debris with fine forceps and transferred to buffer without Neutral Red. Fifty to sixty glands were collected into a sterile 25 cm² Petri dish containing buffer.

Primary culture of whole glands

Glands were transferred to William's E medium containing collagenase type II (2 mg ml⁻¹) and fetal bovine serum (5%) and incubated with 5% CO₂ in air at 37 °C for 30 min. Afterwards glands were transferred to William's E medium containing only bovine serum and incubated for 60 min. Tissue culture flasks (25 cm²) with 1 ml of William's E medium containing L-glutamine (1 mM), penicillin-streptomycin (100 U ml⁻¹, 100 μg ml⁻¹), bovine insulin (10 μg ml⁻¹), hydrocortisone (10 ng ml⁻¹), transferrin (10 μg ml⁻¹), epidermal growth factor (EGF, 20 ng ml⁻¹) and trace element mix (0.5%) were equilibrated with 5% CO₂ in air at 37 °C. Glands were explanted into these flasks using A5 insect pins glued to the ends of glass handles. It was essential to keep the glands moist during the first 24 h, during which time attachment occurred; if more than 1 ml of medium was used, the tissues floated off the plastic surface. After 24 h a further 3-4 ml of medium was added to each flask and incubation continued, with the medium changed twice weekly. The methods used for isolation and culture of the glands were modified from those described by Lee et al. (1986). Epithelial cells began to grow out from the explants after 2 days. After 2-3 weeks the outgrowth reached 1-2 cm in diameter. Cells were removed from the flasks by dissociation with versene and trypsin (Cuthbert, Egleme, Greenwood, Hickman, Kirkland & MacVinish, 1987) and placed on Millipore filters (HAWP, 0.45 μm). The filters were coated either with collagen (0.25% in 0.2% acetic acid) or with basement membrane Matrigel (diluted 1:1 with William's E medium) and allowed to dry.
A silicone washer with a 0.2 cm² hole was attached to the centre of each filter using Silastic 734 RTV adhesive, creating a small well into which cells could be seeded. These units were sterilized by UV irradiation. Aliquots of 100 μl of cell suspension containing 3 × 10⁵ cells at a viability of 95% (Trypan Blue exclusion test) were pipetted into each well. Fetal calf serum (4%) was added to the medium at this stage to promote cell attachment to the filters. Serum-containing medium was withdrawn after 2 days, especially to reduce the risk of fibroblast outgrowth. Cultures were ready for use after 5-8 days. In some instances primary cultures were grown from only the secretory coil part of the gland. To do this, glands were microdissected as described in Lee et al. (1986) and cultured as described above, with the omission of the 2 day exposure to fetal calf serum.

Short-circuit current recording in epithelial monolayers

The Millipore filter-washer-cell monolayer complexes were clamped between the two halves of an Ussing chamber. The epithelial area (0.2 cm²) was held in the centre of a window (area 0.6 cm²) of the double chamber. Edge damage was avoided as the chamber halves abutted the silicone washer. Monolayers were short-circuited using a W-P dual-voltage clamp (model DVC 100) with the capacity to compensate for the fluid resistance between the tips of the potential electrodes. Short-circuit current (SCC) records were displayed on a pen recorder. Measurements of transepithelial potential difference and calculation of transepithelial resistance were made according to previous descriptions (Cuthbert, George & MacVinish, 1985). Each side of the tissue was bathed in 20 ml of Krebs-Henseleit (KH) solution maintained at 37 °C by a heat exchanger. Both sides of the chamber were continuously bubbled with 95% O₂-5% CO₂, which maintained the pH at 7.4.

Histology

Monolayers were stained with Haematoxylin-Eosin (Baker & Silverton, 1976). Electron microscopy was carried out with a Philips 300 transmission electron microscope using standard fixation and sectioning techniques.

Solutions

Chloride- and bicarbonate-free KH solution contained these replacements and, in addition, HEPES-Tris (10 mM) in place of NaHCO₃; this solution was gassed with 100% O₂. For sodium-free KH solution the replacement was either choline chloride (117 mM) or N-methyl-D-glucamine (NMDG, 117 mM) neutralized to pH 7.4 with HCl, with HEPES-Tris (10 mM) replacing NaHCO₃. Substituted KH solutions were checked for tonicity using a freezing-point osmometer, and tonicity was corrected by adding sucrose where necessary. When preparations were not to be mounted in KH solution, the compensation for fluid resistance was made in the modified solution. Substituted KH solutions were always used to bathe both sides of the preparations.

Patterns of growth

Epithelial cells migrated from whole-gland explants after about 2 days in culture. They continued to divide for up to 3 weeks, forming a confluent circular monolayer around the explant. Some cell division continued after this time, but multilayered structures were then formed. At around 25 days in culture cells became senescent, vacuoles developed and division ceased. Figure 1A and B shows the appearance of cultures at 4 and 18 days, respectively. Only 40% of the explanted glands grew in culture; the reason for the failure of the others is unknown. Previous studies with cytokeratin antibodies have shown that primary cells grown in this system are of epithelial lineage (Lee et al. 1986).
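Returning to the short-circuit recording described above, the standard Ussing-chamber arithmetic can be sketched as follows. The helper is hypothetical (not the routine actually used with the DVC 100), and the numbers in the usage line are invented to be of the magnitude reported in this paper.

```python
def ussing_estimates(pd_mv, dv_mv, di_ua, area_cm2=0.2):
    """Standard Ussing-chamber arithmetic (a hypothetical helper, not the
    routine used with the DVC 100 clamp).

    pd_mv    : open-circuit transepithelial potential difference [mV]
    dv_mv    : amplitude of a small imposed voltage pulse [mV]
    di_ua    : resulting current deflection [uA, whole tissue]
    area_cm2 : exposed epithelial area [cm^2]
    """
    # Area-normalized transepithelial resistance, R = dV/dI  [ohm cm^2]
    r = (dv_mv * 1e-3) / (di_ua * 1e-6 / area_cm2)
    # Ohm's-law estimate of the equivalent SCC density [uA cm^-2]; this need
    # not coincide exactly with the clamp-measured basal SCC of a tissue
    scc = (pd_mv * 1e-3) / r * 1e6
    return r, scc

# Invented numbers of the magnitude reported in this paper:
r, scc = ussing_estimates(pd_mv=0.71, dv_mv=1.0, di_ua=2.7)
print(f"R = {r:.0f} ohm cm2, SCC estimate = {scc:.1f} uA cm-2")
```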
Sheet culture formation on permeable supports

Epithelial sheet cultures were grown on permeable supports as described in the Methods. It was essential to prepare cultures from explants grown for 18 days or less, that is, while the cells were actively dividing. Cells obtained from older cultures gave structures with zero transepithelial potential and low transepithelial resistance; these were unsuitable for electrophysiological investigation. Satisfactory cultures required seeding with cells at a high density (1-3 × 10⁵ cells per well); sparsely seeded wells failed to attain confluence. It was not possible to visualize the cells growing on Millipore filters in the living state; consequently the cultures were examined after fixation and staining. In other instances experimental sheet cultures were examined by electron microscopy (Fig. 1C, D and E). This revealed that the cultures were multilayered. Not all the layers were composed of living cells, senescent or dead layers forming the outermost or innermost structures. Apical microvilli were apparent in the outermost living cell layer, but no evidence of tight junctions was seen. Desmosomes were a prominent feature of the cultures.

Table 1. Comparison of sheet cultures on Matrigel- and collagen-coated filters. Measurements were carried out on three different batches of cultures, each batch containing both collagen- and Matrigel-coated Millipore filters. The LBK values are the peak increases in SCC in response to basolateral addition of 0.1 μM LBK. R, resistance; PD, potential difference. The number of separate cultures examined is given in parentheses; *P < 0.01, unpaired Student's t test.

Substrate   R (Ω cm²)         PD (mV)               ΔSCC to LBK (μA cm⁻²)
Matrigel    …                 −1.00 ± 0.077 (15)    6.00 ± 0.77 (14)
Collagen    34.1 ± 3.6 (9)*   −0.37 ± 0.11 (9)*     2.62 ± 0.36 (8)*

In early experiments we investigated whether collagen or Matrigel was the better substrate for promoting sheet formation. The same epithelial cell suspensions were used to form sheets on the two substrates. Measurements were made of potential difference, resistance and the SCC responses to LBK, which preliminary experiments had shown to cause a rapid, but often transient, increase in SCC. The results, given in Table 1, show that Matrigel gives cultures in which all three parameters are increased. Matrigel was therefore used as the substrate in all the experiments given in the remainder of this report.

Responses to hormones and autacoids

From preliminary results it was clear that the cultures prepared from human sweat glands did not have a high resistance and, in addition, they proved to be extremely fragile; changing the bathing solution sometimes dislodged the monolayer from the supporting membrane. With these restrictions, measurements of ion fluxes with isotopes to determine the nature of the transported species were not possible. Reliance was placed on ion substitution and on designing experiments in ways that avoided, or at least minimized, the number of solution changes. As the basal SCC in these preparations was relatively small, a number of agents with either known or postulated actions on sweat formation were used to probe the effects of ion substitution upon SCC responses. Several substances were found to cause increases in SCC, and typical responses are illustrated in Figs 2 and 3.

Fig. 2. Examples of SCC responses from a culture prepared from whole gland, 7 days after seeding (resistance, R = 80 Ω cm²; potential difference, PD = −1.2 mV). Compounds were added basolaterally, except forskolin, which was added to both sides. Concentrations were: LBK, 0.1 μM; carbachol (CCh), 10 μM; isoprenaline (Iso), 10 μM; and forskolin.
LBK, carbachol, isoprenaline and A23187 all caused SCC increases. In some 60% of preparations the rapid increase in current in response to LBK was followed by oscillations (Fig. 2), which declined to baseline within 10-15 min. In the other preparations the response was a smooth increase and decrease in current (Fig. 3). No responses to LBK were seen when the peptide was added to the apical face of the monolayers. In twenty-four separate experiments the response to LBK (0.1 μM) was 6.65 ± 0.58 μA cm⁻². In a given preparation which showed oscillatory responses to LBK, other agents also caused oscillatory-type responses after the initial response to LBK had subsided. Similarly, preparations showing a smooth response to LBK also responded smoothly to other agents (Fig. 3). In twenty separate preparations carbachol (10 μM) caused responses of 7.54 ± 1.03 μA cm⁻², of which fourteen (60%) showed oscillations during the plateau phase. Only minute responses were obtained when carbachol was added to the apical bathing solution; it was not possible to discover whether these responses were due to the presence of cholinoceptors on the apical face or to penetration of the agent to the basolateral side. Low concentrations of atropine (10 nM) caused an immediate return of carbachol-stimulated SCC to the basal value (Fig. 3). In three preparations pre-incubated with atropine (10 nM), responses were obtained to LBK but not to carbachol. In twenty-five preparations in the absence of atropine, responses were always obtained to both carbachol and LBK. Isoprenaline caused an increase of 5.40 ± 1.03 μA cm⁻² in nine separate experiments (Fig. 2). Oscillations were seen in four of these preparations. However, in some batches of preparations formed from a single-cell suspension no responses were seen to isoprenaline, while those to carbachol or LBK were normal. We must conclude that the response to isoprenaline is variable in a way which makes investigation problematical. Responses to apical addition of isoprenaline were small, as with carbachol. A23187 (1 μM) increased SCC more slowly than the other agents (Fig. 3). The peak increase in SCC was 5.70 ± 0.71 μA cm⁻² in eight experiments, and in half of these preparations oscillations were seen in the plateau phase. In four preparations forskolin (10 μM), an activator of adenylate cyclase, was applied to both sides but was without effect (Fig. 2); neither did this agent affect the response to LBK or carbachol. In six preparations application of PGE2 (10 μM) to the basolateral face caused SCC to increase by 5.91 ± 0.5 μA cm⁻². Using lower concentrations (1 μM) it was shown that these responses were maximal. To examine whether eicosanoids were mediating the effect of LBK, responses to LBK were measured in the presence and absence of the prostaglandin synthesis inhibitor piroxicam (10 μM). In three pairs of tissues piroxicam had no effect on basal SCC per se and also failed to influence the subsequent LBK response (Fig. 4). This indicates that endogenous prostaglandin production is unlikely to be of more than minimal importance either to the maintenance of SCC or in the genesis of the LBK response. On different occasions the effects of piretanide (a Na⁺-K⁺-Cl⁻ co-transport inhibitor) and acetazolamide (a carbonic anhydrase inhibitor) on either basal or stimulated SCC were tested (e.g. Fig. 3). No effects of these agents were found, except a minor stimulation of SCC following addition of piretanide.
Ionic basis of the short-circuit current responses

The pattern of responses of sweat gland cell sheet cultures resembles anion secretory mechanisms in other epithelial systems, such as primary cultures of canine tracheal epithelium (Welsh, 1985), MDCK epithelial monolayers (Brown & Simmons, 1981) and rat colon (Cuthbert & Margolius, 1982). Unexpected characteristics found here were the failure of sweat gland monolayers to respond to forskolin and the lack of effect of blockers of Na⁺-K⁺-Cl⁻ co-transport and of carbonic anhydrase (Fig. 3). Additionally, the basal current was sensitive to amiloride, which suggested that electrogenic sodium absorption was, at least in part, responsible for the basal SCC. To discover the ionic basis of the SCC responses, experiments were carried out in which individual ions were substituted and responses compared with those in normal KH solution. Throughout we used LBK (0.1 μM) and carbachol (10 μM) as stimulating agents.

Fig. 5. (… and −1.8 mV.) Drug concentrations were: LBK, 0.1 μM; carbachol (CCh), 10 μM; and amiloride (Amil), 10 μM. Amiloride was added to the apical side only, while the other two agents were added to the basolateral bath. Amiloride was added either before (upper) or after (lower) the agonists. Horizontal lines indicate zero SCC.

Figure 5 shows experiments in which gluconate was substituted for chloride. This manoeuvre did not prevent a response to either agent, and Table 2 shows that the responses in this solution were not different from those obtained in normal KH. In some epithelia anion secretory mechanisms can employ bicarbonate when chloride is absent (Grasl & Turnheim, 1984; Cuthbert & Hickman, 1985). Although acetazolamide (0.45 mM) had no effect during the plateau response to carbachol in chloride-free KH solution, experiments were also made in which both chloride and bicarbonate were substituted in the bathing solution. Table 2 and Fig. 6 illustrate that responses to LBK and carbachol were not reduced by the virtually complete removal of permeant anions. In a final series of substitution experiments sodium ions were replaced either with choline or with NMDG. The likely importance of sodium was already apparent, since in chloride- and bicarbonate-free KH solution amiloride pretreatment significantly inhibited subsequent responses to LBK and carbachol (P < 0.01) (Table 3). A typical example is shown in Fig. 6.

Fig. 7. Resistance and potential difference measurements were: 125 Ω cm² and −1.5 mV (upper record), and 100 Ω cm² and −1.3 mV (lower record). Both records illustrate the effect of LBK (0.1 μM) and carbachol (CCh, 10 μM) before and after amiloride, with washing (W) in between. In one experiment amiloride was added first, before LBK and CCh, while in the other the order was reversed. Only amiloride was added to the apical bathing solution; all other agents were added to the basolateral side. Amiloride (Amil) and atropine (Atr) were used at 10 μM and 10 nM, respectively. Horizontal lines indicate zero SCC.

Further evidence that LBK and carbachol caused stimulation of electrogenic sodium absorption could be obtained in single preparations which were robust enough to allow washing. Figure 7 shows two examples. Amiloride almost abolished the responses to LBK and carbachol, the response to the latter being restored after the blocker was removed. Responses to LBK were not restorable, as exposure to the peptide causes long-duration desensitization, as found for other epithelia (Cuthbert et al. 1987). Choline was tried first as a sodium replacement, but preparations proved unstable in this medium.
Additionally, choline caused problems in relation to assessing the actions of carbachol because of its weak muscarinic activity. NMDG proved to be a satisfactory sodium substitute, and the results of five experiments with cultures from a single batch of cells are shown in Fig. 8. All had SCCs close to zero, and LBK and carbachol had no effect in the absence of sodium. Addition of isotonic saline to give a final sodium concentration of 25 mM caused an immediate increase in SCC which was sensitive to amiloride (A). If sodium was added first, the epithelium responded to both agents, even though the sodium concentration was only 25 mM (B). A further variation was to add sodium after one and before the other of the stimulating agents (C and D); again, responses were obtained only in the presence of sodium. Finally, prior addition of amiloride prevented the increase of SCC following the addition of sodium (E). All five experiments were conducted on day 6 after seeding with cells from the same batch, and they may therefore be presumed to represent tissues with very similar properties. Of the twenty additions made in the five experiments illustrated in Fig. 8, the responses were, without exception, consistent with the hypothesis that LBK and carbachol promote electrogenic sodium absorption in this tissue. In several other experiments the effect of basolateral addition of ouabain (1 μM) on SCC was observed. The SCC declined slowly to a low value over 15-20 min, at which time amiloride caused no substantial further change in current. This finding is consistent with a model for active sodium transport, as discussed later.

Properties of secretory coil sheets in culture

Microdissected secretory coils were used to produce epithelial cultures using the same protocols as for whole glands. These preparations too had a basal SCC sensitive to amiloride, indicative of electrogenic sodium absorption. Concentration-response relationships for amiloride were established for preparations produced from the secretory coil and from whole glands. Similar values were obtained for the concentrations producing 50% of the maximal inhibition (EC50), namely 0.8 μM for secretory coil preparations and 0.4 μM for whole-gland-derived monolayers (Fig. 9). The secretory coil preparations also responded to LBK and carbachol.

DISCUSSION

Three other groups have recently developed methods for growing sweat gland epithelial cells in culture, using different methods for the assessment of transport function. Collie et al. (1985) and Lee et al. (1986) used hemicyst formation as an indication of transport function, while Pedersen & Larsen (1986) cultured cells on dialysis membranes in order to measure transepithelial movement. More recently, Pedersen, Larsen, Hainau & Brandt (1987) have reported SCC measurements from cultured sweat gland ducts. As far as we are aware, no previous studies of continuously short-circuited epithelia have been made with sheet cultures of whole glands or coils. The choice of tissue area (0.2 cm²) was a compromise between obtaining reasonable currents and economizing with limited amounts of cell suspension. While changes in SCC in response to various agonists can be measured with great accuracy (10 nA), the measurement of basal SCC is more problematical. The DVC 100 voltage clamp indicates transepithelial potential difference to within 0.1 mV, representing the maximum possible error in this parameter.
As the mean value of the resistance was 75 Ω cm², the basal SCC may be in error by about 0.25 μA (0.1 mV across 75 Ω cm² corresponds to ≈1.3 μA cm⁻², i.e. ≈0.27 μA over the 0.2 cm² window). Throughout, we have indicated on the records the position of zero SCC for reference purposes. However, amiloride did not always reduce SCC to zero, there often being residual currents of the magnitude given above. The values for transepithelial potential and resistance given here are both substantially smaller than those found by Pedersen, Brant & Hainau (1985), who obtained values of −20 to −30 mV. These are greater than the values of −7 to −10 mV found for perfused isolated absorptive ducts (Quinton, 1983; Bijman & Quinton, 1987). A possible reason for the difference is that Pedersen et al. (1985) cultured only the absorptive duct. With regard to the resistance values, we were not able to record any change in transepithelial conductance following amiloride, suggesting that our measurements of resistance were dominated by the intercellular pathways, part of which must relate to the mating between the edge of the culture and the silicone washer; this has not been measured. From a structural study of the tight junctions in human sweat glands (Briggman, Bank, Bigelow, Graves & Spicer, 1981), together with the relation between strand number and epithelial resistance (Claude, 1978), a ductal resistance of 300 Ω cm² was predicted, while Pedersen et al. (1985) measured values of 500-1000 Ω cm². From cable-analysis measurements with perfused human sweat ducts a value of only 10 Ω cm² was obtained, which increased after the luminal application of amiloride (Bijman & Fromter, 1986). Additionally, we were not able to observe any tight junctions, although the apical edges of the cells were closely apposed, with the intervening space filled with a lightly staining material (Fig. 1E). Finally, a major problem with cultured sheets is that small non-confluent areas, which cannot be viewed microscopically on the supports, short-circuit the transepithelial potential difference as well as increase conductance. Nevertheless, in spite of the uncertainties about resistance, sheet cultures provide the simplest system yet available for quantitative analysis, having many advantages over perfused segments. The multilayered structure of the cultures is reminiscent of the sodium-absorbing epithelia of amphibian skin, as well as of the human epidermis. In the former, function is the responsibility of the first reactive layer (Ussing & Windhager, 1964), so that the syncytial structure behaves as a functional monolayer. It is likely, from a variety of evidence (see Sato, 1977), that sweat glands are surrounded by nerve endings releasing both acetylcholine and catecholamines; further, it is well known that acetylcholine and catecholamines increase sweating. From the use of simple in vivo sweat collection and analysis techniques a view of sweat gland function has emerged: a primary fluid is generated in the secretory coil, the composition of which is then modified as the sweat passes along the absorptive duct. Theoretical modelling of these processes has been attempted (Schwartz & Thaysen, 1955; Slegers, 1967), particularly to relate sweat rate to the sodium concentration of the sweat. Important parameters are the Michaelis constant (Km) for sodium of the absorptive process and the maximal absorptive capacity.
Whether sodium absorption in the duct is limited by apical entry of sodium ions or by the availability of basolateral pumps is unknown; however, our data show that the absorptive process in cultured sweat gland epithelia can be stimulated by humoral agents. This does not imply that such control necessarily occurs in vivo. With regard to adrenergic innervation of sweat glands, there is evidence that catecholamines affect secretion (Sato, 1977) and that they can also modify the chloride permeability of sweat duct monolayers (Pedersen & Larsen, 1986). In contrast, Quinton (1987), using microperfused sweat ducts, suggested that isoprenaline may stimulate sodium absorption. In our system responses to the β-adrenoceptor agonist isoprenaline have been variable, including no response when other agonists were effective, as also found by Quinton (1987). It has not been possible, therefore, to investigate these responses systematically. Effects on SCC would not necessarily be seen if the effect of catecholamines were only on counter-ion permeability. However, as forskolin too was ineffective, a possible explanation is that the responses to isoprenaline represent modest α-adrenoceptor stimulation working through second-messenger systems similar to those used by carbachol. For the responses we have been able to investigate systematically, that is, those to carbachol and LBK, there can be little doubt that they represent stimulation of an electrogenic sodium-absorbing process. Briefly, the evidence is: failure to be modified by removal of chloride and bicarbonate; inhibition by sodium substitution; and inhibition by amiloride. Taken together with the sensitivity of the basal SCC to amiloride and the near-zero SCCs obtained in sodium-free medium, we have no evidence to suggest that the SCC in these monolayers is due to anything other than inwardly directed sodium absorption. The monolayers developed under our conditions therefore exemplify a classical sodium-absorbing epithelium (Ussing & Zerahn, 1951), using energy derived from a sodium-potassium ATPase located on the basolateral membranes. The location of ouabain binding sites on the basolateral membranes of intact sweat ducts (Quinton & Tormey, 1976) and the effect of ouabain on SCC reported here add further support to the classical model, as indeed suggested by others (Bijman & Quinton, 1984). It is of particular interest that our tissues were derived from whole glands. Their properties have been compared, in a limited way, with those derived only from the secretory coil. Both types of preparation show some of the characteristics of ductal epithelium, indicating the pluripotential character of secretory cells in culture. Cultures made only from duct are similar, showing a cholinoceptor-mediated SCC which was amiloride sensitive (Pedersen et al. 1987). The effects of carbachol were shown to be due to an action at muscarinic receptors, given the high potency of atropine as a blocking agent. In other epithelia, activation of muscarinic receptors leads to raised intracellular calcium concentrations ([Ca2+]i) through activation of a G protein and hydrolysis of phosphatidylinositol (Streb, Irvine, Berridge & Schultz, 1983). Here we have shown that the calcium ionophore A23187 can also increase amiloride-sensitive SCC. Sato & Sato (1981), using the isolated monkey sweat gland, found that secretion caused by methacholine and isoprenaline was calcium dependent.
However, the results presented here differ in an important way from the examples given above, in that carbachol stimulates sodium absorption and not a secretory process. We know of no other such examples in mammalian epithelia, although muscarinic agonists increase sodium absorption in frog skin epithelium (Cuthbert & Wilson, 1981), probably by adenylate cyclase activation and eicosanoid formation. Although from a functional viewpoint it would appear appropriate to increase ductal reabsorption at the same time as secretory activity, as a means of reducing salt loss while promoting efficient sweating, there is as yet no information to suggest that the absorptive process is under cholinergic control in vivo. As far as it is possible to compare the transepithelial transport of ions in cultured sweat glands, whether derived from whole gland, secretory coil or duct, they appear to behave very similarly, showing an amiloride-sensitive SCC which can be stimulated by cholinoceptor agonists. This means it is not possible to distinguish cultures that are characteristically ductal from those that merely show ductal characteristics. For example, Bell, Jones & Quinton (1987) showed that some cells in coil cultures are sensitive both to amiloride and to methacholine, while duct cultures are sensitive only to amiloride. In contrast, Pedersen et al. (1987) showed that duct cultures are sensitive to cholinoceptor agents. At this stage, therefore, it is not possible to know whether in culture ductal cells take on the characteristics of secretory coil cells, or vice versa, or, indeed, whether epithelial cells of whatever type revert to a more primitive form in culture. Nevertheless, cultured cells have been important for the investigation of cellular transport mechanisms in human sweat glands and have led to insightful observations in relation to cystic fibrosis. This is the first report of a definitive effect of LBK on the sweating process. Transient effects of bradykinin on sweat formation in the horse, donkey and cow were attributed to an effect upon the contraction of myoepithelial cells (Johnson, 1975), an effect which cannot be relevant in this situation. Gordon & Schwarz (1971) found that the high sodium concentration in the first drops of sweat formed at low secretion rates, characteristic of stimulation by many sudorific agents, could be corrected by bradykinin and cyclic AMP. Arginine esterases which can generate kinin have been found in eccrine sweat, one of which is a typical kallikrein (Fraki, Jensen & Hopsu-Havu, 1970), and a deficiency of arginine esterases in cystic fibrosis has been reported (Rao, Posner & Nadler, 1972). Typically, kinins stimulate electrogenic anion secretion, of either chloride (Cuthbert & Margolius, 1982; Cuthbert et al. 1987) or bicarbonate (Baird & Margolius, 1987). These effects may be dependent on (Cuthbert & Margolius, 1982) or independent of (Cuthbert et al. 1987) eicosanoid formation. In sweat gland cultures the effect of LBK was not modified by cyclo-oxygenase inhibition. As with carbachol, the effect of LBK was to increase electrogenic sodium absorption. At this stage there is no information about the second messengers involved, although in other cultured epithelial cells bradykinin induces increased phosphatidylinositol turnover, again suggesting that raised [Ca2+]i is crucial for the effect (Shayman & Morrison, 1985).
In cultured MDCK cells an increase in intracellular calcium has been directly demonstrated in response to bradykinin (Pidikiti, Gamero, Gamero & Hassid, 1985), which leads to the opening of calcium-sensitive K⁺ channels in the basolateral membranes (Paulmichl, Friedrich & Lang, 1987). Carbachol and LBK, the agents studied most intensively here, have their receptors only on the basolateral faces of the epithelial cells. This seems entirely appropriate for muscarinic receptors sensing neuronally released acetylcholine. It is unknown where kinins might be generated in relation to the sweat glands, although it might be expected that kininogen is present in tissue fluids. In cultures from both whole glands and secretory coil the potency of amiloride is similar and comparable to its activity in other epithelial systems (Cuthbert & Spayne, 1983; Cuthbert & Fanelli, 1987). It is now necessary to repeat the experiments reported here using cultures prepared from cystic fibrosis tissue to examine for important differences. The simple cellular system we describe may become a useful tool for assessing agents in the search for a therapeutic strategy for cystic fibrosis. This work was supported by the Cystic Fibrosis Research Fund. D. J. B. is in receipt of a British Council Scholarship. We are grateful to Mr J. Wallwork and his colleagues at Papworth Hospital for providing tissues.
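For readers wishing to reproduce the amiloride concentration-response analysis behind the EC50 values quoted above (Fig. 9), a simple Hill fit suffices. The data points below are invented for illustration; only the fitting approach is shown, and availability of scipy's curve_fit is assumed.

```python
import numpy as np
from scipy.optimize import curve_fit

def hill_inhibition(conc, ec50, n):
    """Fractional inhibition of basal SCC as a function of blocker concentration."""
    return conc**n / (ec50**n + conc**n)

# Invented concentration-inhibition data (amiloride [M] vs fractional inhibition)
conc = np.array([1e-8, 3e-8, 1e-7, 3e-7, 1e-6, 1e-5])
inhib = np.array([0.03, 0.09, 0.22, 0.45, 0.71, 0.96])

(ec50, n), _ = curve_fit(hill_inhibition, conc, inhib, p0=(4e-7, 1.0))
print(f"EC50 = {ec50 * 1e6:.2f} uM, Hill coefficient = {n:.2f}")
```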
Phase transitions of multivalent proteins can promote clustering of membrane receptors

Clustering of proteins into micrometer-sized structures at membranes is observed in many signaling pathways. Most models of clustering are specific to particular systems, and relationships between physical properties of the clusters and their molecular components are not well understood. We report biochemical reconstitution on supported lipid bilayers of protein clusters containing the adhesion receptor Nephrin and its cytoplasmic partners, Nck and N-WASP. With Nephrin attached to the bilayer, multivalent interactions enable these proteins to polymerize on the membrane surface and undergo two-dimensional phase separation, producing micrometer-sized clusters. Dynamics and thermodynamics of the clusters are modulated by the valencies and affinities of the interacting species. In the presence of the Arp2/3 complex, the clusters assemble actin filaments, suggesting that clustering of regulatory factors could promote local actin assembly at membranes. Interactions between multivalent proteins could be a general mechanism for cytoplasmic adaptor proteins to organize membrane receptors into micrometer-scale signaling zones. DOI: http://dx.doi.org/10.7554/eLife.04123.001

Introduction

Numerous membrane proteins have been observed to organize into supramolecular clusters upon extracellular ligand binding and/or cell-cell adhesion. Examples include cadherins (Yap et al., 1997), Eph receptors (Nikolov et al., 2013; Seiradake et al., 2013), immune receptors (Goldstein and Perelson, 1984), apoptotic signaling receptors (Henkler et al., 2005), chemotaxis receptors (Li et al., 2011), GPI-anchored proteins (Varma and Mayor, 1998) and components of T cell signaling pathways (Balagopalan et al., 2013). A variety of mechanisms have been proposed to account for this higher-order organization. The extracellular domains of cadherins and Eph receptors have been postulated to interact laterally in homotypic fashion within the plasma membrane to produce large-scale assemblies at sites of cell-cell adhesion (Himanen et al., 2007; Wu et al., 2011; Seiradake et al., 2013). Modeling studies have suggested that binding of divalent antibodies to the extracellular domain of trivalent Fcε receptors could lead to large networks, which could account for the Fcε receptor puncta observed in cells (Goldstein and Perelson, 1984). An analogous mechanism has been proposed for intracellular interactions of the oligomeric receptor Fas with its oligomeric adaptor protein FADD (Scott et al., 2009; Wang et al., 2010; Wu, 2013), producing clusters of the receptors hundreds of nanometers in size (Siegel et al., 2004). Similarly, dimeric bacterial chemoreceptors such as Tsr are linked together by their downstream partners CheA and CheW, forming trimers of dimers that assemble into a highly ordered and conserved hexagonal array, suggested to be the basic unit of polar clusters (Briegel et al., 2012, 2014). GPI-anchored proteins and lipid-anchored Ras have been shown to organize into dynamic clusters of 4-7 molecules through transient interactions with lipids and the cortical actin-myosin network (Plowman et al., 2005; Goswami et al., 2008).
Such clusters of GPI-anchored proteins are also believed to play an important role in the creation of dynamic, nanometer-scale cholesterol-rich lipid domains, which further contribute to the organization of the plasma membrane (Sharma et al., 2004).

eLife digest

The membrane that surrounds a cell is made up of a mixture of lipid molecules and proteins. Membrane proteins perform a wide range of roles, including transmitting signals into, and out of, cells and helping neighboring cells to stick together. To perform these tasks, these proteins commonly need to bind to other molecules, collectively known as ligands, that are found either inside or outside the cell. Membrane proteins are able to move around within the membrane, and in many systems, ligand binding causes the membrane proteins to cluster together. Although this clustering has been seen in many different systems, no general principles that describe how clustering occurs had been found. Now, Banjade and Rosen have constructed an artificial cell membrane to investigate the clustering of a membrane protein called Nephrin, which is essential for kidneys to function correctly. When it is activated, Nephrin interacts with protein ligands called Nck and N-WASP that are found inside cells and helps filaments of a protein called actin to form. These filaments perform a number of roles, including enabling cells to adhere to each other and to move. In Banjade and Rosen's artificial system, when a critical concentration of ligands was exceeded, clusters of Nephrin, Nck and N-WASP suddenly formed. This suggests that the clusters form through a physical process known as 'phase separation'. Banjade and Rosen found that this critical concentration depends on how strongly the proteins interact and the number of sites they possess to bind each other. Within the clusters, the three proteins formed large polymer chains. The clusters were mobile and, over time, small clusters coalesced into larger clusters. Even though the clusters persisted for hours, individual proteins did not stay in a given cluster for long and instead continuously exchanged back and forth between the cluster and its surroundings. When actin and another protein complex that interacts with N-WASP were added to the artificial membrane system, actin filaments began to form at the protein clusters. Banjade and Rosen suggest that such clusters act as 'signaling zones' that coordinate the construction of the actin filaments. Regions that are also found in many other signaling proteins mediate the interactions between Nephrin, Nck and N-WASP. Banjade and Rosen therefore suggest that phase separation and protein polymer formation could explain how many different types of membrane proteins form clusters.

Signaling molecules such as actin nucleation-promoting factors in the WASP family are also known to form micrometer-sized clusters at the plasma membrane (Yamaguchi et al., 2005; Weiner et al., 2007; Gomez and Billadeau, 2009). These observations suggest that one function of receptor clustering may be to control the localization, structure, and/or dynamics of actin filament networks. We recently demonstrated that interactions between multivalent proteins and their multivalent ligands can lead to macroscopic phase separation. This occurs concomitant with assembly of the proteins into large polymers, through a sol-gel transition, as observed in many other multivalent systems in polymer science.
In three-dimensional solution, this process produces phase-separated protein polymers that organize into dynamic, micron-sized liquid droplets. These droplets form in a sharp transition as the protein concentration in solution is increased. The critical concentration for droplet formation depends on the valency and affinity of the interacting species, and the proteins are highly concentrated within the droplets. We have studied this phenomenon in a variety of model multivalent systems, involving both protein-protein and protein-RNA interactions, and also in an actin regulatory signaling pathway involving the adhesion receptor Nephrin and its intracellular targets Nck and N-WASP (Jones et al., 2006). In the latter, phase separation can be controlled by multivalent phosphorylation of Nephrin and results in enhanced signaling activity of N-WASP. These previous studies were performed in three-dimensional solution. But in vivo Nephrin is an integral membrane protein, and its cytoplasmic tail is therefore attached to membranes (Welsh and Saleem, 2010). The behavior of multivalent-multivalent interaction systems in such a two-dimensional arrangement remained unresolved. In this study, we show that multivalency-induced polymerization and phase separation can also occur in two-dimensional systems, generating micrometer-size protein clusters at membranes. When phosphorylated Nephrin is attached to supported lipid bilayers of DOPC, addition of Nck and N-WASP induces formation of micron-sized, concentrated puncta containing all three proteins. Puncta form abruptly when a critical concentration of Nck/N-WASP is reached and are highly dynamic. The critical concentration is appreciably lower for two-dimensional puncta formation than for three-dimensional droplet formation, and it depends on the phosphotyrosine and SH3 domain valencies of p-Nephrin and Nck, respectively, and also on the affinity of the Nck SH2 domain for p-Nephrin. These data suggest that puncta formation is driven by polymerization of the proteins in a plane adjacent to the membrane. In the presence of actin and the N-WASP target, the Arp2/3 complex, puncta formation causes focal actin assembly. Our biochemical approach has allowed us to control the clustering process and to identify key parameters that govern puncta formation. Our study demonstrates that specific protein-protein interactions can produce macroscopic clusters without the need for lipid segregation or actin-myosin assembly. This clustering can be defined as phase separation of proteins on the surface of a membrane. Our observations here and previously suggest that polymerization and phase separation of multivalent macromolecules may represent a general mechanism to produce two- and three-dimensional, dynamic, highly concentrated, micron-sized structures in cells.

Membrane-bound p-Nephrin clusters through a phase transition upon addition of Nck and N-WASP

Nephrin is a transmembrane protein expressed in the foot processes of kidney podocyte cells, where its extracellular domain is a critical component of the slit diaphragm, the final element of the kidney's glomerular filtration barrier (Welsh and Saleem, 2010). The integrity of the slit diaphragm requires intracellular assembly of actin filaments downstream of the Nephrin cytoplasmic tail (Jones et al., 2006). When Nephrin is crosslinked by antibodies, its cytoplasmic tail can be phosphorylated by the Src family kinase Fyn (Jones et al., 2006; Verma et al., 2006).
Three phosphotyrosines (pTyrs) in the tail bind the SH2 domain of the Nck adaptor protein, which in turn uses its three SH3 domains to bind multiple proline-rich motifs (PRMs) in the actin regulatory protein N-WASP. N-WASP then recruits and promotes activation of the Arp2/3 complex, which generates branched actin filament networks by nucleating new actin polymers. Mutations that disrupt this pathway in humans and mice result in disorganization of the slit diaphragm and defects in the glomerular filter that cause proteinuria (Jones et al., 2006, 2009). We previously reported that mixing Nck, N-WASP, and the phosphorylated cytoplasmic tail of Nephrin in solution produced phase-separated liquid droplets. This observation suggested that if the Nephrin tail were attached to a membrane, as it is in vivo, Nck and N-WASP might induce it to condense into membrane clusters (Figure 1A). To test this hypothesis, we began by generating the triply phosphorylated cytoplasmic tail of Nephrin (amino acids 1174-1223, phosphorylated at Tyr1176, Tyr1193, and Tyr1217, and mutated from Tyr to Phe at residues 1183 and 1210, sites not believed to bind Nck (Jones et al., 2006; Verma et al., 2006); called p-Nephrin hereafter). The construct contained a His8 tag at its N-terminus, followed by a (Gly-Gly-Ser)5 linker containing a cysteine, which was covalently labeled with maleimide-Alexa488 fluorophore. We attached p-Nephrin to supported bilayers of DOPC lipid doped with 1% of a nickel-chelating lipid (Ni2+-NTA-DOGS). Through this approach we could control and quantify the surface density of p-Nephrin, as detailed in the 'Materials and methods' section (Galush et al., 2008). Membrane-bound p-Nephrin is homogeneous and fluid on supported bilayers, as demonstrated by total internal reflection fluorescence microscopy (TIRFM) and rapid fluorescence recovery after photobleaching (FRAP, exponential recovery time constant τ = 1.3 s) (Figure 1B and Figure 1-figure supplement 1A). Addition of 1 µM Nck causes no change in the distribution of p-Nephrin on the membrane, despite clear association of Nck with the bilayer (Figure 1-figure supplement 1B). Similarly, 1 µM of an N-WASP construct containing the basic, proline-rich and VCA regions of the protein (residues 183-193) does not change the p-Nephrin distribution. However, addition of 1 µM Nck and 1 µM N-WASP together causes p-Nephrin to organize into micron-sized clusters (Figure 1B, Video 1). Unphosphorylated Nephrin remains uniformly distributed under these conditions (not shown), indicating that clustering requires binding of the Nck SH2 domain to pTyr sites on Nephrin. Labeling of Nck or N-WASP with fluorophores (Alexa 568 or Alexa 647, respectively) shows that the clusters contain all three protein components (Figure 1-figure supplement 1C). Quantitative analysis indicated that the clustered regions contain up to fourfold higher density of p-Nephrin than the surrounding regions of the bilayer (Figure 1C). Note that much higher concentrations of Nck and N-WASP (∼40 µM and ∼15 µM, respectively) are required to form phase-separated droplets in solution than to induce p-Nephrin clustering on membranes. Thus, clustering does not involve adhesion of pre-existing three-dimensional Nck/N-WASP droplets to membrane-bound p-Nephrin, but rather de novo assembly of the proteins together on the bilayer surface.
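The surface-density quantification invoked above (Galush et al., 2008) rests on comparing bilayer fluorescence with that of a bulk standard of known concentration. The sketch below illustrates the idea only; the linear model, the effective detection depth, and all numbers are assumptions, not the calibration actually performed in 'Materials and methods'.

```python
def surface_density(f_bilayer, f_solution, c_solution_um, depth_um=0.1):
    """Convert bilayer fluorescence to molecules/um^2 by comparison with a
    bulk calibration solution of known concentration (a sketch only; the
    linear model, the effective detection depth and all numbers below are
    assumptions, not the calibration in 'Materials and methods').

    f_bilayer     : background-subtracted bilayer fluorescence [counts]
    f_solution    : fluorescence of the calibration solution [counts]
    c_solution_um : concentration of the calibration solution [uM]
    depth_um      : effective depth over which solution signal is collected [um]
    """
    AVOGADRO = 6.022e23
    per_um3 = c_solution_um * 1e-6 * AVOGADRO / 1e15  # molecules per um^3
    per_um2_equiv = per_um3 * depth_um                # slab of thickness depth_um
    return (f_bilayer / f_solution) * per_um2_equiv

# Hypothetical numbers: equal signals from the bilayer and a 1 uM standard
print(round(surface_density(1000.0, 1000.0, 1.0)))  # -> 60 molecules/um^2
```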
Further, the DOPC/DOGS lipids in our experiments do not phase-separate, indicating that clustering is independent of lipid phase separation. To understand the concentration dependence of cluster formation, we fixed the concentration of N-WASP at 500 nM and the density of p-Nephrin at 2700 ± 200 molecules/µm² (see density control and measurement in 'Materials and methods', also Figure 2-figure supplements 1, 2) and added increasing concentrations of Nck. We used two measures to determine the onset of clustering. First, we used a thresholding approach to identify and quantify bright regions of the membrane, which we define as clusters. As detailed in 'Materials and methods', two different thresholding procedures gave virtually identical results in this approach. After thresholding, we calculated the fraction of total membrane fluorescence intensity that is present in the clusters. As a second, independent approach, we determined the variance of the fluorescence signal across the bilayer image, which also increases as bright regions form. Using either approach, we found that p-Nephrin clusters appear in a highly nonlinear fashion as the Nck concentration in solution increases. Clusters are essentially absent at low concentrations of Nck but form quite sharply once a critical concentration is reached (∼200 nM, Figure 2A). We note that the sharp increase in variance, and the coincidence of the critical concentration measured by both methods, argue against the possibility that small clusters form in a more gradual fashion but are too dim to be recognized by the thresholding approach. The average density of p-Nephrin on the membrane does not change during the titration (Figure 2-figure supplement 2). We define the concentration at which fractional intensity and variance begin increasing as the clustering concentration. The highly cooperative nature of cluster formation on bilayers is reminiscent of the sharp phase transitions observed in forming p-Nephrin/Nck/N-WASP liquid droplets in three-dimensional solution. The clusters are distributed randomly (Gaussian distribution) across the membrane (Figure 2B), consistent with a stochastic assembly process in which clusters are nucleated and grow independently of one another (Dill and Bromberg, 2003). The clusters also show a broad range of sizes that is fit well by an exponential distribution (Figure 2C), similar to that observed for stochastically assembled chemotaxis receptors in bacteria (Greenfield et al., 2009). These properties suggest a stochastic process of cluster formation in our system. In contrast, clusters of GPI-anchored proteins in cells show neither a Gaussian spatial distribution nor a broad size distribution, indicating their active control by the cortical actin cytoskeleton (Goswami et al., 2008). When experiments are performed at ∼fivefold higher initial density of p-Nephrin on the membrane, the morphology of the clusters changes significantly. Distinct puncta are no longer observed, and the clustered regions span the entire field of view (Figure 2D). These data are consistent with low- and high-density p-Nephrin phase separating via nucleation and spinodal decomposition mechanisms, respectively (Dill and Bromberg, 2003), as observed in non-biological phase-separating systems in materials science (Zinke-Allmang et al., 1992). Together, these data strongly suggest that the clustering of p-Nephrin occurs through a phase transition of the molecules on the surface of the membrane in response to binding of Nck and N-WASP.
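For concreteness, the two clustering measures described above can be expressed in a few lines. The following is a minimal sketch (our reconstruction in Python, not the authors' code), using the triangle thresholding algorithm named in 'Materials and methods':

```python
# Minimal sketch (assumed implementation) of the two clustering metrics:
# (1) fraction of total intensity inside thresholded clusters, and
# (2) variance of the pixel intensities across the bilayer image.
import numpy as np
from skimage.filters import threshold_triangle

def clustering_metrics(image):
    """Return (fractional intensity in clusters, pixel variance) for a
    background- and illumination-corrected bilayer image."""
    img = image.astype(float)
    t = threshold_triangle(img)        # same algorithm named in the Methods
    cluster_mask = img > t
    frac_intensity = img[cluster_mask].sum() / img.sum()
    return frac_intensity, img.var()

# Applied across a titration, both metrics stay near baseline and then rise
# sharply at the clustering (critical) concentration.
```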
We next examined the dynamic behavior of the p-Nephrin clusters. Individual clusters are irregularly shaped, indicating that they possess low line tension. On short timescales, the edges of clusters show substantial fluctuations, extending and retracting in seconds (Video 2). On timescales of minutes, these fluctuations lead to coalescence of small clusters into increasingly larger structures (Figure 3A, Videos 1, 2). We also rarely observe apparent fission events, in which a larger cluster seems to split into two smaller structures (Video 2). These behaviors suggest that p-Nephrin clusters are fluid-like. The size distribution of the clusters depends on the initial p-Nephrin density and on the time after Nck/N-WASP addition, reflecting variable contributions of nucleation, growth through monomer addition and coalescence, and Ostwald ripening throughout the process (Zinke-Allmang et al., 1992). At lower density (2500 molecules/µm²) the distribution is exponential at all times we examined (Figure 3-figure supplement 1), while at higher density (4000 molecules/µm²) the distribution follows a power law (Figure 3-figure supplement 2). At a given time after Nck/N-WASP addition, higher density produces a larger average cluster size and, correspondingly, a larger fraction of total area covered by the clusters (Figure 3-figure supplement 3A,B), most likely due to the greater degree of coalescence at higher cluster densities. A detailed mechanistic understanding of these behaviors will be a goal of future efforts.

Video 1. Addition of Nck and N-WASP to p-Nephrin produces macroscopic clusters on supported bilayers. Time-lapse images taken immediately after adding 1 µM Nck and 1 µM N-WASP to p-Nephrin Alexa488. Images were captured every minute. DOI: 10.7554/eLife.04123.005

We next used fluorescence recovery after photobleaching (FRAP) to examine the dynamics of the three proteins that compose the clusters. In individual experiments, we labeled either p-Nephrin, Nck, or N-WASP with Alexa488 and examined the FRAP behavior of the labeled component. Within the clusters, each of the proteins recovers nearly fully in tens to hundreds of seconds (Figure 3B). Thus, even though the clusters themselves persist for hours, the individual components exchange with the surroundings on timescales of seconds to minutes. The recovery profiles can all be fit to a double exponential but not to a single exponential (see 'Materials and methods' for F-test statistics). N-WASP shows recovery time constants of τ-fast = 2.6 s (37%) and τ-slow = 43 s (63%). Nck recovers with τ-fast = 1.6 s (49%) and τ-slow = 72 s (51%). p-Nephrin recovers with τ-fast = 86 s (76%) and τ-slow = 526 s (24%). In the non-clustered regions, p-Nephrin recovery is fit well by a single exponential with τ = 31 s, similar to the fast phase in the clusters but appreciably slower than recovery in the absence of Nck/N-WASP, where τ = 1.3 s (Figure 1-figure supplement 1A).

Figure 2. Nephrin clusters are created via a two-dimensional phase transition. (A) Fractional intensity in clusters (blue symbols, left ordinate) and signal variance (red symbols, right ordinate) of p-Nephrin fluorescence on a DOPC bilayer as a function of Nck concentration, for 500 nM N-WASP and total p-Nephrin density of ∼2700 molecules/µm².
(B) Relative frequency with which a given number of clusters is found within 93 randomly selected 56 × 56 µm regions of a bilayer formed using ∼2500 molecules/µm² Alexa488-labeled p-Nephrin, 1 µM Nck, and 1 µM N-WASP. (C) Size distribution of clusters formed using ∼2500 molecules/µm² Alexa488-labeled p-Nephrin, 1 µM Nck, and 1 µM N-WASP. (D) Puncta formed using 1 µM Nck, 1 µM N-WASP, and low (left) or 4.7-fold higher (right) density of p-Nephrin. Images were autocontrasted for clarity. DOI: 10.7554/eLife.04123.006

In independent experiments, we found that the dissociation of p-Nephrin from the membrane occurs much more slowly than these rates (τ = 2080 s, Figure 2-figure supplement 1D), indicating that the FRAP recovery of the protein is largely due to two-dimensional diffusion within the bilayer. By contrast, Nck and N-WASP likely recover through a combination of diffusion in the plane of the bilayer and binding to and dissociation from the membrane. We recognize that the kinetic processes here must represent the convolution of multiple molecular processes, given the complex oligomeric/polymeric nature of the clusters (see below). Nevertheless, the data suggest that both the clustered and non-clustered regions contain small assemblies that slow p-Nephrin dynamics relative to its free diffusion in the bilayer. The clustered regions likely contain additional assemblies that are larger and have greater degrees of crosslinking, which slow dynamics appreciably further. Together, our data show that upon recruitment of Nck and N-WASP, membrane-bound p-Nephrin undergoes a sharp, thermodynamically controlled phase transition to produce dense, dynamic puncta on the membrane.

Phase separation occurs through polymerization of p-Nephrin, Nck, and N-WASP

Our previous data suggested that three-dimensional phase separation in the p-Nephrin/Nck/N-WASP system occurred concomitantly with a sol-gel transition, producing macroscopic non-covalent polymers within the liquid phase boundary. Evidence for polymerization came in part from studies of the dependence of critical concentration and dynamics on the valencies and affinities of the interacting species. To examine whether such polymerization also occurs in the two-dimensional system, we initially compared the critical concentrations of singly, doubly, and triply phosphorylated Nephrin (Nephrin1pY, Nephrin2pY, and p-Nephrin, respectively; see 'Materials and methods' for the specific phosphorylation sites). Previous studies showed that the three Nephrin pTyr sites have essentially identical affinities for the Nck SH2 domain (Blasutig et al., 2008). Thus, these constructs differ largely in pTyr valency rather than in inherent affinity for Nck. At a membrane density of 1000 molecules/µm² and in the presence of 500 nM N-WASP, p-Nephrin begins to show clusters at 200-300 nM Nck. Under the same conditions, Nephrin2pY and Nephrin1pY do not cluster even at Nck concentrations greater than 10 µM (Figure 4A), nor with their own densities increased to 3000 molecules/µm². If the concentrations of N-WASP and Nck are increased to 2 µM and 5 µM, respectively, Nephrin2pY produces clusters (Figure 4-figure supplement 1). However, even at 5 µM N-WASP and 10 µM Nck, Nephrin1pY does not cluster (Figure 4-figure supplement 1). Thus, the valency of Nephrin phosphorylation can control the critical concentration for puncta formation, as in the three-dimensional phase separation of this system.
We also performed analogous studies of the SH3 valency of Nck. Since the different SH3 domains of Nck have different affinities for the individual PRM sites in N-WASP (Qiong Wu, unpublished observations), we generated a series of Nck analogs containing one, two, or three repeats of the second SH3 domain of the protein plus the natural SH2 domain [(SH3)1, (SH3)2, and (SH3)3, respectively]. The SH3 domains were separated by the natural linker between the first and second SH3 domains. At 500 nM N-WASP, the trivalent molecule (SH3)3 induces clustering at 200 nM (SH3 module concentration), whereas the divalent (SH3)2 and monovalent (SH3)1 molecules do not cluster even at SH3 module concentrations above 10 µM (Figure 4B). Increasing the N-WASP concentration to 5 µM and the (SH3)2 concentration to 5 µM (SH3 module concentration) produced clusters, whereas clusters were absent even with 5 µM N-WASP and 5 µM (SH3)1 (Figure 4-figure supplement 1). These data demonstrate the strong dependence of clustering on the valency of the interacting species.

Video 2. Clusters are dynamic. Time-lapse of clusters made from 1 µM Nck and 1 µM N-WASP with p-Nephrin Alexa488 on the membrane. Images were captured every 30 s. In addition to fusion events, the clusters also occasionally undergo fission. DOI: 10.7554/eLife.04123.009

To determine the effect of SH2-pTyr affinity on the clustering concentrations, we replaced the three pTyr motifs of Nephrin with three repeats of the pTyr motif of the bacterial protein TIR (p-TIR) (Campellone et al., 2002). The binding affinity of the p-Nephrin motif for the SH2 domain of Nck is 370 nM, as determined by isothermal titration calorimetry (Figure 5-figure supplement 1). For the p-TIR motif, the affinity for the SH2 domain is 40 nM. With p-TIR at a density of 2000 molecules/µm², the clustering concentration of the trivalent SH3 protein, (SH3)3, is 100 nM, as opposed to 200 nM for p-Nephrin (Figure 5A). The higher-affinity interaction also slows the recovery of Nck, as FRAP data demonstrate (Figure 5B). Fitting to a double exponential, Nck shows recovery time constants of τ-fast = 6.5 s (46%) and τ-slow = 89.5 s (54%) when clusters of p-TIR/Nck/N-WASP are photobleached, compared with τ-fast = 1.6 s (49%) and τ-slow = 73.2 s (51%) when clusters of p-Nephrin/Nck/N-WASP are photobleached. The data would be consistent with τ-fast being governed by processes based on dissociation of Nck from pTyr sites on Nephrin/TIR (which are likely slower in the high-affinity system) and τ-slow being governed by diffusion of large assemblies in the membrane (which are expected to be similar in the two cases). Together, the data show that both the clustering concentrations and the dynamics of the clusters can be affected by molecular affinities, as expected of a crosslinked polymer network. Additionally, when a higher-affinity monovalent pTyr peptide is added in solution, the clusters dissipate. In the presence of clusters made from 1 µM (SH3)3, 500 nM N-WASP, and p-Nephrin, we added singly phosphorylated TIR peptide (without a His tag) at 10 µM concentration (Video 3, Figure 6A). The clusters disappear within minutes after addition of the monovalent peptide. The dissolution of the clusters occurs sharply, over a time span of ∼2 min, starting ∼7 min after peptide addition (Figure 6B). When TIR is titrated from 100 nM to 100 µM, the fractional intensity of the clusters also decreases sharply above 10 µM (Figure 6C).
These data suggest that disassembly of the clusters, like their formation, is cooperative. The favorable effects of higher valency and higher affinity on clustering, as well as the disruption of the clusters by a monovalent molecule, suggest that, as in the three-dimensional droplets, the two-dimensional clusters form through polymerization (a sol-gel transition) of p-Nephrin, Nck, and N-WASP.

p-Nephrin/Nck/N-WASP clusters promote Arp2/3 complex-dependent actin assembly

We next asked whether the p-Nephrin/Nck/N-WASP clusters can direct actin assembly by the Arp2/3 complex at membranes. We added monomeric actin (10% rhodamine-labeled) to the solution above preformed clusters, in the presence or absence of the Arp2/3 complex, under conditions that favor actin polymerization. Immediately after addition, a small amount of actin, likely monomers, is recruited to the clusters in a relatively uniform fashion (Figure 7-figure supplement 1A). After a lag of ∼6-15 min (see below), actin filaments then assemble on the clusters over a time course of approximately 100 min, as visualized by phalloidin-647 staining (Figure 7-figure supplement 1A). In the absence of the Arp2/3 complex, actin filaments form only sparsely in the field of view (Figure 7-figure supplement 1B). In the presence of the Arp2/3 complex, actin appears on the clusters much more rapidly and to a much greater degree (Figure 7A, Figure 7-figure supplement 1B). The lag time between the initial weak recruitment of actin and the appearance of robust actin fluorescence (presumably representing filaments) varies substantially between clusters (Figure 7A,B). Some clusters show increased actin after only 6 min, while others remain devoid of additional actin until 10-15 min into the reaction. This behavior appears to be stochastic, and the lag time does not show any obvious correlation with the size or density of the Nephrin clusters (Figure 7B). Regardless of when filament assembly begins on a cluster, once it does begin, actin intensity rapidly increases, typically reaching a plateau in less than 10 min (Figure 7C). This behavior likely reflects strong positive feedback due to activation of the Arp2/3 complex by actin filaments (Machesky et al., 1999). As the reaction proceeds, the morphology of the Nephrin clusters changes dramatically, without appreciable changes in overall intensity (except for a slow decrease due to photobleaching). Actin fluorescence remains coincident with Nephrin throughout this process, indicating that the signaling molecules are reorganized by the assembling filaments. Shortly after the appearance of actin on a cluster, the structure changes from having relatively rounded edges to having many thin, hair-like projections at its periphery. These projections coalesce over time to give the puncta star-like morphologies. For reasons we cannot currently explain, between 42 and 45 min, well after all of the clusters have recruited significant actin, there is a dramatic change in cluster morphology, such that the puncta appear to shatter into a large number of short linear structures (Figure 7-figure supplement 2). This change in Nephrin morphology coincides with a sharp increase in the total actin localized to the TIRF field/membrane but no change in total Nephrin fluorescence. These data demonstrate that p-Nephrin/Nck/N-WASP clusters can effectively promote actin filament assembly through the Arp2/3 complex (which is presumably recruited to the membrane through binding N-WASP).
As the filaments assemble, they cause substantial changes in the morphology of the clusters. This feedback between actin and the signaling proteins that promote its assembly can control the micron-scale morphology of the entire pathway.

Discussion

Polymerization and concomitant phase separation as a general mechanism to create membrane clusters

We have shown here that membrane-bound phosphorylated Nephrin can form micron-sized clusters through interactions with Nck and N-WASP. The clusters form through a thermodynamic phase transition that is driven by oligomerization/polymerization of the proteins through their modular binding domains. The occurrence of a phase transition is supported by the sharpness with which clusters appear as Nck concentration is increased and by the temporal sharpness of cluster disappearance after a monovalent competitor is added. The importance of polymerization/oligomerization is supported by the valency and affinity dependence of the critical concentration, and also by the dissolution of clusters by a monovalent competitor. The clusters appear to be polymers/oligomers of the three proteins, as evidenced by the dependence of the FRAP rate on the affinity of the Nck SH2 domain for the pTyr sites on Nephrin. The clusters assemble actin through the Arp2/3 complex and can themselves be dynamically remodeled by the resulting filament network. Our work demonstrates that, as in three-dimensional systems, multivalent polymerization and phase separation can control the micron-scale spatial organization (and likely biochemical activity) of signaling pathways. This process may contribute generally to the organization of signaling receptors. The cytoplasmic tails of many receptors are rapidly phosphorylated on multiple tyrosine residues upon stimulation by extracellular ligands (Roche et al., 1996; Hunter, 2000; Palmer et al., 2002; Schlessinger, 2000; Houtman et al., 2006; Kaushansky et al., 2008; Wagner et al., 2013). This often occurs concomitantly with concentration of the receptors into micron-sized puncta (Douglass and Vale, 2005; Salaita et al., 2010). Where examined, these puncta persist over many minutes but exchange molecules with the surroundings in seconds, similar to the p-Nephrin/Nck/N-WASP puncta we have generated here. Further, many of these receptors have been shown, through combinations of biochemistry and genetics, to use the pTyr modifications to engage signaling networks composed of proteins with multiple modular binding domains, often (but not exclusively) combinations of SH2 domains, SH3 domains, PRMs, and additional pTyr sites. Examples of processes controlled by such pathways include T cell activation (Lee et al., 2003; Dustin et al., 2010), invadopodia formation (Oser et al., 2010; Bergman et al., 2014), myoblast fusion (Abmayr and Pavlath, 2012), neurite self-avoidance (Chen and Maniatis, 2013), and cell-matrix interactions through focal adhesions (Hoffmann et al., 2014). The molecules that control these processes have the capacity to function analogously to the Nephrin/Nck/N-WASP system studied here. We hypothesize that coupled polymerization and phase separation may contribute to the formation of macroscopic puncta in these and other similarly composed systems. We note that polymerization does not strictly require multivalency at the level of an individual receptor tail. At high densities, a receptor containing a single motif behaves effectively in multivalent fashion and can then cluster through interactions with multivalent ligands.
For example, proteins with multiple PDZ domains interact with voltage-gated Kv1.4 channels, which are found in clusters at the cell surface (Burke et al., 1999). Further, membrane receptors are often oligomeric in nature. For example, EGF receptors have been reported to form pre-formed oligomers in the absence of ligand (Clayton et al., 2008), effectively increasing the valency of their cytoplasmic tails. Thus, there are a variety of ways in which the basic concept of multivalent polymerization and phase separation could be manifested in specific signaling systems. This behavior appears to be particularly prominent in actin regulatory pathways. These often contain the adaptor proteins Nck or Crk/CrkL, linked upstream to pTyr-containing proteins and downstream to proline-rich proteins, including members of the WASP family (Buday et al., 2002; Antoku et al., 2008; Noy et al., 2012; Chaki and Rivera, 2013; Kaipa et al., 2013). In this regard, it is notable that over half of the 29 known ligands of the Nck SH2 domain contain two or more (up to 16 in p130CAS) predicted and/or demonstrated Nck binding sites (Lettau et al., 2009). Further, almost all WASP proteins have large proline-rich regions with multiple SH3-binding PRMs (Padrick and Rosen, 2010). The only exception is WASH, which has a small proline-rich region. However, WASH is constitutively associated with the Fam21 protein, which has a large disordered tail containing 21 so-called LFa peptide motifs that can bind the membrane-associated retromer complex (Derivery et al., 2009; Gomez and Billadeau, 2009; Jia et al., 2010, 2012). Thus, WASH may have a conceptually similar but molecularly distinct mechanism of assembling into large structures. This general behavior suggests that clustering may play an important role in the spatial and temporal control of actin dynamics. Consistent with this idea, several groups have demonstrated that increased density of WASP proteins corresponds to increased activity toward the Arp2/3 complex. Pantaloni and colleagues have shown that the rate of actin-based motility of N-WASP-coated beads increases non-linearly with increasing WASP density (Wiesner et al., 2003). Similarly, Ditlev et al. have shown through modeling and cell-based experiments that actin assembly activity scales with the square of N-WASP density at the plasma membrane (Ditlev et al., 2012). Finally, Padrick et al. showed that a natural consequence of the 2:1 stoichiometry of the WASP:Arp2/3 complex during filament nucleation is that actin assembly activity should increase with the size of WASP clusters, and thus with the density of WASP at membranes (Padrick et al., 2008). These observations, together with our data here, suggest that clustering of receptors and their proximal adaptors may provide a mechanism for concentrating WASP proteins into high-density puncta, thereby increasing their activation of the Arp2/3 complex and providing local bursts of actin filament assembly. The mechanism we have described is not exclusive of, and in fact is expected to act cooperatively with, many other mechanisms that have been proposed to explain receptor clustering. Interactions between extracellular domains, as proposed for cadherins and Eph receptors (Himanen et al., 2007; Wu et al., 2011; Seiradake et al., 2013), will be thermodynamically coupled to the assembly of intracellular oligomers/polymers.
Similarly, interactions of receptor transmembrane regions with specific lipids, which can promote concentration of receptors into nano-domains enriched in those lipids, should also be thermodynamically coupled to the clustering of receptor cytoplasmic tails (Lingwood and Simons, 2010). Further, ATP-dependent clustering of the cortical acto-myosin system, which promotes oligomerization of GPI-anchored proteins (Sharma et al., 2004; Goswami et al., 2008), could also promote assembly of a phase-separated multivalent network if any components of that network can bind the cytoskeleton. In this case, the dynamic rearrangements of the acto-myosin system could also control the properties of the signaling clusters (e.g., their size and/or lifetime) and maintain them away from equilibrium. It is important to note that while weak interactions between extracellular domains, between transmembrane regions and lipids, or between receptors and the cytoskeleton may not on their own produce significant oligomerization of receptors, these energies could have substantial effects when combined with the energies of clustering. Phrased differently, these other interactions could strongly affect the critical concentrations (or the degree of receptor phosphorylation) needed for phase separation/clustering through adaptor-based intracellular interactions. For any particular system, or for a single system under different conditions, these various mechanisms are likely to be used to different degrees to promote the organization of receptors into macroscopic structures.

Functional implications of clustering through multivalent phase separation

The ability of membrane receptors to cluster through multivalent phase separation could have a number of functional implications in cells. The process will generate a sharp switch between different states, which will depend on the concentrations of at least two (and possibly several) species, as well as on the degree of receptor phosphorylation in pTyr-dependent cases. Thus, the switch could be tightly controlled, either through relatively slow changes in protein concentration or more rapidly through changes in receptor phosphorylation or oligomerization by extracellular ligands. The phase-separated state will have different density, composition, and dynamics from the surrounding regions of the membrane, each of which could have functional consequences. In the case of actin regulatory systems, we and others have shown that because WASP proteins bind (and activate) the Arp2/3 complex in 2:1 fashion, increasing the density of WASP proteins leads to non-linear increases in actin assembly activity (Padrick et al., 2008; Ditlev et al., 2012). Thus, clustering should provide not only spatial organization of the actin filament network (decreasing spatial noise [Grecco et al., 2011]) but also increased biochemical signaling activity. This should be true for any signaling system that requires multiple simultaneous or sequential events to generate downstream outputs. The enhancement due to clustering would be particularly strong for systems with positive feedback, as in Arp2/3 complex-actin pathways. In addition to the polymer components themselves, other proteins and lipids could be concentrated into, or excluded from, the phase-separated structure.
This partitioning could be dictated both by specific interactions (e.g., a monovalent SH3 protein could be recruited to the p-Nephrin/Nck/N-WASP clusters by binding the PRMs of N-WASP) and by non-specific electrostatic and/or hydrophobic interactions with the polymer matrix. The collection of these molecules would then produce a biochemical environment distinct from the surrounding regions, favoring or disfavoring certain reactions or affording specificity to signaling pathways. Since the clusters are temporally stable but readily exchange molecules with the surroundings, they could potentially act as sites of enzymatic modification and release. Finally, the structural and dynamic features of the polymer matrix could also influence the rates and/or specificities of reactions that occur within the clusters.

Nephrin oligomerization as a mechanism to organize the slit diaphragm

Recent data have shown that Nephrin is constitutively phosphorylated in the slit diaphragm between podocytes of the kidney (Jones et al., 2009; New et al., 2013). Previous data showed that loss of Nck disrupts the filtration capacity of the diaphragm, concomitant with loss of cortical actin filaments (Jones et al., 2006). These observations suggest that the pathway from p-Nephrin to actin, and by inference the polymeric network we have described here, is important in maintaining the slit diaphragm. The extracellular portion of Nephrin is composed of multiple Ig domains and FNIII domains. These have been suggested to self-associate, both in trans across the slit diaphragm and in cis within individual cells (Gerke et al., 2003). The latter should promote polymerization and phase separation of the actin pathway components. Thus, this system may be a case in which interactions on both sides of the plasma membrane act cooperatively to produce a polymeric structure with both extracellular functions (the filtration barrier) and intracellular functions (signaling to actin).

Conclusion

In summary, we have shown that interactions between multivalent proteins at membranes can lead to concomitant polymerization and phase separation, generating micron-sized clusters. Although demonstrated here only for the p-Nephrin/Nck/N-WASP system, the analogous construction of many signaling pathways suggests that this behavior could be quite general and relevant to many biological processes. Polymerization and phase separation at membranes could impart spatial organization on these pathways and afford them strongly non-linear activities. Further work in vitro and in vivo will be necessary to determine the extent to which these effects are important in specific biological processes.

Materials and methods

Protein expression and purification, phosphorylation of nephrin

Information on the different constructs is provided in Table 1. Maltose binding protein (MBP)-tagged His8-Nephrin and its mutants were expressed in BL21(DE3)T1R cells at 18°C through overnight induction with 1 mM IPTG. Cells were collected by centrifugation and lysed by cell disruption (Emulsiflex-C5, Avestin, Ottawa, ON, Canada) in 20 mM Tris, pH 8, 20 mM imidazole, 150 mM NaCl, 5 mM βME, 0.01% NP-40, 10% glycerol, 1 mM PMSF, 1 µg/ml antipain, 1 mM benzamidine, and 1 µg/ml leupeptin. The cleared lysate was applied to Ni-NTA agarose (Qiagen, Venlo, Netherlands), washed with lysis buffer containing 300 mM NaCl and 50 mM imidazole, and eluted with the same buffer but containing 150 mM NaCl and 300 mM imidazole.
The MBP was removed by TEV protease treatment at 4°C for 16 hr or at room temperature for 2 hr. The protein was further purified using a Source 15Q column (GE Healthcare, Pittsburgh, PA), eluted with a gradient of 150 → 300 mM NaCl in 20 mM imidazole, pH 8, 1 mM EDTA, and 2 mM DTT, followed by an SD200 column (GE Healthcare) run in 25 mM Hepes, pH 7.5, 150 mM NaCl, 1 mM MgCl2, and 2 mM βME. Fractions containing His8-Nephrin were concentrated using an Amicon Ultra 3K concentrator (Millipore, Billerica, MA) and flash-frozen in aliquots at −80°C. Nephrin proteins were phosphorylated at 30°C with 20 nM Lck kinase overnight or with 500 nM Lck for 1 hr. The phosphorylation reaction was quenched with 10 mM EDTA. Kinase and incompletely phosphorylated Nephrin were removed using a Source 15Q column eluted with a gradient of 150 → 250 mM NaCl in 25 mM Hepes, pH 7, and 2 mM βME. The phosphorylated product was further purified using an SD200 column (GE Healthcare) and labeled at its single cysteine residue with maleimide-Alexa 488 fluorophore (Invitrogen, Carlsbad, CA). The labeled protein was separated from unreacted fluorophore using a Source 15Q column and a Hi-Trap desalting column (GE Healthcare). Phosphorylation at one, two, or three sites, for Nephrin1Y, Nephrin2Y, or Nephrin3Y, respectively (see Table 1), was confirmed by mass spectrometry. GST-Nck and His6-N-WASP were expressed in BL21(DE3)T1R cells at 18°C through overnight induction with 1 mM IPTG. Cells expressing GST-Nck were collected by centrifugation and lysed by sonication in 20 mM Tris, pH 8, 200 mM NaCl, 1 mM EDTA, 1 mM DTT, 1 mM PMSF, 1 µg/ml antipain, 1 mM benzamidine, 1 µg/ml leupeptin, and 1 µg/ml pepstatin. The cleared lysate was applied to glutathione sepharose beads (GE) and washed with 10 column volumes of 200 mM NaCl, 20 mM Tris, pH 8, 1 mM DTT, and 1 mM EDTA. The GST tag was removed by TEV protease treatment on the beads at 4°C for 16 hr or at room temperature for 2 hr. Cleaved Nck was collected by 20 column washes with 20 mM imidazole, pH 7, and 1 mM DTT and applied to a Source 15Q column using a gradient of 0 → 200 mM NaCl in 20 mM imidazole, pH 7, 1 mM DTT. Fractions containing Nck were pooled, concentrated using an Amicon Ultra 30K concentrator (Millipore), and passed through a Source 15S column (GE) using a gradient of 0 → 200 mM NaCl in 20 mM imidazole, pH 7, 1 mM DTT. Fractions containing Nck were concentrated and run over an SD75 column (GE). Pooled fractions were concentrated and flash-frozen in 25 mM Hepes, pH 7.5, 150 mM NaCl, and 1 mM βME. The (SH3)1, (SH3)2, and (SH3)3 proteins were purified in the same way, but omitting the Source 15S column. His6-N-WASP-expressing cells were collected by centrifugation and lysed by cell disruption (Emulsiflex-C5, Avestin) in 20 mM imidazole, pH 7, 300 mM KCl, 5 mM βME, 0.01% NP-40, 1 mM PMSF, 1 µg/ml antipain, 1 mM benzamidine, and 1 µg/ml leupeptin. The cleared lysate was applied to Ni-NTA agarose (Qiagen), washed with 300 mM KCl, 50 mM imidazole, pH 7, 5 mM βME, and eluted with 100 mM KCl, 300 mM imidazole, pH 7, and 5 mM βME. The eluate was further purified over a Source 15Q column using a gradient of 250 → 450 mM NaCl in 20 mM imidazole, pH 7, and 1 mM DTT. The His6 tag was removed by TEV protease at 4°C for 16 hr or at room temperature for 2 hr. Cleaved N-WASP was then applied to a Source 15S column using a gradient of 110 → 410 mM NaCl in 20 mM imidazole, pH 7, 1 mM DTT.
Fractions containing N-WASP were concentrated using an Amicon Ultra 10K concentrator (Millipore), passed through an SD200 column, concentrated, and flash-frozen in 25 mM Hepes, pH 7.5, 150 mM NaCl, and 1 mM βME. N-WASP (BPVCA with a single cysteine) and Nck (cysteine-modified, see Table 1) were labeled with Alexa488/568/647. For labeling, the pure protein after the Source 15S step was desalted into a buffer without reducing agent (25 mM Hepes, pH 7, 150 mM NaCl) and reacted with a maleimide-conjugated fluorophore for 2 hr at room temperature. The reaction was quenched with DTT, and the fluorophore was removed using Source 15Q and SD75/Hi-Trap desalting columns.

Supported lipid bilayers

Liposomes were prepared as follows. A mixture of 99% DOPC and 1% Ni2+-NTA DOGS (Avanti Polar Lipids, Alabaster, Alabama) was dried under argon and further dried under vacuum overnight. The dried mixture was hydrated with MilliQ water for 3 hr. Buffer (25 mM Hepes, pH 7.5, 150 mM NaCl, 1 mM MgCl2) was added to the hydrated multi-lamellar vesicle solution. Small unilamellar vesicles (SUVs) were prepared by 21 passes through an extruder (Avanti) fitted with an 80 nm filter, followed by seven more passes with a fresh 80 nm or 30 nm filter. In our hands, changing the filter and re-extruding produced more consistently homogeneous liposomes. SUVs made by this method were stored at 4°C and used within 2 days of extrusion. To make supported lipid bilayers, chambered glass coverslips (Lab-Tek, Cat. #155409) were cleaned with 50% isopropanol, washed with Milli-Q water, and then incubated for 2 hr in 6 M NaOH. We found that cleaning the glass and using it within a few hours of cleaning was important for obtaining consistent fluidity of the supported bilayers; therefore, all experiments were performed within 8 hr of cleaning the glass substrate. After extensive further washes with Milli-Q water, 150 µl of room-temperature SUV solution containing 0.5 to 1 mg/ml lipid was added to the coverslips and incubated for 10 min. Unadsorbed vesicles were removed by a three-step wash totaling a 216-fold dilution. BSA, 0.1% (Sigma A3294, protease-free, St. Louis, MO) in 25 mM Hepes, pH 7.5, 150 mM NaCl, 1 mM MgCl2, was used to block the surface for 45 min, yielding a total solution volume of 200 µl. The surface was washed again with 25 mM Hepes, pH 7.5, 150 mM NaCl, 1 mM MgCl2, and 0.1% BSA in two steps totaling a 36-fold dilution. His8-p-Nephrin was added to the bilayer at 100 nM, incubated for 1 hr, and washed twice, totaling a 36-fold dilution. This procedure yielded 200 µl of solution above the bilayer containing 2.8 nM His8-p-Nephrin (assuming a negligible fraction of the total protein binds the bilayer). Subsequent experiments were performed after waiting 30 min to allow the His8 attachment to the bilayer to stabilize (Figure 2-figure supplement 1D). Precise control of the timing and dilution factor of all wash steps was critical to obtaining consistent p-Nephrin densities on the bilayers (quantified as described below). All experiments were performed in 25 mM Hepes, pH 7.5, 150 mM NaCl, 1 mM MgCl2, 1 mM βME, and 0.1% BSA.

Measurement of nephrin density on supported lipid bilayers

The density of His8-p-Nephrin on the supported lipid bilayers was quantified as previously described (Galush et al., 2008; Salaita et al., 2010).
Briefly, SUVs containing fluorescent lipid (OG-DHPE, Invitrogen) were made as described above and used to generate a standard curve of OG-DHPE concentration vs fluorescence intensity on a Nikon Eclipse Ti microscope, using a 20× objective focused deep into the solution and away from the glass (Figure 2-figure supplement 1A). The slope of this standard curve was denoted I-labeled SUV. Using identical settings, a similar standard curve was made with His8-p-Nephrin-Alexa488 in solution, with slope I-labeled protein (Figure 2-figure supplement 1B). I-labeled protein was identical in the presence or absence of Ni-NTA-containing SUVs at 9.5 µM Ni-NTA concentration (a minimum of 158-fold excess over His8-p-Nephrin), showing that His8-p-Nephrin-Alexa488 fluorescence does not change upon binding lipid. The correction factor F, defined as F = I-labeled protein/I-labeled SUV, represents the intrinsic brightness of, and the sensitivity of the microscope for, His8-p-Nephrin-Alexa488 vs OG-DHPE. Since the OG and Alexa488 fluorophores have very similar excitation and emission spectra, F should be an instrument-independent parameter. SUVs containing OG-DHPE were combined in different ratios with non-fluorescent SUVs to make supported bilayers with OG-DHPE densities between 0.05 and 0.4%. Assuming a lipid head group surface area of 69 Å² (Kucerka et al., 2005), this corresponds to OG-DHPE densities of 1430-11,440 molecules/µm². A standard curve of bilayer fluorescence intensity vs fluorophore density was then generated from these bilayers on a Nikon Eclipse Ti microscope with a 100× objective. To obtain the density of His8-p-Nephrin-Alexa488 on the supported bilayers, the measured fluorescence intensity was first divided by F, and the result was analyzed with the OG-DHPE bilayer standard curve (Figure 2-figure supplement 1C). We note that this approach assumes that F is the same on the supported bilayer as when His8-p-Nephrin-Alexa488 and OG-DHPE are associated with SUVs in free solution. To examine potential changes in Alexa488 fluorescence as a function of p-Nephrin density, we generated supported bilayers as above with 10-100% Alexa488-labeled p-Nephrin. Intensity remained linear up to ∼60% labeling. Initial measurements suggested that the density change in p-Nephrin upon clustering is fourfold. Therefore, we used p-Nephrin labeled with 15% or less Alexa488 for all quantitative image analyses.

Critical concentration measurements

For measurements of the critical concentration of clustering, images were collected on a Nikon Eclipse Ti microscope equipped with an Andor iXon Ultra 897 EM-CCD camera, with a 100× objective in epi-fluorescence mode. Background was collected from supported bilayers containing non-fluorescent lipids and subtracted from all images before processing. Images were corrected for uneven illumination and detector sensitivity as previously described (Wu et al., 2008). Briefly, pixel intensities across a homogeneous bilayer containing p-Nephrin-Alexa488 were normalized to the maximum intensity of the image to obtain pixel-by-pixel correction factors (in a 0 to 1 range). Experimental images were then corrected by dividing by these factors. Images were thresholded using the triangle algorithm in ImageJ. The fractional intensity of the clustered regions was then calculated by dividing the integrated intensity of the thresholded image by that of the non-thresholded image.
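A minimal sketch of the density-calibration arithmetic described above may be helpful (our reconstruction; the slope values would come from the measured standard curves, and the function names are ours):

```python
# Sketch of the bilayer density calibration (assumed implementation).
def ogdhpe_density(mole_fraction, headgroup_area_A2=69.0):
    """Fluorescent-lipid density (molecules/um^2) in a bilayer, counting
    both leaflets; 1 um^2 = 1e8 A^2."""
    lipids_per_um2 = 2 * 1e8 / headgroup_area_A2   # both leaflets
    return mole_fraction * lipids_per_um2

# 0.4% OG-DHPE -> ~1.16e4 molecules/um^2, close to the 11,440 quoted above
# (small differences reflect rounding of the head group area).
print(f"{ogdhpe_density(0.004):.0f}")

def protein_density(I_bilayer, F, slope_ogdhpe_bilayer):
    """Convert a measured p-Nephrin-Alexa488 bilayer intensity to
    molecules/um^2 using the correction factor F = I_protein / I_SUV and
    the OG-DHPE bilayer standard curve (intensity per molecule/um^2)."""
    return (I_bilayer / F) / slope_ogdhpe_bilayer
```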
Analyzing the clusters using either the triangle algorithm or the Maximum Entropy algorithm yielded the same critical concentrations. Similar thresholding results were obtained using an iterative manual procedure to identify pixels with intensity greater than three standard deviations above the mean of the non-clustered regions. Thus, our calculation of the fractional intensity in the clustered regions, and our consequent determination of the critical concentration, do not depend on the method used to identify clusters.

Size distribution and spatial distribution analyses

For the data in Figure 2B,C, 512 × 512 pixel images were taken at 93 randomly selected areas of a sample with clusters made using p-Nephrin-Alexa488, 1 µM Nck, and 1 µM N-WASP. The images were background-corrected as described above, flattened using the rolling-ball method in ImageJ, and thresholded using the triangle method. The clusters were binned according to size (excluding those at the image edges), and the distribution was fit to a single exponential using GraphPad Prism. The size distributions at later time points were determined similarly from single images obtained at each time point. To analyze the spatial distribution of puncta, each thresholded image was divided into 25 boxes. In each box, the number of clusters was counted twice, excluding and including clusters at the edges. The average number of edge clusters was obtained from the difference between these values, averaged across all boxes in all images. To eliminate overcounting, half of this value was subtracted, for each box, from the number of clusters counted including edges. These data were plotted as a frequency histogram in GraphPad Prism and fit to a Gaussian distribution.

Fluorescence recovery after photobleaching (FRAP)

FRAP was performed using a Nikon Eclipse Ti microscope equipped with an Andor iXon Ultra EM-CCD camera. A circle of 1 µm diameter was photobleached, and recovery was followed for up to 1000 s. The images were corrected for drift using the SIFT-Align plugin in ImageJ (Schneider et al., 2012). Background photobleaching was measured by imaging under the same conditions but without the laser illumination used for photobleaching. Background-corrected images were normalized to the intensities of the pre-bleach images and fit to either a single or a double exponential using GraphPad Prism. F-tests performed in Prism demonstrated that the double-exponential fits are more appropriate (p-values for all experiments were <0.0001, Table 2). In the FRAP experiments, a glucose-oxidase scavenger system with Trolox was used to reduce photobleaching during the recovery period. Dissociation of His8-p-Nephrin-Alexa488 from the membrane was monitored by the decrease in total fluorescence measured in TIRF mode following washes that afforded a final solution concentration of 2.8 nM (see 'Supported lipid bilayers' above). To limit the effect of photobleaching, the images at each time point were taken at different areas of the bilayer. The data were fit to a single exponential with a time constant of 2080 s. For quantitative analysis of actin assembly, images were background-corrected and thresholded as described above. In the p-Nephrin clusters, the average intensities of p-Nephrin and rhodamine-actin were measured for times up to 27 min. For each cluster, t1/2 represents the time at which the average actin intensity reaches half its maximum value.
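The model comparison described above can be illustrated with the following sketch (an assumed implementation in Python/SciPy, not the GraphPad Prism analysis used here), which fits single- and double-exponential recovery models to a normalized FRAP curve and compares them with an F-test for nested models:

```python
# Illustrative FRAP model fitting and F-test (assumed implementation).
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import f as f_dist

def single_exp(t, A, tau):
    # normalized recovery: 0 at t = 0, plateau A
    return A * (1 - np.exp(-t / tau))

def double_exp(t, A1, tau1, A2, tau2):
    return A1 * (1 - np.exp(-t / tau1)) + A2 * (1 - np.exp(-t / tau2))

def compare_models(t, y):
    """Fit both models and return the double-exp parameters, the F
    statistic, and the p-value; a small p favors the double exponential."""
    p1, _ = curve_fit(single_exp, t, y, p0=[1.0, 10.0], maxfev=10000)
    p2, _ = curve_fit(double_exp, t, y, p0=[0.5, 2.0, 0.5, 60.0], maxfev=10000)
    rss1 = np.sum((y - single_exp(t, *p1)) ** 2)
    rss2 = np.sum((y - double_exp(t, *p2)) ** 2)
    df1, df2 = len(t) - 2, len(t) - 4          # residual degrees of freedom
    F = ((rss1 - rss2) / (df1 - df2)) / (rss2 / df2)
    pval = 1 - f_dist.cdf(F, df1 - df2, df2)
    return p2, F, pval
```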
Isothermal titration calorimetry

ITC was performed using a VP-ITC 200 calorimeter (GE Healthcare). Before the experiments, the proteins were dialyzed into the same buffer (25 mM Hepes, pH 7.5, 150 mM NaCl, 1 mM MgCl2, and 2 mM TCEP). Nck at 150 µM in the syringe was titrated into either triply phosphorylated Nephrin or triply phosphorylated TIR. Isotherms were fit well using NITPIC (Keller et al., 2012) and SEDPHAT (Houtman et al., 2007), assuming that all three pTyr sites in p-Nephrin have equal affinity for Nck.
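As an illustration of how the measured affinities translate into site occupancy (our calculation, independent of the NITPIC/SEDPHAT analysis above), the exact 1:1 binding solution contrasts the p-Nephrin motif (Kd = 370 nM) with the p-TIR motif (Kd = 40 nM):

```python
# Exact 1:1 binding occupancy (illustrative calculation, ours).
import numpy as np

def fraction_bound(L_total, R_total, Kd):
    """Solve the quadratic for the complex concentration in 1:1 binding
    and return the fraction of sites (R) occupied."""
    b = L_total + R_total + Kd
    complex_ = (b - np.sqrt(b**2 - 4 * L_total * R_total)) / 2
    return complex_ / R_total

# Example: 200 nM ligand, 100 nM sites
for kd in (370e-9, 40e-9):
    frac = fraction_bound(200e-9, 100e-9, kd)
    print(f"Kd = {kd*1e9:.0f} nM -> fraction bound = {frac:.2f}")
# ~0.31 for the p-Nephrin motif vs ~0.76 for the p-TIR motif, consistent
# with the lower clustering concentration of the higher-affinity system.
```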
Focal pericoronary adipose tissue attenuation is related to plaque presence, plaque type, and stenosis severity in coronary CTA

Objectives: To investigate the association of pericoronary adipose tissue mean attenuation (PCATMA) with coronary artery disease (CAD) characteristics on coronary computed tomography angiography (CCTA).

Methods: We retrospectively investigated 165 symptomatic patients who underwent third-generation dual-source CCTA at 70 kVp: 93 with and 72 without CAD (204 arteries with plaque, 291 without plaque). CCTA was evaluated for the presence and characteristics of CAD per artery. PCATMA was measured proximally and across the most severe stenosis. Patient-level, proximal PCATMA was defined as the mean of the proximal PCATMA of the three main coronary arteries. Analyses were performed on the patient and vessel level.

Results: Mean proximal PCATMA was −96.2 ± 7.1 HU and −95.6 ± 7.8 HU for patients with and without CAD (p = 0.644). In arteries with plaque, proximal and lesion-specific PCATMA were similar (−96.1 ± 9.6 HU, −95.9 ± 11.2 HU, p = 0.608). Lesion-specific PCATMA of arteries with plaque (−94.7 HU) differed from proximal PCATMA of arteries without plaque (−97.2 HU, p = 0.015). Minimal stenosis showed higher lesion-specific PCATMA (−94.0 HU) than severe stenosis (−98.5 HU, p = 0.030). Lesion-specific PCATMA of non-calcified, mixed, and calcified plaque was −96.5 HU, −94.6 HU, and −89.9 HU, respectively (p = 0.004). Vessel-based total plaque, lipid-rich necrotic core, and calcified plaque burden showed very weak to moderate correlations with proximal PCATMA.

Conclusions: Lesion-specific PCATMA was higher in arteries with plaque than proximal PCATMA in arteries without plaque. Lesion-specific PCATMA was higher in non-calcified and mixed plaques than in calcified plaques, and in minimal stenosis compared to severe; proximal PCATMA did not show these relationships. This suggests that lesion-specific PCATMA is related to plaque development and vulnerability.

Key Points
• In symptomatic patients undergoing CCTA at 70 kVp, PCATMA was higher in coronary arteries with plaque than in those without plaque.
• PCATMA was higher for non-calcified and mixed plaques than for calcified plaques, and for minimal stenosis compared to severe stenosis.
• In contrast to PCATMA measurement of the proximal vessels, lesion-specific PCATMA showed clear relationships with plaque presence and stenosis degree.

Supplementary Information: The online version contains supplementary material available at 10.1007/s00330-021-07882-1.

Introduction

Coronary inflammation plays an important role in the development of atherosclerosis [1-3]. Detection and quantification of coronary inflammation could assist in early risk stratification of coronary artery disease (CAD) patients, possibly even before the development of coronary plaque [4]. Recently, a non-invasive biomarker for coronary inflammation was proposed: coronary computed tomography angiography (CCTA) derived pericoronary adipose tissue mean attenuation (PCATMA) [5]. PCATMA has shown value as a predictor of cardiac mortality [6]. A few studies, predominantly using the proximal right coronary artery (RCA) as a representative location for patient-level analysis, have shown a relationship of PCATMA with CAD and atherosclerosis progression [5, 7-9]. CCTA-based plaque composition and stenosis severity provide information about plaque vulnerability and hemodynamic significance and can be used for prognostication [10-13].
A previous study showed a PCATMA difference of 3-4 HU in the proximal RCA between CAD and non-CAD patients [5]. However, that study found no significant difference in RCA-based PCATMA between non-calcified plaques (NCP) and mixed or calcified plaques (CP) in patients with high plaque burden. Another study demonstrated that increased NCP and total plaque burden were associated with higher PCATMA [8]. Most studies measured PCATMA at one proximal coronary location [5, 6, 8, 14]. Compared to proximal PCATMA, there may be a stronger relation of lesion-specific PCATMA with plaque, considering a hypothesized local effect of coronary inflammation. Three PCATMA studies (35-199 patients) used a lesion-based measurement method considering all three main coronary arteries [9, 15, 16]. One study showed that lesion-specific PCATMA was higher around culprit lesions in acute coronary syndrome (ACS) patients compared to non-culprit lesions in ACS and CAD patients [15]. Another study revealed that lesion-specific PCATMA was significantly increased in patients with abnormal FFR [9]. However, lesion-specific PCATMA failed to show a significant difference between patients with and without elevated high-sensitivity C-reactive protein [16]. Currently, there is a lack of knowledge on the relationship between PCATMA and plaque presence, plaque type, and stenosis severity. In addition, the majority of studies investigated only a single, proximally measured PCATMA value (mostly in the RCA) to represent overall pericoronary attenuation and did not investigate a potentially more relevant, focal PCATMA value across coronary plaque. The aim of this study was to evaluate the relationship of proximal and lesion-specific PCATMA with coronary plaque presence, type, and severity.

Materials and methods

Study population

This single-center, cross-sectional study was performed at the University Medical Center Groningen. The study was compliant with the Declaration of Helsinki and approved by the institutional ethical review board, which waived the need for informed consent. In total, 2621 patients underwent cardiac CTA for routine indications between January 2015 and November 2017. Of these patients, a random sample of 1280 patients was further characterized by gathering hospital record information on CT indication, demographics, and clinical risk factors, to be used in various CT analyses. In a previous analysis (Ma et al.) [17], we studied a cohort of patients with a zero calcium score and no coronary plaque on CCTA ('normal patients'); from this population, we selected patients with CCTA at 70 kilovoltage peak (kVp) as a reference category for the current study (n = 72). From the 697 patients (out of 1280) who underwent CCTA because of angina, we randomly selected patients with CAD, defined as patients with plaque on their CCTA images, for the current analysis, based on the following inclusion criteria: (1) age > 18 years; (2) CCTA performed at 70 kVp; (3) no coronary stents or coronary artery bypass grafts. Tube voltage was restricted to 70 kVp in view of the known influence of kVp on PCATMA [17]. In total, 171 patients (72 + 99) were included. Six CAD patients were excluded for the following reasons: anomalous origin of a coronary artery (n = 2), insufficient image quality (n = 1), and incomplete coronary image coverage (n = 3) (Fig. 1). A radiologist with 10 years of experience in cardiac radiology performed the CCTA evaluation (R.M.). In case of doubt, a radiologist with 14 years of experience was consulted and consensus was obtained (R.V.).
CCTA scan protocol

CCTA imaging was performed according to the routine clinical protocol using a third-generation dual-source CT system (SOMATOM Force, Siemens Healthineers). First, a non-enhanced ECG-gated CT at high pitch (tube voltage 120 kVp, reference tube current 64 mAs, reconstructed slice thickness 3.0 mm) was performed for coronary calcium score (CACS) analysis. Subsequently, CCTA was performed using CARE kV (kVp optimization assistance) depending on patient size; patients scanned at 70 kVp were included. ECG-gated high-pitch spiral scanning was performed at low, regular heart rates; otherwise, ECG-triggered sequential scanning was used. Patients received sublingual nitroglycerin unless contraindicated. If the heart rate was > 70-73 beats/min, the patient received an intravenous beta-blocker, unless contraindicated. Contrast timing was determined using a test bolus. Iomeprol (Iomeron 350) was injected with dose and flow rate depending on patient characteristics and scan mode. A dual-injection technique was used, followed by a saline flush. CCTA images were reconstructed at 0.6 mm thickness.

Patient characteristics

Baseline patient characteristics were collected from clinical records, including age, sex, and CAD risk factors. The classification criteria for risk factors were as follows: (a) hypertension: systolic blood pressure > 140 mmHg or diastolic blood pressure > 90 mmHg according to guidelines [18], and/or use of antihypertensive medication; (b) hyperlipidemia: low-density lipoprotein > 4.5 mmol/L or total cholesterol > 6.5 mmol/L based on guidelines [19]; use of lipid-lowering medication at the time of CT scanning was considered a separate factor indicating treated hyperlipidemia; (c) diabetes mellitus: use of antidiabetic medication; (d) smoking status, classified as non-smoker, current smoker, or former smoker. Depending on the risk factor, information was missing in 26 to 51 patients. If there was no mention of a risk factor, the risk factor was considered absent. Body mass index (BMI) information was collected as well.

Plaque analysis

Visual, qualitative analysis

For visual plaque evaluation, only the main coronary arteries, the left anterior descending (LAD), left circumflex (LCx), and right coronary artery (RCA), were taken into account to optimize patient comparability. Plaque composition and diameter stenosis (DS) were assessed for the most severe plaque per coronary artery. Plaque components were classified into non-calcified plaque (NCP), mixed plaque, and calcified plaque (CP). On visual analysis, CP was defined as plaque with > 75% of its volume having a density higher than the luminal contrast; NCP was defined as plaque with > 75% of its volume having a density lower than the luminal contrast and higher than the surrounding soft tissue. Mixed plaque was defined as plaque in which 25 to 75% of the volume had a density higher than the luminal contrast [20, 21]. DS was classified into four stenosis categories: minimal, DS 1-24%; mild, DS 25-49%; moderate, DS 50-69%; and severe, DS 70-100% [22].

Quantitative analysis

Quantification of plaque composition was performed semi-automatically by software (vascuCAP, Research Edition, Elucid Bioimaging) [23]. Automatic segmentation of the entire coronary lumen and wall was performed, with manual corrections if needed. Subsequently, the matrix burden, CP burden, and lipid-rich necrotic core (LRNC) burden were automatically calculated by the software on a per-vessel level [24].
The classification of the different plaque components, which was validated against plaque histology, was based on an adaptive threshold. The lower limit of LRNC was defined as −300 HU; the LRNC-IPH (intraplaque hemorrhage) boundary was defined as 25 HU. The lower and upper limits of CP were 250 and 3000 HU. Matrix burden was calculated by dividing the matrix volume by the total wall volume, where matrix is defined as normally organized tissue in the vessel wall [23]. Plaque burden was defined as 1 − matrix burden [24].

PCATMA measurements

PCATMA was measured proximally in the RCA, LAD, and LCx using dedicated software (Aquarius iNtuition, TeraRecon, Version 4.4.13). The starting point of the proximal PCATMA measurement was 10 mm after the left main bifurcation for the LAD, at the bifurcation point for the LCx, and 10 mm after the ostium for the RCA [17]. In vessels with plaque, a lesion-specific PCATMA measurement was performed, centered on the most severely stenotic plaque. The proximal and distal ends of the measurement were 5 mm away from the lesion center. The measurement length and width for all measurements were 10 mm and 1 mm, respectively. A 1 mm gap was left between the outer vessel wall and the measured cylindrical volume, taking eccentric plaques into account, to avoid artifacts. PCATMA was defined as the mean CT value in the measured volume within the range of −190 to −30 HU (Fig. 2).

Data analysis

First, PCATMA was studied on the per-patient level (Fig. 1). Patients with any coronary plaque were considered CAD patients; patients without plaque were considered non-CAD patients. For the per-patient PCATMA, the mean of the proximal PCATMA values of the three main coronary arteries was calculated to represent an overall, patient-based PCATMA value. Patient-based CACS and DS were analyzed in conjunction with the per-patient PCATMA. Patient-level categorization of DS degree was based on the most severe DS in the three coronary arteries. To allow comparison with prior studies that used only the proximal PCATMA measurement of the RCA, we additionally performed analyses for RCA-based PCATMA. Additionally, patients with and without at least 50% stenosis were compared. The total plaque burden of the main coronary arteries was considered the patient-based plaque burden. Second, vessel-based analysis was performed (Fig. 2). We discriminated between arteries with any plaque and arteries without plaque. CAD patients could contribute arteries without plaque. For arteries with multiple plaques, the lesion with the highest DS was used. The proximal PCATMA was used in arteries without plaque for comparison with lesion-specific PCATMA in arteries with plaque. Lesion-specific PCATMA was analyzed by plaque type and DS severity.

Statistical methods

Normality testing for continuous variables was performed with the Shapiro-Wilk test. Continuous variables are presented as mean ± standard deviation (SD) or median (interquartile range [IQR]), according to distribution. Model-estimated values are given as mean with 95% confidence interval (CI). Categorical variables are reported as numbers (n) and percentages (%). Paired t-tests were used to evaluate differences between proximal and lesion-specific PCATMA. Independent t-tests were used to compare PCATMA measurements between patients. One-way analysis of variance (ANOVA) was used to compare PCATMA between categories of plaque type and DS severity.
Spearman correlation testing was used to assess the correlation of PCAT MA with plaque burden and plaque component burden. A generalized linear model was used to evaluate the factors influencing patient-based PCAT MA. Using mixed models with random intercepts, model-estimated marginal means and 95% CIs of the corrected PCAT MA were calculated. The basic model included age, sex, and vessel, while the advanced models included CAD risk factors. The models did not include BMI because of 43 missing values. PCAT MA was taken as the dependent variable in order to study the relationship between PCAT MA and plaque features. A p value < 0.05 was considered statistically significant. Statistical analyses were performed using SPSS version 25 (IBM).

Patient demographics

In total, 93 patients with CAD and 72 patients without CAD were included. Figure 2 shows an overview of the inclusion process. Patient characteristics are given in Table 1. Patients with CAD were significantly older (60.9 ± 8.7 vs. 51.2 ± 12.6 years, p < 0.001) and had significantly more hypertension.

Patient-based PCAT MA analysis

An overview of PCAT MA values for CAD and non-CAD patients, CACS, and DS category is provided in Table 2, Table S1, and Table S2.

Vessel-based proximal PCAT MA analysis

There were 204 arteries with plaque and 291 without plaque (216 from patients without CAD and 75 from patients with CAD). The mean proximal PCAT MA of vessels without plaque was −95.6 ± 9.6 HU and −96.3 ± 8.3 HU for patients with and without CAD, respectively (p = 0.567). The different plaque components or degrees of stenosis did not show a difference in proximal PCAT MA.

Vessel-based lesion-specific PCAT MA analysis

Lesion-specific PCAT MA showed a significant difference (p = 0.002) between coronary lesions with different plaque components. Figure 3 gives an overview of proximal and lesion-specific PCAT MA measurements for different plaque components and degrees of stenosis. After correction for CAD risk factors, LRNC burden and plaque burden had significant effects (estimate: −0.8 vs. −0.6) on proximal PCAT MA, while CP burden had no significant effect on proximal PCAT MA (Table 3).

Discussion

This study investigated the relationship between PCAT MA and plaque presence, plaque type, and stenosis severity in the main coronary arteries in symptomatic patients undergoing CCTA at 70 kVp. PCAT MA was higher in vessels with plaque than in vessels without plaque, taking into account patients' risk factors. Lesion-specific PCAT MA was higher for non-calcified and mixed plaques compared to calcified plaques, and for minimal stenosis compared to severe stenosis. In contrast to proximal PCAT MA, lesion-specific PCAT MA showed clear relationships with plaque presence and stenosis degree. The proof-of-concept paper by Antonopoulos et al [5] demonstrated that RCA-based PCAT MA differed by approximately 3 HU between CAD and non-CAD patients, where CAD was defined as the presence of a stenosis of more than 50%. As PCAT MA values vary between coronary arteries, and plaque distribution varies among the coronary arteries (with the LAD most often affected), taking only the RCA as the PCAT MA reference location may not accurately represent the patient's PCAT MA status. Oikonomou et al [6] reported that increased PCAT MA in the RCA and LAD, rather than the LCx, was related to increased cardiac mortality risk.
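For readers who want to reproduce the kind of adjusted analysis described in the Statistical methods section above, the sketch below shows how comparable random-intercept models could be fit in Python with statsmodels. The paper used SPSS, and all column names here are hypothetical placeholders.

```python
import pandas as pd
import statsmodels.formula.api as smf

# One row per measured vessel; 'patient_id' groups repeated measures
# within a patient. Column names are assumptions, not the study's.
df = pd.read_csv("pcat_measurements.csv")

# Basic model: PCAT MA adjusted for age, sex, and vessel,
# with a random intercept per patient.
basic = smf.mixedlm("pcat_ma ~ age + sex + vessel",
                    data=df, groups=df["patient_id"]).fit()
print(basic.summary())

# Advanced model: additionally adjusted for CAD risk factors.
advanced = smf.mixedlm("pcat_ma ~ age + sex + vessel + hypertension"
                       " + hyperlipidemia + diabetes + smoking",
                       data=df, groups=df["patient_id"]).fit()
print(advanced.summary())
```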
Gaibazzi et al [25] reported significant differences between the LAD/RCA and the LCx in vessels with a stenosis < 50%, with a difference of approximately 1.5 HU on 120 kVp scans. In our previous study, comparing PCAT MA at different kVp levels in patients without plaque, there were significant differences between the PCAT MA of the LAD, LCx, and RCA, with a difference of around 2-4 HU [17]. Besides the coronary artery, the measurement location may also have a significant effect on PCAT MA. Goeller et al [8] showed that, although there was a correlation between PCAT MA and epicardial adipose tissue (EAT), there was no correlation between changes in EAT and plaque burden progression. Dai et al [16] found no relationship between lesion-specific PCAT MA and high-sensitivity C-reactive protein, suggesting that PCAT MA may be associated with local coronary inflammation rather than global inflammation. The previously mentioned studies used lesion-specific PCAT MA only; few investigated the relationship with coronary plaque. Kwiecinski et al [26] found that increased lesion-specific PCAT MA in patients with high-risk plaque was related to focal 18F-NaF PET uptake. Lin et al [27] reported on the relationship between PCAT radiomic features and PCAT MA in the proximal RCA and around (non-)culprit lesions at presentation and 6 months post-MI, in comparison with stable CAD and non-CAD cases. They reported that the most significant radiomic parameters distinguishing patients with and without MI were based on texture and geometry, yielding information not included in PCAT attenuation. They found that radiomic features did not differ between culprit and non-culprit lesions, whereas PCAT MA showed a significant difference. The authors suggest that PCAT MA may have utility as a lesion-specific imaging biomarker, while radiomic features may have more value as a patient-specific biomarker of systemic inflammation. Our study, using both proximal and lesion-based PCAT MA, confirms that lesion-specific PCAT MA is a better representation of focal inflammation and plaque development. Only lesion-specific PCAT MA measurements showed a difference between vessels with and without plaque. Using an adjusted model, the PCAT MA of vessels with plaque was around 2 HU higher than that of vessels without plaque. This result is similar to the HU difference in the study by Antonopoulos et al [5]. Lesion-specific PCAT MA differed by DS category, taking into account age, sex, and coronary artery. Our results suggest that there may be more inflammation in mild and moderate DS than in severe DS. This fits with the hypothesis that, as the plaque becomes more stabilized and more calcified in severe DS, inflammation could be relatively decreased [28]. Inflammatory cytokines play a critical role in the development and progression of coronary atherosclerosis [29,30]. The theory behind PCAT MA is that vessel wall atherosclerosis inhibits adipocyte maturation and lipid accumulation in the pericoronary fat tissue, increasing its attenuation. Additionally, corresponding increases in edema and in the number of inflammatory cells possibly result in an additional increase in PCAT MA in patients at risk of or with CAD [31,32]. Results from previous studies suggest that the relationship between coronary inflammation and PCAT MA may be more evident in NCP than in CP, since CPs are relatively stable and have only a minimal inflammatory component [31,32]. Goeller et al [8] investigated the relationship between PCAT MA and progression of plaque burden on CCTA.
Measuring patient-based plaque burden/composition and RCA-based PCAT MA, they found that PCAT MA is related to progression of total plaque burden and NCP burden. A PCAT MA > −75 HU in the proximal RCA was independently associated with increased NCP burden on 120 kVp CCTA [8]. However, similar to our results, they found no relationship with CP burden. In our study, the model-adjusted, lesion-specific PCAT MA values for NCP were 5-7 HU higher compared to CP and mixed plaques on 70 kVp CCTA, measured in the three main coronary arteries. Our study showed only a weak correlation between vessel-based plaque burden and per-vessel PCAT MA, and no significant correlation between patient-based total plaque burden and patient-based PCAT MA. The per-vessel LRNC burden had a moderate correlation with PCAT MA, whereas the CP burden showed a very poor correlation. Recent research revealed that LRNC burden predicts myocardial infarction better than CAC scoring, cardiovascular risk scores, and coronary artery stenosis [33]. There are reports showing that lipid-lowering medication can decrease EAT attenuation independently of decreasing lipid values [34]. Our study also shows a significant effect of lipid-lowering medication on PCAT MA values, supporting the idea that statins have an effect on cardiac fat attenuation and, potentially, adipose tissue activity [35]. Additionally, we found that vessel, sex, and age had significant effects on PCAT MA. The relationship between age, sex, and CAD has been reported frequently [36][37][38]. Men showed generally higher PCAT MA values than women (−94.0 vs. −97.3 HU). Sex-specific hormones may account for the different effects on coronary inflammation.

Limitations

This is a single-center, cross-sectional study of patients with clinically indicated CCTA. No follow-up information is available; hence, CCTA results cannot be related to cardiovascular prognosis. Although our study demonstrates a relationship between plaque presence, type, and stenosis degree and PCAT MA, it was not designed to show direct causality between inflammatory status, plaque characterization, and PCAT MA. Plaque burden quantification was performed by automatic software, allowing manual corrections. In general, automatic analysis might be sensitive to errors due to image artifacts, decreased image quality, or segmentation mistakes. To avoid such errors in this study, scans were selected on image quality (2 scans were excluded), and at each segmentation step the segmentation was visually assessed and manually corrected when necessary by an experienced radiologist. Window levels could be adjusted manually to reduce, for example, blooming effects from calcifications, in order to optimize the segmentation and automated analysis.

Conclusion

PCAT MA was higher in coronary arteries with plaque compared to vessels without plaque. Lesion-specific PCAT MA was higher in NCP and mixed plaque compared to CP, and in minimal stenosis compared to severe stenosis. Proximally measured PCAT MA only showed differences by plaque composition, and only when corrected for clinical parameters. This suggests that lesion-specific PCAT MA in particular is related to plaque development and vulnerability.
5,173
2021-04-16T00:00:00.000
[ "Medicine", "Biology" ]
SOME ECONOMIC INDICATORS OF PRODUCTION OF COW'S MILK IN THE REPUBLIC OF SERBIA

The subject of this research is cattle breeding, with a focus on the production of cow's milk in the Republic of Serbia. The main goal is to analyze the state and trends of cow's milk production in Serbia during the last ten years in relation to production in Europe, the European Union, and the world. Data from the SORS and FAO databases, among others, were used. In Serbia, 908,102 head of cattle are raised on 177,552 family farms, i.e., an average of 5.11 head of cattle per farm. The number of cattle has dropped by more than 200,000 head over the last decade. Cow's milk accounts for 96.84% of total milk production in Serbia. The average milk yield of cows in Serbia is far below the European average. The highest average amount of milk is recorded in the Belgrade region, where 5,335 liters per milking head are produced in one year. The quality of cow's milk in Serbia is far below EU standards, which is a key restriction on exports. © 2021 EA. All rights reserved.

Introduction

Cattle breeding is the most important branch of livestock production and an indicator of the development of the entire agricultural and food sector, both in the world and in the Republic of Serbia. The harmonization of livestock with field production contributes to greater stability of production on the farm and, more broadly, of the country's overall agriculture. In Serbia, 908,102 head of cattle are raised on 177,552 family farms, that is, an average of 5.11 head of cattle per farm. In the structure of agricultural farms by region of Serbia, farms where livestock is represented account for 79.10% in the region of Šumadija and Western Serbia, while in the region of Vojvodina this share is somewhat smaller, at 72.90% (SORS, 2019). One of the most important livestock products of the Republic of Serbia is cow's milk, and the largest quantities are produced in the region of Šumadija and Western Serbia, exactly where the largest number of dairy cows is raised. According to Kitsopanidis (2000), farms with less than 5,000 liters of milk per cow per year are not sustainable, farms with 5,000-6,000 liters per cow are sustainable but not competitive, and farms with over 6,000 liters of milk per cow are both sustainable and competitive. MacDonald et al. (2007) determined a gross income of 4,051.50 euros per cow; in its structure, milk accounts for 87.80% of the value, the share of meat sales is much smaller (6.70%), and various types of support represent the smallest part of total gross income, only 5.50%. Of the cattle breeds raised in Serbia, the largest share belongs to the Simmental breed, the so-called "Serbian Simmental". A large number of high-quality cattle of this breed have been imported during the last decade from Germany and Austria, so that the Simmental breed makes up about 80% of the total number of cattle in the country. The share of crossbreeds of Simmental with other breeds, mainly with the Buša, is about 5%, while pure autochthonous breeds (Buša and Podolian) number a little more than 1,000 head. Recently, there has been a decreasing trend in Serbia in the number of producers who raise 1-3 head of dairy cows (Perišić et al., 2012). Stankov (2015) states that small farms (1-9 head of dairy cows) are relatively acceptable in terms of profitability, due to the engagement of family members. However, such farms have a low yield rate, about 36%.
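Expressed as code, the Kitsopanidis (2000) thresholds form a simple three-band classification. The Python sketch below is our illustration only; the function name is ours.

```python
def farm_viability(liters_per_cow_per_year: float) -> str:
    """Classify dairy-farm viability by annual milk yield per cow,
    following the bands attributed to Kitsopanidis (2000)."""
    if liters_per_cow_per_year < 5_000:
        return "not sustainable"
    if liters_per_cow_per_year <= 6_000:
        return "sustainable but not competitive"
    return "sustainable and competitive"

# The Belgrade region's average of 5,335 liters per milking head
# falls in the middle band:
print(farm_viability(5_335))   # sustainable but not competitive
```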
Due to insufficient animal productivity and the small volume of final product sales, the economic efficiency of small farms is not satisfactory. Animal nutrition has a great impact on the profitability of family farms: in the total cost of keeping dairy cows, the largest share is made up of feed costs, ranging from 45 to 60% (Glavić et al., 2017). Artificial insemination of cattle, import of breeding heads, application of selection, and crossing of domestic autochthonous breeds with noble breeds of cattle have significantly contributed to the change in the breed composition of cattle in Serbia. According to Popović (2014), the average capacity of cattle production, 5.1 head of cattle of all categories per farm, indicates that cattle breeding in Serbia is dominated by small family farms. At the same time, milk production takes place on farms with an average capacity of 2.8 head of dairy cows. Perišić et al. (2002) found statistically very significant effects of age at first fertilization on milk yield and 4% MCM in standard lactations. The highest average amount of milk per milking head is recorded in the Belgrade region, where 5,335 liters of cow's milk, 200 liters of sheep's milk, and 345 liters of goat's milk are produced in one year. In accordance with the structure of milk production, the production of dairy products on farms is mostly based on cow's milk. The subject of this research is therefore cattle breeding with a focus on the production of cow's milk. The main goal of the research is to determine, based on selected parameters, the state and trends of cow's milk production in Serbia during the last ten-year period in relation to production in Europe, the European Union, and the world.

Materials and Methods

For this research, data and publications of official statistics published in statistical yearbooks, bulletins, and other materials of the Statistical Office of the Republic of Serbia, relevant publications of the FAO database, as well as other available domestic and foreign sources on the Internet were used. Data from the Expert Reports of the Institute of Animal Husbandry in Belgrade were used as an additional source. When considering the situation and trends in cow's milk production in Serbia, the selected parameters were compared with the parameters of milk production in the European Union and the world. Agriculture and the food industry in the EU are protected by trade barriers and receive substantial financial support through a specific and dedicated policy (Andrei et al., 2020). This can influence production structures and their evolution (Popescu et al., 2018).

Results and Discussion

According to FAO data (2020), cattle in the world are mostly raised in the Americas (35.50%), followed by Asia (33.40%), with a slightly lower share in Africa (17.80%) and the lowest share in Europe (10.50%). The number of dairy cattle in the world gradually increased in the period 2008-2017, so that in 2017 it was 10% higher than in 2008, with a slight decrease in 2018. Europe and the European Union showed a similar trend, although by 2018 the number had decreased by 8% in the EU and by 10% in Europe compared to 2008 (Figure 1). Over the last decade, the number of cattle in Serbia has dropped by more than 200,000. Farms that specialize in one of the branches of animal husbandry make up 14.30% of the total number of farms where livestock is raised (SORS, 2019).
Territorially, the largest number of farms specialized in animal husbandry is located in the region of Šumadija and Western Serbia, and the smallest in the Belgrade region, which is consistent with the total number of farms where livestock is raised. The average milk yield of cows in the Republic of Serbia is far below the European average. According to the SORS (2019), the total number of milking heads in the Republic of Serbia is declining. On the other hand, the number of heads from which milk is delivered to dairies is constant. In the structure of farms in Serbia where cattle are raised, farms with up to three head make up about 50%, while the share of farms with 20 or more head is only 3.20% (Table 1). Milk production in developed countries is based on large family farms. This is a consequence of the fact that, over the last decades of cattle production development, the number of producers has decreased while the number of head on existing family farms has increased. The biggest problem for small producers in the new member states was harmonization with EU standards; a chance can be found in organic farming (Popescu & Andrei, 2011; Vasile et al., 2015). The average production of cow's milk in Serbia decreased by 8% in 2013 compared to 2008, then stagnated over the following three years, with a slight increase of 2% in 2018 compared to 2013. Indices of average milk production per year were calculated on the basis of data from the FAO database (Figure 3). The largest quantities of cow's milk are produced in the region of Šumadija and Western Serbia (Table 3). Second place in terms of the amount of cow's milk production belongs to the region of Vojvodina (28.67%), followed by the region of Southern and Eastern Serbia (17.95%), and finally the Belgrade region (6.70%). During the period from 2007 to 2018, the average annual production of cow's milk ranged from 670 to 750 million liters, with significant oscillations over the years. The largest quantities of milk were produced in 2007 and 2008, and the smallest in 2013 and 2014 (Figure 5). The average milk yield of Holstein-Friesian cows is lower than in Croatia (7,633 kg) and Slovenia (7,535 kg). The composition of raw milk and its hygienic correctness are the factors on which its purchase price depends. In most industrialized countries, milk quality is defined by the level of somatic cells in the raw milk tank (Eduardo, 2014). High somatic cell levels indicate poor milk quality, which leads to adverse effects such as a lower selling (purchase) price of milk, and thus unfavorable business results, as well as other consequences for dairy products and human health in general. In essence, the main parameters for determining the price of raw milk are its protein and fat content. According to the same Report of the Institute of Animal Husbandry in Belgrade, during 2003-2016 the fat content of raw milk from Simmental cows was 4.50-5.00% and the protein content 3.20-3.30%. However, in Serbia, the limits of the stated milk parameters are somewhat lower, as follows: for the extra class, less than 100,000 bacteria/ml; for class I, 100,000-400,000 bacteria/ml; for class II, 400,000-1,000,000 bacteria/ml; for class III, more than 1,000,000 bacteria/ml; and for all classes, up to 400,000 somatic cells/ml (Mandić et al., 2006). Therefore, milk that is considered extra quality in Serbia only satisfies class III quality in the EU, which is a limiting factor for the export of milk from Serbia to the EU.
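The Serbian raw-milk class limits quoted above reduce to a small lookup. The following Python sketch is our illustration of those limits, not official classification software.

```python
def serbian_milk_class(bacteria_per_ml: float,
                       somatic_cells_per_ml: float) -> str:
    """Classify raw milk by the Serbian limits quoted in the text
    (Mandić et al., 2006)."""
    if somatic_cells_per_ml > 400_000:
        return "unclassified (somatic cell limit exceeded)"
    if bacteria_per_ml < 100_000:
        return "extra class"
    if bacteria_per_ml <= 400_000:
        return "class I"
    if bacteria_per_ml <= 1_000_000:
        return "class II"
    return "class III"

print(serbian_milk_class(80_000, 300_000))    # extra class
print(serbian_milk_class(600_000, 300_000))   # class II
```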
According to expert estimates, a large number of Serbian producers do not meet the extra milk class standard according to EU criteria. Of the total milk produced in Serbia, about 54% is placed on the market through dairies, which meet business requirements in terms of food safety, while the rest of the milk is consumed or processed on farms and thus placed through green markets. Given that the right to milk premiums is exercised only for quantities delivered to dairies (at least 3,000 liters of cow's milk per quarter), the share of placements through dairies can be expected to increase in the future. Based on data from the Report of the Ministry of Agriculture, Forestry and Water Management, Figure 9 shows the value of exports and imports of cow's milk and dairy products in millions of euros per year in the period 2014-2018. The value of milk and milk products exported from Serbia in 2018 was almost three times higher than the value of such exports in 2014. The value of imports of milk and milk products in this period ranged from 9 to 27 million euros, while the value of exported products ranged from 50 to 57 million euros, which confirms the surplus in foreign trade in these livestock products.

Conclusions

In the world, cattle are mostly raised in the Americas (35.50%), followed by Asia (33.40%), with a slightly smaller share in Africa (17.80%) and the smallest share in Europe (10.50%). In total milk production in Serbia, cow's milk accounts for 96.84%, goat's milk for 2.20%, and sheep's milk for the smallest share, 0.96%. Cow's milk production takes place on farms with an average of 2.8 head of dairy cows. Due to the still small number of quality breeding heads and lower production characteristics compared to developed countries and EU member states, the non-competitiveness of Serbian cow's milk production is evident. The quality of milk is also a key problem limiting its export from Serbia to the EU. According to the allowed number of microorganisms, cow's milk in Serbia is classified into class I, class II, and class III milk (The Official Gazette of Republic of Serbia No. 106/2017); in addition to these three classes, there is an extra class. Milk quality criteria in Serbia are far below the standards of the European Union: milk considered extra quality in Serbia meets only the criterion of class III in the EU. The average consumption of milk in households in Serbia during the analyzed period shows a declining trend. In 2010, the average consumption was about 170 liters per household in all regions; by 2018 it had dropped to 90 liters in the Belgrade region, 100 liters in the regions of Vojvodina and Šumadija and Western Serbia, and 130 liters in Southern and Eastern Serbia. About 54% of the milk produced in Serbia reaches the market through dairies, with the remainder consumed or processed on farms and sold through green markets. Positive tendencies in the development of cow's milk production in Serbia can be expected with the realization of favorable macroeconomic business conditions and more efficient fulfillment of quality standard requirements. Achieving long-term stability of the market for milk and dairy products, with better organization of sales channels, is the basis of the stability and sustainability of overall agricultural production in Serbia.
3,354.4
2021-01-01T00:00:00.000
[ "Economics", "Agricultural and Food Sciences" ]
FINISHING AND POLISHING OF DIRECT COMPOSITE RESTORATIONS - A REVIEW

Contemporary cosmetic dentists are expected to create realistic and seamless restorations that mimic natural tooth structures. Recent advancements in composite resin systems have improved the practitioner's ability to deliver optimal results using chair-side techniques. Often, form and function are achieved but the surface of these restorations is not smooth. Rough or uneven surfaces over time invite microbial flora, leading to the inevitable failure of the restoration. To avoid such failures and to satisfy the patient-driven demand for aesthetic restorations, the use of exemplary finishing and polishing materials is required. These finishing and polishing techniques help to achieve the proper form and function of the restoration along with pleasing aesthetics and the maintenance of proper periodontal and gingival health. This article aims to briefly outline the finishing and polishing of composite restorations.

Clinicians often consider color to be the main challenge in bonded composite restorations; however, the finishing and polishing procedures that create the shape and texture of the restoration are equally important. The shape and texture of these restorations impart all the three-dimensional features that natural teeth possess. Finishing and polishing of dental restorations are important aspects of clinical restorative procedures that enhance both the aesthetic element and the longevity of restored teeth. The process of making the surface smooth enough to reflect light evenly is known as finishing and polishing. Finishing is the removal of surface irregularities, whereas polishing is the creation of a surface layer that reflects light as well as the enamel surface. The processes of finishing and polishing entail incorporating progressively finer scratches into the surface of the substrate in order to methodically remove the deeper scratches. These stepwise wear processes help to achieve a restoration with high gloss, lustre, and finish.
Finishing and polishing are both wear processes, but they differ in intent and degree. [1] Finishing is defined as the transformation of an object from a rough to a refined form. The procedure involves the removal of surface irregularities and shaping of the restoration according to functional occlusion. Polishing is defined as the production of a shiny, mirror-like surface that reflects light similarly to enamel, without creating supplemental films by the addition of wax or lacquer. [1] It has been established that rough or uneven surfaces invite microbial flora to flourish, which eventually leads to periodontal problems. Various authors have determined that bacterial adhesion can be prevented if the surface roughness value is kept under 200 nm. In addition to yielding an aesthetic, long-lasting restoration, these procedures also maximize the patient's oral health by reducing the chances of plaque accumulation, recurrent caries, gingival irritation, and surface staining. [2] Moreover, proper polishing may preserve high surface quality and gloss over time whilst preventing marginal staining and discoloration. These steps additionally help the restoration reflect light uniformly. Several steps are followed to achieve aesthetic, natural-looking, and predictable results in a direct restoration: planning, dental preparation, adhesion, layering, and finally finishing and polishing. [3]

Advantages of proper finishing and polishing:
1. Greatly enhances the longevity, durability, and long-term wear resistance of the restoration
2. Proper occlusion and the desired anatomy can be achieved, making the restoration seem seamless
3. Enhanced oral function, as food glides freely over occlusal and embrasure surfaces during mastication, minimizing wear rates
4. Polishing the interproximal surfaces significantly lowers the patient's risk of secondary caries and periodontal disease
5. A polished surface is more biologically compatible with the gingival tissues, reducing gingival irritation, so the health of the gingiva is maintained
6. A smooth surface reduces the likelihood of adhesion, meaning plaque is less likely to accumulate on a polished surface
7. A polished tooth surface increases the reflective and refractive index of the restoration and enhances the optical property of the material to create more natural and esthetic smiles
8. Polished surfaces enhance patient comfort and satisfaction; patients can detect a surface roughness change of less than 1 µm by tongue proprioception
9. Finishing also removes the oxygen-inhibited layer from the composite, the surface layer prevented from polymerizing properly by oxygen

A highly polished composite surface is difficult to achieve. The resin matrix and the filler particles of composites do not abrade to the same degree because of the difference in their hardness. The surface smoothness varies with the type of composite resin, owing to the nature of the filler particles:
1. Conventional composites pose great difficulty in achieving a smooth surface because of the difference in hardness between the organic and inorganic phases: the resin matrix is soft and the filler particles are hard. [1] If fine-grit polishing agents are used, the soft resin abrades away easily, leaving the hard filler particles behind.
If coarse abrasives are used, both organic and inorganic material is removed, leaving rough marks; the uneven surfaces lead to unequal wear, thereby disturbing occlusal harmony and potentially causing TMJ problems later on.
2. Hybrid composites can be polished to a semi-gloss state, but the surface is somewhat hydrophobic.
3. Microfilled composites can undoubtedly be polished to the highest gloss and are considered the best esthetically among all the composites. The surfaces of these restorations are highly smooth, and the chances of extrinsic staining are minimal.

Procedure for finishing and polishing of direct composite restorations

A surface finish attained with the use of a plastic matrix band is the most desirable finish for resin restorations, but this is rarely obtainable because of the need for contouring and removal of excess material. [3] Moreover, such a surface has a high resin content and may be less resistant to wear. Hence, it is advisable to contour the unpolymerized composite with hand instruments, so that the need to remove large amounts of set resin, which leads to surface damage, is minimized. Composite resins may be filled directly into the cavity (direct restoration) or fabricated outside the oral cavity and cemented into the cavity (indirect restoration). To obtain adequate surface qualities, a series of procedural steps is conducted:
1. Reduction of excess resin cement/gross reduction
2. Contouring
3. Fine finishing
4. Polishing

Gross reduction is where excess restorative material is removed. Contouring includes the reproduction of the size, shape, grooves, and other details of the tooth form, as well as reestablishing contact with adjacent teeth in a normal and functional form. Finishing and polishing establish an even, well-adapted junction between the tooth surface and the restoration and remove scratches to produce a visually smooth and shiny surface. Once a composite has been cured, it must be finished and polished to produce the final surface. This step removes the air-inhibited layer. That being said, finishing and polishing should be delayed for at least 10-15 minutes following the final phase of light curing. [4] The restoration must be left undisturbed to allow the resin to polymerize completely; this may help reduce surface trauma from the finishing process. Premature finishing may result in an increased risk of initiating microcracks and accelerated surface wear. Finishing and polishing remove the resin-rich outer surface of the composite, which is actually already smooth; however, this cannot be avoided, as the anatomic contours of composites cannot be established well enough before curing to avoid reshaping. Several investigations have shown that removal of this polymer-rich, outermost resin layer is essential to achieving a stain-resistant, more esthetically stable surface.

Direct composite restorations: Excess composite at the cavosurface margins is scraped away using scalpel blades, such as the No. 12 or 12b; specific resin-carving instruments made of carbide, anodized aluminum, or nickel titanium are also useful for shaping polymerized resins. The use of stainless steel instruments must be avoided, as they tend to leave grey marks on the restoration. [5] The trick to finishing and polishing is to move gradually from larger to smaller abrasives. This produces progressively finer scratches on the surface.
The finishing sequence includes diamond burs, followed by polishing discs or rubber polishing discs, points, and wheels, and polishing with buffing wheels, brushes, and polishing paste. Polishing can be done using one-step or two-step systems such as silicone tips, Sof-Lex discs, rubber cups, polishing pastes, and liquid polishers. During finishing and polishing, the softer matrix tends to wear away faster than the harder filler particles, resulting in a rough surface. Finishing and polishing are usually done in a brush stroke. [6] Each consecutive step in finishing and polishing should be done in a direction perpendicular to the previous one, so that scratch removal is more effective and a smoother surface can be obtained.

The instruments used for finishing are:
1. Manual instruments for removing small amounts of marginal excess (scaler, curette)
2. Coarse (30-40 µm) and fine (15 µm) grit diamond burs for occlusal adjustment
3. Tungsten carbide burs for finishing
4. Finishing strips for proximal surfaces
5. Waxed dental floss for checking proximal contacts

BURS: Prior to finishing and polishing, the surface must be contoured and made defect-free using diamond or carbide finishing burs. The 12-fluted carbide burs have traditionally been used to perform gross finishing of resin composite. These finishing burs may be used to develop the proper anatomy for the restoration. The transition from resin to enamel should be slowly smoothed until it is undetectable. These burs can be used dry to better visualize the margins and anatomy being developed, but should be used with light pressure to avoid overheating and possibly damaging the resin composite surface. Fine finishing diamonds are also available for finishing resin composite restorations and have been found to impart less surface damage to microfilled resin composites than carbide finishing burs. They are used in a series of progressively finer abrasive particle sizes.

DISCS: Finishing discs help to perform finishing with more precision and safety compared to burs. They help contour and finish curved surfaces such as labial proximal line angles, lingual marginal ridges, cervical areas, and incisal edges, and assist in shaping and finishing incisal corners and in finishing and polishing labial surfaces. They are also excellent for contouring and finishing posterior marginal ridge areas and lingual and buccal surfaces. Dry finishing with discs used in sequence is reported to be superior or equal to wet finishing for smoothness, hardness, and color stability. [7] However, dry finishing tends to clog discs with abrasive particles and makes them work less efficiently. Most discs use aluminum oxide or silicon carbide as the abrasive, adhered to a plastic backing.

Sof-Lex Finishing and Polishing Discs: The original Sof-Lex finishing and polishing discs are made from a urethane-coated paper that gives the discs their flexibility. The system comprises four individual aluminum oxide grits ranging from coarse to superfine. The discs are available in three sizes: 13 mm (1/2 inch), 9 mm (3/8 inch), and 16 mm (5/8 inch), with a square brass eyelet. The four-disc grit sequence is as follows:
1. Coarse: This grit is used in conjunction with multi-fluted finishing burs for gross contouring and shaping. The coarse disc makes it easy to blend the composite into the tooth surface, eliminating the white line and raised margins.
2. Medium: The medium grit should be used to continue smoothing the restoration surface. Medium grits remove any remaining imperfections and marks.
3. Fine: This part of the grit sequence is where the polish really starts to shine. The fine grit helps remove the smallest imperfections while adding a nice luster to the restoration.
4. Superfine: The superfine grit further refines the surface smoothness to create a highly polished restoration.

Sof-Lex XT Finishing and Polishing Discs: The Sof-Lex XT (extra thin) finishing and polishing discs are made with a polyester film one third the thickness of the original paper discs. The thinner discs are slightly stiffer and allow more precise refinement of embrasures. These discs also come in four individual aluminum oxide grits, ranging from coarse to superfine, and are available in two sizes, 13 mm (1/2 inch) or 9 mm (3/8 inch).

STRIPS:
1. Sof-Lex Finishing Strips: These are used for interproximal finishing. The strips are made of plastic and are coated with an aluminum oxide abrasive. Sof-Lex strips are free of any abrasive coating at their centers for easy interproximal insertion. Each strip contains two different grits: coarse/medium or fine/superfine. They are also color-coded similarly to the discs; the coarser grit on each strip is a darker color than its opposing side.
2. Flexistrip (Cosmedent): These are available in four grits, including an ultrathin version that increases flexibility and helps the strip pass through contacts without peeling the abrasive off the mylar backing.
3. Epitex (GC): These are by far the thinnest strips available (0.05 mm). Easy passage through contacts is possible, and there is no stripping of the silicon carbide abrasive during insertion. They cause less tissue trauma.
4. Diamond strips: Diamond strips help start the interproximal finishing process while maintaining the integrity of the interproximal contact. A larger grit (45 µm strip) should be used for interproximal stripping of natural teeth or for gross removal of material, and smaller grits (15 µm and 30 µm) should be used to start interproximal polishing.

Sof-Lex Finishing Brush: The Sof-Lex finishing brush is made from a thermoplastic polyester elastomer containing aluminum oxide abrasive particles, molded into a shape similar to a prophy brush. The brush itself is detachable from a stainless steel mandrel. The Sof-Lex finishing brush is an easy-to-use, one-step, reusable brush developed for polishing the concave and convex anatomy found on posterior composite restorations. The soft bristles conform to the restoration as the brush travels across the surface, resulting in a smooth polished finish.

Directions for use: Place the disc on the mandrel by firmly pushing the eyelet portion onto the mandrel until the disc is secure and does not wobble. The polishing motion should be constant and move from the bulk of the restoration toward the margins. A back-and-forth movement over the composite/enamel margin is not recommended, as a white line may form. [7,8,9] Use light pressure when polishing; let the discs do the work. To produce a smoother, more uniform finish, keep the tooth, restoration, and disc dry while polishing. Avoid touching the composite with the mandrel or disc eyelet, because discoloration may occur. This discoloration can be removed by repeating the finishing steps. Skipping a grit size in the finishing sequence may compromise the quality of the restoration's polish. [5,10] It is important to maintain a dry field when using this system. After rinsing, and before proceeding to the next grit in the sequence, dry the area.
Impregnated rubber points and cups

A wide variety of rubber finishing and polishing points and cups impregnated with abrasive materials is available. Like discs, rubber cups and points are used sequentially from coarse to fine grit. The coarse grits may be effective for gross reduction and finishing, while the fine grits create a smooth, shiny surface. The primary advantage of rubber points and cups over discs is the access they provide to grooves, desirable surface irregularities, and the concave lingual surfaces of anterior teeth. Available finishing kits containing discs, cups, and points include the Enhance finishing system (DENTSPLY Caulk), Fini (Pentron), and CompoMaster (Shofu). These are used in a slow-speed handpiece with a dry field and light intermittent pressure (to avoid the build-up of heat on the tooth as well as deterioration of the finishing material). Polishers are available as stand-alone products and can also be purchased as kits containing discs, cups, and points. Use of PoGo has been found to result in less staining following a seven-day immersion in coffee when compared to the use of a Sof-Lex brush. In a study comparing Sof-Lex, PoGo, and Identoflex polishers on hybrid and microhybrid composites, the smoothest surface was obtained using PoGo on the hybrid composite. Some of the commercially available polishing systems include the Enhance PoGo Polishing System.
4,047.4
2021-06-30T00:00:00.000
[ "Medicine", "Materials Science" ]
Practice

The Tombros and McWhirter Knowledge Commons at Penn State

This article describes the Tombros and McWhirter Knowledge Commons at the Penn State University Park campus and its use by library patrons. The Knowledge Commons required a major renovation of the first floor of the Pattee Library. In addition to providing attractive and inviting new spaces for students to study and collaborate, it includes library, tutoring, information technology, and multimedia support services. Many of these services existed prior to the Knowledge Commons but were not used to the extent they are now. As this article shows, increasing the accessibility of services and offering them in an attractive new setting will increase their use.

Introduction

Nearly a decade ago, Acker and Miller (2005) wrote that "academic libraries [were] at a significant turning point" as their emphasis was shifting "from being primarily for the storage of books to primarily supporting learning" (p. 4). They believed that digital collections, along with improved access to collections worldwide, would reduce the need for shelving and permit libraries to concentrate on creating collaborative spaces (Whitchurch, Bellston, & Baer, 2006). This article describes the recent renovations of the Pattee Library, on Penn State's University Park campus, and the success it has enjoyed with the addition of its new Knowledge Commons. Many libraries have named these new spaces "information," "learning," or "knowledge" commons. The choice partly reflects when the renovations were done as well as the services being introduced. In the 1990s, "information commons" was widely adopted. It emphasized the connection between libraries and information technology and the related resources needed to support students' navigation of online networks (Beagle, 2010). At the turn of the century, many favored "learning commons" to emphasize the library's commitment to the institution's academic mission by concentrating academic services, including tutoring, under one roof to help students not only attain the information they need but also present what they have learned in new and innovative ways (Beagle, 2010; Bonnand & Donahue, 2010). More recently, "knowledge commons" has been adopted by some to reflect access to the range of resources (physical, virtual, and human) that students need in order to gain a meaningful and enduring understanding of a subject (Ren, Sheng, Lin, & Cao, 2009). Although much has been written about the choice of these names (Beagle, 1999; Beagle, 2010; Bonnand & Donahue, 2010; Cowgill, Beam, & Wess, 2001; Halbert, 2010; Leeder, 2009; Malenfant, 2006; Ren et al., 2009), in practice there is considerable overlap in their use. As Lippincott (2010) wrote, "There does not seem to be a generally agreed-on definition of each variation" (p. 30). Nonetheless, all are identified as a "commons." The choice is appropriate: the term commons arose in Middle English to designate resources beneficial to all members of a given group ("Commons," 2014). This goal, all agree, is the intent of those who have introduced commons within academic libraries.
Literature Review

Since 1992, when the University of Iowa opened its Information Arcade (a term that did not catch on), librarians have been describing their new facilities. Dozens of profiles have appeared in such sources as the ARL SPEC Kit: The Information Commons (Association of Research Libraries, 2004), The Information Commons Handbook (Beagle, Bailey, & Tierney, 2006), Learning Spaces (Oblinger, 2006), and A Field Guide to the Information Commons (Forrest & Halbert, 2009), as well as the blog at infocommonsandbeyond.blogspot.com. Some commons provide links on their websites to their annual reports, including usage statistics; notable examples include Queen's University, University of Connecticut, and Georgia Tech. These resources serve as a primer for any library planning to add a commons at its institution. There are fewer sources that explore the impact of these spaces. One exception is Shill and Tonner (2003, 2004), who used library gate counts as their primary indicator to determine whether new library construction attracts more students. Their study examined projects between 1995 and 2002. In their survey, they did not ask institutions about the inclusion of a commons but inquired instead about physical spaces typically associated with them: computer labs, cafés, multimedia production and writing laboratories, and group study rooms. Although many libraries they surveyed were adding such facilities, they found that "there is no indication . . . that their presence has a significant impact, either positive or negative, on facility usage" (p. 143). Yet, in their recommendations for future research, they acknowledged the need to explore this further, asking whether the creation of a commons leads to an increase in the gate count in the academic year following its opening (p. 282). Dallis and Walters (2006) compared fall semesters before and after they opened their information commons at Indiana University-Bloomington. They reported a 10% increase in gate count but also found that the number of reference interactions actually decreased by 8% (p. 251). The authors speculated that one of the reasons for the decline was that new IT services were handling technology-related questions that were once taken to the reference desk. The renovated first floor also includes the yet-to-be-named leisure reading room, which contains best sellers, mysteries, science fiction, cookbooks, and other "fun" reading to help students and faculty relax. However, it is the west end of the floor that has the greatest concentration of new features and services and that has drawn record numbers of students and visitors. It is the west end, henceforth referred to as either the Knowledge Commons or simply the Commons, that will be described below.

Description

The Knowledge Commons offers several seating areas, as well as computer workstations, group study rooms, multimedia production rooms, practice presentation rooms, and an instruction room (Figure 1). The "living room" areas have comfortable lounge chairs and coffee tables that are easy to arrange in whatever configuration students want. These appear along the perimeter, often near windows, to take advantage of natural light (Figure 2). Plants are plentiful, either in single containers or as part of a green wall and a living wall.
Forty-eight desktop personal computers with 24-inch monitors sit on tables large enough for students to place their personal devices, books, or food alongside the keyboard. All tabletops contain electrical outlets to allow students to plug in the numerous devices they bring with them (Figure 3). Based on the computer usage reports, students spend more time at these computers (an average of 90 minutes per log-in) than at computers in a traditional lab (an average of 45 minutes). Students' comments have also been positive: "I had room to spread out"; "The table is really big, the screen is really big. Two people can work together"; and simply, "Love the desk space" (Donahue, 2013, p. 136). An upper-level student wrote me, "I like to come here because the desks have enough space that I can spread out and not get into anyone else's way[;] also the computer screen is very large so I can do multiple things at once" (R. P. Mack, personal communication, March 7, 2014). There are nine group study rooms, with white boards and interactive monitors (Steelcase's Mediascape units). The group study rooms can be reserved through the university's Event Management System software. Students are able to reserve a room for three hours per day, two weeks in advance. Reservations can be made in person at the library service desk, by phone, or online. The rooms are very popular and in high demand. As one student commented, "Here we could schedule a time and know that people won't come in" (Donahue, 2013, p. 53). Students appreciate this ability to secure a convenient and quiet space for their groups. These rooms are usually fully booked in the afternoons and early evenings, especially during midterms and finals. Because of the high demand, we were unable to fill 611 of the 10,445 requests made during the fall 2013 semester. The group study rooms are clearly one of the most popular and widely appreciated features of the Knowledge Commons. We anticipate that demand will remain high, and we have plans for additional rooms in a future expansion. In addition to these rooms, there are six production rooms where students can edit video and audio recordings. These rooms are managed by the multimedia consultants as part of the Media Commons @ the Knowledge Commons. When not scheduled for editing a video or podcast, students can use them as group studies. The multimedia consultants also oversee the One Button Studios (OBS) in this space. The OBS is an easy-to-use video production room. By simply inserting a thumb drive into a USB hub, students can access lights, camera, and microphone. Once they load their PowerPoint presentations, drop the green screen, or place themselves in the middle of the screen, they need only touch a button to activate all of the equipment required to record their presentations. In its first year, 4,200 people created more than 270 hours of video. Faculty appreciate the ease with which students can record a presentation and then critique themselves. University instructors have also recorded lectures and presentations that can then be placed on the course management system for their classes to view when the instructors are traveling or when snowstorms necessitate class cancellations. Because of its popularity, the OBS has been replicated in other departments and computer labs on the University Park campus as well as on several other campuses in the Penn State system.
Finally, there is a 40-seat iMac instruction room. Its "X"-seating configuration eliminates rows and positions each student within 15 feet of one of the five projection screens. Instructors control the number of projectors and screens in use via a Crestron room management system. They can teach from a podium in the center of the space or use Doceri software to advance slides or project websites from anywhere in the room. Throughout much of the academic year, the Knowledge Commons is open 24/5 (Sunday through Thursday). During the last two weeks of the fall and spring semesters, the Commons and the entire library are open 24/7. To assess the apparent impact of this new academic facility, usage of each service in the fall 2013 semester was compared with the fall 2010 semester (pre-construction, pre-Knowledge Commons opening) (Table 1). This table also identifies the number of hours in a typical week each service has been available, before and after the Commons opened. The Tech Tutors assist students, faculty, and staff in the use of software and web-based application tools. They are housed in a very visible group study room (W122) in the Knowledge Commons. They also have a space in a large computer lab located elsewhere on campus. They are available Monday through Friday, 11:00 a.m. to 6:00 p.m. Like the writing tutors, this service is not available in the Knowledge Commons during the summer. Between 10 and 15 Tech Tutors support the two locations. Tech Tutors were added to the Knowledge Commons at the start of the 2012-2013 academic year; they did not exist before then, so there is no comparable fall 2010 data. In the fall of 2013, the Knowledge Commons' Tech Tutors assisted 207 students and faculty. Help with Excel, Word, WordPress, network storage, and coding/programming questions was most frequently requested by students; ANGEL, the university's course management system, was first among faculty. Sessions ranged from a few minutes to over one hour that semester, with an average of 35 minutes per session (this increased to 46 minutes in the spring 2014 semester). As indicated, the Tech Tutors service is new, not only to the library but to the campus as well. They were not considered during the planning and development of the Knowledge Commons. Once the group formed, however, the library and its IT partners saw how it complemented the range of IT support provided in the Commons. Nathan Culmer, who oversees this group, appreciated the value of developing a new service in a busy area: "Having boots on the ground so to speak in a high profile, high traffic area is worth its weight in marketing gold [and] the association with the library is an asset" (Donahue, 2013, p. 97). I decided to place them near the entrance, where they are very visible, in order to help this new service establish itself. IT Service Desk Consultants handle problems that users have with their personal electronic devices, such as phones, iPads, laptops, and home computers. The most frequent requests concern problems connecting to the Wi-Fi network, removing viruses, or installing new software, but they have also fixed problems with jammed keys and cracked screens. The service is available to anyone with a Penn State affiliation: students, faculty, staff, alumni, and retirees. Forty students work at this desk. Their hours are the same as those of the ITS Lab Consultants.
Prior to the Knowledge Commons, the IT Service Desk was located in the basement of a major classroom building. As the former head of this service, Mark Warren, said: "This location was less than ideal" (personal communication). After the move to the Commons, service requests increased from 4,050 to 6,365, a 57% rise. Although the increase in hours did not result in a corresponding increase in requests, all of those who use this service appreciate its new central location. Library administration anticipates that the demand for IT support will grow as the number of personal electronic devices students bring to campus increases. From the beginning, the university's Media Commons was a key partner. This unit in the Knowledge Commons occupies approximately 1,200 square feet in the northwest corner of the first floor. Two full-time multimedia consultants, supported by additional part-time and student employees, oversee its operations. During the academic year, it is open until 9:00 p.m., Sunday through Thursday, and until 6:00 p.m. Technical details, room requirements, and the software application needed for all of the OBS equipment to work together are available online (onebuttonstudio.psu.edu). As a result, dozens of universities in the United States have created their own studios, and the American Library Association recognized the OBS in 2014 with an award for "cutting-edge technologies in library services" (Wright, 2014, para. 7). The services identified thus far are provided by students and staff hired, trained, and supervised by either Penn State's Learning Center or the ITS Department. They are not library employees. Each of the five units above has its own supervisor. As Head of the Knowledge Commons, I routinely meet with these supervisors as a group. There are no formal reporting lines, yet we often discuss problems that have arisen and explore solutions to ensure that our users receive good service. I also participate in the employee training and orientation programs for these units. When students and staff provide exceptional customer service, I make it a point to praise them to their supervisors. During tours and presentations, I ensure that visitors and guests hear from the students who are sitting at the service desks. All of these steps have contributed to the successful partnerships established in the three years of operation. The last unit in this description is the library service desk. Whenever the Commons is open, there is at least one library employee stationed there. Since it opened, its staffing model has changed several times in response to the skills needed and the volume of traffic. The current model appears to be working well: staff and student workers rotate between the library service desk in the Commons and two other reference desks, which are located on other floors of the library. Based on our observations, hourly head counts, and data on when we receive the most questions, we added additional staff in the afternoons and most evenings from Sunday through Friday (we have not found that a second person is needed on Saturdays). A reference librarian was hired to provide coverage on Sundays and weeknights. When that person is not available, full-time library employees provide the additional support. These individuals are often information support specialists whose primary duties include desk coverage at one of the six subject library desks in the complex. A few librarians from these areas also provide support. The information support specialists are advised to refer patrons to the appropriate subject librarian for complicated reference needs.
We use DeskTracker, a data collection program, to track patron questions and whether they are asked in person, by phone, or online. When comparing fall 2013 desk activity with fall 2010, the most suitable desk for the comparison is the one that previously existed on the first floor of Pattee Library. Admittedly, there are problems with this comparison. The Knowledge Commons library service desk is in an area with higher foot traffic. Its staff now provides more hours, and they are also the primary point of contact for students making reservations for the group study rooms. On the other hand, the staff in its previous location answered all phone calls to the general number for the library. Keeping these differences in mind, the total number of questions asked at the old desk was 3,720 (fall 2010) versus 9,783 (fall 2013), an increase of 2.6 times. Much of this increase can be attributed to making room reservations and providing directions. As the supervisor of the IT Service Desk noted, "Their arms [those of staff at the library service desk] are permanently cocked with their finger pointing towards our desk" (Donahue, 2013, p. 89). Since the Commons opened, there has been only a slight increase in the number of reference questions, which rose 3%. As students discover the online reservation systems and the IT Service Desk, we hope to see fewer reservation and directional questions, thus freeing staff to devote more time to reference questions.

In addition to these usage statistics, there are other indicators that the Knowledge Commons has resulted in heavier use of the library itself. The library's entrance counts are 16% higher (1,021,692 in fall 2010; 1,186,434 in fall 2013). The head counts indicate that the library is often at capacity (2,750 users), especially at midterm and the end of the semester. In addition, computer usage reports indicate that the Knowledge Commons computers are very heavily used, with a higher number of log-ins per machine than any other public computers on campus. Furthermore, anecdotal student comments have confirmed the Commons' popularity and that of the library housing it. As I have given tours, I have heard comments like "very nice upgrade," "about time-I actually use the library now," and the colloquial "freaking sweet." A graduate student completing her studies expressed her appreciation to me, writing: "I used the Knowledge Commons every day for the last year while I was job hunting, teaching, and writing. The environment provided a beautiful setting which gave the necessary atmosphere to produce good work" (A. Kazeem, personal communication, August 11, 2014).
Conclusion

Although these before-and-after comparisons of student use must be viewed with caution, they provide consistent evidence of a positive impact on library utilization at University Park. Unlike Shill and Tonner (2003, 2004), this study offers evidence that the addition of non-library facilities and services, such as those included as part of the Knowledge Commons at Penn State, can increase student library use. The increased visibility of these services and their location in more attractive and accessible spaces have been associated with an increase in library gate counts. More importantly, and more directly, when use of specific services is compared before and after their inclusion in the Knowledge Commons, we find that all but one has increased, in some cases dramatically. When the changes in the numbers of hours are considered, three of the four services for which there is data appear to have benefitted from inclusion within the Commons.

My colleagues and I recognize that usage reports alone are not sufficient to assess the success of this new space and have begun to further explore how students are using the Commons and what impact it has on their academic success. My research assistant and I are now examining students' use of the Commons and comparing it with their use of a traditional space elsewhere in the library and also with a computer lab in a classroom building on campus. During November and December of 2014, we conducted "seating sweeps" of these spaces, using Given and Leckie's (2003) methodology. Beginning in the spring of 2015, we will distribute a survey and interview students on their choice of study space. With this data, we hope to better understand why students chose a particular place to work and how physical settings influence their learning behaviors.

Figure 1. Tombros and McWhirter Knowledge Commons. See the interactive floor plan at secureapps.libraries.psu.edu/content/kcpublic
Figure 2. Living room area, Knowledge Commons
Figure 4. Student using the One Button Studio

The Media Commons occupies approximately 1,200 square feet in the northwest corner of the first floor. Two full-time multimedia consultants, supported by additional part-time and student employees, oversee its operations. During the academic year, it is open until 9:00 p.m., Sunday through Thursday, and until 6:00 p.m.
on Friday. The consultants advise faculty on ways to add multimedia projects to their course requirements and assist students with ways to complete such work, through individual consultation and classes. Examples of multimedia class assignments and faculty testimonials can be found at the Media Commons website (mediacommons.psu.edu). Students can reserve one of the six production rooms to meet with a consultant or to work on creating and editing their audio and video recordings. The Media Commons also includes the two practice presentation rooms called One Button Studios (OBS). According to Ryan Wetzel, Media Commons Manager, "By removing the uncertainty of using unfamiliar media technology like lights, cameras, and microphones, students can concentrate on their content and delivering their best presentation" (personal communication, July 31, 2014). The Media Commons had been in the Pattee Library since March 2009, but it was housed in a much smaller and less visible space on the second floor. The new, expanded, more attractive and functional space with the addition of the OBS has resulted in a dramatic increase in utilization. Although the Media Commons' hours are currently 1.6 times greater than in fall 2010, the number of requests increased a remarkable 12.6 times. During the fall 2010 semester, when it was on the second floor, 375 patrons used the Media Commons. Three years later, as part of the Knowledge Commons, 4,724 students, faculty, and staff did. The Media Commons @ the Knowledge Commons provides the best testimonial to the impact new technology and improvements in physical design can have on students' learning activities.

Information on commons can be found online as well, for example from David Murray at Brookdale.

This article adds to the existing literature a detailed description of the Tombros and McWhirter Knowledge Commons at Penn State University and its use. Unlike earlier studies, however, the focus is on before-and-after utilization of specific services in this Knowledge Commons.

Among the Commons' other units is the Marion McKinnon Office of Adaptive Technology and Services, which ensures that Penn State students across the Commonwealth have equal access to University resources.

Table 1. Service Hours per Week and Semester Service Requests, Pre- and Post-Knowledge Commons

The library's partnership with the Learning Center and its writing tutors dates back to 2006. These tutors arrive in the Commons after the Learning Center, which is located elsewhere on campus, closes at 10 p.m.
Typically, two to three student tutors work until midnight in the Commons, Sunday through Thursday, every semester (in 2010, prior to the Knowledge Commons, tutors were available for an additional hour, until 1 a.m.). Since the Knowledge Commons opened, a group study room in the leisure reading room has been reserved for their use. Prior to this, the tutors sat at a table in an open area on the first floor of Pattee Library. The group study room provides a quieter and more private space for students to meet with their tutors. During the fall 2010 semester, 255 students met with a writing tutor in the library. In the new location, this number actually decreased, but so did the number of hours the writing tutors offered. The drop in use (about 15%), however, was less than the drop in hours (about 33%). The Director of the Penn State Learning Center and I believe that the numbers can be increased by adding service hours and/or with better marketing and promotion.

Information Technology Services (ITS) hires and trains students to work as lab consultants, tech tutors, and service desk consultants. ITS Lab Consultants have been in the library since 2007. The Tech Tutors and IT Service Desk Consultants were introduced when the Knowledge Commons opened in 2012. Initially, all three units were located behind one IT service desk. To distinguish the services, each unit assigned t-shirts of a different color to its workers. The service desk was too small to house multiple teams, however, so each unit was moved to a separate service point several months after the Commons opened. ITS Lab Consultants provide technical support for public computers in the library as well as another 40 computer labs elsewhere at University Park. Having an established presence in the library, they were early partners in the planning of the Knowledge Commons. They have a service desk in the Knowledge Commons and one in the Sidewater Commons. A third service desk, opposite the multimedia classroom, was added during summer 2012. In addition to troubleshooting any problems with printers in these areas, they roam all floors of the library, putting paper in printers and resolving other hardware problems. Four consultants usually work each shift. The hours are reduced during the summer, but during the fall and spring semesters, they are extended until midnight daily. During the fall of 2013, 112 student workers responded to 3,517 user requests and serviced the 21 printers in the libraries 17,842 times. Data is not available for the fall 2010 semester.
Characterization of the human cytochrome P4502D6 promoter. A potential role for antagonistic interactions between members of the nuclear receptor family.

The functional mapping of the human cytochrome P4502D6 (CYP2D6) promoter in HepG2 cells revealed the presence of both positive and negative regulatory elements. One of these regulatory elements overlapped a sequence that is highly conserved in most members of the CYP2 family. This element, which consists of a degenerate AGGTCA direct repeat spaced by 1 base pair (DR1) and is known to be a target for members of the steroid receptor superfamily, was found to bind in vitro translated hepatocyte nuclear factor 4 (HNF4) in gel retardation analysis. Using HepG2 nuclear extracts, three protein-DNA complexes were formed on the DR1 element, one of which was confirmed to be dependent on the binding of HNF4. The other DR1 complexes were shown to be due to the interaction of the orphan receptor chicken ovalbumin upstream promoter transcription factor I (COUP-TFI). Experiments in COS-7 cells showed that HNF4 could activate the CYP2D6 promoter 30-fold. Surprisingly, mutation of the DR1 element produced a relatively minor 23% decrease in activity in HepG2 cells. Additionally, COUP-TFI was shown to inhibit HNF4 stimulation of the CYP2D6 promoter in COS-7 cells, suggesting that COUP-TFI could attenuate the effect of HNF4 in HepG2 cells. However, when HNF4 levels were increased in HepG2 cells by co-transfection, it resulted in the enhancement of CYP2D6 promoter activity, indicating that HNF4 could overcome the repressive effect of COUP-TFI. Therefore, the contribution of the DR1 element in controlling the transcription of the CYP2D6 gene depends on the balance between positively and negatively acting transcription factors.
The cytochrome P450 superfamily represents a group of enzymes that are involved in the oxidative metabolism of both endogenous and foreign compounds (1,2). The expression of P450 genes is subject to diverse regulatory controls, which display tissue-specific, sex-specific, and developmental patterns (2). Most foreign compound-metabolizing P450s are mainly expressed in the liver; however, some enzymes can also be detected in extrahepatic tissues such as lung, kidney, and intestine and in the brain (2). Certain hepatic P450s are constitutively expressed, while others are known to be induced by various foreign chemicals including phenobarbital, polycyclic aromatic hydrocarbons, and peroxisome proliferators (3). The latter two classes of chemicals act through the aryl hydrocarbon receptor (4) and peroxisome proliferator-activated receptor (5), respectively, while the exact mechanism(s) responsible for transducing the response to phenobarbital have yet to be elucidated (3). In addition, the expression of some P450 enzymes can be modulated by endogenous steroid and peptide hormones (3,6). In rodents, for example, the sexually dimorphic expression of certain P450s is controlled by the sex-specific pattern of growth hormone secretion (6). Most of the regulatory effects on P450 expression are at the transcriptional level. However, in some instances, such as the induction of cytochrome P4502E1 (CYP2E1) by ethanol, post-transcriptional mechanisms are also involved (7). Within the P450 superfamily, the CYP2 family is the largest and most diverse (1). This family, whose members are mainly expressed in the liver, contains many of the drug-metabolizing isoforms and also some of the enzymes involved in the metabolism of endogenous substrates (for review see Ref. 8). In addition to the constitutively expressed CYP2 members, this family also contains isoforms that are regulated by phenobarbital, ethanol, and growth hormone (8). Regarding the study of CYP2 gene regulatory DNA elements and their corresponding trans-acting transcription factors, relatively little is known in comparison to members of the CYP1 family or some of the steroid-metabolizing P450s (2). Research in this area has been hampered by the difficulty in maintaining the expression of, or the ability to induce, P450s in isolated hepatocytes or liver-derived cell lines. As a result, most of the data generated to date comes from studies using transient transfection of promoter constructs into various hepatoma cell lines. Nevertheless, using this approach some information has been obtained about the transcriptional control of the CYP2 genes. The transcription of the CYP2E1 gene was reported to be partly controlled by HNF1α (9), and that of CYP2C6 by DBP (10), both of these transcription factors being hepatocyte-enriched. A phenobarbital-responsive region was identified in the chicken CYP2H1 gene using transient transfection of primary chicken hepatocytes (11), and a functional glucocorticoid response element was identified in the rat CYP2B2 gene promoter (12). Analysis of the rabbit CYP2C1 and CYP2C2 promoters in HepG2 cells revealed the presence of a regulatory element, which was shown to be a target for HNF4 (13), a member of the steroid receptor superfamily (14). Co-expression of HNF4 in COS-1 cells resulted in the induction of the CYP2C2 promoter (13). Furthermore, mutation of the CYP2C2 HNF4 element resulted in a marked decrease in promoter activity in HepG2 cells (13).
This HNF4 element has been reported to be conserved in other members of the CYP2 family, and it was proposed to be of importance in the transcriptional control of other CYP2 genes (13). Additional studies have also demonstrated a role for HNF4 in the transcriptional control of the human CYP2C9 gene (15). However, in contrast, studies of the rat CYP2C genes (CYP2C7, CYP2C11, CYP2C12, and CYP2C13) demonstrated that co-expressed HNF4 gave only a maximal 3-fold induction of promoter activity in COS-7 cells (16). Furthermore, mutation of the HNF4 binding site in the respective promoters had no effect on the activity of the 2C7 or 2C11 promoters in HepG2 cells, while it caused decreases to 60 and 80% in the activity of CYP2C13 and CYP2C12 promoters, respectively (16). Regarding the CYP2D subfamily, using in vitro transcription analysis and transient transfections into HepG2 cells, it was possible to identify basal and sex-specific regulatory elements in the mouse CYP2D genes (17,18). Another CYP2D promoter that has been analyzed is that of the rat CYP2D5 gene, where it was reported that C/EBP and Sp1 cooperate in controlling its transcriptional activity (19). There was no evidence presented in support of a role for the HNF4 binding site in the modulation of CYP2D5 expression (19). Human CYP2D6 is known to play a major role in the metabolism of a wide range of clinically important drugs (20). It is also polymorphic, with 5-10% of the Caucasian population classified as poor metabolizers of CYP2D6 substrates (21). This is caused by mutations within the gene resulting in the absence of CYP2D6 protein (22). This polymorphism was subsequently reported to be associated with the incidence of various forms of cancer (23) and the susceptibility to Parkinson's disease (24). The CYP2D6 protein was also reported to be absent until the first week after birth (25), suggesting that its expression might be repressed by maternal hormones. Given that the levels of CYP2D6 expression may be critical in the responsiveness to certain clinically used drugs and in disease susceptibility, it is important to understand how CYP2D6 expression is controlled at the transcriptional level. Therefore, in this paper, we have performed the functional analysis of the CYP2D6 promoter and investigated what role the HNF4 binding site plays in controlling transcription of the CYP2D6 gene. First, the results indicate that both positive and negative regulatory elements contribute toward promoter activity. Second, although the HNF4 binding site alone appears to play a relatively minor role in HepG2 cells, the findings indicate that the balance between HNF4 and negatively acting transcription factors is an important factor. EXPERIMENTAL PROCEDURES Cell Culture-HepG2 and COS-7 cells were grown in monolayer and cultured in Dulbecco's modified Eagle's medium, supplemented with 10% heat-inactivated fetal bovine serum, 100 IU/ml penicillin and 100 g/ml streptomycin (all from Life Technologies, Inc.) at 37°C in 5% CO 2 . Transient Transfection Analysis and Expression Constructs-DNA transfections were carried out by the calcium phosphate method (26) as described by Gorman (27) with the exception that the glycerol step was omitted. Cells were harvested 24 -36 h after transfection, extracts were prepared, and chloramphenicol acetyltransferase (CAT) activity was assayed as described by Gorman et al. (28). Cell extracts were assayed for protein content (29). All CAT assays were performed such that the rate of acetylation was in the linear range. 
In all experiments, the values given represent the mean ± S.E. of at least three experiments. A minimum of two plasmid preparations were used for each construct. Cells were also co-transfected with pSVβgal (Promega) to assay for β-galactosidase activity as a control for transfection efficiency. Using this technique, transfection efficiency was found to vary by less than 10%. The expression plasmid pCMVHNF4 was kindly provided by Prof. B. Kemper (13), and the pMTHNF4 and pMTEAR3 constructs were gifts from Dr. J. Ladias (30). For the expression of HNF4 and COUP-TFI in COS-7 cells, 75-cm2 tissue culture flasks were transfected with 10 µg of pMTHNF4 or pMTEAR3, respectively, as described above. After 36 h, cells were harvested, and whole cell extracts were prepared by three cycles of freeze-thawing in 0.4 M KCl, 20 mM Tris-HCl, pH 8, 2 mM dithiothreitol, and 20% (w/v) glycerol. Extracts were centrifuged at 10,000 × g for 15 min at 4 °C to remove debris, and phenylmethylsulfonyl fluoride was added to a final concentration of 1 mM.

Promoter Deletion Constructs-The CYP2D6 promoter was isolated by polymerase chain reaction from human genomic DNA using the upstream oligonucleotide 5′-CAGATAAGCTTGCTGAAGGTCACTCT-3′ (with HindIII site) and downstream oligonucleotide 5′-GGGCTCCTCTAGACACACCTCCCACC-3′ (with XbaI site). The resulting promoter fragment (−392 to +56) was subcloned between the HindIII and XbaI sites in pCATbasic (Promega) and checked by sequencing. Deletion fragments of the CYP2D6 promoter were generated using the Erase-a-Base system following the manufacturer's protocol (Promega). After confirming the sequence of the various promoter fragments, they were subcloned into the HindIII and XbaI sites of pCATbasic (Promega) after the addition of a HindIII linker at the 5′ end. The −69CATMUT construct was prepared by polymerase chain reaction using the 5′-mutated oligonucleotide with HindIII linker 5′-TTGGAAGCTTTTCACTCACAGCAGCTTTACACTTAATCATCAGCTCCC-3′ and the 3′ oligonucleotide with XbaI linker 5′-AACCTCTAGACACACCTGGCACCCCCACCC-3′. After sequencing, the mutated fragment was subcloned into the HindIII and XbaI sites of pCATbasic (Promega).

Gel Retardation Analysis-Nuclear extracts from HepG2 cells were prepared as described by Dignam et al. (31). Radiolabeled probe for DNA-binding reactions was prepared by isolating the promoter fragment spanning −69 to +56 from −69CAT, dephosphorylating the DNA with alkaline phosphatase (Boehringer Mannheim), and phosphorylating with [γ-32P]ATP (Amersham Corp.; >5000 Ci/mmol) and T4 polynucleotide kinase (Promega). Binding reactions of 20 µl were carried out in buffer containing 10 mM Hepes, pH 7.5, 2.5 mM MgCl2, 10% (w/v) glycerol, 0.1 mM EDTA, 1 mM dithiothreitol, 2 µg of poly(dI-dC) (Pharmacia Biotech Inc.), 50 mM KCl, 0.1-0.3 ng of radiolabeled probe, and the indicated amount of protein. Reactions were incubated at 20 °C for 20 min. Free and protein-bound DNA were separated on 4% nondenaturing polyacrylamide (acrylamide:bisacrylamide, 37.5:1 (v/v)) gels, which were run at 4 °C and a constant voltage of 200 V in 0.25× TBE (22 mM Tris borate, 0.5 mM EDTA). Where indicated, competitor oligonucleotides or specific antiserum was included in the DNA-binding reactions. The CTE oligonucleotide corresponds to the sequence spanning −69 to −28 of the CYP2D6 promoter, which includes the degenerate AGGTCA direct repeat separated by one base pair (DR1) element: 5′-TTCACTCACAGCAGAGGGCAAAGGCCATCATCAGCTCCCTTT-3′.
The Sp1 oligonucleotide corresponds to the Sp1 consensus sequence 5′-ATTCGATCGGGGCGGGGCGAGC-3′ from the SV40 early promoter (32). Anti-HNF4 antiserum was kindly provided by Dr. Francis Sladek (14). Anti-COUP antiserum was kindly provided by Dr. Ming-Jer Tsai (33). Gel retardation analysis with COS-7 whole cell extracts was carried out under the conditions described above.

In Vitro Translation of HNF4-The plasmid pSG5HNF4 (gift from Dr. Francis Sladek (14)) was linearized with BglII. T7 RNA polymerase was then used to generate HNF4 transcript, which was then translated using rabbit reticulocyte lysate. Reactions were carried out in a final volume of 25 µl containing 17.5 µl of rabbit reticulocyte lysate (Promega), 20 µM amino acids (including methionine), 20 units of RNasin (Promega), and 0.5 µg of HNF4 transcript. Reactions were incubated for 90 min at 30 °C.

RESULTS

Deletion Analysis of the CYP2D6 Promoter-In order to determine potential regulatory elements in the CYP2D6 promoter, progressive 5′ deletions were generated. The various promoter fragments were then fused to the CAT gene in pCATbasic and transiently transfected into HepG2 cells to assay for promoter activity. As can be seen in Fig. 1, the fusion of CYP2D6 promoter sequences from −392 to +56 upstream of the CAT gene resulted in the 30-fold induction of CAT activity when compared with pCATbasic. No activity was observed when the same construct was transfected into HeLa cells (data not shown). The deletion of sequences from −392 to −308 resulted in a 28% drop in activity in HepG2 cells, and further deletion to −242 had no additional effect. However, removal of sequences down to −156 produced an additional 2-fold decrease in activity. The presence of a negative regulatory element between −128 and −90 was indicated by the approximately 2-fold increase in CAT activity. Further deletions to −69 and −18 produced additional 2.5- and 3.5-fold decreases in activity, respectively. Therefore, the deletion analysis revealed the presence of four positively acting regulatory regions (−392/−308, −242/−156, −90/−69, and −69/−18) and one negative element (−128/−90). It is noteworthy that none of the deletions produced a difference in activity in excess of 2-3-fold, despite the fact that overall promoter activity was 20-30-fold higher than pCATbasic. This would suggest that these regulatory elements work together to control the activity of the CYP2D6 promoter.

Analysis of Orphan Receptors Binding to the CYP2D6 Promoter-The deletion of sequences between −69 and −18 produced the largest change in CAT activity (Fig. 1). This region contains, in addition to the TATA-box, a sequence that is highly conserved within the CYP2 family (13) (see Fig. 2A). This element consists of a DR1. In addition to the high degree of conservation, perhaps the most striking feature is the fact that the nucleotide at the fourth position in both half-sites of every element is nonconserved with respect to the consensus DR1. Whether this characteristic has any functional significance is unclear at present. The DR1 elements of other CYP2 genes have been reported to bind HNF4 (13,15,16). Therefore, we examined if the CYP2D6 DR1 was a target for in vitro translated HNF4 in gel retardation analysis. The addition of in vitro translated HNF4 to the radiolabeled −69/+56 promoter fragment containing the DR1 element resulted in the formation of one major (complex A) and one minor (complex b) protein-DNA complex (Fig. 2B, lane 2).
Complex A was shown to be specific, since it was competed out by the addition of an oligonucleotide (CTE) spanning the DR1 site (lane 3) but not by an unrelated oligonucleotide (Sp1) corresponding to the Sp1 consensus sequence from the SV40 early promoter (lane 4). Complex b was observed to be nonspecific, since its formation was not abolished by the addition of the CTE oligonucleotide (lane 3) or the unlabeled −69/+56 fragment (data not shown). Indeed, this complex was observed to be due to the reticulocyte lysate itself, since it was also seen using nonprogrammed lysate (lane 5). The identity of complex A was confirmed as being due to HNF4 by the addition of anti-HNF4 antiserum, which produced a supershift of the protein-DNA complex (lane 7), which was not seen with the nonimmune serum (lane 8). In order to test which nuclear proteins from HepG2 cells could bind to the CYP2D6 DR1 element, gel retardation analysis was performed using nuclear extracts. As shown in Fig. 2C (lane 1), the addition of HepG2 nuclear protein resulted in the formation of three protein-DNA complexes (complexes 1-3) on the −69/+56 promoter fragment. All of these complexes were observed to bind specifically to the DR1 sequence, since their formation was abolished by the addition of CTE oligonucleotide (lane 2) but not by Sp1 oligonucleotide (lane 3). Since HNF4 was previously shown to be capable of binding to this DR1 element (Fig. 2B), we examined if any of the complexes were due to the interaction of HNF4. Subsequently, complex 2 was shown to be dependent on the interaction of HNF4, as it was supershifted by the inclusion of anti-HNF4 antiserum in the DNA-binding reaction (Fig. 2C, lane 5). This supershift was observed to be specific, since the antiserum had no effect on the mobility of the other two complexes, and the addition of nonimmune serum to the DNA-binding reactions had no effect (lane 6). It was noted that when HNF4 was effectively removed from the DNA-binding reaction by its antiserum, the intensity of complex 1 increased (lane 5), suggesting that HNF4 and the factor responsible for the formation of complex 1 may bind to the DR1 element in a mutually exclusive manner. It was previously demonstrated that DR1 elements from other genes could be recognized not only by HNF4, but also by other members of the steroid receptor family, including COUP-TFI (EAR3), ARP1 (COUP-TFII), EAR2, peroxisome proliferator-activated receptor (PPAR), and retinoid X receptor (RXR) (30,34,35). Therefore, we tested if any of the unidentified complexes formed by HepG2 nuclear extracts on the CYP2D6 DR1 element were due to the interaction of other members of the steroid receptor superfamily. Based on what was already reported in the literature, the most obvious candidates were COUP-TFI/ARP1. These two proteins are highly homologous members of the steroid receptor family (36), and ARP1 has also been termed COUP-TFII.

FIG. 1. Deletion analysis of the CYP2D6 promoter. HepG2 cells were transfected with the different promoter constructs (2.5 µg) as described under "Experimental Procedures." CAT activity was calculated as the percentage of chloramphenicol converted to the acetylated forms after these forms had been cut out of the TLC plate and the relative amounts were determined by liquid scintillation counting. Data are expressed as -fold activity in relation to pCATbasic.

As Fig. 2D demonstrates, the addition of HepG2 nuclear extracts to radiolabeled −69/+56 fragment resulted in the formation of the same three complexes (lane 2).
The inclusion of anti-COUP antiserum (which recognizes both COUP-TFI and ARP1 (37)) in the DNA-binding reactions resulted in the inhibition of complex 1 and the disappearance of complex 3, with the concomitant appearance of supershifted complexes (lane 3). None of the complexes were affected by the addition of nonimmune serum (lane 4). The identification of COUP-TFI/ARP1 as being responsible for the formation of complex 1 was in agreement with the observed relative mobilities of HNF4 and COUP-TFI/ARP1 reported by other groups (37). The observation that complex 3 was also supershifted by anti-COUP antiserum suggests that its formation was also dependent on COUP-TFI/ARP1. Alternatively, the factor(s) responsible could be antigenically related to COUP-TFs. However, results from gel retardation analysis using whole cell extracts from COS-7 cells transfected with a COUP-TFI expression vector support the former (see Fig. 5A), where the addition of COUP-TFI resulted in the formation of two protein-DNA complexes similar in mobility to complexes 1 and 3.

HNF4 Effect on the Activity of the CYP2D6 Promoter-In order to investigate if HNF4 was capable of activating the CYP2D6 promoter, co-transfection experiments were performed in COS-7 cells (see Fig. 3). In contrast to the results obtained from HepG2 cells, the transfection of −392CAT into COS-7 cells did not result in any enhancement of CAT activity above that observed with pCATbasic. Co-transfection with the mammalian HNF4 expression vector CMVHNF4 produced an approximately 30-fold induction of CAT activity from −392CAT, which was not seen with the empty expression vector CMV. Therefore, HNF4 was capable of activating the CYP2D6 promoter.

Mutational Analysis of the CYP2D6 DR1 Element-To assess the functional importance of the CYP2D6 DR1 element, it was mutated in both repeats in order to abolish any interaction with nuclear factors. Gel retardation analysis using HepG2 nuclear extracts was performed to check that the mutations made had indeed abolished the ability of nuclear factors to interact with the DR1 element. As shown in Fig. 4A, competition with the wild-type −69/+56 fragment abolished the formation of all three complexes (compare lanes 2 and 3). The addition of the mutated −69/+56 fragment, however, had no effect on the formation of any of the complexes (lane 4), indicating that the mutations had abolished the protein-DNA interactions. Furthermore, gel retardation analysis using the mutated fragment as a probe for HepG2 nuclear extracts did not result in the formation of any protein-DNA complexes (data not shown). To examine the functional effect of the DR1 mutation, the mutated fragment was subcloned upstream of the CAT gene in pCATbasic, and its activity was compared with that of the wild-type fragment (−69CAT). Fig. 4B shows that the −69CAT construct gave approximately 7-fold higher CAT activity than that observed with pCATbasic. However, mutation of the DR1 element (−69CATMUT) resulted in only a 23% decrease in CAT activity. This result indicated, first, that the DR1 element alone appears to play a relatively minor role in controlling the expression of the CYP2D6 promoter in HepG2 cells, and second, that the observed difference in activity between the −69CAT and −18CAT constructs in HepG2 cells is probably due to the presence of the TATA-box in the longer construct, although at present we cannot rule out the possibility of an as yet unidentified transcription factor binding to this region.
The Effect of COUP-TFI on Transactivation by HNF4-It was reported that COUP-TFI could antagonize HNF4 activity on DR1 elements present in other genes, mediated by competition for the same DNA-binding site (35,37). One possible reason for the relatively minor effect of the DR1 element mutation is that in HepG2 cells, the stimulatory effect of HNF4 was being repressed by COUP-TFI. Therefore, the mutation would abolish both a positive and negative response. This was investigated in vitro using gel retardation analysis and in vivo by co-transfection experiments in COS-7 cells. First, COS-7 cells were transfected with expression vectors for HNF4 or COUP-TFI, and whole cell extracts (WCE) were prepared. When analyzed by gel retardation analysis, the addition of increasing amounts of WCE containing HNF4 or COUP-TFI resulted in the formation of their respective protein-DNA complexes (Fig. 5A). Interestingly, the addition of the higher concentrations of COUP-TFI resulted in the appearance of major and minor complexes, similar to the two COUP-TFI-dependent complexes observed with HepG2 nuclear extracts (Fig. 2D). Both HNF4- and COUP-TFI-dependent complexes were shown to be specific, since they were abolished by the addition of a 100-fold molar excess of CTE oligonucleotide (Fig. 5B, lanes 3 and 6) but not by the unrelated Sp1 oligonucleotide (Fig. 5B, lanes 4 and 7). In addition, none of the complexes were observed with untransfected COS-7 WCE (Fig. 5B, lane 1), and they could be supershifted with their respective antisera (data not shown). Using the COS-7 WCE, we then tested if COUP-TFI could antagonize HNF4 binding to the DR1 element. The inclusion of COUP-TFI in the HNF4 DNA-binding reactions resulted in a decrease in the amount of HNF4-dependent complex with a concomitant appearance of the COUP-TFI-dependent complexes (Fig. 5B, compare lanes 5 and 8), while the addition of untransfected COS-7 WCE had no effect on the activity of HNF4 (Fig. 5B, lane 9). This experiment demonstrated that HNF4 and COUP-TFI were competing for the same DNA element. To examine the effect of COUP-TFI in vivo, co-transfection experiments were performed in COS-7 cells. Fig. 5C demonstrates that co-transfection with CMVHNF4 produced a 12-fold induction of CYP2D6 promoter activity, as observed previously (Fig. 3).

FIG. 3. Effect of HNF4 on CYP2D6 promoter activity in COS-7 cells. COS-7 cells were transfected as described under "Experimental Procedures" with 2.5 µg of pCATbasic, 2.5 µg of −392CAT alone, or together with 0.25 µg of either pCMVHNF4 or pCMV. CAT activity was calculated and data were expressed as described in the legend to Fig. 1.

FIG. 4. Mutational analysis of the DR1 element in HepG2 cells. A, the effect of mutating the DR1 element on the binding of nuclear receptors. Gel retardation analysis was performed as described under "Experimental Procedures." Radiolabeled −69/+56 promoter fragment was incubated in the absence (lane 1) or presence (lanes 2-4) of 2 µg of HepG2 nuclear extract and together with a 100-fold molar excess of wild-type (lane 3) or mutated (lane 4) −69/+56 promoter fragment. Wild-type and mutated −69/+56 promoter fragments were isolated from their respective constructs by restriction enzyme digestion and purified. B, functional analysis of the mutated DR1 element. HepG2 cells were transfected as described under "Experimental Procedures" with the different promoter constructs. CAT activity was calculated and the data were expressed as described in the legend to Fig. 1.
However, in the additional presence of COUP-TFI/EAR3 (from the expression of pMTEAR3), this activity was totally abolished, while no effect was observed with the empty expression vector pMT. The expression of COUP-TFI/EAR3 in the absence of HNF4 had no effect on the activity of the promoter, indicating that COUP-TFI itself lacked any transactivation capability when bound to this element. This experiment demonstrated that the induction of the CYP2D6 promoter by HNF4 can be inhibited by COUP-TFI and suggested that the presence of COUP-TFI in HepG2 cells attenuated the effect of HNF4.

The Effect of Increasing HNF4 Levels in HepG2 Cells-The co-expression of COUP-TFI in COS-7 cells resulted in the complete inhibition of HNF4 activity on the CYP2D6 promoter, suggesting that it was the dominant factor. Therefore, we tested if increasing the concentration of HNF4 in HepG2 cells could overcome the repressive effect of COUP-TFI/ARP1. As shown in Fig. 6, the co-transfection of HNF4 into HepG2 cells resulted in the marked enhancement in the activity of both −392CAT and −69CAT promoter constructs, but not the −69CATMUT construct containing the mutated DR1 element. In fact, the activity of this mutated construct was repressed by the presence of excess HNF4, probably caused by the squelching of promoter activity in a DNA binding-independent manner. One important conclusion from these results is that the stimulatory effect of HNF4 is still observed with the longer construct, despite the fact that its expression in HepG2 cells was initially relatively high. This experiment demonstrates that altering the balance between the two transcription factors can modulate the CYP2D6 promoter activity.

DISCUSSION

In this paper, the deletion analysis performed on the CYP2D6 promoter revealed that its overall activity was controlled by several regulatory elements, both positive and negative in nature. Although the progressive deletion of each of the regulatory elements did not result in any dramatic changes in promoter activity, their combined effect still resulted in the 30-fold stimulation of transcription (see Fig. 1). At present we do not know if each of these elements is acting individually to modulate promoter activity or if there are any cooperative/antagonistic interactions between the various factors. In addition, it is possible that other DNA elements that are situated upstream of −392 base pairs may also contribute to the activity of the CYP2D6 promoter in vivo. Regarding the CYP2 family, there have been various studies that investigated the functional role of the conserved DR1 element. In the case of the CYP2D6 promoter, it was clear that HNF4 could significantly stimulate its activity in COS-7 cells (Fig. 3). This is similar to the situation with the rabbit CYP2C2 and human CYP2C9 genes (13,15), but is in contrast to the findings from studies on several rat CYP2C genes, where co-expression of HNF4 resulted in a relatively weak 3-fold induction of any of the rat CYP2C promoters studied (16).

FIG. 5. (C) COS-7 cells were transfected as described under "Experimental Procedures" with 2.5 µg of −392CAT alone (NONE) or co-transfected along with 0.25 µg of pCMV, pCMVHNF4, pCMVHNF4 and pMT2, pCMVHNF4 and pMTEAR3, pMT2, or pMTEAR3. CAT activity was calculated and the data were expressed as described in the legend to Fig. 1.
Despite the fact that HNF4 could significantly activate the CYP2D6 promoter when expressed in COS-7 cells, mutation of the DR1 element resulted in a relatively minor (23%) decrease in promoter activity in HepG2 cells (Fig. 4B). We believe that this is caused by the additional presence of COUP-TFI/ARP1 in HepG2 cells, which counteracts the stimulatory potential of HNF4 on the wild-type element by competing with it at the DNA-binding level. It is also possible that COUP-TFI may actively repress minimal promoter activity via an interaction with co-repressor molecules, and in fact evidence for such a mechanism has recently been presented (38). If this were the case, then abolishing the interaction of COUP-TFI with its DNA target would also result in the relief of active transcriptional repression. However, at present this activity of COUP-TFI on the CYP2D6 promoter has not yet been investigated. The results from the mutational analysis of the CYP2D6 DR1 element are analogous to the findings reported for several rat CYP2C genes, where the mutation of their respective DR1 elements had either no effect or caused similar minor decreases in promoter activity (16). The minor effects upon mutating the DR1 elements in the CYP2D6 and several rat CYP2C genes are in contrast to the rabbit CYP2C2 gene, where mutation of its DR1 element produced a marked decrease in promoter activity (13). Importantly, however, in contrast to the rat CYP2C genes (16), the CYP2D6 DR1 element retained the potential to respond to HNF4, as observed by the significant inductions upon co-expression of HNF4 in both COS-7 and HepG2 cells (Figs. 3 and 6). The reason for the above differences in the activity of the conserved DR1 element between different CYP2 genes is unclear at present, since the respective studies were all performed in similar cell lines. Therefore, it cannot be due to any cell-specific differences, for example in the expression of co-activators. It is possible that small changes in the sequence of the respective DR1 elements may determine the stimulatory capability of HNF4 once it is bound to the DNA. The same sequence differences may alter the relative affinity of the element for HNF4 and COUP-TFI/ARP1 in HepG2 cells, thereby affecting its capacity to stimulate transcription. In fact, DR1 elements appear to fall into two classes: those that bind HNF4 but not COUP-TFI and those that bind both (39). Alternatively, the relative positioning of the DR1 element with respect to the TATA-box or differences in the composition of the basal transcription machinery may be critical. Despite the fact that the mutation of the CYP2D6 DR1 element had a relatively minor effect on promoter activity, it could be predicted that this sequence will still have an important role to play in regulating CYP2D6 expression, with its activity being controlled by the relative concentrations of HNF4 and COUP-TFI in any given cell type. Studies on the human coagulation factor VII promoter revealed that a mutation that inhibited HNF4 binding to its DNA element produced only a 20-50% drop in activity, but significantly, this same mutation, when it occurs in a similar site of the factor IX promoter, causes a severe bleeding disorder (Ref. 40 and references therein). Therefore, relatively small changes in promoter activity can have rather drastic consequences. It is noteworthy that two tissues where the rat CYP2D1 enzyme is highly expressed, namely liver and kidney (2), coincide with two of the major sites of HNF4 expression (14).
HNF4 has also been reported to cooperate with other transcription factors in regulating the expression of certain genes (41). Therefore, it is possible that additional upstream elements in the CYP2D6 promoter may require HNF4 for their stimulatory effect. In fact, they may even influence the occupancy of the DR1 element, such that it is in favor of HNF4 rather than COUP-TFI/ARP1. Since these studies were performed by transient transfection analysis, it remains possible that, regarding the endogenous gene, the presence of chromatin may also influence the relative affinities of HNF4 and COUP-TFI/ARP1 for the DR1 element. Indeed, the presence or absence of chromatin has been reported to influence transcriptional activation by other transcription factors (42). One interesting question that arises from this and other studies is why HNF4 and COUP-TFI differ in their transactivation capabilities on several DR1 elements. Since COUP-TFI is able to stimulate the transcription of certain genes (43), it cannot be due to its lack of a transcriptional activation domain. It is possible that the ability of COUP-TFI to stimulate transcription depends upon the promoter context, with HNF4 and COUP-TFI differing in their abilities to interact with other transcription factors or co-activators. Alternatively, the exact sequence of the DNA binding site may be more critical for COUP-TFI than it is for HNF4 to function in a positive manner. Regarding the functional importance of the CYP2D6 DR1 element, the experiments performed in this work do not take into account natural situations where the relative concentrations of HNF4 and COUP-TFI might vary. For example, HNF4 levels are regulated during development (44), and its concentration has been reported to vary in rat hepatocellular carcinomas (45). In addition, HNF4 is expressed in a tissue-specific manner (14,44). Therefore, we believe that the CYP2D6 DR1 element is not functionally redundant but has the potential to respond to changes in the balance between positive and negative regulators. Interestingly, DR1 elements present in other genes have been demonstrated to exhibit differential occupancy by various transcription factors in a tissue-specific manner (46). The activity of the DR1 element may also be modulated by extracellular signals. In fact, HNF4 has recently been reported to be regulated by a phosphorylation signal-dependent pathway (47), and dopamine has been demonstrated to activate COUP-TFI (48). Furthermore, preliminary evidence indicates that RXR homodimers and PPAR·RXR heterodimers are capable of interacting with the CYP2D6 DR1 element, suggestive of a possible effect of peroxisome proliferators/fatty acids or 9-cis-retinoic acid on CYP2D6 expression. Both COUP-TFI (49) and CYP2D6 are expressed in the brain; therefore, given the involvement of CYP2D6 in the susceptibility to Parkinson's disease (24), it is tempting to speculate that dopamine may alter CYP2D6 levels in the brain via its effects on the activity of COUP-TFI.

FIG. 6. The effect of increasing HNF4 levels in HepG2 cells on the activity of the CYP2D6 promoter. HepG2 cells were transfected as described under "Experimental Procedures" with 2.5 µg of pCATbasic, −392CAT, −69CAT, or −69CATMUT. The latter two constructs were co-transfected along with 0.25 µg of either pMTHNF4 or pMT2. CAT activity was calculated and the data were expressed as described in the legend to Fig. 1.
Interestingly, CYP2D6 expression in primary human hepatocytes has recently been reported to be modulated by extracellular matrix proteins (50), although the exact mechanism behind this effect was not investigated. In summary, several DNA elements are responsible for controlling the transcriptional activity of the CYP2D6 promoter, while the conserved DR1 element has the potential to modulate CYP2D6 expression in response to temporal, spatial, and hormonal signals via changes in the balance between positive and negative transcription factors.
Spurious Resonance of the QCM Sensor: Load Analysis Based on Impedance Spectroscopy

A research topic of equal importance to the technological and application fields related to quartz crystal is the presence of unwanted responses known as spurious resonances. Spurious resonances are influenced by the surface finish of the quartz crystal, its diameter and thickness, and the mounting technique. In this paper, spurious resonances associated with the fundamental resonance are studied by impedance spectroscopy to determine their evolution under load conditions. Investigation of the response of these spurious resonances provides new insights into the dissipation process at the QCM sensor surface. The significant increase of the motional resistance for spurious resonances at the transition from air to pure water is a specific situation revealed experimentally in this study. It has been shown experimentally that in the range between the air and water media, spurious resonances are much more attenuated than the fundamental resonance, thus providing support for investigating the dissipation process in detail. In this range, there are many applications in the field of chemical sensors or biosensors, such as VOC sensors, humidity sensors, or dew point sensors. The evolution of the D factor with increasing medium viscosity is significantly different for spurious resonances compared to the fundamental resonance, suggesting the usefulness of monitoring them in liquid media.

Introduction

Quartz crystal is a natural substance that exhibits a piezoelectric effect and is chemically and mechanically very stable. Its properties have been of great interest since the early days of electronics. Quartz crystal performs electrically like a series circuit consisting of resistance, inductance, and capacitance (RLC). Unlike other physical implementations of the RLC circuit, quartz crystal has very low attenuation at series resonance [1]. Chemically, quartz crystal is silicon dioxide (SiO2), but it rarely occurs naturally with a quality suitable for the production of electronic devices. For this reason, technologies for the production of high-purity synthetic quartz crystals have been successfully developed [2,3]. The quality of synthetic quartz crystal is a function of the growth rate, which is about 1 mm/day. A slow growth rate results in a more homogeneous quartz crystal with fewer impurities incorporated into the crystal lattice [4]. A high-performance version of the quartz crystal is the SC-cut, a doubly rotated cut that minimizes resonant frequency changes due to temperature gradients. SC-cut quartz crystals have an inflection point at around 92 °C. In addition to the inflection point at high temperatures, the temperature dependence is described by a smooth cubic relationship, and they are therefore less affected by temperature deviations from the inflection point. Due to its outstanding qualities, the SC-cut quartz crystal version [5] has become a promising candidate for detecting high-frequency gravitational waves [6]. An important step in the spread of the use of quartz crystal as an electronic device in various applications was made by the generation of its AT-cut version [7][8][9]. AT-cut quartz crystals are singly rotated cuts about the Y-axis in which the upper and lower halves of the crystal move in opposite directions, causing a thickness-shear vibration during mechanical oscillation.
The relationship between temperature and resonant frequency of AT-cut quartz is likewise described by a cubic characteristic. The remainder of this paper is organized as follows: Section 2 presents the extended BVD model, simulation concepts, and VIA; Section 3 presents the experimental results of impedance spectroscopy and the evolution of key electrical parameters for spurious resonances; and Sections 4 and 5 are devoted to the discussion and conclusions.

Extended BVD Model of the QCM Sensor

The BVD model is used to describe the behavior of the QCM sensor as a function of frequency, and its circuit can have multiple motional branches to account for multiple resonances or vibration modes, as shown in Figure 1a. Through this extension of the standard BVD model circuit, the electrical properties of the QCM sensor can be modeled in the vicinity of several resonance frequencies.

Figure 1. (a) Extended BVD model of the QCM sensor with multiple motional branches; (b) front-end hardware of the virtual impedance analyzer (half-bridge configuration).

Measurement of all electrical parameters of the extended BVD model is possible using a virtual impedance analyzer with a large scanning range. The VIA front-end hardware component is shown in Figure 1b. As can be seen, this is a simple half-bridge configuration that permits simultaneous electrochemical measurements due to the grounding of the electrode exposed to the operating medium. Simulation via Matlab® scripts is limited to Bode and Nyquist plot generation, which can provide maximum information about the complex behavior of the QCM sensor. To generate Bode and Nyquist plots for the extended BVD model in Figure 1a, the impedance of the motional branches formed from series RLC circuits in the resonant frequency range of the QCM sensor is calculated using the following equation:

Z_{mi}(\omega) = R_{si} + j\omega L_{si} + \frac{1}{j\omega C_{si}}

The branch formed by the parallel capacitance has an impedance of the following form:

Z_{C_p}(\omega) = \frac{1}{j\omega C_p}

Considering the extended BVD model for spurious resonances, the QCM sensor impedance is calculated using the following equation:

Z_{QCM}(\omega) = \left( \frac{1}{Z_{C_p}(\omega)} + \sum_{i} \frac{1}{Z_{mi}(\omega)} \right)^{-1}

The spurious resonances can arise from various sources, such as mechanical stresses, acoustic wave reflections, and parasitic capacitances and inductances.
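To make the impedance calculation above concrete, the following is a minimal Python sketch of the extended BVD model. The component values are hypothetical placeholders chosen to mimic a roughly 10 MHz sensor in air with two weaker spurious modes slightly above the fundamental; they are not the paper's Table 1 parameters, and `bvd_impedance` is an illustrative helper name.

```python
import numpy as np

def bvd_impedance(f, Cp, branches):
    """Impedance of an extended BVD model: a static parallel capacitance Cp
    in parallel with one series-RLC motional branch per resonance.
    `branches` is a list of (Rs, Ls, Cs) tuples; admittances are summed
    because all branches sit in parallel."""
    w = 2 * np.pi * f
    Y = 1j * w * Cp                                  # admittance of Cp branch
    for Rs, Ls, Cs in branches:
        Zm = Rs + 1j * w * Ls + 1 / (1j * w * Cs)    # series RLC motional branch
        Y = Y + 1 / Zm
    return 1 / Y

# Hypothetical values: fundamental branch plus two attenuated spurious branches.
branches = [
    (10.0, 7.5e-3, 33.8e-15),    # fundamental (~10.0 MHz)
    (150.0, 7.5e-3, 33.0e-15),   # spurious 1 (~10.12 MHz)
    (300.0, 7.5e-3, 32.5e-15),   # spurious 2 (~10.19 MHz)
]
f = np.linspace(9.9e6, 10.3e6, 20001)
Z = bvd_impedance(f, Cp=5e-12, branches=branches)
print(f"|Z| minimum (series resonance) at {f[np.argmin(np.abs(Z))] / 1e6:.4f} MHz")
```

Plotting 20·log10|Z| and the real/imaginary parts of Z over this sweep reproduces the kind of Bode and Nyquist pictures described for Figure 2.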
Simulation of the Extended BVD Model of the QCM Sensor

Investigation of spurious resonances is not frequently encountered in the literature [22]; in other words, it is usually limited to reporting their presence [23]. Spurious resonances usually occur at frequencies higher than the fundamental frequency and immediately afterwards, and they can affect the accuracy of measurements. By monitoring the spurious resonances of the QCM sensor, it is possible to accurately measure the mass and viscoelastic properties of thin films. Considering the QCM sensor in air, values of the electrical parameters for the simulation of the extended BVD model are proposed in Table 1. These electrical parameter values were suggested by experimental investigations during the development of the extended scanning range VIA. For the values of the electrical parameters of the extended BVD model in Table 1, Bode and Nyquist plots are generated in Matlab® and shown in Figure 2.

In the simulated case, the fundamental resonance is dominant; this situation is frequently encountered experimentally and is also illustrated in the Nyquist diagram in Figure 2b. This relatively ideal simulated situation is not found in all cases. In many cases, depending on quartz crystal manufacturing technology and intended application, the response may differ substantially.

In the various applications of quartz crystals, such as oscillators, the ratio of the series resonance motional resistance R_{s0} to the spurious series resonance motional resistance R_{si} is important and is, by definition, expressed in decibels as:

a_i = 20 \log_{10} \left( \frac{R_{si}}{R_{s0}} \right)

Normally, values between 3 dB and 6 dB may be sufficient for general-purpose frequency reference applications. For quartz crystal filters, attenuations higher than 40 dB are often required to avoid distorting their response. This performance can only be achieved with special design techniques and involves the use of very small values of motional capacitance.
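As a quick numerical check of this suppression figure, here is a small Python helper; the resistance values are hypothetical, and the 20·log10(R_si/R_s0) form assumes the usual convention of expressing spurious suppression from the motional resistance ratio.

```python
import math

def spurious_suppression_db(Rs0, Rsi):
    """Suppression of spurious resonance i relative to the fundamental,
    in dB, computed from the ratio of their motional resistances."""
    return 20 * math.log10(Rsi / Rs0)

# Hypothetical values: fundamental 10 ohm; spurious branches 150 and 300 ohm.
for Rsi in (150.0, 300.0):
    db = spurious_suppression_db(10.0, Rsi)
    print(f"Rsi = {Rsi:5.1f} ohm -> suppression = {db:4.1f} dB")
```

With these placeholder values the spurious modes are suppressed by roughly 23.5 dB and 29.5 dB, comfortably above the 3-6 dB adequate for frequency references but short of the 40 dB demanded for filters.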
Simulating the response of the QCM sensor in a liquid medium is extremely important from the perspective of its use as a biosensor. Table 2 shows the values of the electrical parameters of the extended BVD model proposed for simulation. In this case, only one of the QCM sensor armatures is considered to be in contact with water. Again, values as close as possible to the experimental values were chosen. The result of the Matlab® simulation in this case is shown in Figure 3a, the values of the motional resistances being strongly modified. Similarly, in Figure 3b, the same profound change in ratios can be seen in the Nyquist plot.

By understanding and controlling spurious resonances, the accuracy and precision of QCM measurements can be improved, allowing more reliable analysis and interpretation of the data. This can be particularly important in applications such as volatile organic compound (VOC) monitoring, where the QCM sensor is used to detect and quantify interactions in the functional layer. In addition, the study of spurious resonances can provide valuable information about the mechanical and environmental factors affecting QCM sensor performance. This knowledge can be used to optimize the design and operation of QCM sensors, leading to improved sensitivity and accuracy in various applications. Furthermore, the study of QCM sensor spurious resonances can lead to the development of new, more accurate, and reliable detection strategies and technologies.

Virtual Impedance Analyzer and Experimental Setup

For the experimental investigations in this paper, a VIA built around the Analog Discovery 2 (AD2) virtual instrument from Digilent Inc., Pullman, WA, USA [19,25,26], was used. The AD2 virtual instrument provides the hardware and software support necessary for easy implementation of an impedance analyzer [27,28]. The experimental setup used to measure spurious resonances in various liquid media is shown in Figure 4a.
The experimental setup is completed by the QCM flow cell kit (011121, ALS Co., Ltd., Tokyo, Japan) mounted in static measurement mode [29] on the shield built according to its size. The additional hardware component for the AD2 virtual instrument is based on the schematic in Figure 1b. The shield built for the impedance analyzer is shown in Figure 4b.

A Python module exploiting AD2 SDK (software development kit) functions provides acquisition and processing of the raw experimental data. The method of finding spurious resonances is based on the search for the minimum and maximum in the impedance response, thus determining the frequencies of the series and parallel resonances for the fundamental resonance and two spurious resonances. The electrical parameters of the extended BVD model are calculated from the raw experimental data of the QCM sensor impedance after setting the series resonance ω_ri and antiresonance ω_ari frequencies. The series resistance R_si is implicitly determined in this case as a result of the minimum search function.
By measuring the impedance of the QCM sensor at a frequency ω_m about 10 times lower than the fundamental series resonance frequency (1 MHz), where its reactance Z_pm = Z_p + Z_stray is purely capacitive, the shunt and stray capacitance are calculated based on the equation:

C_pm = C_p + C_stray = 1/(ω_m |Z_pm|)

It should be noted that, within a measurement task, the determination of the shunt and stray capacitance precedes the measurement of the QCM sensor response. This allows automatic compensation of the QCM sensor so that only the response of the series branches is measured. The following equation is used to achieve shunt and stray capacitance compensation in real time:

Z_i = ( 1/Z_rm − jω C_pm )^(−1)

where Z_rm is the raw impedance measured at each point of the scan interval, obtained from the voltages acquired across the half-bridge of Figure 1b. The calculation of the parameters of the extended BVD model that cannot be determined directly from the raw experimental data is based on the following equations:

C_si = C_pm ( (ω_ari/ω_ri)² − 1 ),  L_si = 1/(ω_ri² C_si)

Other key parameters that better describe the energy dissipation process, such as the Q-factor or D-factor, can be calculated using the following equation:

Q_i = 1/(ω_ri R_si C_si),  D_i = 1/Q_i

The Python module provides acquisition and processing of the raw experimental data by calculating the electrical parameters of the extended BVD model for the fundamental resonance and the first two well-represented spurious resonances. The acquired raw experimental data are plotted together with the parameters of the extended BVD model as well as some key parameters specific to a QCM sensor.
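A compact sketch of the compensation and parameter-extraction steps just described is given below. It assumes the equations reconstructed above and a compensated sweep already restricted to the subrange of a single resonance, as done by the paper's Python module; the function names are illustrative, not those of the actual module.

```python
import numpy as np

def shunt_capacitance(Z_pm, f_m):
    """C_pm = C_p + C_stray from a purely capacitive point ~10x below resonance."""
    return 1.0 / (2 * np.pi * f_m * np.abs(Z_pm))

def compensate(Z_rm, f, C_pm):
    """Real-time removal of the shunt/stray branch from the raw impedance Z_rm."""
    w = 2 * np.pi * f
    return 1.0 / (1.0 / Z_rm - 1j * w * C_pm)

def branch_parameters(f, Z, C_pm):
    """Extended-BVD parameters of one motional branch, computed from a
    compensated sweep restricted to the subrange of a single resonance."""
    i_r, i_ar = np.argmin(np.abs(Z)), np.argmax(np.abs(Z))
    w_r, w_ar = 2 * np.pi * f[i_r], 2 * np.pi * f[i_ar]  # series / antiresonance
    R_s = np.abs(Z[i_r])                    # motional resistance at the |Z| minimum
    C_s = C_pm * ((w_ar / w_r) ** 2 - 1.0)  # from resonance/antiresonance spacing
    L_s = 1.0 / (w_r ** 2 * C_s)
    Q = 1.0 / (w_r * R_s * C_s)
    return {"R_s": R_s, "C_s": C_s, "L_s": L_s, "Q": Q, "D": 1.0 / Q}
```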
Results

The experimental setup shown in Figure 4 contains a QCM sensor with a fundamental resonant frequency of 10 MHz (151225-10, International Crystal Manufacturing Co., Inc., Oklahoma City, OK, USA). The QCM sensor was clamped between the silicone O-rings of the static QCM cell with minimal pressure, while following its response with the VIA. During the measurements, the temperature in the laboratory was 21 ± 2 °C, with a relative humidity of 50 ± 10%. The VIA setup used for the acquisition of the raw experimental data was as follows: (i) passive excitation with a sinusoidal voltage with an amplitude of 1 V in the range of the series resonance and antiresonance frequencies of the fundamental and spurious resonances and (ii) measurement of the QCM sensor impedance with a scan step of 1 Hz.

Load Analysis of the Spurious Resonances

As shown in the simulations presented above, a significant transformation of the frequency response of the QCM sensor occurs when the sensor changes from air to a liquid medium (pure water). This transition covers commonly encountered situations with VOC sensors [30], humidity sensors [31], and dew point sensors [32], to name a few applications. Of particular interest from an application point of view are VOC sensors, especially in an array configuration, which is frequently encountered in an e-nose. Tracking the dissipation process in the case of the functionalized QCM sensor provides the experimental basis for an advanced understanding and description of the processes occurring at its surface. Spurious resonances may be useful, and from this hypothesis, the frequency response of the QCM sensor in various operating media is investigated experimentally.

Figure 5 shows the first measurement of the QCM sensor in air, used as a reference. The electrical parameters of the extended BVD model as well as some other key parameters are automatically calculated by the VIA and are also part of Figure 5. The fundamental resonance as well as the spurious resonances are indicated in Figure 5 by differently colored lines at the top of the figure. The same colors are used as backgrounds in the captions where the measured or calculated electrical parameters are shown.

After one of the QCM sensor armatures was covered with water, the measurement of the frequency response of the QCM sensor was repeated, and the results are shown in Figure 6 together with the new values of the electrical parameters.

Figure 6. Bode plot of raw data for the QCM sensor in water and the electrical parameters of the extended BVD model for the series resonance and the first two spurious resonances.

A significant decrease in Q-factor can be observed for the QCM sensor immersed with one of the armatures in the liquid medium. This significant attenuation also affects the spurious resonances. To confirm the hypothesis that spurious resonances can provide useful information, a comparative study between the two measurements is considered below. Since the key parameters describing the energy dissipation process in the case of a QCM sensor depend, in turn, on the motional resistance according to Equation (10), only the evolution of this electrical parameter is analyzed below. The values of the motional resistances in the two experimentally investigated cases are summarized in Table 3, together with the differences between them.
The change in motional resistance of the spurious resonances is much greater than the change in motional resistance of the fundamental resonance. In other words, spurious resonances better highlight the dissipative processes taking place at the surface of the QCM sensor. Consequently, monitoring the motional resistances of spurious resonances during an experiment is a useful contribution from the perspective of increasing sensitivity to dissipation processes.

Evaluating the effect of an increase in the viscosity of the liquid medium on the spurious resonances is justified under the hypothesis of confirming the usefulness of their monitoring in the case of the QCM sensor. It is also useful to identify the viscosity limit up to which an algorithm identifying the local impedance minimum and maximum specific to spurious resonances can be used. The indeterminacy in the position of the spurious resonances in the impedance minimum and maximum search algorithm, implemented in the Python module, was largely avoided by a sequential search in subranges of the measurement range. The effect of increasing viscosity in liquid media on spurious resonances was investigated for glycerol-water solutions up to 80%, which is also the automatic identification limit for the implemented algorithm. Figure 7 shows the Bode plot together with the parameters of the extended BVD model for a 40% glycerol-water solution.

After the major change in the motional resistance of the spurious resonances at the transition from air to water mentioned above, it can be seen that a further increase in the viscosity of the liquid medium does not produce significant changes.
In other words, the spurious resonances, although significantly attenuated at the transition to the liquid medium, show robustness in relation to the increase in viscosity. Figure 8 shows the Bode plot for an 80% glycerol-water solution, again following the evolution of the parameters of the extended BVD model. As the viscosity of the liquid medium increases, there is a decrease in the frequency of the spurious resonances in accordance with the shift of the fundamental resonance, and a slight increase in the motional resistance. Even though the increase in motional resistance is slow for glycerol-water solutions above 80%, it becomes difficult to automatically identify the position of the spurious resonances. The evolution of the electrical parameters of the extended BVD model in liquid media can provide additional information to better describe the interactions occurring at the QCM sensor surface. A deeper analysis of the effects of the working medium on the electrical parameters of the extended BVD model is the subject of the next subsection.
Parameters of Interest for Spurious Resonances

The electrical parameters of the extended BVD model provide all the information that can be monitored about the behavior of the QCM sensor regardless of the specifics of an application. In practice, for reasons related to the evolution of QCM instrumentation, only a few key parameters of the QCM sensor are retained, usually the evolution of the series resonance frequency and the D-factor. Advanced methods for monitoring key parameters and data processing have been developed for these particular cases [18,19]. The use of a VIA brings all imaginable benefits, with one limitation related to the measurement time, which can be critical in the case of fast interactions occurring at the QCM sensor surface [17]. The experimental results are also interpreted in the sense of compatibility with many traditional measurements. In this context, the usefulness of spurious resonances has been highlighted in the transition zone from air to liquid medium, but it can also be partially demonstrated even in the liquid medium. In the case of series frequency monitoring of odd harmonic resonances, their different responses provide better experimental support for the interpretation of the phenomena occurring at the QCM sensor surface. Similarly, it is useful to investigate the behavior of spurious resonances in relation to the fundamental resonance in order to reach a conclusion about their usefulness.

This comparative study involves investigating the series resonance frequency shift for spurious resonances relative to the series frequency shift of the fundamental resonance. The evolution of the motional resistance, Q-factor, and D-factor are also parameters of interest under the hypothesis supporting the usefulness of spurious resonances. The evolution of these parameters has been investigated by a set of systematic measurements that are summarized in Figure 9.
Figure 9. QCM sensor: (a) evolution of series resistance and series resonance frequency shift in air, water, and glycerol-water solution; (b) evolution of Q-factor and D-factor in air, water, and glycerol-water solution.

As can be seen in Figure 9a, the series frequency shift of the spurious resonances monitored in this study is identical to the frequency shift of the fundamental series resonance. Also visible in Figure 9a is the significant increase of the motional resistance for the spurious resonances at the transition from air to liquid medium (pure water). The evolution of the motional resistance for spurious resonances is significantly different from the evolution of the motional resistance of the fundamental series resonance. The motional resistance of spurious resonances increases with increasing viscosity of the liquid medium along a smooth slope. For liquid media, it would seem at first glance that monitoring them is not useful. A different conclusion is drawn from the D-factor analysis shown in Figure 9b. The evolution of the D-factor with increasing viscosity is significantly different for spurious resonances, suggesting that it is useful to monitor them in liquid media.

Discussion

As shown in Figure 9a, tracking spurious series resonance frequencies is not of interest from the perspective of better interpreting the interactions occurring at the QCM sensor electrode surface. The same conclusion is not reached from Figure 9b, where, in the case of the D-factor associated with spurious series resonances, the behavior of the QCM sensor with increasing viscosity is significantly different from the evolution of this parameter for the fundamental series resonance. This behavior of spurious series resonances in the case of the D-factor makes a crucial contribution to a more detailed analysis of the dissipative phenomena occurring at the QCM sensor surface. It is worth emphasizing the usefulness of measuring and monitoring the series motional resistances and, implicitly, based on Equation (10), the D-factor for spurious series resonances.
A distinctly favorable situation is created by the significant increase of the motional resistance for spurious resonances at the transition from air to water. In other words, spurious series resonances are much more strongly attenuated than the fundamental series resonance. In this range of dissipation processes between the air and water media, many applications in chemosensors or biosensors, such as VOC sensors [30] or humidity sensors [31], have already been identified. The evolution of the D-factor for spurious series resonances with increasing medium viscosity suggests a new candidate for future analysis, providing a better interpretation of the interactions taking place at the QCM sensor surface.

In this paper, the QCM sensor and its spurious resonances were investigated from the perspective of increasing the viscosity of the working medium. It cannot be claimed that the QCM sensor response is identical from the perspective of spurious resonances in every experimentally investigated situation. These differences are not significant, however, as long as the QCM sensors come from the same manufacturer. The dissipation process induced by the glycerol-water solution is considered standard in the literature, so the experimental investigation presented in this study regarding the spurious response of the QCM sensor can be considered representative.

Conclusions

In this paper, the presence and evolution of spurious resonances as a function of the changes induced by the QCM sensor environment were studied. These investigations revealed a different evolution of the motional resistance of spurious resonances compared to the motional resistance of the fundamental resonance. This situation is very favorable in the case of switching from air to a water medium, in which case the motional resistance of spurious resonances changes significantly, as shown in Table 3. This result confirms the usefulness of monitoring the motional resistance of spurious resonances in specific applications such as VOC sensors, humidity sensors, or dew point sensors. This first experimentally demonstrated conclusion confirms the significant increase in sensitivity of measuring dissipative processes induced in the functionalization layer of a VOC sensor based on the QCM sensor. Extensive experimental investigations in the liquid working medium revealed significantly different behavior of the D-factor for spurious resonances relative to the fundamental resonance. This different evolution may be useful in understanding the dissipative phenomena induced by interactions occurring at the QCM sensor surface.
8,843.8
2023-05-01T00:00:00.000
[ "Physics" ]
β-Carboline Compounds, Including Harmine, Inhibit DYRK1A and Tau Phosphorylation at Multiple Alzheimer's Disease-Related Sites

Harmine, a β-carboline alkaloid, is a high-affinity inhibitor of the dual-specificity tyrosine phosphorylation regulated kinase 1A (DYRK1A) protein. The DYRK1A gene is located within the Down Syndrome Critical Region (DSCR) on chromosome 21. We and others have implicated DYRK1A in the phosphorylation of tau protein on multiple sites associated with tau pathology in Down syndrome and in Alzheimer's disease (AD). Pharmacological inhibition of this kinase may provide an opportunity to intervene therapeutically to alter the onset or progression of tau pathology in AD. Here we test the ability of harmine, and numerous additional β-carboline compounds, to inhibit the DYRK1A-dependent phosphorylation of tau protein on serine 396, serine 262/serine 356 (12E8 epitope), and threonine 231 in cell culture assays and in vitro phosphorylation assays. Results demonstrate that the β-carboline compounds (1) potently reduce the expression of all three phosphorylated forms of tau protein, and (2) inhibit the DYRK1A-catalyzed direct phosphorylation of tau protein on serine 396. By assaying several β-carboline compounds, we define certain chemical groups that modulate the affinity of this class of compounds for inhibition of tau phosphorylation.

Introduction

The dual-specificity tyrosine phosphorylation regulated kinase 1A (DYRK1A) gene is located within the Down syndrome critical region on chromosome 21. Overexpression of DYRK1A has been proposed to be a significant contributor to the underlying neurodevelopmental abnormalities associated with Down syndrome. Transgenic animals overexpressing DYRK1A show marked cognitive deficits and impairment in hippocampus-dependent memory tasks [1,2]. Studies in cell culture models and transgenic models of Down syndrome that overexpress DYRK1A implicate the DYRK1A kinase in the generation of both the amyloid and tau pathologies associated with the early-onset Alzheimer's disease (AD) that is uniformly observed in Down syndrome [3,4,5,6]. We and others have shown that DYRK1A is important for phosphorylation of tau protein on multiple sites in several cellular models [3,4,6,7]. Interestingly, DYRK1A protein has been found to be associated with neurofibrillary tangles (NFTs) in sporadic Alzheimer's disease [8], and some studies have found a genetic association between SNPs within the DYRK1A locus and Alzheimer's disease in some populations [3] but not others [9]. These combined observations raise the possibility that DYRK1A may be a critical contributor to tau dysfunction and tau pathology of Alzheimer's disease and, moreover, that this kinase may be an important therapeutic target for pharmacological interventions seeking to modify the course of tau pathology in AD.

The family of β-carboline alkaloids, characterized by a core indole structure and a pyridine ring, comprises naturally occurring compounds in some plant species that affect multiple central nervous system targets. These include the 5-hydroxytryptamine receptor subtypes 5-HT2 and 5-HT1A [10], the NMDA receptor [11], monoamine oxidase (MAO-A) [12], and dopaminergic signaling pathways [13,14,15]. In addition to these targets, the β-carboline alkaloid harmine has recently been reported to be a high-affinity inhibitor of DYRK1A kinase activity [16,17], suggesting that harmine, and possibly other β-carboline derivatives, could alter tau phosphorylation.
In this study, we extend previously published findings from our lab and others showing that DYRK1A is involved in phosphorylation of tau protein on sites that are hyperphosphorylated during the course of tau pathology in AD, and we show that certain β-carboline alkaloids can significantly reduce the levels of phosphorylated tau protein. Specifically, we identify DYRK1A-dependent tau phosphorylation on threonine 231 and serine 396. We further show that harmine and other β-carboline compounds inhibit DYRK1A-dependent tau phosphorylation with varying affinities that depend upon several structural features of the molecules. These results suggest that this class of compounds warrants further investigation as candidate tau-based therapeutics to alter the onset or progression of tau dysfunction and pathology in Alzheimer's disease and other tauopathies.

siRNA transfection

4R0N tau-overexpressing H4 neuroglioma cells [7] were maintained in Dulbecco's Modified Eagle Medium (Invitrogen) supplemented with 10% fetal bovine serum (Invitrogen), 1% penicillin-streptomycin, geneticin (0.25 mg/ml), and 2 mM L-glutamine (Invitrogen). Cells were maintained by splitting 1:10 at 90% confluency. Prior to any experimentation, cells were 70-75% confluent to ensure they were in their active growth phase. To test the effects of DYRK1A knockdown on tau phosphorylation, cells were transfected with DYRK1A siRNA. Prior to treating cells, the siRNA was first complexed with siLentFect lipid transfection reagent (Bio-Rad) and reduced serum medium (Invitrogen) using a 6-well plate format. The final effective siRNA concentration was 22.85 nM per well. Cells were grown for 96 hours at 37 °C, 5% CO2. Cell lysates were prepared using the Complete Lysis-M, EDTA-free kit (Roche Applied Science), and total protein concentration was quantified using the BCA protein assay (Pierce). Westerns for the multiple forms of tau were performed as described below.

Western Blotting

For all cell-based experiments, including siRNA treatments and compound treatments, cells were treated for 96 hours at 37 °C with 5% CO2. Cell lysates were then prepared using the Complete Lysis-M, EDTA-free kit (Roche Applied Science) supplemented with phosphatase inhibitor cocktails 1 and 2 (Sigma). Lysates were quantified using the BCA protein assay (Pierce). Protein from lysates (30 µg total protein per lane) was separated on SDS-PAGE gels and transferred to nitrocellulose membranes. Membranes were blocked in 5% blocking solution for one hour at room temperature (RT). The blocking buffer used for detection of non-phosphorylated protein contained 5% non-fat dry milk in 1× TBS-T (50 mM Tris-HCl pH 7.4, 137 mM NaCl, 2.7 mM KCl, 0.1% Tween). For detection of phosphorylated protein, the blocking buffer contained 5% bovine serum albumin in 1× TBS-T. Membranes were probed with primary antibody (various dilutions depending on the epitope; see below) in blocking buffer overnight at 4 °C on a rocker. Membranes were subsequently washed with 1× TBS-T and probed with secondary antibody in blocking buffer for forty-five minutes using a 1:25,000 dilution of HRP-GAM or HRP-GAR (Jackson ImmunoResearch), depending on the species (mouse or rabbit) in which the primary antibodies were raised. Following incubation with secondary antibody, membranes were washed in 1× TBS-T, developed with the SuperSignal West Femto Maximum Sensitivity Substrate Kit (Promega), and digitally imaged.
Protein band signal saturation was assessed before any further analysis of the multiple forms of tau. The Alpha Innotech FluoroChem SF imaging software indicates the degree of saturation when signal intensity is beyond the dynamic range (0-65,535). Protein band signal intensities used for quantification were within the instrument's dynamic range.

Compound treatments

Cells undergoing any treatment, including β-carboline derivative dosing and siRNA treatment, were maintained in Dulbecco's Modified Eagle Medium (Invitrogen) supplemented with 10% fetal bovine serum (Invitrogen) and 2 mM L-glutamine (Invitrogen). Viability assays were performed using a 96-well plate format. Metabolic activity was measured 12 hours after the addition of 10% alamarBlue (Invitrogen) directly to attached cells in full medium. This assay is based on the ability of metabolically active cells to convert the alamarBlue reagent into a fluorescent signal proportional to innate metabolic activity. Once the IC50 value for viability was identified, effects on the multiple forms of tau were investigated after treating with the β-carbolines indicated in Table S1 using the larger 6-well plate format. For both the viability assay and the cell culture tau assays, cell culture media was removed and cells were treated with freshly made drug in new media every 24 hours for four days. For the cell culture tau assays, protein lysates were prepared after 96 hours of treatment. All compounds were solubilized in dimethylsulfoxide (DMSO), diluted in growth medium to their respective 0.01 µM, 0.1 µM, 1 µM, and 10 µM final working concentrations, and added directly to cultured cells. The final DMSO percentage in culture for all compounds and concentrations tested was 0.1%. All treatment conditions were compared to their respective controls, which contained DMSO at 0.1% in growth medium.

In Vitro Kinase Assay

DYRK1A kinase activity was evaluated by incubating 0.08 µg of recombinant human DYRK1A protein (Invitrogen) with 0.15 µg of 4R2N recombinant human tau (SignalChem) in 1× kinase buffer (25 mM Tris-HCl (pH 7.5), 5 mM beta-glycerophosphate, 2 mM dithiothreitol (DTT), 0.1 mM Na3VO4, 10 mM MgCl2; Cell Signaling) and 1 mM ATP in a final volume of 20 µl for 30 minutes at 30 °C. For testing the effects of the β-carboline derivatives, recombinant human DYRK1A was pretreated with compounds for 10 minutes prior to the addition of kinase buffer, ATP, and recombinant human tau. The reaction was inactivated upon addition of 1× Novex LDS sample buffer and Novex sample reducing reagent, 50 mM DTT (Invitrogen), followed immediately by heating for 10 minutes at 95 °C. Phosphorylated tau was resolved using 7% Tris-acetate gels and detected by western analysis. Westerns were probed for phospho-tau S396 (Abcam) at a 1:5,000 dilution with a goat anti-rabbit HRP secondary (Jackson ImmunoResearch) at 1:50,000 in 5% BSA. Membranes were stripped as above and reprobed with rabbit anti-human total tau (Dako) at a 1:15,000 dilution with a goat anti-rabbit HRP secondary (Jackson ImmunoResearch) at a 1:100,000 dilution in 5% milk. For quantitation purposes, both bands of pS396-phosphorylated tau were used in the in vitro phosphorylation assays.

Reduced DYRK1A expression affects tau phosphorylation at multiple sites

We previously reported that silencing of DYRK1A expression causes reduced tau phosphorylation at the 12E8 epitope (phosphorylated serines 262 and 356) [7].
Here we show that RNAi-mediated silencing of DYRK1A expression simultaneously affects multiple additional AD-relevant tau phosphorylation sites, including threonine 231 and serine 396 (Figure 1). We transfected H4 neuroglioma cells that overexpress 4R0N tau with siRNA specific for DYRK1A. Results showed that reduction of DYRK1A expression to 38% of control leads to pT231 and pS396 tau expression that is 48% and 35%, respectively, of the control non-silencing siRNA. These results are consistent with the hypothesis that DYRK1A may be a promiscuous tau kinase and with prior studies that have shown DYRK1A phosphorylation of tau on several other sites, including threonine 212 [3,4], serine 202, and serine 404 [6]. The finding that DYRK1A is involved in the phosphorylation of sites that control key microtubule-binding functions of tau (S262, T231), as well as sites that are phosphorylated relatively late during the formation of neurofibrillary tangles (S396, S404), raises the interesting possibility that DYRK1A could be an important site of regulatory control for tau function and for the formation of tau pathology during the progression of tauopathies.

The high-affinity DYRK1A inhibitor, harmine, affects tau phosphorylation on multiple sites

Harmine, a naturally occurring β-carboline alkaloid, is a potent inhibitor of DYRK1A with a reported IC50 of ~80 nM in an in vitro kinase assay using a synthetic peptide substrate [17]. Based on our findings that DYRK1A silencing reduces multiple phosphorylated tau species, we tested harmine for effects on tau phosphorylation in the H4 neuroglioma cell line. We first determined the toxicity profile for harmine (Figure 2A). Treatment with increasing concentrations of harmine showed that 12 µM resulted in 50% cell viability. Based on this toxicity profile and the reported in vitro IC50 value for harmine against DYRK1A, we selected doses of 80 nM, 800 nM, and 8 µM for the tau phosphorylation assays. Harmine reduced the expression of each phospho-tau species tested, including 12E8 (pS262/pS356), pT231, and pS396 (Figure 2B). Significant (P < 0.05) reductions of 12E8 and pT231 tau were noted at the 0.8 µM and 8 µM concentrations. However, it is important to note that harmine at 0.8 µM and 8 µM also reduced the levels of total tau protein, consistent with the reductions detected with the various phospho-tau antibodies. Even accounting for reductions in total tau levels, significant reductions of 12E8 (58% of control) and pT231 (44% of control) levels remained (Figure 2B). pS396 effects are largely accounted for by overall reductions in total tau levels.

In addition to DYRK1A inhibition, harmine has been reported to be a selective inhibitor of monoamine oxidase (MAO-A) [12]. To test whether the effects on tau could result from MAO-A inhibition, we tested another MAO-A-selective antagonist, moclobemide, for effects on tau. Moclobemide has a reported IC50 against MAO-A of 3.9 µM [18]. Results showed that this MAO-A antagonist did not reduce levels of either total tau or the specific phosphorylated forms of tau protein at doses up to 500 µM (Figure 2C). These results suggest that the effects of harmine on tau do not result from MAO-A inhibition.

Additional β-carboline alkaloid derivatives alter the expression of multiple tau species

Based on the results for harmine, we tested additional β-carboline derivatives, including harmol, harmane, harmaline, norharmane, 9-ethylharmine, and two novel proprietary compounds, MPP-021 and MPP-313 (MediProPharma, Inc.; patents pending).
We first performed toxicity assays for each compound in our H4 cell line (Table S1). We then tested each compound for effects on phospho-tau and total tau expression (Figure 3). Quantification of phosphorylated tau levels is shown in Figure 4. Data for phosphorylated tau levels have been corrected for the effects of the compounds on total tau levels. Quantification of absolute phospho-tau levels is included as supplementary material (Figure S1). There was a positive correlation between the toxicity of each compound and the sensitivity with which each compound reduced total tau levels and the levels of phosphorylated forms of tau. For toxicity, the rank order of the compounds was 9-ethylharmine > harmine > harmol > harmane > harmaline > MPP-021 > norharmane > MPP-313. In terms of the sensitivity with which each compound reduced tau levels, only 9-ethylharmine, harmine, and harmol showed significant effects (p < 0.05) in reducing total tau and phosphorylated tau levels at doses ≥ 1 µM. As with the harmine results in Figure 2, reductions of 12E8 tau and pT231 tau remained significant after accounting for total tau reductions (Figure 4). MPP-021 and MPP-313 significantly reduced (p < 0.05) tau levels at 50 µM. The 9-ethylharmine and harmine treatments showed significant reductions at 1 µM and 0.8 µM, respectively. These lower doses have no detectable effect on the viability of the cells. We therefore conclude that reducing tau levels beyond ~50% of control, as occurs at higher concentrations, leads to significant cellular toxicity, rather than the observed reductions in tau resulting from general drug-induced toxicity.

Harmine and other β-carboline compounds inhibit the direct phosphorylation of tau by DYRK1A

DYRK1A has been reported to directly phosphorylate tau protein on T212, S202, and S404 [3,4,6]. To determine whether the effects observed with the various β-carboline derivatives above could result from the inhibition of direct phosphorylation of tau protein by DYRK1A, we first tested whether DYRK1A could directly phosphorylate serine 396. Using an in vitro phosphorylation assay with recombinant DYRK1A and tau proteins, we confirmed that DYRK1A can directly phosphorylate tau protein (Figure 5A). Phosphorylation occurred only in the presence of tau protein, DYRK1A protein, and ATP. We observed a doublet of pS396-phosphorylated tau. Because the primary protein band recognized by the total tau antibody is the molecular weight of the lower band of this pS396 doublet, the doublet is unlikely to result from protein degradation. Rather, the higher molecular weight band may contain phosphorylations in addition to serine 396. This explanation is consistent with the reported literature indicating a role for DYRK1A in the phosphorylation of tau protein on S202, S404, and T212 [3,4,6]. The observed pS396 tau phosphorylation was potently inhibited by harmine, with an IC50 of 0.7 µM (Figure 5B). We next tested each β-carboline derivative compound in this in vitro assay (Figure 6 and Figure 7). Several interesting observations emerged from this series of studies. We first determined the IC50 values for each compound for the inhibition of DYRK1A-dependent tau phosphorylation at serine 396. These results reflect the rank-ordered affinities for each compound that were obtained in the cell-based tau phosphorylation assays and the toxicity assays (Table S1, Figure 2B, and Figure 3), with one exception.
Harmol was the most potent inhibitor in this in vitro phosphorylation assay, with an IC50 of 90 nM, followed by 9-ethylharmine (400 nM) and harmine (700 nM). Harmol was the third-ranked compound in both the toxicity and cell-based tau assays. Reasons for this slight disconnect are unclear but could be related to differential cellular metabolism of the free hydroxyl group on carbon 7 of harmol. The addition of an ethyl group to N-9 of harmine reduced the IC50 nearly 2-fold, suggesting that additional modifications on this nitrogen might increase the affinity of harmine for DYRK1A more substantially. Harmane, norharmane, and harmaline were more than an order of magnitude lower in affinity than harmine, consistent with the relatively muted effects of these compounds in our cell-based tau assay (Figure 3). The MPP-313 compound reduced phosphorylation of the lower molecular weight form of phosphorylated tau protein with an IC50 of 5 µM. However, effects on the larger molecular weight band were striking. Levels of this larger phosphorylated tau protein decline to near 50% of the no-drug-treated sample at a concentration of 5 µM, but then sharply reverse and increase significantly beyond the no-drug-treated control at doses of 50 µM (255% of control) and 100 µM (640% of control) (Figure 6). This was the only β-carboline derivative tested that displayed this pattern. All other compounds showed consistent effects on both molecular weight forms of pS396 tau. While we cannot yet explain this observation, it does suggest that at higher concentrations MPP-313 induces DYRK1A activity through a yet-to-be-determined mechanism.

DYRK1A is involved in phosphorylation of multiple sites on tau

Previous literature indicates that DYRK1A can phosphorylate tau protein on T212, S202, S404, and the 12E8 epitope (S262/S356) [3,4,6,7]. Here we provide evidence that DYRK1A is also involved in tau phosphorylation at threonine 231 and serine 396 (Figure 1). This growing list of phosphorylation sites affected by DYRK1A, which now includes key sites regulating microtubule affinity (T231, S262) [19,20, and references therein], tau toxicity (T231, S262, T212) [21], and sites hyperphosphorylated coincident with NFT pathology (S396, S404) [22], suggests that DYRK1A could be a critical regulator of tau function and dysfunction during the course of AD. As such, targeting this kinase pharmacologically may provide a means to modify the course of tau dysfunction and pathology in AD and other tauopathies.

DYRK1A directly phosphorylates tau on serine 396

DYRK1A can directly phosphorylate tau protein on serine 396 (Figure 5A). Due to high baseline phosphorylation of tau on threonine 231 and on the 12E8 epitope in these preparations of recombinant tau (data not shown), we were unable to test for direct tau phosphorylation by DYRK1A on these epitopes. However, our data are consistent with these sites either being directly phosphorylated by DYRK1A or being controlled by a pathway that is dependent on DYRK1A activity. Whether these pathways ultimately result in the direct phosphorylation of tau on T231 or the 12E8 epitope, or whether these sites are ultimately affected through indirect mechanisms, such as altered tau protein half-life resulting from decreased tau phosphorylation at other sites, remains to be determined.
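The IC50 values quoted above come from dose-response curves of band intensity versus inhibitor concentration. As a hedged illustration of how such a value can be estimated, the sketch below fits a four-parameter logistic curve with scipy; the data points are hypothetical, and this particular fitting choice is ours, not necessarily the authors' method.

```python
import numpy as np
from scipy.optimize import curve_fit

def four_pl(conc, bottom, top, ic50, hill):
    """Four-parameter logistic (Hill) dose-response curve."""
    return bottom + (top - bottom) / (1.0 + (conc / ic50) ** hill)

# Hypothetical normalized pS396 band intensities (% of no-drug control)
conc = np.array([0.01, 0.1, 0.3, 1.0, 3.0, 10.0])    # inhibitor concentration, uM
resp = np.array([98.0, 85.0, 62.0, 41.0, 18.0, 7.0])  # phospho-tau signal, % control

popt, _ = curve_fit(four_pl, conc, resp, p0=[5.0, 100.0, 0.5, 1.0])
print(f"estimated IC50 = {popt[2]:.2f} uM")
```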
Harmine, a DYRK1A inhibitor, alters the expression of multiple forms of phosphorylated tau

Based on our RNA interference data implicating DYRK1A in the phosphorylation of tau protein at multiple sites (Figure 1), we tested the high-affinity DYRK1A inhibitor harmine for effects on tau. Harmine showed a clear dose-response profile for the inhibition of tau phosphorylation (Figure 2B) at doses that elicited no toxicity (Figure 2A). The increased toxicity that occurred at higher concentrations prevented obtaining an IC50 for tau effects in our cell-based tau assay. However, because harmine significantly affected tau levels prior to observed toxicity, we conclude that the toxicity seen at concentrations beyond 8 µM likely results from excessive reductions in tau protein levels.

Figure 4. Quantification of tau phosphorylation data from the H4 cells is shown for each compound tested. The phospho-tau data have been normalized to account for any changes to total tau levels. Effects on total tau are indicated in each graph. Significance at p < 0.05, as assessed by Student's t-test, is indicated by asterisks above the error bars on the graphs. Error bars (standard deviation) from three independent replicates are shown. doi:10.1371/journal.pone.0019264.g004

This interpretation is consistent with our prior published findings that reductions in overall tau levels via siRNA knockdown of the tau transcript, and concomitant decreases in levels of phosphorylated tau, lead to significant cellular toxicity [7]. Although harmine significantly reduced total tau levels, reductions in both 12E8 and pT231 phosphorylated tau remained highly significant (Figure 4). Interestingly, however, the reductions in pS396 tau at higher concentrations mirror the reductions in total tau levels. This is a somewhat striking observation, since DYRK1A can clearly directly phosphorylate tau protein on serine 396 (Figure 5), and it suggests at least two possibilities. First, phosphorylation of serine 396 could be very tightly correlated with the overall stability of tau protein. Second, harmine and the other β-carbolines, all of which show the same differential effect on the relationship between pS396 tau and total tau (Figure 4), could target additional cellular proteins that then lead to this pS396/total tau-specific correlation. Further experiments will be needed to account for the relationship between these compounds, pS396 tau, total tau levels, and cellular toxicity.

β-carboline alkaloid derivatives alter the levels of phosphorylated tau

Based on the positive results for harmine, we tested several harmine derivatives, including fully aromatic β-carboline compounds (harmol, harmane, norharmane, 9-ethylharmine, and the 3,4-substituted derivatives MPP-021 and MPP-313) and a dihydro derivative (harmaline). Modifications to certain structural components of the β-carboline ring structure significantly affected the ability of these compounds to inhibit tau phosphorylation. A comparison of the results for harmaline and harmine indicates that a fully aromatic ring structure provides higher affinity for tau inhibition and toxicity (see Table S1, Figure 2B, Figure 3, and Figure 6). Certain modifications to carbon 7 increase toxicity and tau effects (compare harmine and harmol to harmane). Also, the methyl group on carbon 1 appears to be important for the observed tau effects and toxicity (compare norharmane to harmane). Lastly, the addition of an ethyl group to N-9 increased the effects of harmine on tau and increased toxicity (compare 9-ethylharmine to harmine).
The combination of results provides insights into which structural features of harmine could be targeted and altered to improve the tau effects. Another important observation from these compound derivative studies was that the correlations between tau reductions and cellular toxicity that were first observed with harmine were also found with all of the β-carbolines tested. While we were initially hopeful that certain derivatives could separate the tau effects from the toxic effects, no compounds separated these effects. This may well result from a causative relationship between excessive tau reductions and toxicity, consistent with our prior results targeting tau expression with siRNA [7]. This may have implications for the development of therapeutic strategies designed against tau expression or phosphorylation.

Figure 7. Quantification of β-carboline affinities for inhibition of pS396 phosphorylation in vitro. Quantification of the in vitro phosphorylation data at each drug concentration tested is shown. Error bars represent the standard deviation of three independent replicates. Significance at p < 0.01, as assessed by Student's t-test, is indicated by a single asterisk above the error bars on the graphs. Significance at p < 0.001 is indicated by two asterisks above the error bars. doi:10.1371/journal.pone.0019264.g007

Conclusions

Pharmacologic inhibition of tau phosphorylation at certain key sites that regulate the functional activity of tau or that promote the aggregation of tau into neurofibrillary tangles may provide a promising approach for the treatment of AD and other tauopathies. We show that the β-carboline alkaloids inhibit DYRK1A kinase activity and reduce the levels of multiple phosphorylated forms of tau protein that are important in the pathological progression of AD. Further refinement of this class of compounds on functional groups that are important determinants of their affinity for DYRK1A could lead to high-affinity inhibitors of tau phosphorylation.

Supporting Information

Figure S1. Quantification of the inhibition of tau phosphorylation by multiple β-carbolines. Quantification of the absolute tau phosphorylation data from the H4 cells is shown for each compound tested. Data have not been normalized to account for changes to total tau levels. Significance at p < 0.05, as assessed by Student's t-test, is indicated by asterisks above the error bars on the graphs. Error bars (standard deviation) from three independent replicates are shown. (TIF)

Table S1. β-carboline compounds tested in this study. Shown in columns from left to right are the compound names, chemical structures, and the concentration resulting in 50% viability in the H4 neuroglioma cell line used in all of the cell-based tau assays. (TIF)
5,696.6
2011-05-06T00:00:00.000
[ "Chemistry", "Medicine" ]
TransFusion: Model Fusion Mechanism Based on Transformer for Traffic Flow Prediction

In recent years, the problem of traffic congestion has become a hot topic, and accurate traffic flow prediction methods have received extensive attention from researchers all over the world. Although many methods proposed to date have achieved good results in the field of traffic flow prediction, most of them only consider the static characteristics of traffic data and ignore its dynamic characteristics. The factors that affect traffic flow prediction are changeable, and they vary over time. In response to this dynamic characteristic, the authors propose a model fusion mechanism based on the Transformer (TransFusion). The authors adopt two basic forecasting models (TCN and LSTM) as the underlying architectures. In view of the different performance of these models on traffic data at different times, the authors design a model fusion mechanism that assigns dynamic weights to the basic models at each time step. Experiments on three datasets demonstrate that TransFusion achieves a significant improvement over the basic models.

INTRODUCTION

The explosive growth of the urban population and the increase in vehicles are likely to cause traffic congestion. Traffic accidents have become another source of disruption in people's lives. At the same time, traffic congestion has placed a great burden on the environment. The traffic problem has received considerable attention all around the world. There are many factors influencing traffic conditions. Because the traffic state is generated by human activities, traffic conditions differ across times and regions (such as the regular congestion during the morning and evening traffic peaks, and the heavier traffic in the city center). In addition to basic factors such as time and region, weather and seasonality are also important factors that affect traffic conditions. These complex factors make it difficult to alleviate traffic congestion. In order to address the problem of traffic congestion, researchers from various countries have put forward the Intelligent Transportation System (ITS), a non-linear and time-variant system whose technical core is traffic prediction. Traffic prediction estimates the current or next traffic state from the traffic flow at previous moments. The state information mainly includes traffic flow, road structure, average vehicle speed, etc. The traffic flow information is particularly critical. Traffic flow refers to the number of vehicles passing through the current section of a highway in a unit period, and it clearly reflects the degree of congestion on that section. In order to make a more accurate prediction of traffic flow at the next time step, it is necessary not only to know the traffic flow in the area at historical moments, but also to combine local time, region, weather, seasonality, and other factors. The traffic prediction problem is a complex, time-varying problem. To tackle it, an increasing number of traffic condition forecasting models have been proposed in the literature during the past several years, such as ARIMA [28], SVR [9], SAE [21], LSTM [34], and Conv-LSTM [20]. The spatial-temporal characteristics of traffic data are dynamically related. For example, during the peak period of traffic flow, the central area of the city is inconvenient for vehicles to enter and exit due to traffic congestion.
There is then little difference in traffic flow between the previous moment and the next moment, so the temporal characteristic of the traffic data has the greater impact on the predictive effect. At midnight, because the traffic flow of the entire city is small, vehicles can reach locations farther from their current position within a given period of time. Therefore, the traffic flow of an area is greatly affected by the traffic flow of the surrounding areas. It is difficult to find a single model that performs well in predicting traffic flow at all times, which hampers the improvement of performance. Although some hybrid models simultaneously mine the spatial-temporal relevance of traffic data, the features they extract are static, not dynamic. In order to better capture dynamic spatio-temporal characteristics, in this paper we propose a model fusion mechanism based on the Transformer, referred to as TransFusion. We select some models that are general in spatio-temporal network data prediction as basic models. A Transformer layer is used to assign different weights to different models at different times. Dynamic weight distribution can make full use of the advantages of each basic model at different times, so that the fused model consistently outperforms the other baselines. Besides, TransFusion is data-driven and does not require external information (e.g., the location of sensors or a topological map of the road network). The contributions of this paper can be summarized as follows: • We propose a model fusion mechanism, TransFusion, to dynamically combine several simple models. TransFusion can extract the dynamic spatio-temporal relevance of traffic data by assigning appropriate weights to different models. TransFusion is data-driven and does not require external information (e.g., the location of sensors or a topological map of the road network). • Experiments on three datasets demonstrate the superior performance of TransFusion for traffic flow prediction.

RELATED WORK

Traffic flow forecasting predicts the traffic flow at the next moment by capturing the current flow information of the highway and combining regional, temporal, and geographical factors. It is of great significance for guiding travel and alleviating heavy traffic, and it has attracted extensive attention worldwide. The many methods for traffic flow prediction can be divided into four categories. The first is linear models (e.g., the autoregressive moving average model ARMA, seasonal ARIMA, and Kalman filter methods). The second category is nonlinear models (e.g., wavelet models [Yang & Hu, 2016] and chaotic theory models [Xue & Shi, 2008]). The third category is machine learning models (e.g., k-nearest neighbor models [Cai et al., 2016] and support vector regression [Castroneto et al., 2009; Cheng et al., 2017; Sun et al., 2015; Xiao et al., 2018]). The fourth category is deep learning models. With the advent of deep learning, researchers have found that deep learning models have powerful capabilities that can greatly compensate for the shortcomings of traditional methods, and have achieved significant results in natural language processing, image recognition (Wang et al., 2020), and other areas. Some researchers have applied deep learning to traffic flow prediction. Lv et al. (2015) adopted stacked autoencoders to predict traffic flow, and found that the stacked autoencoder model is superior to the BP neural network model and the support vector machine (SVM) model. Huang et al.
(2013) applied a deep architecture based on deep belief networks to the same task. Because the traffic flow prediction problem is affected by both temporal and spatial characteristics, it is difficult to achieve good prediction results using a single model. Some researchers use deep learning hybrid models to extract features from both time and space for prediction. Lin et al. (2019) proposed the SpAELSTM model, and Wu et al. (2016) adopted a hybrid model of CNN and LSTM to predict traffic flow. Traffic data is extremely dynamic (at one moment it may be more affected by temporal factors, and at another moment more constrained by spatial factors); therefore, a model with poor overall performance may still outperform a model with splendid overall performance at particular moments. Based on this dynamic influence, this paper proposes a dynamic model fusion mechanism (TransFusion) based on the Transformer. TransFusion is composed of two layers, from a bottom-up perspective: the first layer is the Basic Layer, which consists of several basic models; the second layer is the Transformer Layer, which assigns appropriate weights to the basic models. The advantages of the model fusion mechanism are as follows: • The network architecture makes full use of the dynamic characteristics of traffic data. It can exploit the advantages shown by different models at different moments, thereby improving the accuracy of prediction. • The architectures of the underlying models are quite different, so they can fully complement each other's strengths. The model fusion mechanism has good flexibility and scalability, and the basic models can be replaced. • The Transformer Layer can capture long-term dependence and performs well in hour-level traffic flow prediction.

METHODOLOGY

In this section, we introduce the design of the Model Fusion Mechanism based on Transformer (TransFusion). Specifically, we first give the preliminaries of traffic flow prediction. Then, we briefly introduce the structure of the basic models. Finally, we elaborate the technical details of TransFusion.

Preliminaries

Traffic flow refers to the number of vehicles passing by a road section in a small interval. Traffic flow prediction refers to predicting, for a certain area and according to its historical traffic flow data, the traffic flow of that area over a future period of time. The mathematical description of the traffic flow forecasting problem is as follows: the traffic flow of location $p$ at the current time $t$ is $x_t(p)$, and the traffic flow at the next time is $x_{t+1}(p)$. Using the traffic flow data of $n$ adjacent locations $\{p_i \mid i = 1, 2, 3, \ldots, n\}$ (including location $p$) in the past $N$ time periods $(t-N+1, t)$ to predict the traffic flow data at the next moment of location $p$ is the traffic flow prediction. The input of the problem can be expressed as a two-dimensional space-time matrix

$$X = \begin{bmatrix} x_{t-N+1}(p_1) & \cdots & x_t(p_1) \\ \vdots & \ddots & \vdots \\ x_{t-N+1}(p_n) & \cdots & x_t(p_n) \end{bmatrix},$$

where $n$ represents the spots within the region and $N$ represents the time steps. A simplified description of the problem is as follows ($\rho$ is the predictive model):

$$\hat{x}_{t+1}(p) = \rho(X).$$

Basic Models

We choose two models (TCN and LSTM) that perform well in time series forecasting tasks as the basic models.

LSTM

Due to the fully connected mode between layers (input layer, hidden layer, output layer) in the traditional neural network model, it is difficult to achieve good results on contextual sequences. The emergence of recurrent neural networks improved this situation. A recurrent neural network stores the previous content in the current cell for calculation, so the output of the previous cell and the output of the next cell are related.
At the same time, the connection mode between the nodes of the hidden layer is changed: the output of the hidden layer is not only related to the input of the current layer, but is also affected by the output of the hidden layer at the previous moment. Therefore, a recurrent neural network (RNN) can effectively extract time series features. Due to the chain rule of differentiation in an RNN, if there is a very small value in the weight matrix, the gradient will shrink exponentially after the matrix is multiplied N times; after a period of time, the gradient becomes 0, which is the vanishing gradient problem. If the values of the weight matrix are large, then after N multiplications the exploding gradient phenomenon appears. The existence of vanishing and exploding gradients restricts the development of RNNs, making it difficult for them to learn long-term dependence. With the continuous advancement of deep learning research, a new type of neural network was proposed, namely the long short-term memory (LSTM) network. LSTM alleviates the vanishing and exploding gradient problems, and it has been widely used in time series feature extraction (Sulo et al., 2019) and natural language processing (Cai et al., 2019). The structure of LSTM is shown in Figure 1. LSTM uses a cell state to store long-term memory and uses three gates (forget gate, input gate, output gate) to control the cell state. The output of a gate is a real vector between 0 and 1. When the output is 0, any vector multiplied by it results in a zero vector, which is equivalent to nothing passing through; when the output is 1, any vector multiplied by it is unchanged, which is equivalent to everything passing through. The forget gate determines how much of the cell state C_{t-1} at the previous moment is retained in the current cell state C_t. The input gate determines how much of the current input is saved into the cell state C_t. The output gate controls how much of the cell state C_t is passed to the current output value h_t of the LSTM.

TCN

Research has indicated that certain convolutional architectures can reach state-of-the-art accuracy in multiple sequence modeling tasks (Gehring, Auli, Grangier, Yarats, et al., 2017); building on this, a generic temporal convolutional network (TCN) (Bai et al., 2018) was proposed. The TCN is based upon two principles: the network produces an output of the same length as the input, and there can be no leakage from the future into the past. The architecture of the TCN can be summarized as: TCN = 1D FCN + causal convolutions. The TCN uses a 1D fully-convolutional network (FCN) architecture to achieve the first point, and it uses causal convolutions (convolutions where an output at time t is convolved only with elements from time t and earlier in the previous layer) to achieve the second point. In order to solve the long-term dependence problem in sequence modeling tasks, the TCN employs dilated convolutions to expand the receptive field. The dilated convolution operation $F$ on element $s$ of the sequence is defined as

$$F(s) = \sum_{i=0}^{k-1} f(i)\, x_{s - d \cdot i},$$

where $d$ is the dilation factor, $k$ is the filter size, and $f$ is the filter. Choosing larger filter sizes $k$ and increasing the dilation factor $d$ expands the receptive field of the TCN. The structure of the dilated causal convolution is shown in Figure 2(a). To maintain the stability of deeper and larger TCNs, a residual module is adopted in place of a plain convolutional layer. A residual block adds the input to the outputs of the block. The residual block of the TCN is shown in Figure 2(b).
Within a residual block, the TCN has two layers of dilated causal convolution and the rectified linear unit (ReLU). Weight normalization and spatial dropout are adopted for normalization and regularization. Besides, the TCN adopts a 1×1 convolution to ensure consistent input and output widths.

TransFusion

Since the appearance of the Transformer (Vaswani et al., 2017), it has been widely used in NLP (Devlin et al., 2018; Raffel et al., 2019; Zhou et al., 2018) and CV (Carion et al., 2020; Zheng et al., 2020) tasks. The Transformer does not process data as an ordered sequence; it processes the entire sequence of data and uses self-attention mechanisms to learn dependencies in the sequence. Therefore, Transformer-based models can model the complex dynamics of time series data. In order to extract the dynamic spatial-temporal characteristics of traffic data, we add a dynamic model fusion mechanism on top of the basic models. TransFusion makes some changes to the Transformer architecture, as shown in Figure 3. TransFusion consists of encoder and decoder layers. The input of TransFusion is defined as $X$, which is composed of $\{X_1, X_2, \ldots, X_l\}$ (where $j$ is the number of stations) and $\{Y_1, Y_2, \ldots, Y_s\}$. $\{X_1, X_2, \ldots, X_l\}$ are the spatio-temporal matrices of traffic data, and $\{Y_1, Y_2, \ldots, Y_s\}$ are the outputs of the basic models; $l$ is the length of the sequence and $s$ is the number of basic models. We concatenate $\{X_1, X_2, \ldots, X_l\}$ and $\{Y_1, Y_2, \ldots, Y_s\}$ as the input of TransFusion. First, we map the high-dimensional input $X$ to a low-dimensional vector of dimension $d_{\text{model}}$ through a fully-connected network. Second, we use positional encoding with sine and cosine functions to encode sequential information in the time series data, by element-wise addition of the input vector with a positional encoding vector. Then, we feed the resulting vector to $N$ encoder layers. The output of the encoder layers and the input $X$ of the model are used as the input of the decoder layers. The output of the decoder layers becomes the final output of the model after a dimension transformation. Encoder: The encoder is composed of a stack of $N$ identical layers. Each layer has two sub-layers: a multi-head self-attention mechanism and a fully connected feed-forward network. A residual connection is used within each layer, and each sub-layer is followed by a normalization layer. Decoder: The decoder is also composed of a stack of $N$ identical layers. Each layer has three sub-layers; the first and second are the same as in the encoder, and the decoder inserts a third sub-layer that applies attention over the encoder output. A residual connection and a normalization layer follow each of the sub-layers. Multi-Head Attention: We employ the attention mechanism to assign dynamic weights to the prediction results of the different basic models. Multi-Head Attention and Encoder-Decoder Attention are depicted in Figure 4. In encoder and decoder layers after the first, $\{x_1, x_2, \ldots, x_l\}$ is the output of the previous encoder or decoder layer, and $\{y_1, y_2, \ldots, y_s\}$ are variants of the basic-model outputs $\{Y_1, Y_2, \ldots, Y_s\}$ that undergo the same transformation as $\{x_1, x_2, \ldots, x_l\}$. In Multi-Head Attention, we employ the spatio-temporal matrix as the queries and keys. In Encoder-Decoder Attention, we employ the output of the encoder layers as the queries and keys. Combined with the spatio-temporal characteristics of traffic data, the attention mechanism dynamically assigns weights to the values.
In addition, we extract temporal dependency with LSTM cells, and we linearly project the values $h$ times with different projections.

EXPERIMENT

In this section, we perform experiments to validate the performance of TransFusion. First, we introduce the datasets used in the experiments. Then, we analyze the parameter sensitivities through experiments and demonstrate the necessity of the model fusion mechanism. Finally, we compare TransFusion against several baseline algorithms in a series of experiments.

Datasets

We carried out comparative experiments on three real-world highway traffic datasets (I105-E, PEMSD4, PEMSD8). The datasets are collected from the Caltrans Performance Measurement System (PeMS) (Chao et al., 2000). The system has more than 39,000 detectors deployed on highways in the major metropolitan areas of California. The traffic data are aggregated into 5-minute intervals from the raw data, which means there are 12 points in the flow data for each hour. We use traffic flow data from the past hour to predict the flow for the next hour. We apply max-min normalization. 60% of the data is used for training, 20% for validation, and the remaining 20% for testing. I105-E: this traffic dataset contains traffic information collected from 24 detectors on the highway I105-E. The time span of this dataset is from May to June 2014. The locations of the detectors on highway I105-E are illustrated in Figure 5. PEMSD4: this is traffic data from the San Francisco Bay Area, containing 3848 detectors on 29 roads. The time span of this dataset is from January to February 2018. PEMSD8: this traffic dataset contains the traffic data of 1979 detectors on 8 roads in San Bernardino. The time span of this dataset is from July to August 2016.

Evaluation Metrics

To evaluate the prediction results of the traffic flow prediction model, three evaluation functions are used: root mean square error (RMSE), mean absolute error (MAE), and mean absolute percentage error (MAPE). The evaluation criterion functions are defined as follows:

$$\mathrm{RMSE} = \sqrt{\frac{1}{N}\sum_{i=1}^{N}\left(x_i - \hat{y}_i\right)^2}, \quad \mathrm{MAE} = \frac{1}{N}\sum_{i=1}^{N}\left|x_i - \hat{y}_i\right|, \quad \mathrm{MAPE} = \frac{100\%}{N}\sum_{i=1}^{N}\left|\frac{x_i - \hat{y}_i}{x_i}\right|,$$

where $N$ denotes the number of observed samples, $x_i$ represents the ground truth, and $\hat{y}_i$ represents the predicted values. Missing values are excluded in calculating these metrics.

Baselines

To fully evaluate the performance of the TransFusion mechanism, we compare our model with the following seven baseline methods: • VAR (Zivot, 2020): Vector Auto-Regression can capture the pairwise relationships among all traffic flow series. The number of lags is set to 12. • SVR (1997): Support Vector Regression uses a linear support vector machine for regression tasks. It uses the kernel trick, which maps inputs to a high-dimensional feature space so that a non-linear regression can be converted into a linear function. The penalty term C is set to 0.1. • LSTM (Hochreiter & Schmidhuber, 1997): the Long Short-Term Memory network has been widely applied in the time series forecasting area. Because of its memory ability, LSTM performs well in forecasting studies. • SAE (Lv et al., 2015): the Stacked AutoEncoder is composed of several auto-encoders stacked in series. It can convert complex input data into a series of simple high-order features. We use fully connected layers to construct the encoder and decoder. • TCN (Bai et al., 2018): the Temporal Convolutional Network uses causal convolution and residual modules to model time series forecasting tasks. Because of its larger receptive field, it performs well in multiple fields.
• Seq2Seq (Sutskever et al., 2014): sequence-to-sequence is also an encoder-decoder architecture, where the encoder is composed of a fully connected dense layer and an LSTM layer. It learns from the input and returns a sequence of encoded outputs and a final hidden state. The decoder has the same structure as the encoder. • Conv-LSTM: uses a Convolutional Neural Network (CNN) to extract spatial characteristics and a Long Short-Term Memory network (LSTM) to extract temporal characteristics.

Experiment Results

We split all datasets with a ratio of 6:2:2 into training, validation, and test sets. One hour of historical data is used to predict the next hour's data; that is, we use the past 12 continuous time steps to predict the future 12 continuous time steps. We compared TransFusion with the seven baseline methods. Table 1 summarizes the RMSE, MAE, and MAPE for each method, as well as the relative performance gain with respect to the VAR method. In the experiments, we find that SAE (FC), based on fully connected layers, suffers from underfitting on the complex datasets (PEMSD4 and PEMSD8); we therefore adopt Seq2Seq (based on LSTM) on PEMSD4 and PEMSD8. It can be seen from Table 1 that TransFusion achieves the best performance on the three datasets in terms of RMSE and MAE. The comparison suggests that the deep learning models overall outperform VAR and SVR on all three evaluation metrics. Within the deep learning approaches, the performance of Seq2Seq is higher than that of TCN, Conv-LSTM, and LSTM: Seq2Seq can better extract the long-term dependence of features, is well suited to hourly traffic flow forecasting, and shows good performance on it. Conv-LSTM performs well on small datasets, but not on large datasets. In terms of RMSE, the TransFusion model outperforms Seq2Seq, with relative RMSE decreases of 3.8% on I105-E, 2.8% on PEMSD4, and 8.5% on PEMSD8, respectively. It also outperforms Conv-LSTM, with relative RMSE decreases of 4.9% on I105-E, 9.5% on PEMSD4, and 17.2% on PEMSD8. This suggests that TransFusion's attention mechanism can better capture the dynamic spatio-temporal characteristics of the traffic data compared with the static characteristics extracted by Conv-LSTM. Because TransFusion does not use external road information, we do not compare it with methods based on graph neural networks.

Influence of TransFusion

Transformer-based models perform well on time series data (Wu et al., 2020). In order to prove the effectiveness of the proposed TransFusion, we also compare TransFusion with Transformer-based models on the same datasets. As Table 2 shows, TransFusion achieves a comprehensive performance improvement over the Transformer-based model. It can be seen from Table 2 that the Transformer does not extract the spatio-temporal features of traffic data well. With the aid of the underlying models, TransFusion can better address this problem, achieving a 5.5%-13.0% improvement over the Transformer-based model. Besides, we conduct experiments on the PEMSD8 dataset to study the influence of different experimental settings (the number of encoder layers $N$ and the number of heads $h$). As Figure 6 shows, $N$ and $h$ have a large impact on the performance of TransFusion. Although searching for suitable settings is important, the performance of TransFusion is better than that of the basic models under many settings.

CONCLUSION

In this paper, we propose a novel Transformer-based model fusion mechanism called TransFusion for traffic flow prediction. We employ two traditional models with good performance (TCN and LSTM) as basic models.
Besides, we use a Transformer-based structure to assign dynamic weights at different times. Finally, we conduct extensive experiments on three real-world datasets, and the results show that our proposed method is superior to several baseline models.
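As an illustration of the fusion mechanism described above, the following is a minimal PyTorch-style sketch in which the historical traffic matrix and the basic-model predictions are concatenated and passed through a Transformer encoder whose self-attention supplies the dynamic weighting. It is a sketch under stated assumptions, not the authors' implementation: the layer sizes, the read-out, and the omission of the sinusoidal positional encoding and the decoder stack are all simplifications.

```python
# Sketch of Transformer-based fusion of basic-model outputs (illustrative
# reconstruction; shapes, sizes, and the read-out are assumptions).
import torch
import torch.nn as nn

class TransFusionSketch(nn.Module):
    def __init__(self, n_stations, d_model=64, n_heads=4, n_layers=2):
        super().__init__()
        # Map each time step / basic-model prediction to d_model dimensions
        self.embed = nn.Linear(n_stations, d_model)
        enc_layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(enc_layer, num_layers=n_layers)
        self.readout = nn.Linear(d_model, n_stations)

    def forward(self, x_hist, y_basic):
        # x_hist:  (batch, l, n_stations), the past traffic flow matrix
        # y_basic: (batch, s, n_stations), predictions of the s basic models
        z = torch.cat([x_hist, y_basic], dim=1)  # concatenate along sequence axis
        z = self.encoder(self.embed(z))          # self-attention weights the models
        return self.readout(z[:, -1])            # fused next-step prediction

# Usage with dummy data: l = 12 past steps, s = 2 basic models (TCN, LSTM)
model = TransFusionSketch(n_stations=24)
x = torch.randn(8, 12, 24)
y = torch.randn(8, 2, 24)
print(model(x, y).shape)  # torch.Size([8, 24])
```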
5,602.8
2023-07-10T00:00:00.000
[ "Computer Science" ]
The Effective Equivalence of Geometric Irregularity and Surface Roughness in Determining Particle Single-Scattering Properties

This study investigates the effects of geometric irregularity and surface roughness on the single-scattering properties of randomly oriented dielectric particles. Starting from a regular crystal with smooth faces, the effects of roughening are compared with the effects of perturbing the regular configuration of the smooth faces. Using the same slope distribution for small roughness facets and tilted faces provides a natural way to compare the effects on the single-scattering properties. It is found that geometric irregularity and surface roughness have similar effects on the single-scattering properties of an ensemble of randomly oriented particles. In other words, particles with irregular geometries and those with surface roughness are optically equivalent if the slope distributions are the same. Furthermore, an ensemble of particles with irregular geometries can be used as an effective approximation for simulating the scattering properties of roughened particles, and vice versa. This approach also provides a new interpretation of the observed, relatively featureless and smooth, scattering phase functions of naturally occurring particles.
Introduction

Atmospheric particles (e.g., water droplets, ice crystals, and aerosol particles) influence the Earth's radiation budget and climate by scattering and absorbing both solar radiation and thermal emission. Because of the great variation of particle microphysical properties (e.g., size, shape, and composition) and the lack of quantitative observations of these properties, there are significant difficulties in the numerical representation of atmospheric particles and their scattering properties. This is especially true for non-spherical particles. Laboratory and in situ observations indicate that a large fraction of ice crystals and aerosol particles occurs with irregular geometry or some degree of surface roughness [1,2], and that the particles show relatively featureless and smooth scattering phase functions [3-5]. For this reason, smooth phase matrix elements are extensively used in radiative transfer and remote sensing applications, and show considerable success in representing the radiative effects of atmospheric particles [6].

Various numerical models have been used to study the effects of surface roughness on light scattering properties, based on idealized or realistic surface structures, as well as imaginary ones [7-12]. Whatever the models assume, the primary finding has been that one of the most significant influences of surface roughness on light scattering is to smooth out the peaks of the scattering phase function, for example, the substantial weakening or complete removal of the sharp peaks corresponding to the 22° and 46° halos produced by pristine hexagonal ice crystals within cirrus clouds. However, the scale of rough structures needed to remove the phase function peaks has typically been much larger than the scale observed on real ice crystals [5,11]. This suggests that, in addition to surface roughness, there may be other mechanisms that cause the featureless scattering phase functions.

Due to the complicated meteorological environments in which the particles are formed, as well as collision and coalescence processes, naturally occurring ice crystals often show irregular geometries, as has been widely reported in various in situ and laboratory measurements [13-15]. These irregular structures can be responsible for smoothing out the phase function peaks at certain scattering angles. Furthermore, because scattering peaks occurring at different scattering angles are related to particular particle geometries, averaging over an ensemble of particles with irregular geometry can lead to a featureless ensemble-averaged phase function.

Thus, both the surface roughness and the irregular geometry observed for ice crystals and aerosol particles have similar influences on the scattering properties. It is therefore of theoretical interest and practical value to compare their effects on light scattering by atmospheric particles, and to find what relationships there may be between the two factors. The approach taken in this study is to start with a basic crystal shape that is a regular solid hexagon with smooth faces, and alter it in two ways. The first way is to roughen each face in a manner reminiscent of the tilted-facet method [7,16] but producing a definite particle, as in the study by Liu et al.
[11]. This results in a separation of scales between the face scale and the roughness scale. The second way is to alter the basic particle by tilting each entire face, with independently chosen tilts for each face. In this case the result is a particle with "irregular geometry," but there is no "small scale" roughness. More details are provided in the next section. One might consider the two approaches as extremes on a continuum in which the roughness scale increases from being much smaller than the facial scale to being the same as the facial scale, but this study will not pursue this point. In this study the basic shape is taken to be a regular hexagon with unit aspect ratio; the choice of a hexagon has a natural motivation in considering ice particles. Other aspect ratios and geometries could certainly be considered as basic, but the basic conclusions of this study should not be significantly affected.

The remainder of this paper is organized as follows. The models used to generate roughened and irregular particles are described in Section 2. Section 3 presents results showing the optical equivalence of particle irregularity and surface roughness as far as their single-scattering properties are concerned. The conclusions of this study are given in Section 4.

Roughness and geometric irregularity

The tilted-facet (TF) algorithm based on the geometric-optics method is one of the most efficient and widely used models for simulating the effects of surface roughness on light scattering. Ray by ray, it accounts for each reflection-refraction event by calculating the result of encountering an imaginary facet randomly tilted about the local normal, with a slope chosen from an assumed probability distribution [7,10,16]. The particle surface itself is unchanged, and different rays encountering the same point on the surface are treated using different slopes. There is no single "roughened particle" that all rays meet. The normal distribution has been the most widely used, but the more general Weibull distribution has also been used [17]. Liu et al. [11] developed a generalized random wave superposition model to study light scattering by roughened particles over the entire size range; in this method explicit rough structures are generated. They showed that when the structures are used in a numerical model, with the discretization implicitly creating small local facets, the slopes of the facet elements actually follow the normal distribution

$$P(s_x, s_y) = \frac{1}{\pi \sigma^2} \exp\!\left(-\frac{s_x^2 + s_y^2}{\sigma^2}\right), \qquad (1)$$

where, in terms of a facet-based coordinate system with spherical angle coordinates θ and φ (see Fig. 1), the slopes $s_x$ and $s_y$ along the x- and y-directions are given by $s_x = \tan\theta\cos\varphi$ and $s_y = \tan\theta\sin\varphi$, and σ² is the variance. The central parameter characterizing surface roughness is σ². This model of surface roughness used by Liu et al.
[11] is the same one that is used in the current study. Figure 2(b) shows an example of the roughened hexagonal column with σ² = 0.4. The parameter σ² is also central in characterizing geometric irregularity, as explained next, giving a quantitative way of comparing the two forms of particle geometry. In previous studies of the effects of irregularity, a number of highly complicated irregular geometries, e.g., Poisson-Voronoi tessellations, agglomerate debris particles, Gaussian random field particles, and fractal particles, have been used to model the light scattering properties of mineral dust or ice crystals [18-21]. We use a relatively simpler model, allowing only perturbations of facial orientations and leaving the faces themselves smooth. Adopting the idea used in the TF algorithm, we generate irregular particles by randomly tilting the surfaces of regular ones; in distinction from the TF algorithm, however, the tilts are features of the particle generated, and are fixed throughout the scattering simulation. As mentioned in the introduction, this study considers a hexagonal column with a unit aspect ratio (i.e., L/2a = 1 in Fig. 2(a)); a roughened particle with σ² = 0.4 is shown in Fig. 2(b). Each surface of the hexagonal column is randomly tilted following a given slope distribution. To have the same normal distribution as Eq. (1), the polar and azimuthal angles θ and φ that determine a tilted surface can be chosen according to

$$\theta = \arctan\!\left(\sigma\sqrt{-\ln\varepsilon_1}\right), \qquad (2)$$

$$\varphi = 2\pi\varepsilon_2, \qquad (3)$$

where ε₁ and ε₂ are random numbers distributed uniformly between zero and one [16]. With the center position of a surface fixed and the tilt angles given by Eqs. (2) and (3), a new surface can be determined. The vertices and edges of the new irregular particle can be obtained by calculating the intersection points and lines of the corresponding surfaces. We eliminate resultant particles that are significantly different from a hexagonal column by using only those that are convex and have two hexagonal and six tetragonal surfaces. After the construction, the volume and surface area of the particle are generally changed, so the irregular particle is scaled to have the same effective diameter as that of the regular one. The effective diameter of the particle is defined to be 1.5 × V/A, where V is the particle volume and A is the particle's projected area, i.e., the average of the areas of projection over all projection angles.
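As a concrete illustration of the construction just described, the following minimal Python sketch samples facet tilt angles according to Eqs. (2) and (3) as reconstructed above, and checks that the resulting slope components are consistent with the Gaussian slope distribution of Eq. (1). The sampling formulas and the normalization convention are the reconstructions assumed above, and the variable names are illustrative.

```python
# Sample facet tilt angles (theta, phi) per Eqs. (2)-(3) as reconstructed
# above, and check the slope statistics against the assumed Eq. (1).
import numpy as np

rng = np.random.default_rng(0)
sigma2 = 0.4                      # roughness/irregularity parameter sigma^2
n = 100_000

eps1 = 1.0 - rng.random(n)        # uniform on (0, 1], avoids log(0)
eps2 = rng.random(n)
theta = np.arctan(np.sqrt(sigma2) * np.sqrt(-np.log(eps1)))  # polar tilt
phi = 2.0 * np.pi * eps2                                     # azimuth

# Slope components of the tilted facet (or face) normal
s_x = np.tan(theta) * np.cos(phi)
s_y = np.tan(theta) * np.sin(phi)

# Under P(s_x, s_y) = exp(-(s_x^2 + s_y^2)/sigma^2) / (pi * sigma^2),
# each slope component has variance sigma^2 / 2
print(f"var(s_x) = {s_x.var():.4f}  (expected {sigma2 / 2:.4f})")
print(f"var(s_y) = {s_y.var():.4f}  (expected {sigma2 / 2:.4f})")
```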
The size parameter of a hexagonal column is defined as 2πL/λ, where L is the length of the column and λ is the incident wavelength; the size parameter of an irregular particle constructed from a hexagonal column is taken to be the same as that of the original column. Figure 3 illustrates some examples of randomly generated irregular hexagonal columns with different values of σ² (0.01, 0.1, and 0.4 from top to bottom). The figure illustrates the variability in actual shape that comes from sampling the corresponding slope distribution. Even with such a simple method of construction, the irregular particles show quite different geometries, and, as the value of σ² increases, the hexagonal columns become highly irregular. With the geometries of the irregular and roughened particles explicitly given, it is straightforward to calculate their light scattering properties. In this study both the pseudo-spectral time domain method (PSTD) [22-24] and the improved geometric-optics method (IGOM) [25,26] are used. Due to the approximations involved in the ray-tracing technique, the IGOM is only appropriate for large particles, whereas the PSTD, directly solving Maxwell's equations in the time domain, is in principle applicable to light scattering by arbitrarily shaped and sized particles, with a feasibility limit, imposed by current computer technology, that size parameters be not much larger than 200 [23,24]. A refractive index of m = 1.31 (i.e., ice at visible wavelengths) is used for all simulations. All calculated scattering properties are those for randomly oriented particles, and we will mainly discuss two important elements of the scattering phase matrix: the phase function P11 and the degree of linear polarization -P12/P11. The integral scattering properties will not be discussed, because the extinction efficiency is close to 2 for all particles considered, and no absorption is considered in this study.

Results

The scattering properties of irregular particles with different realizations can be quite different. Figure 4 shows some examples of P11 and -P12/P11 for irregular hexagonal columns with a size parameter of 100. Three values of σ², i.e., 0.1, 0.2, and 0.4, are used to generate irregular geometries, and the results of 20 realizations are illustrated for each case by colored thin curves. Figure 4 clearly shows the variations of P11 and -P12/P11 for different particle realizations, which mostly occur at scattering angles around 20° and in backward directions. With an increase in σ², the variations of P11 are not significantly enhanced, whereas -P12/P11 becomes more divergent. The thick black curves in the figure are the averaged values of the 20 realizations, and can be understood as the mean scattering properties of an ensemble of randomly oriented irregular particles. All averaged values show smooth and featureless phase functions, similar to those observed for dust particles and ice crystals [2,4]. This indicates that an ensemble of particles with irregular geometries is a potential choice for modeling the light scattering properties of atmospheric particles. P11 and -P12/P11 of an ensemble of irregular particles (the averaged values from Fig. 4) and roughened ones are compared in Fig.
5, where the results of the corresponding regular smooth particle are also shown. The size parameter of the hexagonal columns is kept the same at 100, and all simulations are performed using the PSTD. The effect of the roughness realization is relatively minor, and only the results of a single roughened particle are simulated. The weak peaks at 22° and 46° that are clearly shown by the regular smooth particle are smoothed out by both geometric irregularity and surface roughness. With σ² being 0.1, the phase functions of the irregular and roughened particles agree closely, whereas -P12/P11 shows some differences. As the particles become more irregular (σ² = 0.4), the scattering in backward directions becomes weaker, and slight differences from the roughened particles emerge. This indicates that the geometric irregularity and surface roughness defined in this study have similar effects on the single-scattering properties of particles; in other words, the irregular geometry and surface roughness are optically equivalent. Figure 6 is the same as Fig. 5 but for hexagonal columns with a size parameter of 1000, for which the IGOM is used in the simulations. As particle sizes become much larger, the halos of the smooth particle become much stronger. P11 and -P12/P11 of the irregular and roughened particles agree closely for both σ² values, which further supports our claim of the optical equivalence of surface roughness and irregular geometry. The closer agreement is mainly because the IGOM treats the scattering as reflections and refractions of "rays" on particle surfaces, and, with the same slope distribution for the particle surfaces (i.e., faces from the ensemble of irregular particles or small facets on roughened ones), the scattering energy will be distributed in similar ways.

Conclusions

We compare the single-scattering properties (mainly the angular-dependent scattering phase matrix elements P11 and P12) of particles with roughened surfaces and irregular geometries. To perform a clean comparison, we choose a common slope distribution for the small facets or faces of the roughened and irregular particles, respectively, and generate particles that are geometrically comparable, in that they are specified by the same value of σ². Numerical results from both the PSTD and the IGOM show that roughening of the surface and irregularization of the facial geometry have similar influences on the scattering phase matrices of a hexagonal column. In this sense, the two perturbation forms are optically equivalent. The implication is that an ensemble of particles with irregular geometries can be used as an alternative method to model light scattering by roughened particles, and vice versa. Furthermore, we call special attention to the fact that these conclusions are based on the particular forms of geometric perturbation and surface roughening used in this study, and further investigation will be needed to determine the robustness of the conclusions.

Fig. 1. Geometry of a tilted surface element of a rough particle or a tilted face of an irregular smooth particle.

Fig. 2. Geometry of a smooth and roughened column with an aspect ratio of 1. The rough surface has σ² = 0.4.

Fig. 4. P11 and -P12/P11 of irregular hexagonal columns with different degrees of irregularity. The size parameter of the corresponding regular column is 100, and the PSTD is used for the simulation.

Fig. 5.
Comparison of the P11 and -P12/P11 of irregular, roughened, and smooth hexagonal columns with a size parameter of 100, simulated by the PSTD.

Fig. 6. Comparison of the P11 and -P12/P11 of irregular, roughened, and smooth hexagonal columns with a size parameter of 1000, simulated by the IGOM.
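The ensemble averaging that produces the thick black curves in Fig. 4 amounts to normalizing each realization's phase function and taking the mean across realizations. The short sketch below illustrates this; the P11 curves here are placeholder arrays (real curves would come from PSTD or IGOM runs), and the normalization convention, (1/2)∫P11 d(cos θ) = 1, is a common one assumed for illustration.

```python
# Average phase functions over an ensemble of irregular-particle
# realizations, as done for the thick curves in Fig. 4. The P11 data
# here are placeholders; real curves come from PSTD/IGOM simulations.
import numpy as np

rng = np.random.default_rng(1)
n_realizations, n_angles = 20, 181
angles = np.linspace(0.0, 180.0, n_angles)      # scattering angle (degrees)

# Placeholder stack of P11 curves, one row per particle realization
p11_stack = rng.lognormal(mean=0.0, sigma=0.3, size=(n_realizations, n_angles))

# Normalize each curve so that (1/2) * integral of P11 over cos(theta) = 1
mu = np.cos(np.deg2rad(angles))                 # decreases from 1 to -1
norm = np.trapz(p11_stack[:, ::-1], mu[::-1], axis=1) / 2.0
p11_norm = p11_stack / norm[:, None]

# Ensemble-averaged phase function (the thick black curves in Fig. 4)
p11_mean = p11_norm.mean(axis=0)
print(p11_mean.shape)  # (181,)
```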
3,920
2014-09-22T00:00:00.000
[ "Mathematics" ]
A comprehensive workflow and its validation for simulating diffuse speckle statistics for optical blood flow measurements

Diffuse optical methods, including speckle contrast optical spectroscopy and tomography (SCOS and SCOT), use the speckle contrast (κ) to measure deep blood flow. In order to design practical systems, parameters such as the signal-to-noise ratio (SNR) and the effects of limited sampling of statistical quantities should be considered. To that end, we have developed a method for simulating speckle contrast signals, including the effects of detector noise. The method was validated experimentally, and the simulations were used to study the effects of physical and experimental parameters on the accuracy and precision of κ. These results revealed that systematic detector effects resulted in decreased accuracy and precision of κ in the regime of low detected signals. The method can provide guidelines for the design and usage of SCOS and/or SCOT instruments.

Introduction

An accurate and often continuous assessment of microvascular, regional blood flow has many implications for the diagnosis and treatment of diseases and for the study of healthy physiology. Despite continued efforts to establish practical means for measuring microvascular, regional blood flow in a non-invasive manner, this remains an important unmet need. One potential approach uses near-infrared, coherent light and the speckles arising after its diffusion [1-4]. Coherent laser light can be used to non-invasively measure local microvascular blood flow in tissue by detecting the fluctuating speckle patterns after light interaction with the tissue [5-9]. For the purposes of this manuscript, we will focus on deep-tissue measurements, i.e., those that utilize light that penetrates up to several centimeters, using photon diffusion. This is possible since near-infrared light is only weakly absorbed by tissue and can therefore diffuse over centimeter-scale distances.

Figure 1. Depending on the experimental setup, the imaged field of view will differ. In this example, the source and detector fibers are placed a certain distance (ρ) from each other and are coupled to the laser and detector. The imaged field of view (imaged over a grid of pixels) includes the fiber core, which in later steps will be used to calculate κ² over a specified region of interest (ROI). Sub-figure b illustrates Step 1 of the simulations. In this step, the rate at which the speckles decorrelate, characterized by the decorrelation time τc, is determined from the correlation diffusion equation (CDE). Using this value of τc, consecutive frames of correlated speckles are simulated so that their electric-field autocorrelation decays with τc. The intensity of these simulations is in arbitrary units and independent of the exposure time T; instead, the frames represent speckles measured during a finite time-bin width on the g1 curve. In order to simulate several values of τc, the process illustrated in b can be repeated several times to simulate the corresponding change in κ². In Step 2 (sub-figure c), the arbitrary units of the simulated frames are scaled to represent realistic values of the photon current rate, Φ, in units of photons/second. In Step 3 (sub-figure d), an exposure time is introduced to the simulations by summing over frames; this process additionally converts the units of the simulations from photons/s to photons. Various values of T can be simulated from the same set of frames from Step 1; in this case, the simulation of two values of exposure time, T1 and T2, is shown. Multiplying the summed frames in units of photons by the quantum efficiency (QE) of the camera converts the units of the simulations to electrons (e−).
In Step 4 (sub-figure e), the detector effects are simulated by altering the simulated intensity statistics according to the specifications of real detectors. In Step 5 (sub-figures f and g), speckles are sampled over an area, or over the pixels of several repetitions of the simulations, to estimate a value of κ². The yellow dots represent κ² simulated for the τc determined in Step 1; the two values of T simulated in Step 3 are also shown. In the final step (Step 6, sub-figure h), the discrepancies in the exact form of the speckle autocorrelation decay between the solution of the CDE for a semi-infinite medium and the developed model are corrected for.

κ² is also a function of these parameters.

2.2. Speckle statistics detected by a two-dimensional detector array

We have simulated κ² for tissue with specific optical properties and blood flow by simulating consecutive frames of correlated speckles whose electric-field autocorrelation decays with a decorrelation time, τc, defined by the solution of the CDE for a semi-infinite medium [10]. The methodology presented is independent of this solution, and other solutions (layered, heterogeneous, numerical) of the CDE could be utilized. For clarity, electric-field autocorrelation curves following the solution of the CDE will be referred to as ĝ1, while the simulated electric-field autocorrelation curves are referred to as g1. While the two are similar, there are slight differences, which are discussed below. Furthermore, the theoretical value of κ² derived from the CDE will be referred to as κ̂², while the simulated values will be referred to as κ².

In the first step of the simulation pipeline (Figure 1b), τc is derived from ĝ1. The derived value of τc is used to simulate frames of individual speckles by modifying the copula method developed in Ref. [42]. This method simulates consecutive two-dimensional matrices of correlated random numbers; each individual matrix can be considered as a camera frame acquired in a speckle contrast experiment. These matrices are referred to as "frames" over pixel coordinates (i, j), imaging speckles with diameter Ø. Ø behaves as a scaling factor that sets physical units for the pixel size, since the speckle diameter is approximately equal to the wavelength of the light being used. Therefore, choosing Ø to be equal to three pixels for a system modeling λ = 785 nm scales the width of each pixel to be approximately 262 nm.

The autocorrelation, g1, of the first frame, k = 1, to the k-th frame is given by a decay of the form

$$g_1(k) = \exp\!\left(-\frac{k\,\Delta t}{\tau_c}\right), \qquad (1)$$

where k is the frame number and the decay parameter is related to the decorrelation of the frames; in our adaptation, this parameter is defined as a function of τc (Eq. (2)). Each individual simulation of g1, consisting of k = K frames of speckle patterns, constitutes an experiment, defined by E. This process, together with the notation, is illustrated in Figure 2. The simulations are in arbitrary copula units. In addition, the frames depend only on τc, and every simulated frame represents a point on the g1 curve with a finite time-bin width, Δt.
Since each frame has a defined Δt and is simulated over an array of pixels, the complete notation is $_{\sim}^{c}F(i,j)_k$. In this notation, the pre-superscript indicates the units of the simulated frame; here, c refers to the arbitrary "copula" units. The pre-subscript, ∼, indicates that no effect of detector noise has been included in the simulated frame. The indices i, j, and k refer to the pixel and frame. In order to convert $_{\sim}^{c}F(i,j)_k$ to physical units, the arbitrary copula units must be scaled to a photon current rate that can be estimated theoretically or experimentally.

Figure 2. Illustration of how frames with a defined τc are simulated. First, individual speckles are simulated on a grid of pixels. These individual frames are correlated to each other, and their electric-field autocorrelation, g1, decays according to the τc defined from semi-infinite theory (Figure 1). One full simulation of a theoretical g1 curve, consisting of K frames, corresponds to one experiment, E. This process is repeated several times, resulting in several simulations of g1.

According to photon diffusion theory, in a semi-infinite geometry, the measured photon current rate, Φ(ρ), in units of photons/second, decreases with the source-detector separation ρ. Next, an exposure time, T, is introduced by adding N = T/Δt consecutive frames:

$$F(i,j)_T = \sum_{k=1}^{N} F(i,j)_k.$$

Note that with the introduction of the exposure time, the simulated frames drop their indexing over k. Finally, the simulated frames are converted from photons to electrons,

$$^{e}F(i,j)_T = \mathrm{QE} \times {}^{p}F(i,j)_T, \qquad (6)$$

where QE is the quantum efficiency of the camera. Table 1 summarizes the notation introduced to refer to the simulated frames.

The final step before using the simulations to calculate κ² is to simulate the effects of the main types of detector noise on the simulated frames previously described, namely photon shot noise, dark signal non-uniformity (DSNU), dark current shot noise, and read-out noise [44,45]. This step is illustrated in Fig. 1e. To simulate detector noise, the distribution of each type of noise is considered, and random numbers are generated following that distribution. The notation used to describe the generation of random numbers and their distributions is shown in Eq. 7:

$$x = D(x;\, \mu,\, \sigma^2), \qquad (7)$$

where x is the random number generated, representing a certain intensity (in e−) at pixel (i, j), drawn from a distribution D with mean μ and variance σ².

Photon shot noise is a Poisson-distributed noise source [44,46]. Using the notation in Eq. 7, the contribution of photon shot noise at each pixel (i, j) is described as

$$F_{\mathrm{shot}}(i,j) = P\!\left(x;\, {}^{e}F(i,j),\, {}^{e}F(i,j)\right), \qquad (8)$$

where we have applied the defining property of a Poisson distribution, μ(x) = σ²(x); in this case, μ(x) is the measured intensity in e− (Eq. 6). We have also included the dark signal non-uniformity, which is simulated with a logistic distribution,

$$F_{\mathrm{DSNU}}(i,j) = L\!\left(x;\, \mu(\mathrm{DSNU}),\, \sigma^2(\mathrm{DSNU})\right), \qquad (9)$$

where μ(DSNU) and σ²(DSNU) are the mean and variance of the DSNU specific to each detector. Their values can typically be found in camera specification sheets. The variance of a logistic distribution is given by σ² = (s²π²)/3, where s is the shape parameter of the logistic distribution.
The dark shot noise, similar to the photon shot noise (Eq. 8), is simulated by applying Poisson-distributed random numbers [44] to each pixel simulated in Eq. 9:

$$F_{\mathrm{dark\,shot}}(i,j) = P\!\left(x;\, F_{\mathrm{DSNU}}(i,j),\, F_{\mathrm{DSNU}}(i,j)\right).$$

Finally, read-out noise is simulated by assuming that it is a normally distributed noise source [47]. Read-out noise in CMOS cameras is added at each pixel and is independent of the dark noise and the detected signal. Therefore, the contribution of the read-out signal at each pixel (i, j) is simulated as

$$F_{\mathrm{RO}}(i,j) = N\!\left(x;\, \mu(\mathrm{RO}),\, \sigma^2(\mathrm{RO})\right),$$

where the mean and variance of the read-out signal, μ(RO) and σ²(RO), are specific to each detector and can be found in specification sheets or estimated from online camera simulators. The total dark frame is then given by the sum of the DSNU, dark shot noise, and read-out contributions. Putting everything together, the frames with shot noise, DSNU, dark shot noise, and read-out noise added are denoted with primes; for example: • F′′′: shot noise and dark frame added, dark frame and shot noise corrected. The definitions and notation for simulating detector noise are summarized in Table 2.

Table 2. Definitions of the noise sources included in the simulations, along with their corresponding distributions. The notation D(x; μ, σ²) is used to define random numbers, x, originating from a distribution, D, with a mean value of μ and variance σ². † denotes parameters that can be found in camera specification sheets.

The final steps of the simulation pipeline require the calculation of κ² using the frames that have been simulated. In the first step, κ² is directly calculated from the simulated frames. Although the result is affected by the model used for g1, the noise is well described using the simplified single-exponential model. The definitions and notation related to κ² are summarized in Table 3. The following sections describe their calculation.

Table 3. Definitions for κ². Three different variations of κ² are calculated: first, κ̂², calculated directly from the integration of the double-exponential ĝ1 from the CDE; second, κ², calculated directly from the simulated frames, whose g1 follows a single-exponential form (outlined in Section 2.7); and third, κ²′, in which the model differences due to the differences in g1 are corrected for semi-infinite theory (Eq. 20; outlined in Section 2.8). Moreover, κ² and κ²′ can be calculated either spatially or temporally.

Model-uncorrected speckle contrast

So far, the process for simulating the detection of speckle statistics on a 2D detector array and the detector properties (Fig. 1b to e) has been described. These steps can be repeated in order to simulate several experiments (E, Fig. 2) for several different values of τc (and therefore κ²), for calculating κ² in the temporal domain over E, or for determining σ(κ²). The next step in the pipeline is to use these frames to calculate values of κ² (Fig. 1f and g). As mentioned previously, κ² can be measured spatially or temporally, i.e., the speckle statistics can be determined spatially by using an area, A, of pixels, or temporally over the pixels in a set of experiments, E. Spatial κ² is given by

$$\kappa_s^2 = \frac{\sigma_A^2(I)}{\langle I \rangle_A^2},$$

the variance of the intensity over the pixel area divided by the square of its mean.
After the dark frame offset is corrected, the additional variance due to shot noise (σ²_shot) and the dark frame (dark and read-out noise, σ²_dark) is corrected by subtracting their respective variances from the signal variance. Variations in the noise correction can also be simulated; for example, the frames with only shot noise added can be corrected analogously, in which case only the shot-noise variance is subtracted from the signal variance.

Since ĝ₁ and g₁ differ, the values of K² derived from the two will differ. In particular, ĝ₁ describes a measurement in a semi-infinite medium and a multiple-scattering (diffuse) regime. Since ĝ₁ is a more realistic solution to the CDE, rather than working with K² derived from g₁, we introduce another variable that corrects K² to the semi-infinite geometry. Finally, K²′ values are generated by drawing normally distributed random numbers with mean equal to K̂² plus the bias term and variance equal to σ²(K²).

2.9. Using the simulations to evaluate system performance

The speckle contrast noise model was further used to design a speckle contrast system and define the required detected electron count rate (e−/pixel/second) in order to accurately measure blood flow in the adult human brain. A CMOS camera by Basler (daA1920-160um, Basler AG, Ahrensburg, Germany) was considered and simulated due to its light weight (15 g), compact size (19.9 mm × 29.3 mm × 29 mm), and low price (<300 €). Measurements were chosen to be taken at a ρ of 2.5 cm and a T of 5 ms.

The required detected electron count rate to accurately measure K² was determined using an 800 µm core multi-mode fiber (0.22 NA). The imaged speckles had a size of Ø = 5 pixels. The value of β of the system was previously determined to be approximately 0.2. Speckle contrast data were acquired over 600 frames, and data were analyzed using an ROI of approximately 1100 pixels. As in setup (A), used to validate the simulations, the input dynamics of the simulations were obtained from measurements recorded using a standard DCS device (Table 4); the simulations used the g₁ curve recorded using DCS (Figure 3 a). β was simulated to be 0.2 and Ø was set to 4 pixels to agree with the values of β and Ø of the experimental data. Both experimental and simulation results were obtained for exposure times ranging between 0.1 ms and 5 ms in order to cover a range of detected electron intensities. It was ensured that the average value of the simulated detected electron intensity matched the experimental data (Figure 3 b).
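The noise corrections described above can be sketched as follows. This is our reading of the procedure, assuming intensities in electrons so that the shot-noise variance equals the mean, and using a stack of measured dark frames for the dark-offset and dark-variance terms; names and structure are ours.

```python
import numpy as np

def corrected_contrast_sq(roi, dark_frames):
    """Noise-corrected speckle contrast squared over an ROI:
    subtract the shot-noise and dark-frame variances from the signal
    variance before normalising by the squared mean intensity."""
    signal = roi - dark_frames.mean(axis=0)     # dark-offset correction
    var_signal = signal.var(ddof=1)
    var_shot = signal.mean()                    # Poisson: var = mean (in e-)
    var_dark = dark_frames.var(ddof=1)          # dark + read-out variance
    return (var_signal - var_shot - var_dark) / signal.mean() ** 2
```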
The resulting experimental and simulated standard deviations of K² are shown in Figure 3 c, and the calculated signal-to-noise ratio of K² in Figure 3 d. The optical properties listed in Table 5 were simulated; these values were chosen as they are roughly the expected values when measuring in human tissue. g₁ was simulated for ρ ranging from 0.5 to 4.5 cm at T = 5 ms. Ø was chosen to equal three pixels in order to meet the requirements of the system design.

As shown in Figure 5 a, at T = 0.1 ms the majority of the detected electron signal beyond ρ = 2 cm originates from the detector rather than from speckles. Therefore, without applying corrections, any value of K² in this regime is not a reflection of speckle contrast; rather, it reflects a "detector signal" contrast. The bias term (Eq. 19) is shown in Fig. 5 c and f and in Fig. 6 a and d.

The accuracy of K²′ is shown in Fig. 6 b and e, reflected as the percent error. The percent error increases (accuracy decreases) with increasing ρ, reaching 5% at approximately 1.8 cm for short T (Fig. 6 b) and 2.5 cm for long T (Fig. 6 e). Similarly, the precision of K²′ must be considered in order to be able to sample at fast enough acquisition rates while also maximizing the number of detected photons (Figure 5 d).

In speckle contrast optical tomography (SCOT) or speckle contrast diffuse correlation tomography, the analysis here corresponded to the sampling of 1100 independent speckles. In Figure 7, the number of sampled speckles was changed to simulate the effects of the number of independently sampled speckles on the CV of K²′. As expected, in Fig. 7 the CV decreases as more independent speckles are sampled; on a 2048×2048 pixel array this can be achieved by choosing a larger region of pixels. As observed in Fig. 6 b and e, accuracy was seen to be higher at shorter ρ and longer T.

We have further demonstrated in detail (without experimental comparison) the entire simulation pipeline. Finally, in the following section we will demonstrate how these simulations can be used to design and optimize a speckle contrast system.

Speckles were simulated using the parameters specified in Table 6. These parameters were derived from the experimental results (β and Ø), properties of the camera defined by the manufacturer, as well as the data analysis (ROI). The resulting experimental and simulated percent error in K² for varying detected electron count rates is shown in Fig. 9. The experimental and simulated results are in good agreement with each other and suggest that, for the chosen detector, a minimum detected count rate on the order of 4 to 5×10⁴ e−/pixel/second allows us to calculate K² with approximately 5% error.
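As a small illustration of the precision metric used here, the coefficient of variation of repeated K² estimates can be computed as below; for N independently sampled speckles it is expected to shrink roughly as 1/√N, consistent with the trend in Fig. 7. The helper name is ours.

```python
import numpy as np

def contrast_cv(k2_estimates):
    """Coefficient of variation of repeated K^2 estimates; scales
    roughly as 1/sqrt(N) with N independently sampled speckles."""
    k2 = np.asarray(k2_estimates)
    return k2.std(ddof=1) / k2.mean()

# e.g. an ROI sampling 4x more independent speckles should roughly
# halve contrast_cv(...) relative to the smaller ROI.
```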
Using the derived acceptable minimum detected count rate as a guide in determining the accuracy of the raw data signal, the same device was placed on a human subject's forehead using a ρ of 2.53 cm and a T of 5 ms. Data were acquired at a frame rate of 100 fps.

Table 6. Parameters that were used to simulate synthetic speckles based on experimental data taken using a Basler (daA1920-160um) CMOS camera on a liquid phantom.

We have shown that the simulations accurately represent experimentally observed behavior and can account for parameters such as the speckle-to-pixel size and β. In the present work we have introduced a method for simulating the formation and detection of dynamic speckle patterns. The main application that we have focused on was the design and characterization of a speckle contrast system capable of measuring adult human cerebral blood flow non-invasively. To this end, the simulation method was validated on a dynamic liquid phantom, the details of the speckle contrast signal as a function of ρ and T were studied, and finally a system designed for human cerebral blood flow was characterized and validated on an adult human subject. The simulation method has been shown to be useful for identifying the lower bounds of detected electron count rate needed to achieve the desired accuracy and precision of the speckle contrast signal. As the speckle contrast signal is sensitive to detector noise effects at low detected electron count rates, characterizing these limits is advisable when developing any speckle contrast system.
5,549.6
2023-08-04T00:00:00.000
[ "Physics" ]
Differentiable Programming of Reaction-Diffusion Patterns Reaction-Diffusion (RD) systems provide a computational framework that governs many pattern formation processes in nature. Current RD system design practices boil down to trial-and-error parameter search. We propose a differentiable optimization method for learning the RD system parameters to perform example-based texture synthesis on a 2D plane. We do this by representing the RD system as a variant of Neural Cellular Automata and using task-specific differentiable loss functions. RD systems generated by our method exhibit robust, non-trivial 'life-like' behavior. Introduction Multicellular organisms build and maintain their body structures through the distributed process of local interactions among tiny cells working toward a common global goal. This approach, often called self-organisation, is drastically different from the way most human technologies work. We are just beginning to scratch the surface of integrating some of nature's "best practices" into technology. In 1952, Alan Turing wrote his seminal work, "The Chemical Basis of Morphogenesis" (Turing, 1952), in which he proposed that pattern formation in living organisms can be controlled by systems of chemical substances called morphogens. Simultaneous processes of chemical Reaction-Diffusion (RD) of these substances provide a sufficient biocomputation platform to execute distributed algorithms of pattern formation (Kondo and Miura, 2010; Landge et al., 2019). Even a system of just two interacting substances (e.g. Gray-Scott) can produce a great diversity of patterns and interesting behaviours (Pearson, 1993). As is often the case with complex emergent behaviours of simple systems, it is very difficult to find model parameters that produce a particular, predefined behaviour. Most of the time researchers use hand-tuned reaction rules, or random and grid search over the parameter space for the combinations with the desired properties. Procedural texture synthesis is one of the best known applications for this type of parameter tuning (Witkin and Kass, 1991; Turk, 1991). In this paper, we propose to use differentiable optimization as an alternative to such a trial-and-error process. The task of determining the sets of parameters that lead to desired behaviors becomes even more important as we enter the realm of artificial life and synthetic biology. The work of Scalise and Schulman (2014) is a remarkable example of an attempt to design a flexible modular framework for RD-based spatial programming. The authors demonstrate (at least in simulation) a way to combine a set of basic computational primitives, implemented as RD systems of DNA strands, into multistage programs that generate non-trivial spatial structures. We argue that these programs cannot yet be called "self-organizing systems" due to two important limitations. First, they rely on a predefined initial state that defines the global coordinate system on which the program operates through chemical gradients or precise locations of chemical "seeds". Second, program execution is a feedforward process that does not imply homeostatic feedback loops that make the patterns robust to external perturbations or a changed initial state. Another very related line of research in artificial life is Lenia (Chan, 2020), which aims to find rules and configurations for space-time-state-continuous cellular automata that demonstrate life-like homeostatic and replicating behaviours.
Figures in video form and a reference implementation are available here¹.

Differentiable Reaction-Diffusion Model

We study a computational model that can be defined by the following system of PDEs:

∂x_i/∂t = c_i ∇²x_i + f_θ(x)_i,  i = 0, …, n−1,

where x₀, …, x_{n−1} are scalar fields representing the "concentrations" of n "chemicals" on a 2D plane, c_i are per-"chemical" diffusion coefficients, and f_θ : Rⁿ → Rⁿ is a function defining the rate of change of "chemical concentrations" due to local "reactions". Chemical terms are used here in quotes because, in the current version of our model, the function f_θ need not obey any actual physical laws, such as the law of conservation of mass or the law of mass action. "Concentrations" can also become negative.

Reaction-Diffusion as a Neural CA

The objective of this paper is to train a model that can be represented by a space-time continuous PDE system. Yet, we have to discretize the model to run it on a digital computer. The discretized model can be treated as a case of the Neural Cellular Automata (NCA) model (Randazzo et al., 2020). Our model and the training procedure are heavily inspired by the texture-synthesizing Neural CA described by Niklasson et al. (2021). Similarly, we discretize space into cells and time into steps, use explicit Euler integration of the system dynamics, and use backpropagation-through-time to optimize the model parameters. Our model differs from the previous texture-synthesizing NCA:

• CA iteration does not have the perception phase. The "reaction" part of the cell update (modelled by f_θ) depends on the current state of the cell only.

• There is an isotropic diffusion process that runs in parallel with the "reaction" and is modelled by channelwise convolution with a Laplacian kernel. This is the only method of communication between cells. Thus, the Neural RD model is completely isotropic: in addition to translation, the system behaviour is invariant to rotational and mirroring transformations.

• We do not use stochastic updates; all cells are updated at every step.

Thus, the system described here can be seen as a stripped version of the Neural CA model. These restrictions are motivated by a number of practical and philosophical arguments described below.

Model simplicity and prospects of physical implementation

The discussion section of the "Growing NCA" article by Mordvintsev et al. (2020) contains speculations about the potential of a physical implementation of the Neural CA as a grid of tiny independent computers (microcontrollers). Neural CA consists of a grid of discrete cells that are sophisticated enough to persist and update individual state vectors, and to communicate within the neighborhood in a way that differentiates between neighbours adjacent to different sides of the cell. This implies the existence of a global alignment between cells, so that they agree where up and left are, and separate communication channels to send information in different directions. In contrast, diffusion-based communication doesn't require any dedicated structures, and "just happens" in nature due to the Brownian motion of molecules. Furthermore, states of Neural CA cells are only modified by their learned update rules and are not affected by any environmental processes. Cells have a clear separation between what's inside and outside of them, and can control which signals should be let through. However, many natural phenomena of self-organisation can be effectively described as a PDE system on a continuous domain. Individual elements that constitute the domain are considered negligibly small, and the notion of their individual updates is meaningless.
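To fix ideas before the update rule is formally introduced in the next subsection, here is a minimal numpy sketch of one synchronous step of the discretized model: a purely local two-layer "reaction" network plus channelwise diffusion through a 3×3 Laplacian convolution with wrap-around boundaries. All names are ours, and the stencil weights are one common choice, not necessarily the reference implementation's.

```python
import numpy as np
from scipy.ndimage import convolve

LAP = np.array([[0.25, 0.5, 0.25],
                [0.5, -3.0, 0.5],
                [0.25, 0.5, 0.25]])  # a 3x3 Laplacian stencil

def f_theta(x_flat, W0, b0, W1):
    # two-layer reaction net, act(x) = x * sigmoid(5x) (Swish variant)
    h = x_flat @ W0 + b0
    h = h / (1.0 + np.exp(-5.0 * h))
    return h @ W1

def rd_step(x, c, params, r=1.0, d=1.0):
    """One synchronous update of an (n, H, W) RD grid.
    Reaction acts per cell; diffusion is a channelwise convolution."""
    n, H, W = x.shape
    diff = np.stack([convolve(ch, LAP, mode='wrap') for ch in x])
    react = f_theta(x.reshape(n, -1).T, *params).T.reshape(n, H, W)
    return x + r * react + d * c[:, None, None] * diff
```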
In the experiments section we cover some practical advantages of the proposed RD model with respect to Neural CA. In particular, we demonstrate the generalization to arbitrary mesh surfaces and even volumes.

Reaction-Diffusion CA update rule

The discrete update rule can be written as follows:

x ← x + r · f_θ(x) + d · c_i · (K_lap ∗ x),

where K_lap is a 3×3 Laplacian convolution kernel, c_i and θ are parameters that control the CA behaviour, and the coefficients r and d control the rates of reaction and diffusion, encapsulating the temporal and spatial discretization step sizes Δ_t and Δ_h. By varying these coefficients we can validate whether the learned discrete CA rule approximates the continuous PDE and does not over-fit to the particular discretization. During training we use r = d = 1.0. The function f_θ(x) = act(xW₀ + b₀)W₁ is a small two-layer neural network with parameters θ : (W₀ ∈ R^{n×h}, W₁ ∈ R^{h×n}, b₀ ∈ R^h) and a non-linear elementwise activation function act (see the experiments section). In our experiments we use n = 32, h = 128, so the system models the dynamics of 32 "chemicals", and the total number of network parameters equals 8320. Per-"chemical" diffusion coefficients c_i can be learned (in this case we set c_i = sigmoid(ĉ_i) to make sure that the diffusion rate stays in the 0..1 range), or fixed to specific values.

Texture synthesis

Reaction-Diffusion models are a well-known tool for texture synthesis. Typically, manual parameter tuning has been used to design RD systems that generate desired visual patterns (Witkin and Kass, 1991; Turk, 1991). We propose an example-based training procedure to learn an RD rule for a given example texture image. This procedure closely follows the work "Self-Organising Textures" (SOT) (Niklasson et al., 2021), with a few important modifications. Our goal is to learn an RD update function whose continuous application from a starting state would produce a pattern similar to the provided texture sample. The procedure is summarised in algorithm 1.

Seed states

SOT uses seed states filled with zero values; its stochastic cell updates provide sufficient variance to break the symmetry between cells. We use a synchronous explicit Euler integration scheme that updates all cells simultaneously, so non-uniformity of the seed states is required to break the symmetry. We initialize the grid with a number of sparsely scattered Gaussian blobs (fig. 1).

Figure 1: Random seed states.

Periodic injection of seed states into training batches is crucial to prevent the model from forgetting how to develop the pattern from the seed, rather than only improving an already existing one. We observed that it is sufficient to inject the seed much less often than in SOT. We use R_seed = 32 in this experiment.

Rotation-invariant texture loss

Similar to SOT, we interpret the first 3 "chemicals" as RGB colors and use them to calculate the texture loss. Our texture loss is also based on matching Gram matrices of pre-trained VGG network activations (Gatys et al., 2015), but has an important modification to account for the rotational invariance of isotropic RD. Consider the texture lined 0118 from figure 2. The target pattern is anisotropic, but RD, unlike NCA, has no directional information and cannot produce a texture that exactly matches the target orientation. We construct the rotation-invariant texture loss by uniformly sampling N_rot target images rotated by angles 0°...360° and computing the corresponding texture descriptors. This computation only occurs in the initialization phase and does not slow down the training. At each loss evaluation, the texture descriptors from RD are matched with all target orientations and the best match is taken for each sample in the batch; a sketch follows below.
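A sketch of this best-orientation matching is given below. The VGG feature extraction and image rotation are abstracted behind the gram_fn and rotate_fn callables, since wiring up the full Gram-matrix descriptor would exceed a sketch; the structure (pre-computing N_rot target descriptors once, then taking the best match per sample) follows the description above.

```python
import numpy as np

def precompute_target_descriptors(target_img, n_rot, rotate_fn, gram_fn):
    """Texture descriptors (lists of Gram matrices) of the target image
    at n_rot orientations; computed once at initialization."""
    angles = np.linspace(0.0, 360.0, n_rot, endpoint=False)
    return [gram_fn(rotate_fn(target_img, a)) for a in angles]

def rotation_invariant_loss(batch_rgb, target_descriptors, gram_fn):
    """For each batch sample, match against all target orientations
    and keep the best (smallest) Gram-matrix distance."""
    total = 0.0
    for img in batch_rgb:
        desc = gram_fn(img)
        total += min(
            sum(float(np.sum((g - t) ** 2)) for g, t in zip(desc, targ))
            for targ in target_descriptors
        )
    return total / len(batch_rgb)
```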
Per-"chemical" diffusion coefficients in Eq. (1) are set to c_{0..7} = 1/8, c_{8..15} = 1/4, c_{16..23} = 1/2, c_{24..31} = 1, so that the "substances" are split into 4 groups of varying diffusivity. Therefore, RGB colors correspond to the slowly diffusing channels x_{0..2}. We experimented with learned diffusion coefficients, but it did not seem to bring substantial improvement, so we kept fixed values for simplicity. In all of the texture synthesis experiments we use wrap-around (torus) grid boundary conditions. The RD network uses a variant of the Swish (Ramachandran et al., 2017) elementwise activation function: act(x) = xσ(5.0 * x), where σ is a sigmoid function. We trained seven texture-synthesizing RD models (fig. 2). Six used 128×128 images from the DTD dataset (Cimpoi et al., 2014), and the last used a 48×48 lizard emoji image, replicated four times. Models were trained for 20000 steps using the Adam optimiser (Kingma and Ba, 2015) with learning rate 1e-3, decayed by 0.2 at steps 1000 and 10000. We also used the gradient normalization trick from SOT to improve training stability. Training a single RD model took about 1 hour on an NVIDIA P100 GPU.

Results

In spite of the constrained computational model, trained RD systems were capable of reproducing (although imperfectly) distinctive features of the target textures. All models except chequered 0121 seemed to be isotropic, which manifested in the random orientation of the resulting patterns, depending on the randomized seed state. chequered 0121 always produced diamond-oriented squares, which suggests overfitting to the particular discrete grid. Below we investigate this behaviour more carefully. Witkin and Kass (1991) proposed using anisotropic diffusion for anisotropic texture generation with RD. In our experiments, we demonstrated the capability of fully isotropic RD systems to produce locally anisotropic textures through a process of iterative alignment that looks surprisingly "life-like". Figure 3 shows snapshots of grid states at different stages of pattern evolution. We recommend watching the supplementary videos to get a better intuition of RD system behaviours.

Do we really learn a PDE? We decided to validate that the discrete Neural CA we use to simulate the RD system is robust enough to be used with a grid resolution different from that used during training. This would confirm that the system we trained really approximates a space-time continuous PDE without overfitting to the particular grid discretization.

Figure 2: Left column: sample texture images; other columns: different 5000-step runs of the learned RD models. Almost all models replicate features of the target texture in a rotation-invariant fashion. Only chequered 0121 overfitted to exploit the underlying raster grid structure to always produce diamond-oriented checker squares.

Figure 4: Running the RD system on a non-uniform r grid. All models except the rightmost seem to be capable of operating on a modified grid, preserving the key pattern characteristics. chequered 0121 overfitted to the particular grid resolution and is unable to produce right corners at larger scales. Sometimes it even develops instabilities and explodes.
One way to execute the RD on a finer grid is to decrease the Δ_h coefficient in (1). This leads to quick growth of the diffusion term, making the simulation unstable. We can mitigate this by reducing Δ_t as well. In practice we keep d = 1.0 but decrease r, so that Δ_h = √r. Thus, setting r = 1/4 corresponds to running on a twice finer grid, and should produce patterns magnified two times. A decrease of r may also be interpreted as a reaction speed slowdown or a diffusion acceleration. Fig. 4 shows results of RD system evaluation on a grid having non-uniform r: in the center r = 1, slowly decreasing to r ≈ 1/9 at the boundary. Most of the trained models were capable of preserving their behaviour independent of the grid resolution. The chequered 0121 model was the only exception. The grid-overfitting hypothesis was confirmed by the fact that the model could not form right angles at the grid corners, and even developed instabilities in the fine-resolution grid areas.

Figure 5: RD texture models can easily be applied to mesh surfaces. We treat each vertex of the Stanford Bunny model as a cell, and allow the associated state vectors to diffuse along the mesh edges. This enables consistent texturing of dense meshes without constructing UV-maps or tangent coordinate frames. Please see the supplementary videos for the system dynamics and more views.

Generalization beyond the 2D plane

An RD system can be applied to any manifold that has a diffusion operation defined on it. This enables much more extreme out-of-training generalization scenarios than those possible for Neural CA. For example, applying the texture synthesis method by Niklasson et al. (2021) to a 3D mesh surface would require defining smooth local tangent coordinate systems for all surface cells (for example, located at mesh vertices). This is necessary to compute partial derivatives of cell states with some variant of generalized Sobel filters. In contrast, Neural RD doesn't require tangents, and can be applied to an arbitrary mesh by state diffusion over the edges of the mesh graph (see fig. 5). Even more surprisingly, RD models that were trained to produce patterns on a 2D plane can be applied to spaces of higher dimensionality by just replacing the diffusion operator with a 3D equivalent. Figure 6 shows examples of volumetric texture synthesis by 2D models.

Figure 6: 2D to 3D generalization of Reaction-Diffusion models. The models were trained on a 2D plane with a 2D image loss and executed in a 3D space. For some patterns, individual slices through the volume have textures similar to the target images, while for others (e.g. the last row) the similarity is less convincing. We treat white color as transparent to visualize the structure of the lizards and polka-dotted patterns. Please see supplementary videos for model dynamics.

Discussion

Reaction-Diffusion is known to be an important mechanism controlling many developmental processes in nature (Kondo and Miura, 2010; Landge et al., 2019). We think that mastering RD-system engineering is an important prerequisite for making human technology more life-like: robust, flexible, and sustainable. This work demonstrates the applicability of Differentiable Programming to the design of RD systems. We think this is an important stepping stone to transform RD into a practical engineering tool. To achieve this, some limitations should be addressed in future work.
First, it is crucial to find such optimization problem formulations that would produce physically plausible RD systems. Second, further research in the area of differentiable objective formulations is needed to make this approach applicable to a broader range of design problems for self-organizing systems.
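To illustrate the mesh generalization discussed above, replacing the image Laplacian with a graph Laplacian over the mesh edges, a minimal scipy sketch might look as follows. The construction assumes a list of unique undirected edges and is our reading of the description, not the reference code.

```python
import numpy as np
import scipy.sparse as sp

def graph_laplacian(n_verts, edges):
    """Combinatorial Laplacian L = A - D for an undirected mesh graph,
    so that (L @ x)[i] = sum over neighbours j of (x[j] - x[i])."""
    i, j = np.asarray(edges).T
    rows = np.concatenate([i, j])
    cols = np.concatenate([j, i])
    A = sp.coo_matrix((np.ones(rows.size), (rows, cols)),
                      shape=(n_verts, n_verts)).tocsr()
    D = sp.diags(np.asarray(A.sum(axis=1)).ravel())
    return (A - D).tocsr()

def rd_step_mesh(x, c, L, reaction, r=1.0, d=1.0):
    """x: (n_verts, n) vertex states; diffusion along mesh edges via L,
    with the same per-vertex reaction as in the 2D grid case."""
    return x + r * reaction(x) + d * c[None, :] * (L @ x)
```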
3,892
2021-06-22T00:00:00.000
[ "Computer Science" ]
Monte Carlo calculation of beam‐quality correction for solid‐state detectors and phantom scatter correction at 137Cs energy Beam quality correction k_{Q,Q0}(r), which reflects the absorbed-dose energy dependence of the detector, is calculated for the solid-state detector materials diamond, LiF, Li2B4O7, and Al2O3 for the 137Cs RTR brachytherapy source using the Monte Carlo-based EGSnrc code system. The study also includes calculation of detector-specific phantom scatter corrections k_phan(r) for solid phantoms such as PMMA, polystyrene, RW1, solid water, virtual water, and plastic water. The above corrections are calculated as a function of distance r along the transverse axis of the source. k_{Q,Q0}(r) is about unity for the Li2B4O7 detector. The LiF detector shows a gradual decrease in k_{Q,Q0}(r) with r (the decrease is about 2% over the distance range of 1-15 cm). The diamond detector shows a gradual increase in k_{Q,Q0}(r) with r (about 3% larger than unity at 15 cm). In the case of the Al2O3 detector, k_{Q,Q0}(r) decreases steeply with r (by about 14% over the distance range of 1-15 cm). The study shows that some solid-state detectors demonstrate distance-dependent k_phan(r) values, but the degree of deviation from unity depends on the type of solid phantom and the detector. PACS numbers: 87.10.Rt, 87.53.Bn, 87.53.Jw, 87.56.Bg

I. INTRODUCTION

American Association of Physicists in Medicine (AAPM) Task Group reports AAPM TG43 (1) and TG43U1 (2) recommend water as the reference medium for dosimetry of interstitial brachytherapy sources. Due to high dose gradients near brachytherapy sources and the specification of dose parameters within a few centimeters of the source, the source-detector distance should be specified very accurately for dosimetric measurements. Precise positioning of detectors, reproducibility of source and detector positions in a reference liquid water medium, and waterproofing of detectors pose practical problems. Solid phantom materials can be easily machined to accommodate the source and detectors in a precise geometrical configuration, facilitating accurate measurement and reproducibility of the source-detector geometry. In a previously published article, relative absorbed-dose energy response corrections R for detector materials such as air, LiF, Li2B4O7, Si diode, diamond, and Al2O3 were presented for 169Yb and 125I brachytherapy sources. (3) The corrections were calculated using the EGSnrc-based (4) Monte Carlo code system for liquid water, PMMA, and polystyrene phantom materials. The present study is aimed at investigating the absorbed-dose energy dependence of solid-state detector materials such as diamond, LiF, Li2B4O7, and Al2O3 at the 137Cs energy. This investigation also includes calculation of detector-specific phantom scatter corrections for different solid phantoms such as PMMA, polystyrene, RW1, solid water, virtual water, and plastic water. The EGSnrc-based (4) user-codes DOSRZnrc and FLURZnrc (5) are used in the study.

A. RTR 137Cs source

The geometric details and material data of the RTR 137Cs source are from the published work. (6) The active length and active radius (the active material is gold) of the source are 1.5 cm and 0.04 cm, respectively. The outer radius of the source is 1.5 mm. For the Monte Carlo calculations, we have considered only the 662 keV gamma energy of the 137Cs emission, as a previously published study by Selvam et al. (7) demonstrated that 137Ba X-rays were not important.
B. Phantom materials

Elemental composition, mass fraction, mass density, <Z/A>, and effective atomic number (Z_eff) of water and the solid phantom materials are presented in Table 1. The atomic composition and density details of the phantoms are taken from the literature. (8-11) Z_eff values are calculated at 662 keV using the Auto-Z_eff software by Taylor et al. (12)

C. Theoretical background of measurement of absorbed dose to water at brachytherapy energies

C.1 Dose measurements in water phantom

The following discussion is based on the published study by Adolfsson et al. (13) Primary standards for absolute measurement of absorbed dose to water, D_w, are based on water calorimetry. (14) A 60Co or megavoltage (MV) photon beam serves as the reference beam quality Q_0 for this purpose. A dosimeter, for example an ionization chamber, calibrated to measure D_w at the primary or secondary standards can be used in another beam quality Q (for example, other clinical MV photon beams) by using the beam quality correction factor k_{Q,Q0}. (15,16) Other dosimeters, such as solid-state dosimeters, can therefore be calibrated to measure D_w at Q traceable to the primary standard. Note that k_{Q,Q0} may be calculated at a brachytherapy beam quality, Q, involving a solid-state detector. Consider a solid-state detector used for measuring D_w at Q_0; this quantity is denoted by D_{w,Q0}, and the output measured by the solid-state detector is denoted by M_{Q0}. An absorbed dose-to-water calibration coefficient can then be obtained as N_{D,w,Q0} = D_{w,Q0}/M_{Q0} (Eq. (1)). The absorbed dose to the material of the sensitive detector element at Q_0, D_{det,Q0}, and M_{Q0} are related through the intrinsic energy dependence of the detector (17-19) (Eqs. (2) and (3)).

Let us now consider a cylindrical photon-emitting brachytherapy source (beam quality Q; in this study 137Cs) immersed in a liquid water phantom. The absorbed dose to water in the liquid water phantom at r along the transverse axis of the source is denoted by D_{w,Q}(r), and the output measured by the detector at r is M_Q(r). As in Eq. (3), the absorbed dose to the detector at Q and M_Q(r) are related through the intrinsic energy dependence (Eq. (4)). D_{w,Q}(r) is then obtained from M_Q(r) and N_{D,w,Q0} via the beam quality correction k_{Q,Q0} (Eqs. (5)-(7)). Using Eqs. (3) and (4) in Eq. (7) expresses k_{Q,Q0} in terms of the relative intrinsic energy dependence and the relative absorbed-dose energy response correction. (3,17,18) As described in previously published work, (3,17-19) the absorbed-dose dependence at Q, f(Q), relates the absorbed dose to the medium of interest (usually water), D_{w,Q}, and the absorbed dose to the detector, D_{det,Q}, as f(Q) = D_{w,Q}/D_{det,Q} (Eq. (13)); similarly, at Q_0, f(Q_0) = D_{w,Q0}/D_{det,Q0} (Eq. (14)). Equation (12) can therefore be written in terms of the ratio f(Q)/f(Q_0) (Eq. (15)). Equation (10) has two components: (a) the relative intrinsic energy dependence of the detector, which can only be determined experimentally, and (b) the inverse of the relative absorbed-dose energy response correction.

Investigations of the photon energy dependence of LiF:Mg,Ti TLDs were published in the 1960s and 1970s, with a summary of the results presented by Budd et al. (20) Most of the studies measured an intrinsic energy dependence that was greater than unity for photon energies below about 150 keV, relative to TLDs that had been calibrated using 60Co photons. On average, the measured light output was about 10% higher than would be expected based solely on the absorbed-dose energy dependence. For a detailed discussion of the intrinsic energy dependence of TLDs, readers may consult the literature. (17)
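When the intrinsic energy dependence can be taken as constant, the formalism above reduces to simple ratios of Monte Carlo dose ratios. A small sketch with hypothetical numbers (not values from the paper):

```python
def f_factor(dose_water, dose_detector):
    """Absorbed-dose energy dependence f(Q) = D_w,Q / D_det,Q."""
    return dose_water / dose_detector

def beam_quality_correction(f_Q, f_Q0):
    """k_{Q,Q0} = f(Q) / f(Q0), assuming the relative intrinsic
    energy dependence of the detector is unity (constant yield)."""
    return f_Q / f_Q0

# Hypothetical Monte Carlo dose ratios, for illustration only:
f_Q = f_factor(1.000, 0.915)    # 137Cs at distance r, in water
f_Q0 = f_factor(1.000, 0.940)   # 60Co reference beam
print(beam_quality_correction(f_Q, f_Q0))  # ~1.027
```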
As mentioned by Adolfsson et al., (13) when an ion chamber is used, the intrinsic energy dependence is given by the ratio W/W_0, where W is the mean energy imparted to air to form an ion pair in air at Q, and W_0 is the corresponding quantity at Q_0. The value of W is usually considered to be independent of the beam quality in MV photon and electron beams, but may take other values in beams of protons and heavier charged particles due to the increased ion density along the tracks of the heavy charged particles compared to that along electron tracks. (21) Note that if the yield of radiation-induced products in the detector is independent of the radiation beam quality (i.e., the yield is constant), then the intrinsic energy dependence is unity, and Eq. (9) reduces to a ratio of absorbed-dose energy dependences (Eq. (16)).

C.2 Brachytherapy dose measurements in a solid phantom

Generally, in brachytherapy, absorbed dose measurements involving solid-state detectors are carried out in solid phantoms. The absorbed dose to the detector at r in the solid phantom at Q is denoted by D_{det,Q}^{solid}(r); it is recalled that D_{det,Q}(r) is the absorbed dose to the detector at Q at r in the liquid water phantom. The two are related through the phantom scatter correction at beam quality Q, k_phan(r) (Eq. (17)), which accounts for the influence of the solid phantom on the response of the detector. Therefore, when measurements are carried out in solid phantoms at Q, in addition to the application of k_{Q,Q0} (Eq. (16)), the detector response is required to be corrected by k_phan(r) to account for the phantom scatter. The final expression for obtaining the absorbed dose to water in the liquid water phantom is given by Eq. (18), in which the input is the output measured by the solid-state detector at Q in the solid phantom at r.

D.1 FLURZnrc simulations of collision kerma and mean energies for the 137Cs RTR source

The approach adopted for the Monte Carlo calculations of the detector-to-water dose ratio is as described in the published study. (3) The source is positioned at the centre of 40 cm diameter by 40 cm height cylindrical phantoms (liquid water and solid phantoms). The photon fluence spectrum in 10 keV energy intervals is scored along the transverse axis of the source (r = 1-15 cm) in 2 mm high and 0.5 mm thick cylindrical shells. The fluence spectrum is converted to collision kerma to water and collision kerma to the detector materials by using the mass energy-absorption coefficients of water and the detector materials, respectively. (11)

D.2 Calculations of dose ratios at Q_0

In the published study, (3) it was demonstrated that the detector-to-water dose ratio calculated at the reference beam quality Q_0 (60Co beam) at 0.5 mm depth in a water phantom was independent of the detector thickness (0.1 mm-5 mm). In the present study, we calculated the above dose ratio for depths of 5 cm and 10 cm along the central axis of the water phantom, using detector dimensions of 5 mm radius × 1 mm thickness. In the Monte Carlo calculations, a parallel 60Co beam is incident on a 20 cm radius × 40 cm height cylindrical water phantom. The beam has a radius of 5.64 cm at the front face of the phantom (the field size is 100 cm²). A realistic 60Co spectrum from a telecobalt unit, distributed along with the EGSnrc code system, (4) is used in the calculations. This investigation produced dose ratios similar to those obtained at 5 mm depth, which suggests that the dose ratio is independent of depth in the water phantom. We also calculated the dose ratio at depths of 5 mm, 5 cm, and 10 cm in the PMMA phantom using detector dimensions of 5 mm radius × 1 mm thickness. The results obtained from the PMMA phantom compare well with the results of the water phantom.
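The fluence-to-collision-kerma conversion used in the FLURZnrc calculations amounts to weighting the scored spectrum by the mass energy-absorption coefficients. A minimal sketch with an illustrative three-bin spectrum (the numbers are approximate and for demonstration only):

```python
import numpy as np

def collision_kerma(fluence, energies, mu_en_rho):
    """Collision kerma from a scored photon fluence spectrum:
    K_col = sum over bins of Phi(E) * E * (mu_en/rho)(E),
    with fluence per bin, energies in MeV, and mu_en/rho in cm^2/g."""
    return np.sum(fluence * energies * mu_en_rho)

# Hypothetical 3-bin spectrum (illustrative values only):
phi = np.array([2.0e6, 1.5e6, 0.8e6])        # photons/cm^2 per bin
E = np.array([0.20, 0.40, 0.66])             # MeV
mu_en = np.array([0.0297, 0.0325, 0.0327])   # cm^2/g, water (approx.)
print(collision_kerma(phi, E, mu_en), "MeV/g")
```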
We have therefore used the values of the dose ratios published in the previous work (3) for deriving k_{Q,Q0}(r). The parameters ECUT and PCUT are the electron and photon transport cutoff energies, respectively; ESAVE is a parameter related to the range rejection technique.

Table 2 presents the values of E_fl as a function of r for the 137Cs RTR source in the various phantoms. As r increases, E_fl decreases, but the degree of decrease depends on the type of phantom. For phantoms such as water, virtual water, RW1, and solid water, E_fl decreases from about 565 keV to 260 keV when the distance is increased from 1 cm to 15 cm. In the case of the plastic water phantom, E_fl decreases from 570 keV to 285 keV over the above distance range. The values of E_fl at 15 cm are 228 keV and 239 keV, respectively, for the PMMA and polystyrene phantoms.

B. Phantom scatter correction

The investigation of phantom scatter also included water as a detector material. Values of k_phan(r) calculated for the phantoms polystyrene, PMMA, virtual water, RW1, solid water, and plastic water are presented in Figs. 1 to 6. A solid phantom may be termed water-equivalent when the value of k_phan(r) is unity. The investigation suggests that some solid-state detectors demonstrate distance-dependent k_phan(r) values, but the degree of dependence depends on the type of solid phantom and the type of detector. For example, phantoms such as RW1, virtual water, and solid water behave almost water-equivalently at all distances (1-15 cm) for all the investigated detectors (with a maximum deviation of about 2% from unity for the Al2O3 detector in the RW1 phantom). Polystyrene, virtual water, RW1, and solid water phantoms are water-equivalent for the diamond detector, as k_phan(r) is about unity, independent of distance (the maximum deviation is about 1% in the distance range of 1-15 cm, for the polystyrene phantom). For the phantoms PMMA and plastic water, however, k_phan(r) increases with r for the diamond detector; the value increases to 1.0607 in PMMA and 1.0212 in plastic water at 15 cm. For the LiF and Li2B4O7 detectors, virtual water, RW1, and solid water are water-equivalent (within 1%). Note that the Li2B4O7 detector behaves like a water detector at all distances for all the solid phantom materials investigated. For the Al2O3 detector, phantoms such as polystyrene, PMMA, and RW1 show a decrease in k_phan(r) with r; for example, the value decreases to 0.9075, 0.9697, and 0.9794 at 15 cm for the phantoms polystyrene, PMMA, and RW1, respectively. The degree of decrease is highest in the polystyrene phantom.

Figure 7 presents the values of k_{Q,Q0}(r) for the 137Cs RTR source obtained using Eq. (16); the numerical values of this figure are given in Table 3. For the Li2B4O7 detector, k_{Q,Q0}(r) is about unity and is independent of r. The LiF detector shows a gradual decrease in k_{Q,Q0}(r) with r; the decrease is 2% over the distance range of 1-15 cm. The diamond detector shows a gradual increase in k_{Q,Q0}(r) with r (about 3% larger than unity at 15 cm). For the Al2O3 detector, k_{Q,Q0}(r) decreases steeply with r (by about 14% over the distance range of 1-15 cm).

D. Influence of detector dimensions on detector response

Dimensions of TLD-100 (LiF:Mg,Ti) chips reported in the literature (22,23) are 3 × 3 × 0.9 mm³, 1 × 1 × 1 mm³, and 3.2 × 3.2 × 0.38 mm³. Carbon-doped cylindrical discs of Al2O3 detectors (4 mm diameter × 1 mm height) are used in radiotherapy photon beams. (24) Al2O3:C chips (2 mm long and 0.5 × 0.5 mm² in cross-sectional area) are used in 192Ir high-dose-rate dosimetry. (25)
The sensitive volume of the PTW diamond detector is a disk made from natural diamond (density 3.51 g/cm³) with a radius ranging from 1.0 to 2.2 mm and a thickness ranging from … (27) In order to quantify the influence of detector thicknesses on the calculated response, we adopted the approach applied in previously published work (3) due to limitations associated with the DOSRZnrc user-code. LiF, Li2B4O7, and Al2O3 detectors are modeled as cylindrical shells of thickness 1 mm and height 2 mm along the transverse axis of the source. The phantoms considered are water, polystyrene, and plastic water. Absorbed dose and collision kerma to these detectors are calculated at r = 1 and 15 cm. The DOSRZnrc-based collision kerma values are statistically identical to the FLURZnrc-based collision kerma values. This suggests that the detector dimensions do not affect the calculated values. In the case of the diamond detector, the calculations are carried out for 0.2 mm and 0.4 mm thicknesses separately (the height is 2 mm). DOSRZnrc calculations using these thicknesses show collision kerma values comparable to those obtained using the FLURZnrc user-code, whereas the absorbed dose calculated for the 0.2 mm thick diamond detector is smaller by about 1% when compared to the collision kerma. In the case of the 0.4 mm thick diamond detector, both collision kerma and absorbed dose are statistically identical.

IV. CONCLUSIONS

The absorbed-dose energy dependence of solid-state detector materials such as diamond, LiF, Li2B4O7, and Al2O3 for the 137Cs RTR brachytherapy source is studied using the Monte Carlo-based EGSnrc code system. The beam quality correction k_{Q,Q0}(r), which reflects the absorbed-dose energy dependence of the detector, shows a gradual decrease with r for the LiF detector (the decrease is about 2% over the distance range of 1-15 cm). The diamond detector shows a gradual increase in k_{Q,Q0}(r) with r (about 3% larger than unity at 15 cm). For the Al2O3 detector, k_{Q,Q0}(r) decreases steeply with r (by about 14% over the distance range of 1-15 cm). Li2B4O7 does not show energy dependence. The study shows that some solid-state detectors demonstrate distance-dependent k_phan(r) values, but the degree of dependence depends on the type of solid phantom and the detector.
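Putting the corrections of Section C together, a solid-phantom reading could be converted to absorbed dose to water roughly as follows. This is a sketch of Eq. (18) under our assumption that k_phan is defined as the detector response in the solid phantom relative to liquid water, so the reading is divided by it; all numerical values are hypothetical.

```python
def dose_to_water(M_solid, N_Dw, k_QQ0, k_phan):
    """Absorbed dose to water from a solid-phantom detector reading
    (sketch of Eq. (18)); k_phan assumed defined as the solid-phantom
    response relative to liquid water, hence the division."""
    return M_solid * N_Dw * k_QQ0 / k_phan

# Hypothetical values: Al2O3 detector at r = 15 cm in polystyrene
print(dose_to_water(M_solid=1.2e-3, N_Dw=1.0, k_QQ0=0.86, k_phan=0.9075))
```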
3,855.6
2014-01-01T00:00:00.000
[ "Physics", "Medicine" ]
Efficient nonlinear equalizer for intra-channel nonlinearity compensation for next generation agile and dynamically reconfigurable optical networks In this work, we propose and experimentally demonstrate a novel low-complexity technique for fiber nonlinearity compensation. We achieved a transmission distance of 2818 km for a 32-GBaud dual-polarization 16QAM signal. For efficient implementation, and to facilitate integration with conventional digital signal processing (DSP) approaches, we independently compensate fiber nonlinearities after linear impairment equalization. Therefore, this algorithm can be easily implemented in currently deployed transmission systems after conventional linear DSP. The proposed equalizer operates at one sample per symbol and requires only one computation step. The structure of the algorithm is based on a first-order perturbation model with quantized perturbation coefficients. Also, it does not require any prior calculation or detailed knowledge of the transmission system. We identified common symmetries between perturbation coefficients to avoid duplicate and unnecessary operations. In addition, we use only a few adaptive filter coefficients by grouping multiple nonlinear terms and dedicating only one adaptive nonlinear filter coefficient to each group. Finally, the complexity of the proposed algorithm is lower than that of previously studied nonlinear equalizers by more than one order of magnitude.

Introduction

To satisfy the ever-increasing capacity demand in optical fiber communications, both the spectral efficiency (SE) and the data rate carried by each wavelength-division-multiplexed channel have to increase. According to Shannon's theory of linear communication systems, the channel capacity is logarithmically proportional to the signal-to-noise ratio (SNR). Therefore, the capacity can be increased by increasing the signal power. However, because of fiber Kerr nonlinearities, there is an optimum launch power limit; further increases in input signal power beyond the optimal power levels degrade transmission performance. Consequently, fiber nonlinearities are the major remaining impairment for next generation coherent optical fiber communication systems, ultimately limiting the achievable transmission distance [1].

Following recent advances in high-speed digital signal processing (DSP) technology, along with the global adoption of coherent detection techniques, various intra-channel fiber nonlinearity compensation algorithms have been proposed [2]. For instance, digital backpropagation is an effective nonlinear compensation (NLC) technique, which has received considerable attention [3-5]. It normally requires multiple computation steps per fiber span and at least two samples per symbol, which leads to high complexity [6]. Consequently, its application is limited to transmission systems using offline signal processing.
There are also reports of nonlinear frequency-domain equalizers based on closed-form analytical approximations of the fiber third-order Volterra kernels. These techniques simultaneously compensate both nonlinear and linear impairments; however, their major drawback is their high complexity [7,8]. A Wiener-Hammerstein equalizer (for OFDM transmission) [9] and a modified nonlinear decision-feedback equalizer [10] were proposed as simpler alternatives. These algorithms adaptively compensate for all fiber impairments according to time-domain formulas in which L_n and NL_n are the linear and nonlinear equalizer memory lengths, respectively. Their nonlinear filter computational complexity grows rapidly with the number of adaptive nonlinear filter coefficients. Therefore, their performance was investigated only for single-polarization systems with inline dispersion compensation and a short channel nonlinear memory length (i.e., fiber with a small dispersion parameter) [9,10].

The perturbation-based nonlinear pre-compensation (or post-equalization) technique compensates the accumulated nonlinearities with only one computation step and can be implemented with one sample per symbol [11-13]. Typically, perturbation-based NLCs have lower computational complexity than blind nonlinear equalization algorithms [7-10]. However, perturbation-based NLC algorithms require prior knowledge of the fiber optic transmission system parameters in order to calculate the perturbation coefficients [11], and 50% chromatic dispersion (CD) pre-compensation at the transmitter for efficient implementation [12,13]. The development of wavelength-selective switching (WSS) technologies and flexible optical transceivers (with reconfigurable rates and modulation formats) is enabling the next generation of transparent optical networks. Remote reconfiguration of a mesh network (i.e., dynamic networking) and adaptation of the flexible optical transceivers provide optimal network utilization and agility [14,15]. Thus, many sets of perturbation coefficients (depending on the transceiver configuration, CD pre-compensation, and selected transmission route) would have to be stored in the line card memory. In addition, identification of the correct set of perturbation coefficients becomes a very difficult task once all possible network parameters are considered, and for certain scenarios the exact transmission route is unknown at the transmitter or receiver. Therefore, perturbation-based NLC cannot be easily deployed in dynamically reconfigurable meshed optical network architectures, and its application is restricted to point-to-point systems with well-studied parameters.

Low-complexity adaptive nonlinear equalization algorithms are highly desirable for next generation agile and flexible high-data-rate fiber optic communication systems. We propose a novel nonlinear equalizer based on the first-order perturbation model with quantized perturbation coefficients. The proposed equalizer deals with the nonlinear impairment only and can be implemented in any single-carrier communication receiver after conventional single-carrier DSP [16]. Its computational complexity is similar to that of the perturbation-based NLCs [11,12], but it does not require accurate knowledge of the transmission link and transceiver parameters. Here, we adopted the decision-directed least mean square (DD-LMS) algorithm for learning and optimum adaptation of the nonlinearity coefficients at the receiver.
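To make the adaptation loop concrete, below is a minimal single-polarization sketch of how such a DD-LMS-adapted, triplet-based equalizer might be organized. The grouping of triplets by a shared coefficient and the gradient step are our reading of the scheme described in the following sections; the variable names and step size are illustrative.

```python
import numpy as np

def ddlms_nl_equalizer(rx, tx_hat, groups, mu=1e-4):
    """One-polarization sketch of a DD-LMS perturbation equalizer.

    rx     : symbols after conventional linear DSP, shape (N,), complex
    tx_hat : hard-decision symbols used as the decision-directed reference
    groups : dict mapping q -> list of (m, n) pairs sharing coefficient C_q
    """
    C = {q: 0.0 + 0.0j for q in groups}
    out = rx.copy()
    guard = max(max(abs(m), abs(n), abs(m + n))
                for mn in groups.values() for m, n in mn)
    for k in range(guard, len(rx) - guard):
        # triplet sums, one per coefficient group (Eqs. (15)-(16) style)
        t = {q: sum(tx_hat[k + m] * tx_hat[k + n] * np.conj(tx_hat[k + m + n])
                    for m, n in mn)
             for q, mn in groups.items()}
        out[k] = rx[k] - sum(C[q] * t[q] for q in groups)
        err = tx_hat[k] - out[k]          # decision-directed error
        for q in groups:                  # gradient-descent LMS step
            C[q] -= mu * err * np.conj(t[q])
    return out
```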
This paper is organized as follows: in Section 2, we review perturbation-based NLC algorithms. Section 3 introduces our nonlinear equalization scheme. Section 4 describes the experimental setup used to evaluate the performance of the proposed algorithms. Section 5 then discusses the experimental results. Finally, Section 6 presents our conclusions.

Motivation and principles of perturbation based nonlinearity compensation

The evolution of the optical field envelope u(t, z) in a fiber optic link is described by the nonlinear Schrödinger equation (NLSE) [17], in which the nonlinear Kerr term can be treated as a perturbation of the linearly propagated field. Under the first-order approximation, and based on the phase-matching condition, among all possible symbol triplets (with indices m, n, and k) that cause intra-channel nonlinear impairments, only the triplets that satisfy the property k = m + n play a significant role. Therefore, the intra-channel four-wave-mixing (IFWM) and intra-channel cross-phase-modulation (IXPM) nonlinearity-induced distortions on the transmitted symbol can be expressed as in Eqs. (4) and (5) (we have removed the triplets which result in a pure phase rotation) [7]. Here the x and y subscripts denote the two polarizations, P is the optical signal power, C_{m,n} is the perturbation coefficient with m and n denoting the symbol indices relative to the current symbol k, and A_{x/y} is the transmitted symbol. Equations (4) and (5) imply that the nonlinear field is a linear combination of triplets consisting of transmitted symbols, weighted by the C_{m,n} coefficients. The perturbation coefficients can be numerically calculated from Eq. (6) [8], where γ(z) denotes the fiber nonlinear coefficient, k is a scaling factor, f(z) describes the power distribution profile along the link, T stands for the symbol period, g^(0)(0, t) is the pulse shape with zero accumulated dispersion (z = 0), and g^(0)(z, t) is the dispersed pulse shape corresponding to a fiber length z; in the latter calculation, (i)fft denotes the (inverse) Fourier transform, f is the frequency, and β₂ is the first-order group velocity dispersion [8].

Figure 1 demonstrates the normalized perturbation coefficients after 480 km of single-mode optical fiber (SMF). Assuming Gaussian pulses and ignoring the attenuation, analytical expressions in terms of the exponential integral function Ei(·) exist for the nonlinear coefficients [11] (Eqs. (8) and (9)), where τ, T, and L are the pulse width, the inverse of the symbol rate, and the transmission distance, respectively; m and n are the symbol indices [18]. It is evident that, for m, n ≠ 0, C_{m,n} varies with only a single parameter q = m·n, and thus all pairs (m, n) can share the same coefficient C_q as long as their product equals a unique q. It should be noted that in the case of other symmetric pulses the perturbation coefficients still satisfy the symmetry properties stated in Eqs. (10) and (11), in particular C_{m,n} = C_{n,m}. We also verified Eqs. (10) and (11) by numerical integration of Eq. (6) for a root-raised-cosine (RRC) pulse shaping filter.
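A sketch of this grouping, exploiting C_{m,n} = C_{n,m} so that each unordered pair is enumerated once and all pairs with the same product q share one coefficient; the output is a mapping of the form consumed by the DD-LMS sketch shown earlier (the q_max cut-off is an illustrative knob, not from the paper):

```python
from collections import defaultdict

def triplet_groups(memory, q_max):
    """Group symbol-index pairs (m, n) by q = m * n, using the symmetry
    C_{m,n} = C_{n,m} so each unordered pair appears only once."""
    groups = defaultdict(list)
    for m in range(-memory, memory + 1):
        for n in range(m, memory + 1):
            q = m * n
            if m != 0 and n != 0 and abs(q) <= q_max:
                groups[q].append((m, n))
    return dict(groups)

# Example: memory of 8 symbols, keep |q| <= 16
g = triplet_groups(8, 16)
print(len(g), "coefficient groups instead of", (2 * 8 + 1) ** 2, "pairs")
```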
For a more efficient implementation of the algorithm, quantization of the coefficients (i.e., combining multiple terms with similar perturbation coefficients) can be used to reduce the number of complex multiplications [12,13]. However, there is an expected trade-off: fewer quantization levels lead to lower complexity at the expense of reduced performance. In this case, Eqs. (4) and (5) can be simplified as in Eqs. (12) and (13), where N_k is the total number of quantization levels and the region representative C_k is obtained from Eq. (14). Unfortunately, conventional uniform quantization does not provide optimum performance [18], and optimized quantization levels and quantized values C_k have to be determined. An exhaustive search with the minimum mean square error (MMSE) as the criterion for level estimation has been proposed for offline calculation of the optimum quantized perturbation coefficients and the corresponding quantization levels [19].

In cases where a single fiber type is deployed throughout the transmission path, and assuming a symmetric power profile and dispersion map (which can be readily obtained by 50% CD pre-compensation at the transmitter), it can be shown that the perturbation coefficients become imaginary-valued [12,13]. This reduces the computational complexity by replacing the complex multipliers with real multipliers and by reducing the channel nonlinear memory. However, this approach has limited practical use when it comes to reconfigurable mesh optical networks. In addition, most of the fibers in optical networks come from legacy networks, which contain different fiber types (ITU-T G.652, G.653, and G.655) with widely varying characteristics [20]. Therefore, a symmetric dispersion map cannot be obtained by only performing 50% CD pre-compensation at the transmitter. Moreover, for long-haul fiber optic transmission, CD pre-compensation would largely enhance the signal's peak-to-average power ratio (PAPR), which would inevitably increase DAC quantization and clipping noise [21], in addition to enhancing the nonlinear impairment.

Decision directed least mean square (DD-LMS) nonlinear filter equalizer

Assuming the linear impairments are compensated by conventional single-carrier DSP [16], the details of the proposed equalizer are as follows. After rephrasing Eqs. (1) and (2) with respect to Eqs. (12) and (13), the output of the nonlinear equalizer can be expressed as in Eqs. (15) and (16), which imply that the nonlinear equalizer output is a linear combination of symbol triplet sums. The coefficient updates of Eqs. (19) and (20) can be implemented over both polarizations independently. Averaging the coefficients over the two polarizations results in a better estimation and higher noise rejection. Alternatively, for a more efficient implementation, the adaptation process can be divided between the two polarizations, where half of the coefficients are updated using the x-polarization and the remaining coefficients are calculated using the y-polarization. In this paper, the latter approach has been used.

In order to identify the indices of the triplet symbols which constitute the triplet sums, the nonlinear channel memory length has to be determined first. Pulse broadening induced by chromatic dispersion leads to multiple pulse collisions in an optical fiber. These pulses interact with each other due to the fiber Kerr effect and induce nonlinear distortion on the transmitted symbols. Therefore, the maximum nonlinear memory of the fiber is closely related to the CD-induced pulse broadening (normalized by the symbol length), denoted n_CD^max [16]. Here, D(z) and L_tot denote the dispersion parameter and the total link length, respectively. At the launch point, the accumulated dispersion and the channel nonlinear memory are equal to zero. However, as a pulse propagates down the fiber, CD increases the linear and nonlinear channel memory. At the receiver, it equals n_CD^max.
Therefore, we use half of the maximum pulse broadening as an approximation for the effective fiber nonlinearity memory, i.e., n_eff^NL = ⌊n_CD^max/2⌋, where ⌊·⌋ denotes the floor operator. We point out that the accumulated dispersion can easily be extracted from the CD-compensation equalizer or from any CD-monitoring algorithm [22]. In addition, the fiber nonlinearity memory can be set manually based on the acceptable computational complexity and the available DSP resources. In the following, |·| denotes the absolute-value operator. Notice that, in accordance with Eq. (9), the product of m and n (i.e., q = m·n) is used for partitioning the symbols, and the maximum value of q is set by the memory. From Eq. (11), the corresponding perturbation coefficients share the same value; we used this property, together with averaging, to improve the estimates of the perturbation coefficients. We observe that the proposed algorithm uses indices similar to those of adaptive perturbation-based nonlinearity compensation. Furthermore, we numerically integrated Eq. (6) for different system parameters and observed that, in all cases, our nonlinear equalizer uses a smaller set of triplet indices than the perturbation-based NLCs [12]. Therefore, the proposed algorithm should have similar computational complexity to the perturbation-based NLCs [11,12]. We calculated the total number of symbol triplets for our nonlinear equalizer as follows: in comparison to previously studied nonlinear equalizers [10-13], the number of distinct coefficients is reduced to a quasi-linear (N + 1)log(N + 1) scaling with the memory, which is smaller by more than one order of magnitude.

Experimental setup

Figure 3 shows the schematic diagram of the experimental setup. In the transmitter-side offline DSP, four independent pseudo-random bit sequences (PRBS) are mapped to 16QAM symbols, followed by pulse shaping at 2 samples per symbol for each polarization. A Ciena WaveLogic 3 (WL3) line card was employed, which contains four 39.5 GSa/s 6-bit digital-to-analog converters (DACs), a tunable laser source, and a dual-polarization (DP) IQ modulator. The transmitter laser was operated at 1554.94 nm. The transmitter analog frequency response is compensated using the on-board DSP of the WL3. The output optical signal is then boosted to 23 dBm using an erbium-doped fiber amplifier (EDFA), and subsequently attenuated using a conventional variable optical attenuator (VOA) in order to obtain the desired optical launch power. The optical signal is then launched into a recirculating loop. This loop consists of four spans of 80 km of single-mode fiber (SMF-28e+ LL) and four inline EDFAs. Each inline EDFA has a noise figure of 5.5 dB. A band-pass filter with tunable bandwidth and tunable center wavelength (T-T BPF) is inserted after the 4th span in order to reject out-of-band amplified spontaneous emission (ASE) noise accumulated during transmission. The gain of the last EDFA is increased by 10 dB compared to the other EDFAs in order to compensate for losses occurring inside the recirculating loop, including switches, couplers and the band-pass filter. At the receiver side, an optical spectrum analyzer (OSA) was used to measure the signal optical signal-to-noise ratio (OSNR) at 0.5 nm resolution bandwidth, which was then converted to a 0.1 nm noise bandwidth. The gain of the pre-amplifier EDFA was adjusted to ensure that the signal power reaching the coherent receiver was held constant at 5 dBm. Finally, a 0.8 nm BPF was used to filter out the out-of-band amplified spontaneous emission noise generated by the pre-amplifier. At the polarization-diversity 90° optical hybrid, the signal was mixed with 15.5 dBm of local oscillator (LO) light from an external-cavity laser (ECL) with a linewidth of 100 kHz.
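A back-of-envelope sketch of the memory-length rule above: using the common estimate n_CD^max ≈ D·L_tot·Δλ/T with Δλ ≈ B·c/f_c² (the variables B, T, f_c and c are defined with Eq. (21); the fiber parameters in the example are assumed, not taken from the paper):

```python
from math import floor

C_LIGHT = 299_792_458.0  # speed of light, m/s

def cd_memory_symbols(D_ps_nm_km, L_km, baud_GHz, fc_THz):
    """CD-induced pulse broadening in symbol periods:
    n_CD_max ~ D * L * (B * c / fc^2) / T."""
    D = D_ps_nm_km * 1e-6          # convert ps/(nm km) to s/m^2
    L = L_km * 1e3                 # m
    B = baud_GHz * 1e9             # Hz (RRC with near-zero roll-off)
    fc = fc_THz * 1e12             # Hz
    T = 1.0 / B                    # symbol duration, s
    dlambda = B * C_LIGHT / fc**2  # spectral width in meters
    return D * L * dlambda / T

# Example: 17 ps/nm/km SMF, 2560 km, 32 GBaud near 193 THz (~360 symbols)
n_max = cd_memory_symbols(17.0, 2560.0, 32.0, 192.8)
n_eff = floor(n_max / 2)  # effective nonlinear memory, Eq. (23)
```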
The beating outputs were passed through four balanced photodetectors. A 4-channel real-time oscilloscope sampled the signal at a sampling rate of 80 GSa/s and digitized it with 8-bit resolution.

Figure 4 shows the top-level block diagram of the receiver. The DSP chain starts with front-end compensation, including DC removal, IQ-imbalance compensation and optical-hybrid IQ orthogonalization using the Gram-Schmidt orthogonalization procedure [16]. Next, the signal was resampled to 2 samples per symbol and then passed through overlap-and-save frequency-domain CD compensation and laser frequency-offset compensation based on the FFT of the signal raised to the 4th power. Matched filtering was performed in the frequency domain using the same pulse-shaping filter as at the transmitter. Sampling-frequency-offset compensation and timing recovery were carried out using a non-data-aided feed-forward symbol-timing estimator [23]. Next, synchronization is performed in order to facilitate data-aided, modulation-transparent equalization using conventional correlate-and-delay algorithms (i.e., Schmidl-Cox) [24]. A training-symbol-aided decision-directed least-radius-distance (TS-DD-LRD) [25] fractionally spaced linear equalizer with 15 taps was used for fast convergence of the coefficients. The carrier phase was recovered using a superscalar-parallelization-based phase-locked loop (PLL) combined with maximum-likelihood phase estimation [26]. Next, the fiber nonlinearity is compensated using either our novel adaptive nonlinear equalizer or perturbation-based post nonlinearity compensation [11] (for comparison purposes). Finally, the symbols were mapped to bits, and the bit error rate (BER) was counted over 100,000 bits, assuming a soft-decision forward-error-correction (20% overhead) BER threshold of 2 × 10⁻².

Discussion and results

In this section, we investigate the performance of the proposed nonlinear equalizer against perturbation-based nonlinear compensation at the receiver. All DSP blocks and parameters are identical for all schemes except for the nonlinearity-compensation scheme. For all measurements, a root-raised-cosine (RRC) filter with a roll-off factor equal to 0.01 is chosen as the pulse-shaping filter for 32-GBaud dual-polarization (DP) 16QAM transmission, and the nonlinear equalizer uses the corresponding set of grouped adaptive coefficients. We have investigated the performance of the perturbation-based NLC using two different implementations: 1) without quantization of the coefficients and 2) with uniform quantization of the perturbation coefficients into 15 coefficients. In the experiment, the performance is investigated at 1 dBm launch power after 2560 km.

Figure 5 summarizes the BER versus nonlinear-equalizer memory-depth curves for 32 GBaud DP-16QAM. The equalizer memory, n_eff^NL, is normalized by the maximum CD-induced pulse broadening, n_CD^max. As shown in both figures, a negligible penalty is observed for memory depths down to n_CD^max/2, and by further decreasing the memory depth the performance degrades significantly. This justifies our motivation for Eq. (23). In addition, it should be noted that, in the case that n_CD^max is unknown at the receiver, the equalizer memory can be set manually by starting from a small n_eff^NL and gradually increasing the value until the desired performance is reached. Next, we investigated the BER under different launch powers; the investigated distance is 2560 km.
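The 4th-power frequency-offset estimator mentioned in the receiver chain can be sketched as follows; this is a standard technique (the modulation is stripped by raising the signal to the 4th power, leaving a tone at four times the offset), not the paper's exact implementation.

```python
import numpy as np

def freq_offset_4th_power(symbols, baud):
    """Estimate the laser frequency offset from the spectral peak of the
    4th power of the QAM signal; the offset is the peak position / 4."""
    s4 = symbols ** 4
    spec = np.abs(np.fft.fftshift(np.fft.fft(s4)))
    freqs = np.fft.fftshift(np.fft.fftfreq(len(s4), d=1.0 / baud))
    return freqs[np.argmax(spec)] / 4.0
```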
As shown in Fig. 6, when the power launched into the fiber is low, transmission is mainly limited by linear impairments, and the BER is approximately the same for all algorithms. However, as the launch power increases, fiber nonlinearities become more significant, and nonlinear compensation enables a lower BER than conventional single-carrier transmission. The improvement is particularly significant when the launch power is larger than 1 dBm. As demonstrated in Fig. 6, the performance of the proposed algorithm is (i) better than that of the perturbation-based NLC with 25 uniformly quantized coefficients, and (ii) comparable to that of the more computationally complex perturbation-based NLC without coefficient quantization. Next, we compare the achievable transmission distance for different launch powers with a forward-error-correction (FEC) pre-set BER threshold of 2 × 10⁻². The results are summarized in Fig. 7. In accordance with the results in Fig. 6, the achievable transmission distance of conventional single-carrier transmission is significantly smaller without nonlinearity compensation. If we compare the maximum transmission distance of all the systems at their respective optimum launch powers, the transmission reach increases from 2365 km with only linear compensation to 2818, 2726 and 2904 km for the adaptive nonlinear equalizer and the perturbation-based NLCs with and without coefficient quantization, respectively. In addition, the optimum launch power increases by 1 dB with fiber nonlinearity compensation. Finally, Fig. 8 shows the convergence of the adaptive nonlinear equalizer over time. Equations (7) and (8) can be used to initialize the equalizer; here, we used zero as the initial value for the adaptive nonlinear coefficients. We used a large step size at the beginning of the training sequence to achieve fast convergence. After 1000 and 2000 equalized symbols, the step size was halved, and the algorithm then switched to decision-directed mode. In contrast to previously proposed nonlinear equalizers [11], grouping multiple nonlinear triplets removed the requirement to use DD-LMS with multiple iterations per symbol and a suboptimal convergence condition in order to increase the convergence rate and equalizer stability.

Conclusion

We propose and experimentally demonstrate a novel low-complexity nonlinear equalizer. We achieved a transmission distance of 2818 km for a 32-GBaud DP-16QAM system. The proposed equalizer's performance is comparable to that of perturbation-based nonlinearity compensation and previously studied nonlinear equalization methods. Unlike digital backpropagation, the proposed equalizer operates at one sample per symbol and requires only one computation step. In addition, it allows for compensation of nonlinear and linear impairments independently. In comparison to perturbation-based nonlinearity compensation, our nonlinear equalizer does not require prior calculation of perturbation coefficients, symmetric dispersion maps, or a large memory to store all possible perturbation coefficients for reconfigurable network scenarios. In contrast to previously proposed nonlinear equalizers, our algorithm takes advantage of common symmetries of the perturbation coefficients and avoids replication of operations. In addition, it uses only a few adaptive coefficients by grouping multiple nonlinear terms and dedicating only one coefficient to each group. Finally, its computational complexity is smaller than that of previously proposed adaptive nonlinear equalization techniques by more than one order of magnitude.
In the expression for n_CD^max, the parameters B, T, f_c and c are the bandwidth of the signal, the symbol duration, the center frequency of the channel of interest and the speed of light, respectively.

Fig. 2. Symbol indices for the perturbation-based nonlinear compensation and the proposed adaptive nonlinear equalizer after 480 km of SMF; the two schemes use similar sets of indices in the 2nd and 4th quadrants, with N₁ and N₂ equal to 5 and 10, respectively.

Fig. 7. Experimental maximum transmission distance versus launch power for 32 GBaud SC-DP-16QAM at the soft-FEC BER threshold of 2 × 10⁻².
Mixed-mode cohesive laws and the use of linear-elastic fracture mechanics Small-scale cohesive-zone models based on potential functions are expected to be consistent with the important features of linear-elastic fracture mechanics (LEFM). These include an inverse-square-root K-field ahead of a crack, with the normal and shear stresses being proportional to the mode-I and mode-II stress-intensity factors, K_I and K_II, the work done against crack-tip tractions being equal to (K_I² + K_II²)/Ē, where Ē is the appropriate modulus, and failure being controlled by the toughness. The use of an LEFM model also implicitly implies that the partition of the crack-tip work into shear and normal components is given by a phase angle defined as ψ_K = tan⁻¹(K_II/K_I). In this paper, we show that the partition of crack-tip work in a cohesive-zone model is consistent with LEFM if the normal and shear deformations across an interface are uncoupled. However, we also show that this is not the case for coupled cohesive laws, even if these are derived from a potential function. For coupled laws, LEFM cannot be used to predict the partition of work at the crack tip even when the small-scale requirements for LEFM are met; furthermore, the partition of the work may depend on the loading path. This implies that LEFM cannot be used to predict mixed-mode fracture for interfaces that are described by coupled cohesive laws and that have a phase-angle-dependent toughness. Introduction Cohesive-zone models, originating from the work of Hillerborg et al. [1] and Needleman [2], are widely used to simulate the initiation and growth of cracks in problems ranging from the materials scale [2-5] to the structural scale, such as adhesive joints [6-8] and wind turbine blades [9]. In these models, the fracture process is described by a traction-separation relationship, known as a cohesive law, that comprises both a strength (peak traction) and a fracture energy (area under the traction-separation curve) [10,11]. The use of cohesive laws allows a transition between the strength-based approach to fracture of [12] and the energy-based method of [13] that underpins linear-elastic fracture mechanics (LEFM) [14-16]. Since [17] and [18] generalized cohesive laws to include shear tractions, cohesive-zone modelling has been extended to mixed-mode fracture, with many different cohesive laws being developed. These cohesive laws can be divided into several fundamentally different groups [19]. First, there are those derived from potential functions, and those that are not derivable from potential functions. Second, there are what are termed ''uncoupled'' and ''coupled'' mixed-mode laws. Potential-based cohesive laws are independent of the loading history. The normal and shear tractions depend only on the values of the normal and tangential openings; they do not depend on the path by which those openings are reached. As an example, micromechanical modelling can be used to show that cross-over fibre-bridging gives coupled laws for which a potential function exists [20]. For cohesive laws that cannot be derived from a potential function, the cohesive tractions and the work of the cohesive tractions depend on the loading path. Such laws can be used to model fracture processes that include history-dependent phenomena such as plasticity or frictional sliding [21].
However, attention is focused in this paper on conditions that might be consistent with LEFM, so only potential-based cohesive laws are considered in the present work; history-dependent mechanisms are excluded. In ''uncoupled'' mixed-mode cohesive laws, the normal tractions depend only on the normal openings, and the shear tractions depend only on the tangential (shear) openings. However, despite the terminology, coupling between the two modes of deformation is inherently introduced through the failure criterion [22]. This coupling generally manifests itself as a relationship between the critical normal and shear displacements. In particular, shear decreases the critical opening displacement, and opening decreases the critical shear displacement. More details are given in the Appendix. In coupled cohesive laws, the normal and shear tractions each depend on both the normal and tangential openings. It is not necessary to specify an additional mixed-mode failure criterion with such coupled laws, but the coupling should be consistent with any observed mixed-mode failure criterion. Under small-scale conditions, the driving force for crack growth in an elastic body is the gradient of the total potential energy of the system with respect to the length of the traction-free portion of the crack [23]. In linear-elastic fracture mechanics (LEFM), this is designated by the energy-release rate, G [13], which is identical to the value of the J-integral taken around the crack tip [23]. Fracture occurs when G = Γ, where Γ is identified as the toughness and is considered to be a material property. Mixed-mode fracture in an LEFM framework is described in terms of the mode-I and mode-II stress-intensity factors, K_I and K_II: the amplitudes of the singular normal and shear stresses in the K-dominant region near the crack tip. A phase angle describes the ratio between these two parameters as ψ_K = tan⁻¹(K_II/K_I), and the toughness is assumed to be a unique function of the phase angle, Γ(ψ_K) [24,25]. Crack growth occurs when G = Γ(ψ_K), where the phase angle describes the ratio K_II/K_I at fracture. Two implicit assumptions of LEFM are that the work at the crack tip, and its partition into shear and normal components, are both path-independent, i.e., independent of whether K_I and K_II are applied proportionally (simultaneously) or non-proportionally (e.g., sequentially). The use of LEFM is predicated on the assumption that any portion of a body not described as a continuum elastic medium is limited to a very small region near the crack tip, and that the macroscopic response of the body is linear-elastic. The use of LEFM as a powerful quantitative tool that is ubiquitous in engineering design is not predicated on singular stresses actually existing at the crack tip, but rather on the fact that the fracture process at the crack tip is dependent only on a macroscopic description of the K-field [26]. In other words, the work at the crack tip (and its partition) is uniquely defined by K_I and K_II, and is independent of the cohesive length, provided this latter parameter is small enough. The implication of this is that any loading-path dependence that might exist for the deformation of the crack tip is potentially inconsistent with the assumptions that underpin the use of LEFM. In the present study, we investigate this specific issue within the broad framework of small-scale fracture that is generally taken to correspond to LEFM conditions. It is emphasized again that for a fracture problem to be described by LEFM merely requires a small-scale cohesive zone.
It does not require singular stresses to actually exist at the crack tip. This has been demonstrated by appropriate small-scale cohesive-zone analyses [16,27,28]. In this paper, we use small-scale cohesive-zone models with potential-based cohesive laws to satisfy one obvious requirement of path-independence, and examine whether there are additional constraints on traction-separation laws for them to provide path-independent, mixed-mode behaviour. In particular, we are interested in whether there may be limitations on when an LEFM framework might be valid to describe small-scale fracture with uncoupled and coupled, potential-based, mixed-mode cohesive laws. Work of cohesive tractions The local work done (per unit area) against the cohesive tractions across a small element of the interface, W, can be decomposed into the local work done against the normal tractions (designated as mode-I), W_I, and the local work done against the shear tractions (designated as mode-II), W_II, where σ and τ are the normal and shear tractions, and δ_n and δ_t are the normal and shear displacements. Under pure mode-I conditions (τ = 0), local failure of the interface occurs when δ_n = δ_nc, where δ_nc is the normal displacement at failure. This corresponds to W_I = Γ_I, which is defined as the mode-I toughness. Under pure mode-II conditions (σ = 0), local failure of the interface occurs when δ_t = δ_tc, where δ_tc is the shear displacement at failure. This corresponds to W_II = Γ_II, which is defined as the mode-II toughness. Of particular interest in fracture mechanics is the work done against the tractions at a cohesive crack tip (defined as the point at which the active cohesive zone ends, x₁ = 0 in Fig. 1). The normal and shear displacements at the cohesive crack tip are designated by δ_n* and δ_t*, and the two terms for the work done against the corresponding tractions at this location are designated by W_I* and W_II*. When the J-integral [29] is evaluated along the cohesive zone out to a region where W = 0, its value is given by the crack-tip value of the potential [30], where Φ(δ_n, δ_t) is the potential function used for the traction-separation law. In this paper, the concept of an instantaneous cohesive length at the tip of the cohesive crack [16,28] is used. This can be defined for a homogeneous system in modes I and II in terms of the current crack-tip displacements and tractions, where Ē = E/(1 − ν²) in plane strain, Ē = E in plane stress, and E and ν are Young's modulus and Poisson's ratio. These lengths are slightly different from similar quantities defined in terms of the failure parameters [1,16,31]. They have the advantages that they can be used to describe the state of the cohesive zone at any stage of loading, and that they can be defined for coupled cohesive laws. The cohesive lengths can be normalized by a characteristic dimension of the geometry, such as a layer thickness, h, so that ξ = ℓ/h. If ξ is very small, one is in a small-scale regime, and the principles of LEFM are expected to apply. In particular, this means that there will be a K-dominant region ahead of the cohesive crack tip, where the stresses across the interface follow an inverse square-root relationship with respect to distance from the tip, x₁. In the absence of a modulus mismatch across the interface, the normal tractions and shear tractions along x₂ = 0 will be described in this region by σ = K_I/√(2πx₁) and τ = K_II/√(2πx₁), where K_I and K_II are the mode-I and mode-II stress-intensity factors. Close to the crack tip, the stresses will deviate from this relationship, with the details of the stress field being dependent on the cohesive law.
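The definitions above can be restated compactly as follows; the work decomposition and J-integral result follow the text directly, while the specific forms taken for the instantaneous cohesive lengths are assumed from [16,28] and may differ in normalization from Eq. (3) of the original.

```latex
% Work decomposition (Eq. (1)):
W = W_I + W_{II}
  = \int_0^{\delta_n}\sigma\,\mathrm{d}\delta_n
  + \int_0^{\delta_t}\tau\,\mathrm{d}\delta_t .
% J-integral along the cohesive zone (Eq. (2)):
J = \Phi(\delta_n^{*},\,\delta_t^{*}) = W_I^{*} + W_{II}^{*} .
% Instantaneous cohesive lengths (Eq. (3)); assumed forms:
\ell_I = \bar{E}\,\delta_n^{*}/\sigma^{*}, \qquad
\ell_{II} = \bar{E}\,\delta_t^{*}/\tau^{*} .
```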
Beyond the K-dominant region, the stresses will deviate from this relationship, following the non-singular, elastic stress field of the structure. The region over which the K-field describes the stresses may be very small; however, such a region will exist if ξ is small enough. Again, we emphasize that a central tenet of LEFM is that it can be used to describe fracture if ξ is small; it does not have to be zero. It is for this reason that cohesive-zone models can be used to describe LEFM under small-scale conditions [16,27,28]. Under LEFM conditions, an evaluation of the J-integral in the K-dominant region gives J = (K_I² + K_II²)/Ē [29]. Owing to the path-independence of the J-integral [29], this is equal to the crack-tip work (Eq. (2)), so that Irwin's virtual crack closure relation holds in LEFM: G = |K|²/Ē. So, a consistent connection between LEFM models and CZM models will be that W* = G, if ξ is small enough for the LEFM assumptions to be valid. Definitions of mode-mixedness There are several definitions of mode-mixedness in the cohesive-zone literature (Fig. 2). The one we will focus on in this paper has a direct connection with the concept of a phase angle in LEFM. It is defined in terms of the ratio of the work done against each mode of deformation, so that, at any point along the interface, the local phase angle is ψ_W(x₁) = tan⁻¹√(W_II/W_I). As x₁ approaches zero, this tends to the crack-tip phase angle, which is defined as ψ* = tan⁻¹√(W_II*/W_I*) [22,32]. The distance over which ψ_W(x₁) is equal to ψ* decreases with decreasing cohesive length [16,27,28]. For the special case of uncoupled cohesive laws, no modulus mismatch across the interface, and a very small value of ξ, the mode-I and mode-II work done against the crack-tip tractions can be identified with K_I and K_II through Eq. (6) as W_I* = K_I²/Ē and W_II* = K_II²/Ē. The phase angle used in LEFM is defined as ψ_K = tan⁻¹(K_II/K_I). Therefore, as has been shown to be the case [16,27], ψ* is expected to equal ψ_K under these conditions. Furthermore, if there is a modulus mismatch across the interface, ψ* scales with the elastic properties and cohesive length as predicted by LEFM [16,27,33]. It is noted that an alternative measure of mode-mixedness (Fig. 2), based on the ratio of the two tractions, ψ_T(x₁) = tan⁻¹(τ/σ), can vary with the choice of cohesive law. It does not have the potential advantage of ψ* in linking crack-tip deformation to macroscopic conditions under LEFM conditions. Under LEFM conditions, the magnitudes of the stresses within the K-field region are dictated by the stress-intensity factors, so ψ_T(x₁) = ψ_K in this region. However, it is axiomatic to LEFM that fracture is controlled by the deformation at the crack tip, and that the K-field controls this deformation through ψ_K. Therefore, it would seem to be an unnecessary restriction on modelling mixed-mode fracture to impose an additional constraint on cohesive laws that the crack-tip stresses in the entire cohesive zone should be in the same ratio as the stress-intensity factors [34]. In conclusion, one expects ψ_T(x₁) = ψ_K in the K-field, but expects ψ_T to depend on the choice of cohesive law at the crack tip. Conversely, one expects ψ_W(x₁) to equal ψ* close to the crack tip, but for there to be no connection between ψ_W(x₁) and ψ_K in the K-field. The observation that ψ* = ψ_K has already been shown to be valid if the cohesive laws are uncoupled [16,27]. However, a consideration of Eq. (2) for the case when the cohesive laws are coupled indicates that the ratio between the two quantities (W_I* and W_II*) may depend on the loading path, as discussed in Ref. [35]. In such a case there may not be a unique relationship between ψ* and ψ_K.
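A minimal sketch of how the work-based phase angle is evaluated at one interface point, given the traction-separation histories extracted from a simulation (the function and argument names are illustrative, not from the paper's code):

```python
import numpy as np

def work_phase_angle(dn, sigma, dt, tau):
    """Work-based mode-mixedness at one interface point:
    psi_W = atan(sqrt(W_II / W_I)), with each mode's work obtained by
    trapezoidal integration of the traction-separation history."""
    W_I = np.trapz(sigma, dn)   # mode-I work: integral of sigma d(delta_n)
    W_II = np.trapz(tau, dt)    # mode-II work: integral of tau d(delta_t)
    return np.degrees(np.arctan(np.sqrt(W_II / W_I)))
```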
This could have implications for the use of LEFM to predict the failure of interfaces if the fracture-process mechanism behaves in accordance with a coupled traction-separation law. Mixed-mode LEFM models are all predicated on an assumption that deformation at the crack tip, where fracture takes place, is uniquely defined by G and ψ_K, with no path dependence. If coupled laws give path-dependent deformation at the crack tip, then it would imply that the use of LEFM may implicitly require the assumption of uncoupled cohesive laws. It is the purpose of this paper to explore this point. In this context it should be emphasized that we are exploring the effects of using coupled and uncoupled laws when ξ is small enough for the problem to be in the LEFM limit. It has already been shown that, in this limit, uncoupled laws result in ψ* being equal to ψ_K, provided sufficient care is taken to ensure that in finite-element modelling the mesh size is small enough to observe the plateau in ψ_W(x₁). We are interested in whether the same conclusion can be made for coupled laws, given the same care about mesh size and limitations on ξ. This focus is in contrast to that of earlier work [27,32,36], which explored the crack-tip phase angle and mixed-mode fracture at large cohesive lengths, well away from the LEFM limit. This body of work indicates that, for large cohesive lengths, the crack-tip phase angle tends to move away from the values controlled by the local K-field to values controlled by the macroscopic loads and geometries, as suggested by Charalambides et al. [37]. For example, the paper by Conroy et al. [32] explores values of cohesive lengths that range from values slightly bigger than ones for which LEFM should unambiguously be valid to much larger values. At the lower end, the phase angle for an uncoupled law approaches the LEFM value, while the phase angle for a coupled law shows a larger discrepancy. In this paper, we explore in detail the difference between coupled and uncoupled laws, while ensuring that we are unambiguously within the range where LEFM is valid. Finite-element modelling The problem was modelled by finite-element (FE) simulations, using the commercial code ABAQUS. The finite-element domain (of radius R) and the mesh for the mixed-mode K-field are shown in Fig. 3. A crack extends along the plane x₂ = 0, from x₁ = −R to x₁ = 0. The traction-separation relationships used to model the cohesive zone were specified along the crack plane from x₁ = 0 to x₁ = R (Fig. 3). Quadratic plane-strain elements were used for the elastic solid, and quadratic cohesive elements of non-zero thickness were used in the cohesive zone. As can be seen from Fig. 3, a combination of quadrilateral and triangular elements allows for a structured increase in the size of the plane-strain elements as one moves away from the vicinity of the crack tip. The mixed-mode cohesive laws were implemented as user-defined elements. The cohesive elements had a length equal to 5 × 10⁻⁷R in the range 0 ⩽ x₁/R ⩽ 8.2 × 10⁻³. The mesh was then gradually coarsened to 5 × 10⁻²R at x₁/R = 1. The height of the cohesive elements was equal to 5 × 10⁻⁸R along the cohesive interface. Only positive values of δ_n were studied, so there was no issue with interpenetration. Several potential-based mixed-mode cohesive laws were tested. Particular results are presented for the laws shown schematically in Fig. 4, and discussed in more detail in the Appendix: an uncoupled trapezoidal law [22], an uncoupled linear law [16,28], and the coupled Park-Paulino-Roesler (PPR) law [38,39].
Boundary conditions The displacement components, u₁ and u₂, are related to the singular field of Fig. 3 by the standard K-field displacement expressions [40], where μ is the shear modulus, Ē = E in plane stress and Ē = E/(1 − ν²) in plane strain, ν is Poisson's ratio, and the magnitude of the stress-intensity factors is |K| = √(K_I² + K_II²). These displacement components are prescribed remotely on the boundary at r = R by means of a user-defined ABAQUS subroutine. The magnitude of |K| is varied through incremental changes in u₁ and u₂, such that ψ_K is kept at the desired value. The application of displacements that match those expected in an LEFM field does not, by itself, ensure that a K-controlled stress field will be established. This requires the additional condition that both ℓ_I/R and ℓ_II/R are small enough. Although the geometry of Fig. 3 is the conventional one used to describe K-fields in infinite bodies with semi-infinite cracks, it must be remembered that R introduces an arbitrary length scale that will determine whether the cohesive zone satisfies the small-scale conditions or not. Results The results presented in this section are divided into two main classes. In the first set of results, the loading is done in such a way that ψ_K remains constant throughout the loading procedure. This is described as proportional loading. In the second set of results, the loading is done in such a way that ψ_K changes during the loading procedure. This is described as non-proportional loading. Fig. 5 shows the normal and shear tractions ahead of the crack tip for uncoupled linear laws. The value of ℓ̃/R for this plot is equal to 0.01375, which satisfies small-scale conditions. The excellent agreement between the numerical results and the asymptotic field can be seen from this plot for both the opening and shear tractions. The K-field under these conditions extends to within about 0.01R of the crack tip, with the relationship between the cohesive length and the extent of the singular field being visible from Fig. 5(b). Fig. 5 provides what might be considered to be a classic picture of LEFM: an inverse square-root relationship between stress and distance from the crack tip, with a magnitude given by K_I and K_II, but which breaks down near the crack tip. This verifies the ability of a cohesive-zone model to describe LEFM under appropriate small-scale conditions. The variation of the phase angle, ψ_W(x₁), with x₁ is illustrated in Fig. 6 with the same three cohesive laws (with different ℓ_II/ℓ_I ratios) as the plots in Fig. 5, but with three different phase angles. As expected, ψ_W tends to ψ_K close to the crack tip in all cases, but generally deviates from this equality in the K-field. The exception is the special case of ℓ_II/ℓ_I = 1, for which ψ_W(x₁) equals ψ_K for all values of x₁/R. This is because the ratio of the two stresses is equal to the square root of the ratio of the two modes of work in this law. Therefore, there is a special-case agreement between ψ_W(x₁) and ψ_K within the K-field, where the stresses must also scale with ψ_K. For the other cohesive laws, the same agreement between the stresses and ψ_K applies, but now the ratio of the stresses is not the same as the square root of the ratio of the work terms. This point is emphasized in Fig. 7, which shows how the traction ratios, represented by the phase angle ψ_T, vary with x₁. For ℓ_II/ℓ_I = 1, the traction ratio is identical to the square root of the work ratio for a linear cohesive law; therefore, ψ_T = ψ_K both near the crack tip and in the K-field. For other values of ℓ_II/ℓ_I, ψ_T = ψ_K only in the K-field.
However, it should be remembered that it is at the crack tip where fracture occurs, and where one needs a measure of mode-mixedness that can be linked to LEFM. As discussed earlier, such a measure is provided by ψ*, which equals ψ_K for an uncoupled law. Similar conclusions can be drawn from calculations conducted using uncoupled laws with different shapes. Although this has been demonstrated before for beam-like geometries [16], here we show the results for a K-field geometry using a trapezoidal law (described in Appendix A.1) for two values of mode-mixedness. Fig. 8 shows how the stress field evolves near the crack tip for ψ_K = 45° and a peak-traction ratio τ̂/σ̂ = 2. The length of the fracture-process zone is less than 1% of R and, in the K-field zone (x₁/R > 10⁻²), the normal and shear tractions are identical to the asymptotic field. In this case, the tractions of the uncoupled cohesive law are at their maximum values, established by their cohesive strengths, all the way to the cohesive crack tip, because neither law has entered the softening regime under the conditions for which the plot has been made. It should be emphasized that the cohesive length is small enough for LEFM to be valid, as can be seen from the stress field of Fig. 8. Fig. 9 shows how the phase angle ψ_W(x₁) varies with x₁ for ψ_K = 45° and 60°. As before, it can be seen that the crack-tip phase angle, ψ*, tends to ψ_K. Away from the crack-tip region, there is no particular significance to this partition of work. However, it should be noted that, for these calculations, much of the K-field is associated with the initial, linear portion of the traction-separation law. This means that, for the two cases with identical mode-I and mode-II cohesive laws, the laws look like linear laws with equal cohesive lengths. As discussed in connection with Fig. 6, this means that in the K-field region the special case of ψ_W(x₁) = ψ_K is met. Coupled cohesive laws The results for the coupled cohesive law developed by Park et al. [38], which we refer to as the PPR law, are described in this section. Fig. 10 shows the normal and shear tractions ahead of the crack tip with ψ_K = 45°, and with values of |K| and cohesive strengths corresponding to those used for Fig. 8. In this case, the cohesive zone is fully developed, so the stresses at the crack tip are approximately zero. As with the uncoupled law, the cohesive-length scale is so small that the stresses are described by the asymptotic K-field at distances greater than about 0.01R from the crack tip. Again, this confirms the ability of a cohesive-zone model to describe LEFM under appropriate conditions. The crack-tip phase angle, ψ*, is plotted in Fig. 11 for three peak-traction ratios, τ̂/σ̂, and two values of ψ_K. These plots illustrate the effect of different parameters for the PPR cohesive law. A key difference between the results for this form of a coupled law and the results for uncoupled cohesive laws is that, in general, ψ* ≠ ψ_K. The only situation in which ψ* = ψ_K is the special case of ψ_K = 45°, when the shear and normal laws are identical. A similar result, that in general ψ* ≠ ψ_K for coupled mixed-mode laws, was found when several other coupled cohesive laws were explored, including those of Xu and Needleman [18], and Sørensen and Goutianos [41]. The reason for the discrepancy between ψ* and ψ_K can be seen by a simple examination of the form of the equations. If, in general, σ = σ(δ_n, δ_t) and τ = τ(δ_n, δ_t), then the crack-tip phase angle, which from Eq. (2) is given by the partition of the crack-tip work, will generally depend on how δ_t varies with δ_n, and it is going to be path dependent.
In particular, there is no reason why ψ* should be related to ψ_K. Non-proportional loading In the previous section, we showed that ψ* = ψ_K for uncoupled laws and proportional loading; this identity was valid only for very special forms of coupled laws. In this section, we explore the effect of non-proportional loading on this relationship. Specifically, we do this by determining the evolution of the phase angle as the geometry is loaded to the same final conditions (ψ_K = 45°), but following two different loading paths, 1 and 2, illustrated schematically in Fig. 12. Fig. 13 shows how the phase angle ψ_W(x₁) varies with x₁ for an uncoupled trapezoidal law at four discrete points along the two non-proportional loading paths, 1 and 2, identified in Fig. 12. It can be seen that the crack-tip phase angle, ψ*, always matches the applied value of ψ_K, at all points during loading. Similar results were obtained for all the other paths and cohesive parameters that were explored. The corresponding results for the PPR cohesive law are shown in Fig. 14. In this case, it will be remembered that ψ* was not equal to ψ_K for proportional loading. Here, the two parameters are in closer agreement for a trajectory that starts off dominated by mode-I. However, the two are even more divergent for the mode-II-dominated trajectory than for the proportional trajectory, indicating clear evidence of path dependence for the crack-tip phase angle. This path dependence of ψ* was confirmed as being a general result for other paths and cohesive parameters for coupled laws. It should be emphasized that, in all cases, the total work at the crack tip remained the same. There was no path dependence to this quantity, as expected for potential-based laws. The path dependency was only related to how the crack-tip work was partitioned between the two modes. LEFM assumptions Mixed-mode loading in an LEFM framework is completely described by the energy-release rate, G, and the phase angle, ψ_K. It is assumed that any small-scale deformation at the crack tip is uniquely described by these two parameters, which are both independent of the loading history. Therefore, in corresponding cohesive-zone modelling, both the magnitude of the work done at the crack tip during this deformation and the partition of this work into normal and shear components can be deduced uniquely from the two parameters.⁴ Mixed-mode failure criteria used in LEFM analyses are all predicated on this concept of path independence. The use of potential-based traction-separation laws within a cohesive-zone framework ensures that the same total work is done against crack-tip tractions for any loading path under mixed-mode loading. Therefore, this class of cohesive law results in agreement with one LEFM assumption: the energy-release rate does not depend on the loading history. However, not all potential-based cohesive laws match the second assumption: that the LEFM partition of the work at the crack tip can be described only in terms of G and ψ_K. The present paper confirms the earlier results of [16,27,28] that the LEFM assumption about the partition of work is satisfied for uncoupled cohesive laws even if β ≠ 0.⁵ However, it is shown here that coupled cohesive laws generally result in a different partition of crack-tip work from that assumed by LEFM. Furthermore, while the total work is path independent, this partition of crack-tip work can be path dependent.
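A small numerical demonstration of this point, using a deliberately simple, hypothetical coupled potential (not one of the laws tested in the paper): the total crack-tip work equals Φ at the endpoint regardless of path, but the partition into W_I and W_II, and hence ψ*, differs between an opening-first and a shear-first path.

```python
import numpy as np

# Illustrative coupled potential: Phi = 0.5*kn*dn^2 + 0.5*kt*dt^2 + c*dn*dt^2
KN, KT, C = 1.0, 1.0, 0.5

def phi(dn, dt):
    return 0.5 * KN * dn**2 + 0.5 * KT * dt**2 + C * dn * dt**2

def tractions(dn, dt, h=1e-6):
    """Tractions from the potential: sigma = dPhi/d(dn), tau = dPhi/d(dt)."""
    s = (phi(dn + h, dt) - phi(dn - h, dt)) / (2 * h)
    t = (phi(dn, dt + h) - phi(dn, dt - h)) / (2 * h)
    return s, t

def work_partition(path):
    """Integrate W_I = int sigma d(dn) and W_II = int tau d(dt) along a
    discretized loading path given as an (N, 2) array of (dn, dt)."""
    W_I = W_II = 0.0
    for (dn0, dt0), (dn1, dt1) in zip(path[:-1], path[1:]):
        s, t = tractions(0.5 * (dn0 + dn1), 0.5 * (dt0 + dt1))  # midpoint
        W_I += s * (dn1 - dn0)
        W_II += t * (dt1 - dt0)
    return W_I, W_II

n, end = 2000, (1.0, 1.0)
p1 = np.vstack([np.column_stack([np.linspace(0, end[0], n), np.zeros(n)]),
                np.column_stack([np.full(n, end[0]), np.linspace(0, end[1], n)])])
p2 = np.vstack([np.column_stack([np.zeros(n), np.linspace(0, end[1], n)]),
                np.column_stack([np.linspace(0, end[0], n), np.full(n, end[1])])])

for name, p in [("opening-first", p1), ("shear-first", p2)]:
    W_I, W_II = work_partition(p)
    psi = np.degrees(np.arctan(np.sqrt(W_II / W_I)))
    print(f"{name}: W = {W_I + W_II:.4f}, psi = {psi:.1f} deg")
# Both paths give W = Phi(1, 1) = 1.5, but psi is ~54.7 deg vs ~35.3 deg.
```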
This conclusion has been illustrated by the results presented in this paper, but it was also confirmed by testing other potential-based coupled cohesive laws from the literature [18,38,41], with proportional and non-proportional loading paths. Implications for LEFM mixed-mode failure criteria The implicit assumption behind LEFM mixed-mode failure criteria is that an interface separates when the energy-release rate, G, exceeds a critical value, Γ, which is identified as the toughness and is a function of the phase angle: G ≥ Γ(ψ_K). (Footnote 4: This is rigorously correct only when the second Dundurs parameter, β, is equal to zero. When β ≠ 0, LEFM cannot be used to partition the work done in deforming the crack-tip region into shear and normal components, although the total work is still given by the energy-release rate [33,42]. Footnote 5: LEFM requires an additional length parameter to represent the behaviour of uncoupled potential-based cohesive laws when β ≠ 0 [16,27].) This functional dependence of toughness on phase angle can take any form, including non-monotonic forms. However, in LEFM, the phase angle is defined only in terms of the geometry and the loads, and is path independent. Therefore, the toughness of an interface is implicitly assumed to be path independent. In practice, most LEFM mixed-mode fracture tests are conducted under proportional loading, so that ψ_K is constant throughout a test. An envelope of toughness is developed as a function of ψ_K through a series of separate tests, each one exploring a different value of ψ_K. With this approach, it would not matter if the actual crack-tip phase angle, ψ*, of the fracture process were incorrectly described by ψ_K; a unique mixed-mode failure envelope would always be developed that described the experimental results. This failure envelope could then be used predictively in design, under the same assumptions of LEFM and proportional loading. It would be relatively easy to develop a cohesive law that describes such limited data. Both coupled and uncoupled laws could work; indeed, even a law not based on a potential function could work, if the issue of path dependence is not explored. However, only the uncoupled law would be consistent with LEFM assumptions. More detailed experimental studies might reveal path dependence, violating LEFM, in which case coupled laws derived from a potential function or cohesive laws not derived from a potential function might be more appropriate. Conclusions Different types of mixed-mode, potential-based cohesive laws under small-scale conditions have been used to explore how the behaviour of the crack-tip region compares to the assumptions that underpin linear-elastic fracture mechanics (LEFM). It has been shown that the fundamental assumptions of LEFM are fully consistent with uncoupled, potential-based laws. For these types of law, not only is the work done against crack-tip tractions independent of the loading path and equal to the value of the J-integral, but the partition of this work into the two orthogonal modes is also in agreement with LEFM assumptions. The crack-tip phase angle is equal to the phase angle of the surrounding K-field if small-scale conditions are met. Coupled, potential-based cohesive laws result in the work done against the crack-tip tractions being path-independent and equal to that given by the J-integral (consistent with LEFM).
However, the partition of this work into normal and shear components does not necessarily agree with that indicated by the surrounding K-field, even under small-scale conditions. In particular, the crack-tip phase angle can be path dependent. These results have implications for the interpretation of mixed-mode fracture experiments and for design based on LEFM concepts. LEFM assumes that deformation at a crack tip is uniquely described by the K-field. It also assumes that the local conditions for mixed-mode crack advance are controlled by G and ψ_K and, hence, that mixed-mode failure is independent of the loading path. However, if the normal and shear deformation processes at the crack tip are coupled, these assumptions would generally be violated to some degree. A full understanding of mixed-mode failure criteria requires path dependence to be explored. In the absence of any significant path dependence being observed experimentally, uncoupled, potential-based cohesive laws with suitable empirical mixed-mode failure criteria would seem to be adequate and, perhaps, the easiest to implement numerically. In addition, the use of path-dependent functions or coupled laws would need to be validated to ensure they did not introduce stronger path dependence than merited by the experimental results. Only if significant path dependence that needs to be modelled is observed experimentally would it seem to be imperative to use a coupled law, or path-dependent cohesive laws. Declaration of competing interest The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper. A.1. Uncoupled cohesive laws Two forms of mixed-mode uncoupled cohesive laws are used in this study. The first is a special case of linear laws, for which the tractions are linearly dependent on the displacements until failure. Mathematically, these are described by σ = k_n δ_n and τ = k_t δ_t, where k_n and k_t are the stiffnesses of the two modes, which need not be identical. The peak tractions were set high enough that fracture did not occur in this study. However, they can easily be added if fracture needs to be modelled explicitly. It should be noted that a physical manifestation of a linear-elastic cohesive law could be an interface bonded by compliant brittle elastic springs. However, from a modelling perspective, linear laws have the unique feature that the instantaneous cohesive lengths [28] do not vary during loading. Furthermore, the simplicity of linear cohesive laws means that the results of all calculations performed with them can be expressed in terms of only four non-dimensional parameters, where the first two terms describe the cohesive laws, and the second two terms describe the remote loading. The tractions are the gradients of the potential Φ = (k_n δ_n² + k_t δ_t²)/2, and therefore the mixed-mode linear uncoupled laws are based on a potential function. The second form of uncoupled law used in this study is the trapezoidal law [22]. The tractions for these laws increase linearly with displacement until the normal and tangential displacements are δ_n1 and δ_t1, at which point the peak tractions are σ̂ and τ̂, respectively. The tractions remain at these levels while the relevant displacements remain less than δ_n2 and δ_t2, at which point they drop linearly to zero at δ_n = δ_nc and δ_t = δ_tc. These laws can be expressed in the range −90° ≤ ψ ≤ 90° using Macaulay brackets ⟨⋯⟩ [43]. Macaulay brackets of the form ⟨x − a⟩ are interpreted as being equal to 0 when x < a, or equal to (x − a) when x ≥ a.
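A sketch of one mode of the trapezoidal law written with Macaulay brackets, as described above; the parameter names are illustrative:

```python
import numpy as np

def mac(x):
    """Macaulay bracket: <x> = max(x, 0)."""
    return np.maximum(x, 0.0)

def trapezoidal_traction(d, d1, d2, dc, t_peak):
    """Uncoupled trapezoidal traction-separation law for one mode:
    linear rise to the peak traction at d1, a plateau until d2, then a
    linear drop to zero at the critical opening dc."""
    d = np.asarray(d, dtype=float)
    # Subtract the rise beyond d1 to create the plateau, then ramp down
    # between d2 and dc using Macaulay brackets.
    t = (t_peak / d1) * (d - mac(d - d1)) - t_peak * mac(d - d2) / (dc - d2)
    return np.where(d < dc, np.maximum(t, 0.0), 0.0)
```

For d ≤ d1 this returns t_peak·d/d1, for d1 < d ≤ d2 it returns t_peak, and for d2 < d < dc it decays linearly to zero, matching the shape described in the text.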
The non-dimensional presentation of results for these trapezoidal laws is slightly more complicated than for the linear laws, because of the additional parameters required to describe the laws. The problem is completely described by ten non-dimensional groups. There are the two loading parameters, |K|/(Ē√R) and ψ_K, and eight parameters that describe the cohesive laws. The values of these eight parameters that are used in this paper are given in Table A.1. Finally, it should be noted that what are termed ''uncoupled'' mixed-mode cohesive laws are actually coupled through a failure criterion of a general mixed-mode form. The non-dimensional exponents of the PPR law are given by Eq. (A.18), where δ_n1 and δ_t1 are the normal and tangential openings corresponding to the pure mode-I and pure mode-II peak tractions, respectively. The problem of this paper is completely described by ten non-dimensional groups, including the two loading parameters, |K|/(Ē√R) and ψ_K. The eight parameters used in this paper that describe the cohesive laws are given in Table A.2. The parameters are chosen in such a way as to ensure that the shapes of the pure mode-I and mode-II cohesive laws are similar to the shapes of the corresponding mode-I and mode-II uncoupled cohesive laws. They have identical peak tractions, critical openings and fracture energies to the corresponding uncoupled laws (Table A.1).
Dye-sensitized solar cells under ambient light powering machine learning: towards autonomous smart sensors for the internet of things The field of photovoltaics gives the opportunity to make our buildings ''smart'' and our portable devices "independent", provided effective energy sources can be developed for use in ambient indoor conditions. To address this important issue, ambient-light photovoltaic cells were developed to power autonomous Internet of Things (IoT) devices capable of machine learning, allowing the on-device implementation of artificial intelligence. Through a novel co-sensitization strategy, we tailored dye-sensitized photovoltaic cells based on a copper(II/I) electrolyte for the generation of power under ambient lighting with an unprecedented conversion efficiency (34%, 103 μW cm⁻² at 1000 lux; 32.7%, 50 μW cm⁻² at 500 lux; and 31.4%, 19 μW cm⁻² at 200 lux from a fluorescent lamp). A small array of DSCs with a joint active area of 16 cm² was then used to power machine learning on wireless nodes. The collection of 0.947 mJ or 2.72 × 10¹⁵ photons is needed to compute one inference of a pre-trained artificial neural network for MNIST image classification in the employed setup. The inference accuracy of the network exceeded 90% for standard test images and 80% using camera-acquired printed MNIST digits. Quantization of the neural network significantly reduced memory requirements with a less than 0.1% loss in accuracy compared to a full-precision network, making machine-learning inferences on low-power microcontrollers possible. The 152 J or 4.41 × 10²⁰ photons required for training and verification of an artificial neural network were harvested with a 64 cm² photovoltaic area in less than 24 hours under 1000 lux illumination. Ambient-light harvesters provide a new generation of self-powered and "smart" IoT devices powered through an energy source that is largely untapped. Introduction From conservation efforts and cleantech to tracking of environmental conditions and reductions in energy usage, every imaginable facet in the quest to reduce our carbon footprint is being explored anew through the IoT (Internet of Things). The IoT, as world-spanning networks of physical devices connected to the internet, marks an ever-growing field of technology. Such networks of autonomous smart sensing devices are poised to advance the exchange of information in smart homes, offices, cities, and factories [1-4]. It is being argued that many aspects of our life will be mediated via 75 billion IoT devices by 2025, of which the majority will reside indoors. They will collect, communicate and process real-time data to optimize services and manufacturing processes, as well as to manage resources to reduce our energy consumption [5,6]. Most importantly for broad implementation, such IoT devices have to become autonomous, which requires a local power source with low or even zero maintenance. Therefore, it is crucial to find an energy source that yields high efficiencies in this environment. In outdoor photovoltaics, a significant portion of the sun's spectrum is found in the red region of the visible light and at near-infrared wavelengths, which suits the strong spectral response of crystalline silicon or GaAs-based solar cells in this wavelength domain. On the contrary, the largest part of indoor illumination spectra, most commonly originating from fluorescent lamps, is found in the visible range between 400 and 650 nm.
In this spectral region, diffuse ambient light provides universally available energy, which otherwise remains unused [7-14]. Photovoltaic technologies based on amorphous silicon (a-Si) [15-17], organic photovoltaics (OPV) [9,18,19], and dye-sensitized cells (DSC) [20-22] have shown sufficient energy conversion in this region. DSCs are well known for their high performance in ambient light. In 2017, Freitag et al. introduced a new dye-sensitized solar cell design with Cu(II/I)(tmby)₂ (tmby = 4,4′,6,6′-tetramethyl-2,2′-bipyridine) as a redox relay, capable of successfully regenerating dyes at only 0.1 eV overpotential. Strikingly, under 1000 lux indoor illumination their solar-to-electrical power conversion efficiency was found to be 28.9%, outperforming conventional silicon and even GaAs-based photovoltaics in ambient conditions and thus paving the path to applications in IoT devices [20,23]. To enable large-area and sustainable production, the liquid electrolyte in DSCs needs to be replaced by a solid charge-transport material; however, currently used organic hole-transport materials (such as spiro-MeOTAD) are limited in conductivity, stability and tunability [24]. In contrast, copper coordination-complex-based hole-transport materials (HTMs) demonstrated a new concept for solid-state DSCs (ssDSCs) with a stable and record-breaking solar cell efficiency of 11.7% [25]. Considering the co-sensitization of dyes as a strategy to shape the TiO₂/dye/electrolyte interface, rather than the traditional approach of panchromatic extension of the spectral response [26], we designed DSCs that maintain a high photovoltage specifically under ambient light. Unfavourable electron back-transfer from the photoanode to the Cu(II/I)(tmby)₂ electrolyte is suppressed and, as a result, we recorded a photovoltage of 910 mV, translating into a PCE of 34.0%, 32.7% and 31.4% under 1000, 500 and 200 lux of fluorescent light, respectively. Such photovoltaic conversion efficiencies deem DSCs the power sources of choice for IoT devices and wireless network sensors in ambient environments. IoT devices equipped with an array of these photovoltaic cells and a small energy buffer operate autonomously and therefore do not require long-term maintenance, such as battery replacements [27]. Further, the use of light-driven, autonomous devices leads to a paradigm shift in energy usage: unlike battery-supported systems, which contribute to the 10 billion dry-cell batteries produced annually, all surrounding energy can be harvested and used to the maximum of its availability [28]. Implementing artificial intelligence directly on-device benefits such IoT sensor networks to a large extent (Fig. 1). IoT devices with pre-trained artificial neural networks (ANN) can directly infer or classify information about their surroundings, rather than communicating raw information through wireless networks. Reducing the overall communication in the network is beneficial, especially for heavy computational tasks such as advanced image recognition [29-32]. As an additional advantage, on-device machine learning enables IoT devices to adapt to changing environments. In particular, they can self-optimize their energy consumption, performing demanding computations when ambient light is strongest and sleeping adaptively at other times [33-35]. Therefore, the combination of machine learning, environmental sensing, and photovoltaic cells as power sources leads to beneficial synergies.
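As a back-of-envelope check of the energy-to-photon conversion quoted in the abstract: assuming a mean visible photon wavelength of roughly 570 nm (an assumption, since the lamp spectrum spans 400-650 nm), 0.947 mJ corresponds to about 2.7 × 10¹⁵ photons.

```python
from scipy.constants import h, c  # Planck constant, speed of light

def photons_from_energy(energy_j, wavelength_m):
    """Number of photons carrying a given energy at one mean wavelength."""
    return energy_j / (h * c / wavelength_m)

# One MNIST inference reportedly needs 0.947 mJ; at ~570 nm this gives
# ~2.72e15 photons, consistent with the value quoted in the abstract.
print(f"{photons_from_energy(0.947e-3, 570e-9):.2e}")
```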
While the concept of machine learning on autonomous light-powered IoT nodes has been discussed broadly [27,36,37], complete pilot implementations are yet to be reported. We demonstrate that our photovoltaic cells provide sufficient power from ambient light to an IoT node capable of sensing and communicating data within a wireless network, even when experiencing longer periods of darkness and hence no available energy. The photovoltaic cells were then used as a power source to train an artificial neural network on an IoT device and to use said neural network to infer information. Such self-powered and smart IoT devices employing machine learning are set to define technology for the next decades, based on distributed energy harvesters as power sources. Solar cell fabrication Generally, the fabrication of solar cells followed the procedures described in our previous reports [40,41]. On cleaned (RBS solution, water, ethanol, UV-ozone) Nippon sheet glass (Pilkington, St. Helens, UK), 10 Ω sheet resistance, a dense TiO₂ layer was deposited via spray pyrolysis at 450 °C from a 0.2 M titanium tetraisopropoxide, 2 M acetylacetone solution in isopropanol. Subsequently, 0.25 cm² (0.5 cm × 0.5 cm), 3.2 cm² (4 cm × 0.8 cm) or 8 cm² (8 cm × 1 cm) TiO₂ photoanodes were screen-printed (Seritec Services SA, Corseaux, Switzerland) from DSL 30 NRD-T (Dyesol/GreatCellSolar, Queanbeyan, Australia) colloidal (30 nm) TiO₂ paste (4 μm). After brief drying at 120 °C, a scattering layer (Dyesol/GreatCellSolar WER2-0, 400 nm) was screen-printed on top of the mesoporous film (4 μm), followed by gradual heating towards a 30 minute sintering step at 450 °C. The substrates were post-treated with a 13 mM aqueous TiCl₄ solution for 30 min at 70 °C and then sintered again at 450 °C for 30 min. After cooling, the titania films were immersed in dye solutions for 16 h, which were prepared as reported in the literature [42,43]: 0.1 mM XY1 with 1 mM chenodeoxycholic acid in chloroform/ethanol 3:7 (similarly for XY1b); 0.5 mM L1 in acetonitrile; 0.1 mM D35 in acetonitrile:tert-butanol; 0.1 mM Y123 with 1 mM chenodeoxycholic acid in acetonitrile:tert-butanol. The sensitizer solutions for XY1:D35 and XY1b:Y123 were mixed according to literature procedures [20,21]. The mixing ratios for the sensitizer solutions of XY1:L1 were studied according to Table S2.† PEDOT counter electrodes were manufactured via electro-polymerization of 3,4-ethylenedioxythiophene from a 0.01 mM aqueous solution with 0.1 M sodium dodecyl sulphate, as previously studied in our laboratory [44]. The redox electrolyte solutions for liquid DSCs were prepared with 0.2 M Cu(tmby)₂TFSI and 0.04 M Cu(tmby)₂(TFSI)₂, 0.1 M lithium bis(trifluoromethanesulfonyl)imide and 0.6 M 4-tert-butylpyridine in acetonitrile. For photovoltaic cells powering IoT devices, propionitrile served as the electrolyte solvent. Cells were assembled using ThreeBond (Düsseldorf, Germany) 3035B UV glue and cured with a CS2010 UV source (Thorlabs, Newton, NJ, USA). The electrolyte was vacuum-injected through a hole in the counter electrode, which was then sealed with a thermoplastic film and a glass cover slip. Solid-state DSCs were generally fabricated in a similar 'sandwich' layout. After electrolyte injection, cells were left to dry in ambient atmosphere for 72-96 hours. Devices were then sealed as described above before characterization.
Solar cell characterization
Current-voltage measurements were carried out in ambient air under AM 1.5G illumination using a self-calibrating Sinus-70 solar simulator (Wavelabs, Leipzig, Germany). An X200 source meter (Ossila, Sheffield, UK) was used to assess solar cell performance (scan speed 100 mV s⁻¹). A mask was employed to confine the active solar cell area to 0.16 cm². Ambient light characterization was carried out with a Warm White 930 18 W fluorescent tube (OSRAM, Munich, Germany), and a PGSTAT 100 potentiostat (Metrohm Autolab, Utrecht, The Netherlands) was utilized to record the current-voltage characteristics. The lamp spectrum is illustrated in the ESI, Fig. S6.† The stabilized illumination intensity was calibrated with a commercial lux meter (Clas Ohlson, Insjön, Sweden) before the measurements. The values of the illumination intensity were cross-checked with lux meters from different manufacturers. The entire active photovoltaic area of the devices was used during indoor characterization to mimic diffuse light conditions.

Incident photon-to-current conversion efficiency (IPCE)
IPCE spectra were recorded with an ASB-XE-175 xenon light source (10 mW cm⁻²) (Spectral Products, Putnam, CT, USA) and a CM110 monochromator (Spectral Products, Putnam, CT, USA). The photocurrent was measured with a U6 digital acquisition board (LabJack, Lakewood, CO, USA). The setup was calibrated with a certified silicon reference cell (Fraunhofer ISE, Munich, Germany). Photocurrents were integrated based on the spectral distribution of sunlight (AM 1.5G).45

Electron lifetime measurements
Electron lifetimes were investigated with a 1 W white LED (Luxeon Star, Lethbridge, Canada). Kinetics in the solar cell were probed by applying square-wave modulations to the light intensity. The solar cell response was tracked by a digital acquisition board (National Instruments, Austin, TX, USA) and fitted with first-order kinetic models.

Photoinduced absorption spectroscopy (PIA)
PIA spectra were recorded using square-wave-modulated blue light (1 W, 460 nm) (Luxeon Star, Lethbridge, Canada) for excitation, while a white light (20 W tungsten-halogen) was used as the probe, focused onto an SP-150 monochromator (Acton Research Corp., Birmingham, AL, USA) with a UV-enhanced Si photodiode. At the sample location, the pump and probe light intensities were estimated at about 80 W m⁻² and 100 W m⁻², respectively. The sample response was assessed with an SR570 current amplifier and an SR830 lock-in amplifier (Stanford Research Systems, Reamwood, CA, USA).

Transient absorption spectroscopy (TAS)
TAS spectra were recorded using a frequency-tripled Q-switched Nd-YAG laser as the pump and a xenon arc lamp (continuous wave) as the probe light source. The laser system was set to a 520 nm excitation wavelength with an S12 Quanta-Ray optical parametric oscillator (Spectra Physics, Santa Clara, CA, USA) to provide 1 mJ, 13 ns pulses at an operating frequency of 10 Hz. The sample was positioned at an angle of 45° between the light sources, yielding a 0.35 cm² cross-sectional active area. The sample response was analysed with an L920 detection unit (Edinburgh Instruments, Livingston, United Kingdom) containing a monochromator, an R928 photomultiplier and a TDS 3052B oscilloscope (Tektronix, Beaverton, OR, USA).

Raman spectroscopy
Raman spectra were collected using an InVia Renishaw Raman spectrometer in confocal mode with a 50× objective, a frequency-doubled Nd:YAG laser operating at 532 nm, and a Rayleigh line filter cutting 80 cm⁻¹ into the Stokes part of the spectra. A 2400 lines mm⁻¹ grating was used, and the 520.5 cm⁻¹ line from Si served as a calibration, giving a resolution of 1 cm⁻¹. Raman spectra were recorded at different spots to confirm material homogeneity. Similar results were obtained upon repetition of the measurements with varying laser intensities.
MNIST training
A Raspberry Pi Zero was used to benchmark the training of the machine learning. The training script was executed automatically after booting the system, and the results were displayed on an e-ink display. Four DGHQ 5.5 V 5 F supercapacitors with a total capacitance of 20 F were charged, and the Raspberry Pi Zero was then powered from the supercapacitors using a commercial DC-DC boost converter with a standard USB cable.

Wireless sensor node
An energy harvesting circuit using a diode, an AVX 6.0 V 0.47 F supercapacitor and a physical switch was used to power an ATmega328P microcontroller within its operational voltage range. Arduino was used as the development platform for the availability of reference implementations and libraries. The microcontroller was programmed to run at 8 MHz to allow for low-voltage operation. An nRF24L01+ wireless transceiver was used for its low power requirements. No voltage regulator was used, in order to limit energy losses from the microcontroller and the leakage current of the supercapacitor (Fig. S10A†). The wireless receiver side ran continuously, deserializing and logging the incoming data packages into a database for visualization and data analysis. On the transmitter side, adaptive sleep was implemented using a PID loop with the internal voltage of the microcontroller as the controlled quantity; 2500 mV was selected as the set point. Values for the proportional (Kp), integral (Ki) and derivative (Kd) terms were evaluated experimentally and were the same for all experiments. Intermittent sleep intervals were multiples of 10 seconds, with a maximum of 240 sleep cycles (i.e. 40 minutes).

Benchmarking structure
All benchmarks used the same code structure, in which the current-voltage profile of the microcontroller is acquired to determine the number of its sleep cycles. The workload was always executed before serializing the data package and transmitting it to the receiver. The pseudocode running on the energy harvesting circuits is outlined in Fig. S10B.†

Heartbeat benchmark
The heartbeat benchmark sent a wireless communication package to the receiver, followed by 250 ms of sleep to limit the number of communication packages.

Dhrystone benchmark
The Dhrystone benchmark was based on version 2.1 of the original Dhrystone code, with further modifications to allow execution on the ATmega328P. The result of the Dhrystone benchmark was calculated based on the time the microcontroller spent asleep, resulting in an average MIPS (million instructions per second) rating. While the workload was always executed at the configured speed of 8 MHz, longer sleep times lead to lower average performance values.

MNIST inference benchmark
The MNIST inference used a pre-trained network to infer the result of a given image. The network was pre-trained on a PC and compiled into the source code. An MNIST image was requested during the startup of the wireless node. The computation of the network was executed 100 times to be able to calculate the average required energy and number of photons per inference.

MNIST robustness
To verify the robustness of the neural network, a 16 × 10 sample of MNIST digits was printed on paper using a laser printer. Images of the printed MNIST digits were then manually acquired using a USB camera. A simple imaging pipeline was applied to the images before sending the data to the Arduino for inference: color conversion from RGB to grayscale, resizing to 14 × 14 pixels using nearest neighbor interpolation, as well as edge-filtering of high values using a manually determined threshold. 127 out of 160 images were predicted correctly, yielding an inference accuracy of 79.4%.
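A minimal sketch of such a pre-processing pipeline is shown below, assuming OpenCV and NumPy are available; the threshold value is a hypothetical stand-in for the manually determined one:

```python
import cv2
import numpy as np

def preprocess(path, threshold=180):        # threshold: hypothetical value
    img = cv2.imread(path)                  # BGR frame from the USB camera
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)   # color -> grayscale
    small = cv2.resize(gray, (14, 14), interpolation=cv2.INTER_NEAREST)
    small[small > threshold] = 0            # filter out high (background) values
    # Flatten to 196 inputs, matching the network's 196-neuron input layer.
    return (small.astype(np.float32) / 255.0).ravel()

# x = preprocess("digit.png")  # 196-element vector sent to the node
```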
Code availability
The reported accuracies of the machine learning algorithms can be reproduced using the source code available from GitHub: https://github.com/freitaglab/LightToInformation. Code46 and models47 have been published on Zenodo.

Efficient power generation under ambient light
Co-sensitization of dyes to extend the photoresponse of DSCs has been studied intensively throughout the literature (Fig. 2A). High solar cell performances are commonly achieved by spectrally combining the broad photocurrent collection of small-transition-energy dyes with the large photovoltage generated by dyes with a large transition energy. Towards ambient light conversion, however, there are several other factors to consider. First, the spectral response of the DSC should be judiciously tuned to the source of ambient lighting rather than aiming for a sole broadening of the absorption domain.20 Secondly, at low light intensities, the suppression of recombination processes plays a crucial role in DSC performance. To avoid undesired back-transfer of electrons after their injection into the TiO2 conduction band, the adsorbed dyes need to protect the TiO2 surface from electronic interaction with the electrolyte.48 In particular, copper coordination complexes are known to show high recombination rates with electrons in the FTO and TiO2 in comparison to their cobalt-based counterparts. Strikingly, the photovoltage of the co-sensitized DSCs largely exceeded the photovoltage generated by either dye alone. Devices based on sensitizer XY1 reached 1000 mV, whereas the yellow L1 dye, despite the larger transition energy E_0-0 of 2.64 eV, only generated a V_OC of 910 mV. In the case of a single sensitizer, the oxidized species of the redox mediator, here Cu(II)(tmby)2, can approach uncovered spots on the TiO2 and FTO surfaces and cause electron recombination. The two sensitizers used in this study complement each other sterically in terms of TiO2 surface coverage due to the large difference in molecule size (Fig. S1†). The much smaller L1 dye molecules can occupy the surface area between the larger XY1 molecules. As a result, a denser monolayer is formed, which passivates the surfaces of FTO and TiO2. Consequently, electron back-transfer from the TiO2 conduction band or FTO surface to the redox mediator is suppressed (Table 1). Long electron lifetimes across the TiO2/XY1:L1/Cu(II/I)(tmby)2 interface further confirm suppressed recombination in the co-sensitized DSCs (Fig. 3 and S4†).48,49 Due to the large electronic transition energy of the L1 dye, a larger quantity of high-energy electrons is injected into the TiO2 conduction band, thus raising the TiO2 Fermi energy. As a consequence, the open-circuit voltage of the cell increased to 1080 mV.
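The link between suppressed back-transfer and a higher photovoltage can be illustrated with the textbook ideal-diode relation; this model is our illustration, not an analysis from the paper, and the recombination current densities below are hypothetical:

```python
import numpy as np

k_B, q, T = 1.380649e-23, 1.602176634e-19, 298.0
V_T = k_B * T / q  # thermal voltage, about 25.7 mV at room temperature

def v_oc(j_sc, j_0, n=1.0):
    # Ideal-diode open-circuit voltage: V_oc = n * V_T * ln(J_sc / J_0 + 1)
    return n * V_T * np.log(j_sc / j_0 + 1.0)

j_sc = 147e-6  # A cm^-2, the photocurrent density reported at 1000 lux
gain = v_oc(j_sc, 1e-15) - v_oc(j_sc, 1e-13)  # hypothetical 100x lower J_0
print(f"V_oc gain from 100x lower recombination: {1e3 * gain:.0f} mV")  # ~118 mV
```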
Lowering the illumination to 10% sunlight causes a small drop in the V_OC of XY1:L1-sensitized solar cells to 980 mV, while leading to an increase in PCE of up to 13.7% (Fig. 2B and Table S1†). Complementary light absorption by the two sensitizers XY1 and L1 allows for more effective photon collection and results in a greater number of electrons in the TiO2 conduction band. In the spectral region around 380-430 nm, DSCs employing a sole red sensitizer suffer from competitive light absorption by the orange Cu(II/I)(tmby)2 electrolyte, which infiltrates the mesoporous dye/TiO2 scaffold. The yellow dye L1 complements the absorption of the red/purple dye XY1 in the green-to-blue region around 400 nm. In this wavelength domain, incident-photon-to-current-conversion efficiency (IPCE) spectra of devices solely sensitized with the red dye XY1 indicate reduced photocurrent collection (Fig. 2C). The L1 dye (λ_0-0 of 404 nm) adds optical density around 400 nm and counters the competitive light absorption by the Cu(II/I)(tmby)2 electrolyte. As a result, a larger number of photons is absorbed, and DSCs with XY1:L1 as co-sensitizers exhibit a photon conversion efficiency above 80% over a broad spectral range from 350 to 630 nm. In addition, we found that both XY1 and L1 sensitizers are rapidly regenerated by the Cu(II/I)(tmby)2 electrolyte (Fig. S5†). In our study, the combination of XY1 and L1 dyes outperformed the previously studied prominent co-sensitizers XY1:D35 (ref. 20) and XY1b:Y123 (ref. 21) (11.0% and 10.9% power conversion efficiency, respectively; Fig. S2, Tables S1 and S3†). The performance of the photovoltaic devices was tested under ambient lighting with an OSRAM 930 18 W fluorescent tube. Due to the close matching of the sensitizer composition to the lamp spectrum (Fig. S6 and Table S2†), XY1:L1 co-sensitized cells maintained a V_OC of 910 mV and collected 147 μA cm⁻² of photocurrent density with a fill factor of 0.77 at 1000 lux of illumination (Fig. 2D, S2C, S3A,† Tables 1, S3 and S4†). The cells generated 103.1 μW cm⁻², corresponding to 34.0% power conversion efficiency, which, to the best of our knowledge, ranks amongst the highest in the literature and atop DSC reports. The 97.0 μW cm⁻² steady-state power output of the cells under load potential was identified to translate to 32.0% conversion efficiency (Fig. S11A†). At lower light intensities of 500 and 200 lux, the cells converted 49.5 and 19.0 μW cm⁻² at 32.7% and 31.4% power conversion efficiency, respectively (Fig. S3B† and Table 1).
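As a quick cross-check, the 1000 lux operating point can be recomputed from the reported J_sc, V_oc and fill factor; note that the incident-power figure below is inferred from the reported 34.0% efficiency rather than quoted from the paper:

```python
j_sc = 147e-6   # A cm^-2, short-circuit current density at 1000 lux
v_oc = 0.910    # V, open-circuit voltage
ff = 0.77       # fill factor

p_out = j_sc * v_oc * ff                          # maximum power density
print(f"P_out = {p_out * 1e6:.1f} uW/cm2")        # ~103 uW/cm2, as reported

p_in = p_out / 0.340                              # implied incident power
print(f"implied P_in = {p_in * 1e6:.0f} uW/cm2")  # ~303 uW/cm2 at 1000 lux
```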
Mathews et al. estimated that between 0.1 and 10 mW of power are required to operate common components of IoT devices, such as wireless data transfer.14 To provide such amounts of power, efficient DSCs need to be manufactured beyond laboratory scale. As shown in Fig. S7A,† we assembled solar cells with active areas of 3.2 cm² as well as 8 cm². No significant performance drop was observed when characterizing the larger cells under 1000 lux fluorescent light: the photovoltage remained above 900 mV even for the 8 cm² cells, with only a slight decrease in photocurrent collection (Fig. S7B and Table S5†). The 3.2 cm² cell reached a power output of 332 μW at 33.2% efficiency, while the 8 cm² cell converted a total of 740 μW at 30.6% power conversion efficiency. The DSCs showed stable power outputs beyond evaporation of the electrolyte. As for the Cu(II/I)(tmby)2 electrolyte, its gradual drying in ambient atmosphere led to the formation of a solid hole-transporting material (Fig. S8†).25,50-52

We measured Raman spectroscopy directly inside the 'sandwich' solar cell to investigate the Cu(II/I)(tmby)2 hole transport material (Fig. 4). A broad molecular vibration band, characteristic of the formation of an amorphous state, arises around 1100 cm⁻¹, unknown to either Cu(II/I)(tmby)2, dye-sensitized TiO2 or the TFSI counterions. The Raman spectra further show a depletion of the Cu(II) species in the solidified material and point towards an accumulation of the reduced Cu(I) species. Previous reports found that DSCs based on the Cu(II/I)(tmby)2 hole conductor show an increase in photovoltaic performance upon drying of the electrolyte; their devices maintained a power conversion efficiency above the initially recorded value after 40 days of unsealed ambient storage. Further, they noticed only a minor drop in power output after 200 hours of constant illumination.51 Zhang et al. further confirmed the durability of 'Zombie' solid-state DSCs during their 1000 hour stability testing.25 We monitored the evolution in device performance of the XY1:L1-sensitized solar cells and found that, in agreement with previous reports, the formation of a solid-state hole-conducting material leads to an increase in photocurrent, enhancing the total photoconversion efficiency of the cells under simulated sunlight (Fig. S9A and Table S6†).50 Partially incomplete penetration of the porous TiO2 layer by the amorphous Cu(II/I)(tmby)2 hole transport material leads to a slight drop in photovoltage. Nonetheless, the devices maintained a power conversion efficiency of 30.0% under 1000 lux fluorescent light (Fig. S9B†) after evaporation of the electrolyte solvent, indicating high robustness for long-term use, irrespective of sealing problems. In addition to evaluating the evolution of device performance, we carried out a twelve-day case study with our DSCs powering a wireless IoT device exposed to alternating illumination and dark intervals. We observed no drop in the power supplied by the DSC array; the reader is referred to the ensuing discussion of Fig. 6.

Fig. 4 Raman spectra of Cu(II/I)(tmby)2TFSI(2) and MgTFSI2 (powders), DSCs sensitized with the XY1:L1 sensitizer combination, as well as (sensitized) DSCs containing the amorphous Cu(II/I)(tmby)2TFSI(2) hole transport material. The two latter spectra were recorded inside the 'sandwich' DSC. Spectra are offset for clarity.

Artificial intelligence on autonomous IoT devices
Artificial intelligence (AI) has found entry into many research fields, from speech and image recognition, robotics, and autonomous cars to medical diagnosis, biology, and materials science.56,57 Machine learning (ML) was established in 1959 as a subfield of AI research.58,59 By utilizing methods from statistics and probability theory, the machine learns rules and strategies directly from data rather than having them imposed by a programmer. A substantial increase in available computational power has accelerated the advance of ML in recent years. Large amounts of data can be easily collected and exchanged from many different sources over the internet. The IoT makes use of this infrastructure and gathers data from a variety of low-cost sensing devices. Data processing and ML are usually executed on large servers, trying to achieve smart behaviour of the overall system. However, running a server for data acquisition and learning in many cases counteracts the energy savings achieved with said smart behaviour.60 Networks of IoT devices strongly benefit from the possibility to perform ML directly on the device.
With a pre-trained model, the device can predict a global state solely from its locally gathered data and therefore reduce the need for communication within the network. Moreover, ML provides the possibility to predict quantities of interest using only a small and easily accessible number of parameters. Directly accessing such quantities would require much more complicated systems, if they could be accessed at all. Furthermore, ML can help to reduce the number of devices needed to identify the global state of a system. Therefore, the combination of environmental sensing and inference through machine learning is ideally suited to adapt to the natural constraints of a fluctuating power source like the presented solar cell. However, the microcontrollers typically used in IoT nodes have very limited memory and processing power, constraining the possibility of training ML models directly on single IoT nodes. For an adaptive, self-learning IoT network it is thus necessary to provide a base station with sufficient computational power.

Table 2 Investigated neural network (NN) structures with the corresponding weight and bias count, the size required to store the network, and the achieved accuracy. The quantized two-layer NN was used on the IoT nodes. The maximum difference in accuracy between the quantized NN and its full-precision counterpart was 0.07% and was observed in both directions, better and worse than the full-precision network.

Here, we assessed the possibility to power both the IoT nodes and a base station solely by harvested ambient light. An array of eight serial 8 cm² photovoltaic cells (with a total of 64 cm²) illuminated with 1000 lux fluorescent light was used to power a Raspberry Pi Zero as a base station, using supercapacitors with a total capacitance of 20 F as an energy buffer. We used TensorFlow to design and train artificial neural networks.61 As a benchmark example, we implemented a neural network to categorize handwritten digits from the MNIST dataset.62 Image data was pre-processed and reduced in size to ensure that the trained network suits the limited memory capabilities of the microcontrollers in the sensor network (i.e. an ATmega328P). The neural network consisted of an input layer with 196 neurons, a densely connected hidden layer with 32 neurons using a rectified linear unit activation, and a densely connected layer with 10 neurons and softmax activation as the output layer (Fig. 5). One training epoch with 60 000 MNIST images and one verification run with 10 000 images resulted in an inference accuracy exceeding 90% (Table 2). The required 152 J were, in our example, charged within less than 24 hours at 1000 lux illumination, equalling 4.41 × 10²⁰ photons or 7.32 × 10⁻⁴ einstein per training epoch. After training the neural network, the obtained weights and biases were post-processed and deployed to the remote devices in the sensor network. Using 4-byte floating-point numbers for the 6634 weights resulted in 26 kB of memory required to store the neural network. Converting the floating-point numbers to 1-byte fixed-point numbers reduced the size by a factor of four, to the detriment of precision for each weight.63 Nonetheless, when evaluating the predictive power of the quantized network, the loss in accuracy was found to be no larger than 0.1% with respect to the full-precision network (Table 2). The accuracy of the network on camera-acquired printed MNIST digits was 80%.
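A minimal TensorFlow sketch of this 196-32-10 network and the subsequent 1-byte quantization of its weights is given below; the layer sizes and activations follow the text, while the optimizer and the exact down-sampling step are assumptions:

```python
import numpy as np
import tensorflow as tf

# 196 inputs -> 32 ReLU neurons -> 10 softmax outputs, as described above.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(32, activation="relu", input_shape=(196,)),
    tf.keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam",  # assumed optimizer
              loss="sparse_categorical_crossentropy", metrics=["accuracy"])

(x, y), (xt, yt) = tf.keras.datasets.mnist.load_data()
# Reduce the 28x28 images to 14x14 (196 inputs) to fit the microcontroller.
x = x[:, ::2, ::2].reshape(-1, 196) / 255.0
xt = xt[:, ::2, ::2].reshape(-1, 196) / 255.0
model.fit(x, y, epochs=1)      # one training epoch with 60 000 images
model.evaluate(xt, yt)         # verification run with 10 000 images

# Quantize each weight tensor to 1-byte fixed point (4x smaller than float32).
def quantize(w):
    scale = float(np.abs(w).max()) / 127 or 1.0
    return np.round(w / scale).astype(np.int8), scale

quantized = [quantize(w) for w in model.get_weights()]  # deployed to the nodes
```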
The quantization of neural networks is a crucial step to make ML inference on low-power microcontrollers possible.

Powering IoT nodes with dye-sensitized light harvesters
We consequently demonstrate a prototype of a fully self-powered intelligent IoT node, built around an ATmega328P microcontroller, that infers information with a pre-trained artificial neural network. Five serial 3.2 cm² XY1:L1-sensitized cells (total 16 cm²) illuminated with 1000 lux fluorescent light provided the device with energy. We equipped the board with an AVX 6.0 V 0.47 F supercapacitor to serve as an energy buffer (Fig. S10†) and a rectifying diode to prevent the capacitor from discharging into the cells during dark intervals. Fig. S11B† shows an example of the charging curve of a serial array of photovoltaic cells (total 25.6 cm²) charging an AVX 5.5 V 1.5 F supercapacitor. The microcontroller was configured at 8 MHz, which allows for an operating voltage of 1.8-5.5 V. Wireless communication was established through a low-power nRF24L01+ transceiver. For detailed information the reader is referred to Fig. S10.† All benchmarks executed a workload inside a PID control loop using the internal microcontroller voltage (which is equivalent to the capacitor voltage) as the set point, determining the intermittent sleep intervals. In addition to MNIST machine learning inference, three benchmarks were executed as core workloads: heartbeat for continuous wireless communication, Dhrystone MIPS for assessment of computational performance, and temperature sensing for day-night testing.62,65 All benchmarks were executed on fully untethered harvesters, and the results were wirelessly transmitted to a power-connected base station. The transmitted 12-byte serialized data package contained information about the package length, a package identifier, the internal voltage, sensor data, a message count, and the number of sleep cycles. The heartbeat benchmark contained no sensor data and executed no further workload. Data was continuously sent at 282 ms per data package, of which 250 ms were an intentional sleep interval, giving an effective execution time of 32 ms. The internal voltage increased to an equilibrium between energy harvest and consumption, determined by the increased power consumption of the microcontroller at higher operating voltages, slower energy charging of the buffer, and general leakage. The Dhrystone MIPS benchmark was used in an adapted version in order to be executable on the microcontroller. The average VAX MIPS were calculated on-chip and included the calculated sleep time. An average computational performance of 0.413 VAX MIPS was recorded over a period of 24 hours. During that period, a total of 19 hours 52 minutes of sleep was logged, leading to an effective 4 hours 8 minutes, or 17%, of active runtime of the microcontroller under full CPU load. Machine learning capabilities were benchmarked using a pre-trained two-layer network to categorize images of handwritten digits from the MNIST dataset, which were received wirelessly. Inferences were averaged over 100 computations before transmission. The computation of each inference consumed 0.947 mJ of energy in our pilot experiment, translating to 2.72 × 10¹⁵ photons or 4.51 × 10⁻⁹ einstein per inference, and poses an important benchmark for future approaches. The 16 cm² of photovoltaic area provided sufficient energy for one inference within just 581 ms of 1000 lux illumination.
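A back-of-the-envelope cross-check of the photons-per-inference figure is given below; the effective photon wavelength of the fluorescent source is an assumption implied by the reported numbers, not a value stated in the paper:

```python
h, c, N_A = 6.626e-34, 2.998e8, 6.022e23   # SI constants

E_inf = 0.947e-3            # J per inference (reported)
lam = 570e-9                # m, assumed mean photon wavelength
E_photon = h * c / lam      # ~3.5e-19 J per photon

photons = E_inf / E_photon
print(f"{photons:.2e} photons per inference")   # ~2.7e15, matching the text
print(f"{photons / N_A:.2e} einstein")          # ~4.5e-9 einstein
```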
We extended the benchmark to a simulated day-night indoor environment with 16 hours of 1000 lux illumination and 8 hours of darkness for twelve days, measuring temperature as the workload (Fig. 6). As an initial observation, the operating voltage of the microcontroller exhibited the same pattern of voltage decay during dark periods and voltage rise under illumination for the duration of the entire experiment. As a result, we conclude that the DSCs provide a constant amount of energy and exhibit excellent stability under 1000 lux illumination. On average, the wireless sensor transmitted data every 16 seconds during illumination intervals, well within the range of common battery-driven wireless sensors. During 'night' intervals, the microcontroller used the energy stored in the AVX 6.0 V 0.47 F supercapacitor to ensure data transmission to the base station at intervals of minutes. The microcontroller operated continuously without shutting down, thus removing the requirement to save data to non-volatile memory. While 1000 lux was chosen as a standard premise for this feasibility study, in many cases indoor IoT devices will not be illuminated with more than 200-500 lux (Fig. S3B† and Table 1).14,66 Nonetheless, as demonstrated in this pilot example, operation at an equilibrium between the execution of computational load and intermittent sleep cycles ensures that the IoT device adapts to the available light energy. It is worth noting that, at illumination intensities below 200 lux, the array of photovoltaic cells will slow down certain core computational workloads, such as the energy-consuming training of an artificial neural network. Nonetheless, the large photovoltage of 840 mV at 200 lux allows the photovoltaic cells to provide a voltage within the operating range of many microcontrollers even at such low light intensities. Meanwhile, the harvested photocurrents depend linearly on the intensity of illumination. As a result, the photovoltaic cells will continue to steadily charge the energy buffer.

Conclusions
As a communicating and autonomous system of interconnected devices, a self-powered IoT embodies technological sustainability. An enhanced exchange of light and information in intelligent wireless sensor networks looks poised to transform the implementation of IoT devices. Through the elimination of recombinative electron transfers across the photoactive interface, dye-sensitized solar cells based on the dyes XY1 and L1 maintain over 910 mV of photovoltage under 1000 lux of fluorescent light, translating into 34.0%, 32.7% and 31.4% conversion efficiency at 1000, 500 and 200 lux, respectively. Employing small arrays of photovoltaic cells, core workloads of IoT devices such as sensing and wireless communication were benchmarked. The possibility of inferring information from learned models was tested through the implementation of an artificial neural network: in our experimental setup, 152 J or 4.41 × 10²⁰ photons were required to train and verify an artificial neural network; 0.947 mJ or 2.72 × 10¹⁵ photons were subsequently required to compute one inference.

Author contributions
[...] wrote the manuscript, and all authors contributed to compiling the manuscript.

Conflicts of interest
The authors declare no competing interests.
Low-Profile Wideband Solar-Cell-Integrated Circularly Polarized CubeSat Antenna for the Internet of Space Things
This paper proposes a low-profile wideband circularly polarized (CP) antenna using solar cell patches as radiation elements and a sequentially rotated feeding network for CubeSat applications. To realize a wide axial ratio (AR) bandwidth with a compact size, a sequentially rotated feeding network was designed by modifying a quadrature hybrid coupler and a rat-race coupler so that the phase difference changes little even when the frequency changes. A wideband CP patch array antenna was designed by combining a C-shaped slot-coupled solar cell patch with this novel feeding network. The overall size of the proposed CP CubeSat antenna is 100 × 100 × 7.2 mm³ (0.83 λo × 0.83 λo × 0.06 λo at 2.5 GHz). Solar cells occupy 79% of the antenna area, enabling efficient energy harvesting. The −10 dB impedance bandwidth is 1.98–3.0 GHz, which is a fractional bandwidth of approximately 41.0%. The 3-dB AR and 3-dB gain bandwidths are 1.98–3.0 GHz (41.0%) and 1.82–2.98 GHz (46.6%), respectively. The proposed CP solar patch array antenna demonstrates a constant radiation pattern within the −10 dB impedance bandwidth. The proposed CubeSat antenna is suitable for use in an Internet of Space Things (IoST) autonomous communication system.

I. INTRODUCTION
The Internet of Things (IoT) refers to things or people connected through a network so that they can effectively exchange information through an embedded communication system. The IoT is recognized as a key driving force for 5G/6G wireless communication due to its ubiquitous characteristics, which allow it to operate anytime and anywhere, as well as its application-oriented operation, which can connect numerous physical points [1]. It is expected that more than 70 billion devices will be connected by 2025, which poses many challenges to the practical realization of the IoT. Therefore, countries around the world are seeking the evolution of a hyper-connected society based on IoT technology [1]–[3]. In line with this trend, various types of IoT devices are being developed. With the rapid increase of wide-area IoT and short-range IoT, the number of devices connected to the network is expected to increase explosively in the future. Currently, connectivity for IoT solutions is realized through a variety of terrestrial networks, including but not limited to wireless personal area networks and low-power wide-area networks. However, there are still many areas where it is difficult to provide coverage due to financial problems, complex environments, and rugged terrain. To this end, the concept of the Internet of Space, which utilizes Low Earth Orbit satellites as a possible solution, has been proposed. The Internet of Space Things (IoST) is a system that enables mobile communication anywhere in the world using low-orbit satellites located at altitudes of 160 to 2,000 km, as shown in Fig. 1 [4]. The rapidly changing IoT environment makes it difficult to
implement global satellite communication based on existing medium and large satellites, which require a long development schedule and high development costs. Considering the shortcomings of existing medium and large satellites, a new type of small satellite called the CubeSat is actively being developed to implement a new IoST [5], [6]. A CubeSat is a cube-shaped standardized small satellite weighing less than 1.33 kg with dimensions of 10 cm × 10 cm × 10 cm, defined as 1U [7], [8]. A CubeSat can be used alone or in groups of multiple units. CubeSats greatly reduce costs because commercial off-the-shelf components are used extensively, and the development and deployment cycle is very short because standardized devices are implemented. In addition, CubeSat constellations can respond much more actively to satellite disturbances because the number of CubeSats used in orbit is large [7]. A CubeSat is very small, and many components must share limited space. Consequently, the antenna of a CubeSat must be efficiently utilized within a small area. A solar cell is used as the power source for a CubeSat, and many studies on solar-cell-integrated antennas are being conducted to efficiently utilize the restricted surface area of the CubeSat [9]. A solar-cell-integrated antenna using a transparent electrode was proposed in [10]–[12]. An antenna using a transparent electrode, such as indium tin oxide (ITO), has a simple design. However, ITO is expensive and results in low antenna efficiency due to the low conductivity of thin conductors. An antenna structure combining a slot with a solar cell was proposed in [13]–[15]. The structure in which the slots are arranged in the gaps of the solar cell array is also simple in design. However, it is difficult to change the antenna's structure for optimal performance [13]. An antenna with a slot inserted by cutting into the solar cell's structure can be designed in various shapes. Therefore, it is easy to obtain the desired antenna characteristics. However, the output current collected by the solar cell decreases because the area of the solar cell is reduced by the presence of the slot [14]. Antennas using solar cells as metasurfaces have also been proposed [16], [17]. A metasurface antenna can realize high gains and wide bandwidths. However, as the operating frequency increases, the size of the solar cell used as the metasurface unit cell becomes very small. Therefore, connecting each solar cell for direct-current collection is complicated. An antenna using solar cells as a patch, proposed in [18]–[20], is simple in design and can achieve high gains and wide bandwidths. In addition, the efficiency of the solar cell is not reduced, and energy harvesting from the solar cell is easy. However, to design a solar cell patch antenna with a wide bandwidth, the overall size of the antenna must be increased. Circularly polarized (CP) antennas are widely used as satellite antennas because they are less affected by the installation direction of the transmitting and receiving antennas, the multipath effect, and the Faraday effect in the ionosphere [21]–[26]. Much research has investigated the design of CP antennas [27]–[39]. The sequentially rotated feeding network is widely used as a feeding structure for CP antennas because of its small size and simple structure [27]–[33]. However, the conventional sequentially rotated feeding network has low isolation between output ports.
This narrows the axial ratio (AR) bandwidth of the antenna. The phase difference between the output ports changes as the frequency varies, thus changing the antenna's radiation pattern. To realize a wide AR bandwidth and a constant radiation pattern, a sequentially rotated feeding network using a phase circuit, such as a Schiffman phase shifter, has been presented [34]–[38]. The sequentially rotated feeding network using the Schiffman phase shifter displays a small phase difference between the output ports over a wide frequency range, and the AR bandwidth is wide due to the high isolation between output ports. However, the Schiffman phase shifter requires a Wilkinson power divider to improve the isolation, at the expense of power loss. In addition, the Schiffman phase shifter consists of long transmission lines, which makes it very large. This, in turn, limits its application to a small antenna feeding structure. In [39], a sequentially rotated feeding network using a branch-line coupler was put forward. The branch-line coupler has high isolation and little change in the phase difference between output ports over a wide frequency range. In addition, the branch-line coupler can easily be miniaturized by bending the transmission lines of the coupler. Thus, it is easy to use in the antenna's feeding structure. In this paper, we propose a CP antenna with low-profile, high-gain, wideband characteristics using a solar cell patch as the radiating element and a sequentially rotated feeding network implemented with a modified branch-line coupler, for use on a CubeSat. The proposed antenna has a wide bandwidth and little variation in its radiation pattern. In addition, by using a solar cell as the radiating element and placing it on the CubeSat's surface, the efficiency of the solar cell is not reduced, and solar cell energy harvesting is simple because few solar cells are used.

II. WIDEBAND PATCH ANTENNA
An aperture-coupled patch antenna has a wider impedance bandwidth than a microstrip-line-fed patch antenna. Energy harvesting is simple when solar cells are used as a patch because the patch and feeding structure are not directly connected. Due to these advantages, a wideband patch antenna was designed using the aperture-coupled method.

A. C-SHAPE SLOT-COUPLED SOLAR CELL INTEGRATED PATCH ANTENNA
A single patch antenna with low-profile and broadband characteristics was designed first. The substrate used for the antenna design was ROGERS RO4003C (εr = 3.38, tan δ = 0.0027). The thickness of the substrate was 0.508 mm. Fig. 2 shows the structure of the single-patch antenna. The antenna consists of substrates 1 and 2, a reflector to reduce back radiation, and foams 1 and 2 to support the antenna structure. A rectangular silicon solar cell patch was placed on top of substrate 1. Foam 1 is inserted between substrates 1 and 2 to support both substrates. The ground plane and slot are printed on the top side of substrate 2. Instead of the conventional straight, narrow slot, a C-shaped wide slot with good impedance-matching performance was used [40]. The microstrip line and the tuning stub of the antenna are printed on the bottom side of substrate 2. A thin microstrip line with a width W_fa is a transmission line connecting the single patch antenna and the feeding network, and a wide strip line with a width W_st is a tuning stub for the single patch antenna.
The tuning stub with a wide line width has less impedance change over a wide frequency range; thus, a wider impedance bandwidth can be realized [41]. The impedance bandwidth was further widened by generating modes with varying polarization directions through adjusting the positions of the slots and feed lines [42]. The design and analysis of the antenna were performed using the ANSYS High-Frequency Structure Simulator (HFSS), which yielded the design parameters of the single patch antenna. A 2 × 2 CP array antenna using the designed single patch antenna was then implemented, and its characteristics were demonstrated. After arranging the four patch antennas in a given 10 cm × 10 cm space, a phase difference of 90° was set between adjacent antennas to achieve circular polarization. When the array antenna was designed using four single-patch antennas, the characteristics of the single antenna changed, so ANSYS HFSS was used to optimize the array antenna. Fig. 6 depicts the optimized 2 × 2 CP array antenna; its design parameters include W = 100 mm. Fig. 7 shows the characteristics of the array antenna. Fig. 7(a) demonstrates the reflection coefficient of the antenna. The −10 dB impedance bandwidth of the array antenna is 1.89–2.89 GHz, which is a fractional bandwidth of 41.8%; this is wider than that of a single antenna. Fig. 7(b) shows the antenna gain and AR. The single-patch antenna has a problem in which the cross-polarization level increases and the polarization direction changes as the frequency increases. However, the gain and AR are not affected when the single patch antennas are configured as a CP array antenna.

III. SEQUENTIALLY ROTATED FEEDING NETWORK
To realize a CP antenna with a wide AR bandwidth and little change in the radiation pattern even as the frequency changes, a sequentially rotated feeding network consisting of modified branch-line couplers with a small size and wideband characteristics is used.

A. MODIFIED QUADRATURE HYBRID COUPLER
Fig. 8 shows a modified quadrature hybrid coupler designed with 100 Ω input and output ports. By modifying the quadrature hybrid coupler, the area occupied by the transmission lines is greatly reduced [39]. Fig. 8(a) is a diagram of a conventional quadrature hybrid coupler, whose large circuit area limits its use as an antenna-feeding structure. Fig. 8(b) is a diagram of a quadrature hybrid coupler in which the empty space inside the circuit is reduced by folding the transmission line with 0.707·Zo characteristic impedance. Fig. 8(c) is a schematic of the modified quadrature hybrid coupler with reduced spacing, achieved by bending the conventional quadrature hybrid coupler by 90°. The proposed modified quadrature hybrid coupler has a much smaller circuit area than the conventional coupler. Fig. 8(d) illustrates a feeding structure with two input ports and four output ports designed using two modified quadrature hybrid couplers. Load resistors are connected to the isolated port of each coupler. When power with the same phase is supplied to the two input ports, the phases of the output power at output ports 1 through 4 are 0°, 90°, 0°, and 90°, respectively. The design parameters of the modified quadrature hybrid coupler are as follows: L_hc = 19.2 mm, W_c1 = 1.15 mm, W_c2 = 0.75 mm, g_c = 0.25 mm, W_cf = 0.3 mm, and L_c = 3.0 mm.
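As a rough plausibility check on these dimensions, the quarter-wavelength of a microstrip line on this substrate can be estimated as below; the effective permittivity is an assumed value (in practice it depends on the line width and is taken from HFSS or a transmission-line calculator):

```python
import math

c = 2.998e8        # m/s
f = 2.5e9          # Hz, design frequency
eps_eff = 2.6      # assumed effective permittivity for RO4003C (eps_r = 3.38)

lam_g = c / (f * math.sqrt(eps_eff))               # guided wavelength
print(f"lambda_g/4 = {1e3 * lam_g / 4:.1f} mm")    # ~18.6 mm, near L_hc = 19.2 mm

# Branch impedances of a quadrature hybrid between ports of impedance Z0:
Z0 = 100.0
print(f"series branch: {0.707 * Z0:.1f} ohm, shunt branch: {Z0:.0f} ohm")
```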
Fig. 9 demonstrates the characteristics of the couplers described in Fig. 8. Fig. 9(a) shows the reflection coefficient of each coupler. The −10 dB impedance bandwidth of the proposed coupler is slightly wider than that of the conventional coupler, and the impedance matching is very good. Fig. 9(b) depicts the difference in output power between the output ports of the couplers. The frequency range in which the power difference between the output ports of the proposed coupler is less than 2 dB is wider than that of the conventional coupler. Fig. 9(c) shows the phase difference between the output powers of the couplers. The proposed coupler shows little change in phase difference over a wide frequency range. By modifying the coupler, the area the circuit occupies is greatly reduced while the performance remains good.

B. MODIFIED RAT-RACE COUPLER
An additional phase difference of 180° is required to implement circular polarization using the feeding structure in Fig. 8(d). A 180° coupler based on a modified rat-race coupler was designed to implement this additional phase difference. Fig. 10(a) shows a conventional rat-race coupler. The rat-race coupler can be used as a power divider with a phase difference of 180° over a wide frequency range. However, the area occupied by the circuit is very large, making it unsuitable as a feeding structure for miniaturized antennas. Fig. 10(b) displays the structure in which the isolated port is removed from the conventional rat-race coupler. In the rat-race coupler, the power input to port 1 is not transmitted to port 4. Therefore, even if port 4 is removed, the rat-race coupler can be used as a power divider with a phase difference of 180°. In the proposed coupler, port 4 was removed, and the transmission line width was changed so that the coupler has a 50 Ω input port and 100 Ω output ports. The coupler consists of a pair of transmission lines with a characteristic impedance of 100 Ω and a length of λ/4, and a pair of transmission lines with a characteristic impedance of 70.7 Ω and a length of λ/2. Fig. 10(c) is a modified rat-race coupler in which the size of the rat-race coupler depicted in Fig. 10(b) is reduced by folding the transmission line. In the proposed coupler, the circuit area is greatly reduced, to 1/6 of the size of the conventional rat-race coupler. The design parameters of the modified rat-race coupler are as follows: L_r1 = 14.5 mm, L_r2 = 12.0 mm, W_r1 = 0.6 mm, g_r1 = 0.6 mm, g_r2 = 0.9 mm, L_r3 = 6.0 mm, L_r4 = 4.4 mm, L_r5 = 3.05 mm, W_r2 = 0.3 mm, g_r3 = 0.4 mm, and d_r = 2.45 mm. Fig. 11(a) shows the reflection coefficient of the couplers depicted in Fig. 10, and Fig. 11(b) shows the output power difference between the output ports of each coupler. Fig. 11(c) demonstrates the phase difference between the output ports of each coupler. The proposed coupler has a slightly narrower −10 dB impedance bandwidth compared to the conventional rat-race coupler. However, the output power imbalance and the phase-difference deviation between the output ports are smaller than those of the conventional rat-race coupler within the impedance bandwidth.

C. PROPOSED SEQUENTIALLY ROTATED FEEDING NETWORK
By combining the previously designed modified quadrature hybrid coupler and the modified rat-race coupler, a novel sequentially rotated feeding network with one input port and four output ports was designed. Fig. 12 reveals the conventional and proposed sequentially rotated feeding networks.
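The sequential 0°/90°/180°/270° excitation produced by cascading the two couplers can be verified in a few lines (a simple illustration, not the paper's analysis):

```python
# The rat-race splits the input into 0 and 180 degree branches; each branch
# then feeds a quadrature hybrid that adds 0 or 90 degrees to its two outputs.
rat_race = (0, 180)
hybrid = (0, 90)

ports = sorted((r + h) % 360 for r in rat_race for h in hybrid)
print(ports)  # [0, 90, 180, 270] -> sequential rotation for circular polarization
```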
The area occupied by the proposed sequentially rotated feeding network is 19.9 × 19.9 mm², which is smaller than the 21 × 21 mm² area of the conventional sequentially rotated feeding network. Fig. 13 shows the amplitude characteristics of the S-parameters of the conventional and proposed feeding networks. Fig. 13(a) portrays the S-parameters of the conventional feeding network, and Fig. 13(b) shows the S-parameters of the proposed feeding network. The conventional sequentially rotated feeding network has a low reflection coefficient over a very wide frequency range, and the transmission coefficient of each output port is constant. The proposed sequentially rotated feeding network has good performance characteristics within 2.00–3.00 GHz. Fig. 14 shows the phase characteristics of the S-parameters of the conventional and proposed feeding networks. Fig. 14(a) shows the phase difference between the output ports of the conventional feeding network, and Fig. 14(b) demonstrates the phase difference between the output ports of the proposed feeding network. Because the conventional feeding network implements the phase difference through the lengths of the transmission lines, the electrical length of each transmission line changes at frequencies away from the center frequency, thereby changing the phase difference between the output ports. Conversely, because the proposed feeding network generates a constant phase difference over a wide frequency range, the phase difference between the output ports is stable even at frequencies away from the center frequency.

IV. FABRICATION AND MEASUREMENT
We designed the 2 × 2 solar-cell-patch-integrated CP antenna by combining the proposed feeding network with the antenna designed in Section II. Fig. 15 presents the geometry of this antenna. On the top surface of the antenna, there are four solar cell patches and a substrate supporting the solar cells; the performance of the solar cells determines the DC performance of the antenna. Four RF decouplers, each consisting of a pair of inductors, were added for solar cell energy harvesting. One part of the RF decoupler is connected to the bottom contact of the solar cell, and the other part is connected to the grid. To implement DC energy harvesting, a pair of metal wires was connected to the end of the RF decoupler, and an inductor was used as an RF choke to prevent the RF signal from leaking along the wire. The middle substrate has a ground plane, including a C-shaped slot, on top, and the feeding network is printed on the bottom. The reflector is located at the bottom of the antenna. A coaxial cable was used to feed the antenna. The inner conductor of the coaxial cable is connected to the input port of the feeding network, and the outer conductor is connected to the reflector. Metal pins connect the ground plane with the reflector surrounding the feeding network, so the reflector has the same electric potential as the ground plane. The designed antenna was fabricated, and its performance was measured. Fig. 16 shows the fabricated antenna. A Rohde & Schwarz ZVA 67 vector network analyzer was used to measure the antenna reflection coefficient, and the antenna radiation pattern was measured in the MTG anechoic chamber. Fig. 17 displays the simulated and measured antenna characteristics. The measured −10 dB impedance bandwidth of the antenna is 1.98–3.0 GHz, a fractional bandwidth of 41%.
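The reported bandwidth and electrical-size figures follow directly from the band edges and the antenna dimensions:

```python
c = 2.998e8                    # m/s
f_lo, f_hi = 1.98e9, 3.0e9     # measured -10 dB band edges

f_c = (f_lo + f_hi) / 2
print(f"fractional bandwidth = {100 * (f_hi - f_lo) / f_c:.1f} %")   # ~41.0 %

lam_o = c / 2.5e9              # free-space wavelength at 2.5 GHz (~120 mm)
print(f"100 mm side = {0.100 / lam_o:.2f} lambda_o")     # ~0.83
print(f"7.2 mm height = {0.0072 / lam_o:.2f} lambda_o")  # ~0.06
```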
The 3-dB gain bandwidth and AR bandwidth are 1.82–2.98 GHz and 1.98–3.0 GHz, respectively, corresponding to fractional bandwidths of 46.6% and 41.0%. The measured results are very similar to the simulation results. Fig. 18 shows the antenna radiation patterns at 2.2 GHz, 2.5 GHz, and 2.8 GHz. The simulated and measured antenna gains at 2.2 GHz are 8.5 dBic and 8.7 dBic, respectively. The simulated and measured gains at 2.5 GHz are 8.4 dBic and 8.1 dBic, respectively, and the simulated and measured gains at 2.8 GHz are the same, at 8.8 dBic. In the proposed antenna, the radiation patterns do not change significantly even when the frequency changes, and the radiation patterns in the xz- and yz-planes are very symmetrical. The simulation and measurement results of the antenna are summarized in Table 1.

V. EFFECTS OF THE SOLAR CELL ON ANTENNA PERFORMANCE
The electrical properties of a solar cell are greatly affected by the amount of incident light. When a solar cell is exposed to light, its electrical conductivity increases significantly [43]. A change in the electrical conductivity of a solar cell may affect the characteristics of the antenna. Fig. 19 shows the measured reflection coefficient of the antenna at different light intensity values. To validate the effect of light intensity, the reflection coefficient was measured under light intensities of 0.0, 250.4, 499.5, 749.0, and 999.4 W/m² (with a 100 W halogen lamp). As shown in Fig. 19, the measured reflection coefficient differs slightly depending on the light intensity. However, the shapes of the curves in the figure are almost identical, indicating that the light intensity has little effect on the reflection coefficient of the antenna.

VI. COMPARISON
The proposed antenna was compared with other solar-cell-integrated antennas. The solar-cell-integrated CP antenna presented in [10] consists of a transparent-electrode radiation element and a conventional sequentially rotated feeding network. The electrical size of the antenna is 3.33 × 3.33 × 0.66 λo³, and the −10 dB impedance bandwidth is over 30%. The 3-dB gain bandwidth is approximately 16.3%, and the AR bandwidth is approximately 20%. The peak gain is 17 dBic. However, the gain and AR bandwidths are narrow, and the antenna is very large. The solar-cell-integrated microstrip slot array antenna presented in [13] used a slot antenna as the radiating element and implemented CP using a sequentially rotated feeding network. The electrical size of the antenna is 2.71 × 2.71 × 0.001 λo³. The −10 dB impedance, 3-dB gain, and 3-dB AR bandwidths are over 17.3%. However, the peak gain is only 6.6 dBic, a very low gain compared to its large size. In [17], a CP patch antenna with a solar cell metasurface is presented. The electrical size of the antenna is 0.87 × 0.87 × 0.076 λo³, and the −10 dB impedance bandwidth is 19.1%. The 3-dB gain bandwidth is approximately 33%, and the AR bandwidth is about 16.1%. The peak gain is 8.8 dBic. Although the gain is high relative to the size, the metasurface composed of solar cells requires many inductors for energy harvesting and is difficult to implement. A CP antenna using a solar cell patch antenna and a quadrature hybrid coupler is presented in [18]. The electrical size of the antenna is 0.67 × 1.14 × 0.016 λo³, and the −10 dB impedance bandwidth is 12.5%. The peak gain of this antenna is almost 6 dBic. However, the 3-dB gain and AR bandwidths are very narrow.
A low-profile solar-cell-integrated patch antenna is proposed in [19]. The electrical size of the antenna is 1.6 × 1.2 × 0.024 λo³. The −10 dB impedance bandwidth and 3-dB gain bandwidth are 15.5% and 21.3%, respectively. This antenna has a peak gain of 9.4 dBi, which is low compared to the size of the antenna. It also has a low form factor of 55.4%. A solar cell patch antenna with two aperture-coupled feeds is presented in [20]. The electrical size of the antenna is 1.31 × 1.31 × 0.060 λo³. The −10 dB impedance bandwidth and 3-dB gain bandwidth are very narrow, at 6.8% and 8.4%, respectively. This antenna has a high peak gain of 10.8 dBi, with a low form factor of 45%. The antenna proposed in this paper has an electrical size of 0.83 × 0.83 × 0.06 λo³, a −10 dB impedance bandwidth of 41.0%, and 3-dB gain and AR bandwidths of 46.6% and 41.0%, respectively. In addition, the solar cells occupy 79% of the antenna area. Thus, it has a higher form factor than conventional CP solar-cell-integrated antennas ([13]: 73.5%, [17]: 56.3%, [18]: 79.0%, [19]: 55.4%, [20]: 45.0%). Here, the form factor is defined as the ratio of the total area utilized by the solar cell for energy harvesting to the given surface area of the antenna. Hence, the proposed antenna has superior performance characteristics compared to conventional solar-cell-integrated antennas. In addition, there is no problem with beam tilt due to changes in frequency. The performance characteristics of the conventional antennas and the proposed antenna are summarized in Table 2.

VII. CONCLUSION
In this paper, we propose a 2 × 2 sequentially rotated CP array antenna that combines a solar cell patch antenna with wideband characteristics and a novel sequentially rotated feeding network that provides a stable phase difference over a wide frequency range. The proposed antenna has wide gain and AR bandwidths, and the radiation pattern is stable even when the frequency changes. The electrical size of the proposed antenna at the center frequency of 2.5 GHz is 0.83 × 0.83 × 0.06 λo³. The −10 dB impedance bandwidth is 1.98–3.00 GHz. The 3-dB gain bandwidth and AR bandwidth are 1.82–2.98 GHz and 1.98–3.0 GHz, respectively. Moreover, the proposed antenna has a high form factor of 79%. Due to these advantages, the proposed antenna is suitable not only for CubeSats but also for other satellite applications. Therefore, it is useful for implementing an IoST autonomous communication system.
Autonomous Fire Fighting Robot with Smart Monitoring System
Fire incidents are among the most frequent man-made disasters in Malaysia, judging by the number of recently reported cases. The majority of cases involving domestic buildings had catastrophic effects such as property loss, permanent disability to victims, and death. We are all aware of how fast fire can spread once it rages. Owners would therefore have to be alert at all times to any factor that might start a small fire, which is not realistically achievable: humans can be distracted by surrounding activities, but robots cannot. Thus, a Fire Fighting Robot controlled by an Arduino Uno has been developed. It is designed with a small footprint to ease entry into confined locations and is fully equipped with high-sensitivity sensors to achieve the research objectives of searching for, detecting, and extinguishing fire. The combination of ultrasonic and flame sensors provides effective guidance for the robot. This autonomous robot searches for and locates the fire and sends a notification to the user through the Blynk application before the extinguishing process begins. The Blynk application also provides a monitoring platform for the user, delivering information including live streaming and flame data. We tested the performance of the robot by varying the distance from the robot to the fire source and the size of the fire. The trend lines showed that the time taken was linearly proportional to the distance and to the size of the fire, and we obtained the equations of the dependent variable from simple linear regression. From the R² values of 0.9888 and 0.9865, we conclude that there is a strong relationship between the variables. In summary, the experimental results proved the robot's capability as a reliable fire protection system that searches for and extinguishes a fire in time.

Introduction
Firefighting is a dangerous, high-risk job aimed at saving human life. A firefighter has to be alert and well prepared at all times, so as to reach a fire quickly and extinguish it safely. Quick action can prevent further damage and reduce fatalities in a burning area (Wang et al., 2011). A house is a place for shelter, and certainly not a place where a life should tragically be lost. With advancing technology, the gap between firefighting and machines has finally been bridged, allowing for more effective and operative methods of fire extinguishing (Xin et al., 2018). Over the years, fire cases in Malaysia of all kinds have grown rapidly. The majority of cases involving domestic buildings had catastrophic effects such as asset losses, serious injuries to fire victims, and death (Woodrow, 2010; Ramsay et al., 2018). Moreover, according to the Director-General of the Fire and Rescue Department of Malaysia, about 6,000 premises are destroyed by fire every year, with 40 percent of these involving private houses (Muhamading, 2016). The increasing death toll from fire accidents, mainly in residential buildings, is enough to warrant awareness not only of burglary but also of fire safety. Common causes of household fires include cooking, smoking, and candles (Ahrens, 2018; Kobes, 2017; Ahrens, 2017a). Therefore, the main purpose of this project is to contribute to the growth of automation systems by developing an automatic fire extinguisher robot. The robot is intended to protect human life, property, and the environment from fire accidents.
This is crucial whenever an unexpected fire breaks out while the occupants of the house are asleep or away. The robot not only helps detect the fire but also notifies the user so that they can prepare an alternative response, such as calling the fire brigade. Fire spreads very fast, roughly doubling in size every minute (Wrack, 2010), and the smoke that forms rapidly as a result is the main reason most victims do not survive. Early detection of fire therefore plays an important role, since it gives a much higher chance of survival. Background of Study Fire is a phenomenon of combustion expressed as light, flame, and heat. It is a process in which substances combine chemically with oxygen and fuel, generally producing vivid light, heat, and smoke (Pyne, 2016). During a fire, one way to cut off the gas supply feeding the flames is to lower the temperature of the fuel surface. Direct extinguishment, in which water is aimed at the base of the fire, gives the best chance of putting the fire out, because the water evaporates and extracts energy from the fuel itself (Lambert, 2013), producing a temperature drop. Fire Extinguisher The fire extinguisher is an active fire-protection device: convenient, portable equipment used by directing its nozzle at a small fire, though it is unsuitable for fighting a large or spreading fire (Lyman, 2018). The method requires the user to take full manual control during extinguishing. When the user knows how to operate the extinguisher properly, it can save lives and property, because this equipment can hold the fire in check until the fire unit arrives for further extinguishing (Kodur et al., 2019). Extinguishers must be checked regularly and require routine care to ensure the contents remain in good condition. Sprinkler System An automatic fire sprinkler system is another form of fire protection, fitted with sensors that activate it when a fire is sensed within the effective range. It is normally mounted high on the ceiling for a better sensing area. Sprinklers are a strong option for any building's fire protection because of their reliable performance: they rely on a steady water supply and extinguish fires that ignite around them (Ahmed et al., 2015). However, sprinklers are not fully consistent; the system sometimes fails to activate when there is a fire (Abdelghany, 2019), because the heat from the fire may not be sufficient to trigger the heat sensor in the sprinkler. Even a fully effective sprinkler system does not guarantee the complete avoidance of loss: a sprinkler activates only when a certain temperature is reached, and by that time the fire may already be out of control and have destroyed many valuables (Frank et al., 2013). Testing has shown that about half of sprinkler failures in fire situations occur because the water does not reach the fire (Ahrens, 2017b). Smoke Detector A smoke detector senses fire through the smoke it produces; detector types differ mainly in how they sense the fire. This device is the best choice for detecting smoky fires that smoulder before finally breaking into flames, and it has proven highly effective at detecting fires in residential homes.
Even with detectors installed, we still have to maintain them by checking the batteries (Warmack et al., 2012). The popularity of smoke detectors has contributed greatly to the fall in fire deaths over the past 40 years. According to the National Fire Protection Association (NFPA) in the United States, the fire death rate is more than twice as high in homes with a non-working smoke alarm or no alarm at all. Among houses that had a smoke alarm, 43% of fire cases involved an alarm that failed to sound, was not functioning, or had missing batteries (Ahrens, 2019). Checking that the alarm works is therefore very important, because unwanted incidents cannot be predicted (Moinuddin et al., 2017). Comparison The main motivation for the proposed work is to provide an automatic fire-extinguishing system that eliminates the disadvantages described above. While fire alarm systems are designed to provide warning against fire, they still do not guarantee warning or protection. A fire extinguisher requires manpower, so its effectiveness depends on the user's responsiveness; the user may take a long time to extinguish the fire, and if no one is at the residence, the result is total loss. To solve this overlooked problem, a robot equipped with three high-sensitivity flame sensors is designed to search for small fires. The robot can work continuously for 24 hours searching for fire, which is more productive and consistent, and early intervention avoids unwanted accidents. The sprinkler-based approach discussed by Ahrens (2017b) is less reliable for handling a small fire: the liquid in the sprinkler bulb must heat up and burst, which happens only once the temperature reaches about 155 to 165 degrees Fahrenheit (roughly 68 to 74 degrees Celsius), before the water is released. By the time that threshold is reached, the fire has probably already raged and spread rapidly, with real consequences for the building. The sprinkler may extinguish the fire within a reasonable time, but the building will not be in its initial condition. To overcome this, the firefighting robot searches for any small fire and extinguishes it before it rages, which also avoids the financial losses suffered in a fire. Likewise, the smoke detectors described by Warmack et al. (2012) cannot always be expected to sense fires: the amount of smoke present may be insufficient to trigger the alarm, since smoke is produced by incomplete combustion, and by the time heavy smoke forms there may not be enough oxygen present. A robot that patrols every corner of the house will detect a small fire directly. Smoke detectors are also very sensitive to dust particles and insects, so they need regular maintenance, whereas the robot only needs its battery kept fully charged. Furthermore, a smoke alarm should be tested weekly to make sure all sensors and transmitters are working properly, which becomes a burden on the user (NEMA, 2017). Methodology The development of the Fire Fighting Robot can be described in several parts: prototype design, hardware design, and the robot's operating flowchart. This section discusses in detail all the components used in this project, both hardware and software; the two relate to each other in the construction of the robot mechanism. Fig. 1 shows the final prototype of the robot.
The robot has a strong acrylic base that holds all the electronic components steadily and is driven by two geared motors and one caster wheel. The robot is stable during operation and can rotate a full 360 degrees without difficulty. The body structure is built from lightweight, soft pine wood; a notable property of pine is that it can withstand temperatures of up to about 250 degrees before it ignites (Tsoumis, 1991), so the robot can move at speed with high endurance. Other mechanical parts, including the hose stand, servo stand, sensor guard, and camera enclosure, hold the components more firmly. The ultrasonic sensor and flame sensors are positioned at the front of the robot so that the system obtains accurate readings, and a guard is installed to protect the flame sensors from striking nearby obstacles during operation. Additionally, an OV2640 camera is installed at the front to capture the whole scene during the process and stream it directly to the user's smartphone. Hardware Design The block diagram in Fig. 2 shows the proposed Fire Fighting Robot, which consists of a power supply, an Arduino Uno R3, flame sensors, DC motors, an ultrasonic sensor, a servo motor, a camera, a Wi-Fi module, and a water pump. A regulated 11 V DC supply from the power supply unit powers the circuit. Fig. 1: Final prototype of Fire Fighting Robot. Arduino Uno R3 The Arduino Uno controls all the actions and commands in this project. All digital and analog input and output pins are connected to the microcontroller board; the Arduino is simple and straightforward to use and is compatible with any computer. Infrared Flame Sensor An IR-based flame sensor was chosen because it has the capabilities required to detect fire (Prasojo et al., 2020). Analog pins are used to read the intensity of light at the relevant wavelengths; the sensor is highly sensitive to the flame spectrum between 700 nm and 1000 nm. We increased the fire-detection sensitivity by using three flame sensors spaced about 60 degrees apart. Ultrasonic Sensor The ultrasonic sensor helps the robot avoid collisions with obstacles that block its way. Since the robot operates unassisted and must be fully trusted during the process, an obstacle-avoidance system is a compulsory feature. The transmitter periodically emits a high-frequency sound pulse, effective over a range of 2 cm to 400 cm. When an object is present, the signal reflects back and is picked up by the receiver; the elapsed time between transmission and reception, together with the known speed of sound in air, determines the distance to the object. DC Motor with Motor Driver When a fire is detected, two 6 V DC motors, driven through an L298N motor-driver module, move the robot toward it. The L298N can control both the speed and the spinning direction of the two DC motors, making it ideal for building a two-wheel robot (Khechiba, 2016). The two geared wheels facilitate the robot's movement. Servo Motor This motor is attached to the spray nozzle and sweeps it back and forth, from right to left, during fire extinguishing and object avoidance. It plays an important role in ensuring the fire area is fully covered by the water spray. The standard-rotation servo used in this robot can rotate through 180° with stepper-motor-like precision.
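To make the time-of-flight computation concrete, the following sketch shows how such a distance reading is typically obtained on an Arduino; the pin numbers and the HC-SR04-compatible trigger/echo interface are assumptions, since the paper does not give wiring details or publish its firmware.

```cpp
// Hedged illustration: ultrasonic time-of-flight distance measurement as
// described above. TRIG_PIN/ECHO_PIN values are assumed, and the sensor is
// assumed to use an HC-SR04-style trigger/echo interface.
const int TRIG_PIN = 9;
const int ECHO_PIN = 10;

void setup() {
  Serial.begin(9600);
  pinMode(TRIG_PIN, OUTPUT);
  pinMode(ECHO_PIN, INPUT);
}

long readDistanceCm() {
  // Send a 10 us trigger pulse.
  digitalWrite(TRIG_PIN, LOW);
  delayMicroseconds(2);
  digitalWrite(TRIG_PIN, HIGH);
  delayMicroseconds(10);
  digitalWrite(TRIG_PIN, LOW);
  // Echo pulse width equals the round-trip travel time in microseconds.
  long duration = pulseIn(ECHO_PIN, HIGH, 30000UL);  // 30 ms timeout (~5 m)
  // Speed of sound ~343 m/s = 0.0343 cm/us; halve for the one-way distance.
  return (long)(duration * 0.0343 / 2.0);
}

void loop() {
  Serial.println(readDistanceCm());
  delay(100);
}
```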
Water Pump The 5 V water pump works by suction, drawing water in through its inlet and discharging it through the outlet (Charkhawala et al., 2020). It is submerged in a tank full of water to provide sufficient pressure. When the supply is switched on, water is sucked in at the inlet and flows out of the outlet through the pipe. ESP32-CAM with OV2640 To provide additional security, a camera is installed on top of the robot to capture live video before, during, and after the fire-extinguishing process. The stream is delivered to the user's smartphone through the Blynk application via the Wi-Fi module. Fig. 3 shows the operation of the Fire Fighting Robot, which begins whenever the user chooses to switch it on. At switch-on, DC motors 1 and 2 start simultaneously, rotating the wheels to move the robot. For extra security, the camera also starts as soon as the switch is on and streams live to the Blynk application. The motors keep the robot moving in search of any infrared light, i.e., fire. At the same time, if the ultrasonic sensor detects an obstacle in front of the robot, the robot turns left or right, depending on the situation, to avoid a collision. The flame and ultrasonic sensors thus work together to keep the DC motors updated. Fig. 3: Flowchart of the Fire Fighting Robot. If one of the flame sensors detects a fire, the robot first sends an alert message to the user's smartphone to notify them that there is a fire in the house. It then determines the precise position of the fire from the readings of the three flame sensors. If the right-hand sensor detects a certain amount of infrared light, the robot rotates to the right to keep the fire in line with it; the same applies when the left sensor detects the light. The robot adjusts itself until the fire is directly in front of it and then moves to the second stage, B. In stage B, the robot measures the distance to the fire and moves forward until it reaches the optimum extinguishing distance of 20 cm. Once there, DC motors 1 and 2 stop automatically and the extinguishing process starts. The water tank holds sufficient water, and the pump provides enough pressure to drive the water out. Water flows out through a rubber tube as a spray, and the servo motor maximises the covered area. The water pump and servo motor work together to put out the fire until no heat is detected by the flame sensors. When finished, another alert message is sent to keep the user updated on the current situation; the user can also review the camera recording to check the cause of the fire. Finally, the robot returns to stage A and resumes searching for fire, as summarised in the sketch below. Result and Discussion To measure the robot's performance, we tabulated several sets of data. First, we varied the distance between the robot and the fire, using a fire starter as the fire source. We then analysed the effect of doubling the fire size at a fixed distance. In each case, we measured the time taken by the robot to detect and extinguish the fire, which allows us to analyse the effectiveness of this smart fire protection. Note that the route used to examine the robot's performance included three turns before reaching the fire.
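The flowchart logic just described can be summarised as a small state machine. The sketch below is a minimal illustration only: the helper functions, threshold, and motor speed values are hypothetical placeholders under the assumptions stated in the comments, not the authors' firmware.

```cpp
// Minimal sketch of the control flow in Fig. 3 (search -> align -> approach
// -> extinguish). Thresholds, speeds, and the stub helpers are assumptions.
enum State { SEARCH, ALIGN, APPROACH, EXTINGUISH };
State state = SEARCH;

const int  FLAME_THRESHOLD  = 512;  // assumed analog level meaning "fire seen"
const long STOP_DISTANCE_CM = 20;   // optimum extinguishing distance (paper)

int  readFlame(int i)            { (void)i; return 0; } // stub: 3 IR sensors
long readDistanceCm()            { return 100; }        // stub: ultrasonic
void drive(int l, int r)         { (void)l; (void)r; }  // stub: L298N motors
void spray(bool on)              { (void)on; }          // stub: pump + servo sweep
void notifyUser(const char* msg) { (void)msg; }         // stub: Blynk push

void setup() {}

void loop() {
  int left = readFlame(0), centre = readFlame(1), right = readFlame(2);
  bool fire = left > FLAME_THRESHOLD || centre > FLAME_THRESHOLD ||
              right > FLAME_THRESHOLD;
  switch (state) {
    case SEARCH:                         // stage A: roam and watch for IR
      drive(150, 150);
      if (fire) { notifyUser("Fire detected"); state = ALIGN; }
      break;
    case ALIGN:                          // rotate toward the strongest reading
      if (right > centre)     drive(120, -120);
      else if (left > centre) drive(-120, 120);
      else                    state = APPROACH;
      break;
    case APPROACH:                       // stage B: close to 20 cm, then stop
      if (readDistanceCm() > STOP_DISTANCE_CM) drive(120, 120);
      else { drive(0, 0); state = EXTINGUISH; }
      break;
    case EXTINGUISH:                     // spray until no IR is detected
      spray(true);
      if (!fire) { spray(false); notifyUser("Fire extinguished"); state = SEARCH; }
      break;
  }
}
```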
Ability to Extinguish Fire at Varied Distance First, we tested the robot's performance by varying the distance from the fire source to the robot, preparing ten different distances from 0.5 metres to 5 metres. To ensure a more accurate result, we repeated each measurement three times and averaged the results for each distance. Fig. 4 plots the average time taken to extinguish the fire against the distance to the fire. The size of the fire was fixed at one fire starter throughout, so that only the distance variable was examined. From our observations, the robot took 60.49 seconds to extinguish a fire at a distance of 5 metres and 23.38 seconds at 0.5 metres. The trend line shows that the time taken was linearly proportional to the distance, and we obtained the equation for the dependent variable from simple linear regression. With an R² value of 0.9888, the relationship between the variables is strong. Hence, we conclude that the farther the robot is from the fire, the longer it takes to extinguish it. This variable is crucial because, in practice, we cannot predict where a fire will break out, so the robot's ability to automatically search for and extinguish a fire in a short time could be a lifesaver. Ability to Extinguish Fire at Varied Size of Fire We also analysed the robot's performance as the size of the fire varied, which indicates how quickly it can extinguish fires of different magnitudes. We tested ten sets with different fire sizes by increasing the number of fire starters from 1 piece to 10 pieces, repeating each test three times for more accurate results. The distance from the robot to the fire was fixed at 2.5 metres so that only the fire-size factor was examined. Fig. 5 plots the average time taken to extinguish the fire against the size of the fire. We found a difference of 22 seconds between extinguishing 1 piece and 10 pieces of fire starter; moreover, extinguishing 10 fire starters took less than a minute, which is excellent performance. The graph again shows that the time taken was linearly proportional to the size of the fire, and the R² value of 0.9865 indicates a strong relationship between the two variables. Thus, we conclude that the bigger the fire, the longer it takes to extinguish. Fire size depends on the material that starts burning, and a fire may rage out of control if it is not extinguished in time, so the robot's efficiency plays an important role in putting out the fire quickly. Blynk Application We prepared a smart monitoring system through which the user can easily monitor a fire situation at any time. Fig. 6 shows output from the flame sensor: during fire detection, the value drops from 1024 to 0. These data stream continuously as long as the robot is working and can be reviewed over multiple timelines. The application also pushes notifications, so the user is alerted if there is a fire in the house and can prepare further action in case the fire rages; the alert arrives at the earliest stage of detection. Fig. 7 and Fig. 8 show the notifications received during fire detection and after the fire has been successfully extinguished, respectively.
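For reference, the simple linear regression and R² behind the trend lines can be reproduced as follows; since only the 0.5 m (23.38 s) and 5 m (60.49 s) averages are quoted in the text, the data arrays below are placeholders for the full ten-point sets behind Figs. 4 and 5.

```cpp
// Least-squares linear regression with R^2, as used for the trend lines.
// The two (distance, time) pairs below are the only averages reported in
// the text; the full ten-point datasets are not listed in the paper.
#include <cstdio>
#include <vector>

struct Fit { double slope, intercept, r2; };

Fit linreg(const std::vector<double>& x, const std::vector<double>& y) {
  const size_t n = x.size();
  double sx = 0, sy = 0, sxx = 0, sxy = 0;
  for (size_t i = 0; i < n; ++i) {
    sx += x[i]; sy += y[i]; sxx += x[i] * x[i]; sxy += x[i] * y[i];
  }
  double slope = (n * sxy - sx * sy) / (n * sxx - sx * sx);
  double intercept = (sy - slope * sx) / n;
  double ssRes = 0, ssTot = 0, mean = sy / n;
  for (size_t i = 0; i < n; ++i) {
    double pred = slope * x[i] + intercept;
    ssRes += (y[i] - pred) * (y[i] - pred);
    ssTot += (y[i] - mean) * (y[i] - mean);
  }
  return {slope, intercept, 1.0 - ssRes / ssTot};
}

int main() {
  std::vector<double> dist = {0.5, 5.0};      // metres (reported points only)
  std::vector<double> time = {23.38, 60.49};  // seconds
  Fit f = linreg(dist, time);
  std::printf("t = %.2f*d + %.2f, R^2 = %.4f\n", f.slope, f.intercept, f.r2);
  return 0;
}
```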
Finally, a camera providing a live-streaming feature completes the robot's security: the user can check the fire status and the possible causes of the fire. Fig. 9 and Fig. 10 show the live stream during fire detection and after the fire has been successfully extinguished, respectively. Conclusion In a nutshell, this project aims to provide home and building security, which is vital to human life. Fire causes tremendous damage and loss of life and property; there are many ways a fire can start in a household, and the consequences can affect our lives. We developed an intelligent, multisensory security system that contains an autonomous firefighting system. Its features, object avoidance and fire extinguishing with smart smartphone-based monitoring, make it a more secure system. The Fire Fighting Robot has proved its performance and ability to extinguish fires across varying robot-to-fire distances and fire sizes. The results show that the response times are acceptable and comparable to human sensitivity. Time is always the critical factor in this type of accident: the longer it takes to react, the worse the outcome. We nevertheless have a few recommendations for future improvement of the robot's performance. First, a larger number of flame sensors would increase sensitivity to fire, since more of the surrounding area would be covered with less movement. Furthermore, a Wi-Fi-controlled driving feature could improve security by letting the user take control of the robot's movement whenever they wish.
5,440.4
2021-12-25T00:00:00.000
[ "Engineering", "Environmental Science", "Computer Science" ]
On Bäcklund and Ribaucour Transformations for Hyperbolic Linear Weingarten Surfaces abstract: We consider Bäcklund transformations for hyperbolic linear Weingarten surfaces in Euclidean 3-space. The composition of these transformations is obtained in the Permutability Theorem, which generates a 4-parameter family of surfaces of the same type. The analytic interpretation of the geometric results is given in terms of solutions of the sine-Gordon equation. Since a Ribaucour transformation of a hyperbolic linear Weingarten surface also gives a 4-parameter family of such surfaces, a natural question arises: are these two methods equivalent, as happens for surfaces of constant positive Gaussian curvature or constant mean curvature? In this paper, we obtain necessary and sufficient conditions for the surfaces given by the two procedures to be congruent. Introduction A surface M contained in the Euclidean space R³ whose mean and Gaussian curvatures, H and K, satisfy a relation of the form α + 2βH + γK = 0, α, β, γ ∈ R, is called a linear Weingarten surface. The development of the theory of these surfaces started in the early 1900s; more recent results obtained by several authors can be found in [1], [8], [10], [15]–[17]. If the linear Weingarten surface satisfies β² − αγ < 0, then it is said to be hyperbolic. In this case, without loss of generality, we may assume that α = 1. Moreover, if β = 0 and γ = 1, then M is a pseudo-spherical surface, i.e., K = −1, and there is a well-known theory of Bäcklund transformations for pseudo-spherical surfaces studied by Bäcklund [2, 3], together with the composition of such transformations, called the Permutability Theorem, obtained by Bianchi [4]. In this paper, we study an extension of the concept of pseudo-spherical line congruence, called a hyperbolic linear Weingarten congruence. Namely, we consider a diffeomorphism between surfaces M and M′ such that at corresponding points p ∈ M and p′ ∈ M′, the straight line determined by these points makes a constant angle φ with the normal N_p and a constant angle ρ with the normal N′_{p′}. Moreover, we assume that the segment pp′ has constant length r and that N_p makes a constant angle θ with N′_{p′}. Then M and M′ are hyperbolic linear Weingarten surfaces satisfying, respectively, 1 + 2βH + γK = 0 and 1 + 2β′H′ + γ′K′ = 0, where β² − γ = (β′)² − γ′ < 0. We observe that whenever φ = ρ = π/2, the theory coincides with the classical results for pseudo-spherical surfaces. The Integrability Theorem shows that given such a surface M there exists a 3-parameter family of surfaces M′, satisfying 1 + 2β′H′ + γ′K′ = 0, associated to M by a hyperbolic linear Weingarten congruence. The surfaces M′ are said to be associated to M by a Bäcklund transformation for hyperbolic linear Weingarten surfaces. The Permutability Theorem shows that the composition of such transformations is commutative when the parameters are chosen appropriately. In this case, starting with a hyperbolic linear Weingarten surface M satisfying 1 + 2βH + γK = 0, one gets a 4-parameter family of surfaces M*, satisfying 1 + 2βH* + γK* = 0, with the same constants β, γ as the surface M.
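To fix conventions, the defining relation, the hyperbolic condition, and the pseudo-spherical special case noted above can be written out explicitly; this is only a restatement of the definitions, in the normalization α = 1 adopted here.

```latex
% Linear Weingarten relation and the hyperbolic condition (as defined above).
\[
  \alpha + 2\beta H + \gamma K = 0, \qquad \alpha,\beta,\gamma \in \mathbb{R},
  \qquad \beta^2 - \alpha\gamma < 0 \quad (\text{hyperbolic}).
\]
% With the normalization \alpha = 1 and the choice \beta = 0, \gamma = 1:
\[
  1 + K = 0 \;\Longrightarrow\; K = -1 \quad (\text{pseudo-spherical case}).
\]
```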
Therefore, starting with a hyperbolic linear Weingarten surface M in R³ satisfying 1 + 2βH + γK = 0, one gets a 4-parameter family of hyperbolic linear Weingarten surfaces with the same constants β, γ, either by the composition of Bäcklund transformations or by Ribaucour transformations. Hence, it is natural to ask whether these two methods are equivalent, i.e., whether the surfaces obtained by them are congruent. In this paper, we will show that in general the families obtained by these procedures are distinct, in contrast with what happens for surfaces of constant positive Gaussian curvature (see, for example, Tenenblat [18]) and surfaces of nonzero constant mean curvature (Jeromin–Pedit [12]). In the particular case K = −1, necessary and sufficient conditions were established by Goulart–Tenenblat [11] for a composition of Bäcklund transformations to be congruent to a Ribaucour transformation. The analytic interpretation of the Bäcklund transformation gives an integrable system of equations, in terms of ψ and 2 parameters, whose solutions ψ′ give new solutions of the sine-Gordon equation. By considering two distinct such solutions ψ′ and ψ″, the analytic permutability theorem gives a superposition formula that provides an algebraic expression for new solutions ψ*, which depend on 4 parameters. Moreover, the Ribaucour transformation gives an integrable linear system in terms of ψ and a constant C_R, whose solutions ψ̃ also depend on 4 parameters and satisfy the sine-Gordon equation. The solutions ψ* and ψ̃ obtained by these procedures are distinct. The paper is organized as follows: in Section 2, we introduce the hyperbolic linear Weingarten congruence and prove the Bäcklund Theorem for hyperbolic linear Weingarten surfaces, the Geometric Integrability Theorem, and the Geometric Permutability Theorem. In Section 3, considering the correspondence between such surfaces and solutions of the sine-Gordon equation, we prove the Analytic Integrability Theorem and state the Analytic Permutability Theorem, whose proof is given in the Appendix. In Section 4, we start by recalling some results on Ribaucour transformations; then we obtain necessary and sufficient conditions for the hyperbolic linear Weingarten surfaces obtained by the composition of Bäcklund transformations to be congruent to those obtained by Ribaucour transformations. These conditions are given in terms of the first fundamental forms, i.e., in terms of the corresponding solutions of the sine-Gordon equation. Bäcklund transformations for hyperbolic linear Weingarten surfaces in R³ — Geometric Theory In this section, we introduce the concept of hyperbolic linear Weingarten congruence and study a Bäcklund transformation for hyperbolic linear Weingarten surfaces in R³. Moreover, we also prove the integrability and permutability theorems for these transformations. 2.1. Bäcklund Theorem for hyperbolic linear Weingarten surfaces Definition 2.1. We say that M ⊂ R³ is a Weingarten surface if there exists a differentiable function relating the mean and Gaussian curvatures H and K of M. A surface M is said to be linear Weingarten if H and K satisfy a linear relation, i.e., if there exist real constants α, β, γ such that α + 2βH + γK = 0. Definition 2.3. Let l : M → M′ be a diffeomorphism. For each p ∈ M and p′ = l(p) ∈ M′ with p′ ≠ p, denote by v = v(p) the unit vector in the direction of the straight line passing through p and p′. Let N_p (resp. N′_{p′}) be the unit vector normal to M (resp. M′) at p (resp. p′). We say that l is a hyperbolic linear Weingarten congruence with constants (r, θ, φ, ρ), where r > 0, 0 < θ < π, 0 < φ, ρ ≤ π/2, if the distance between p and p′ is constant and equal to r, the angle between N_p and N′_{p′} is θ, the angle between N_p and v is φ, and the angle between N′_{p′} and (−v) is equal to ρ.
Remark 2.4. When φ = ρ = π/2, the direction of the line congruence is tangent to both surfaces M and M′, and the congruence reduces to the so-called pseudo-spherical line congruence of surfaces in R³. The following theorem justifies the definition of a hyperbolic linear Weingarten congruence for a diffeomorphism l as in Definition 2.3; moreover, it reduces to the classical Bäcklund Theorem between pseudo-spherical surfaces when φ = ρ = π/2. Theorem 2.5. (Bäcklund Theorem for hyperbolic linear Weingarten surfaces) Let M and M′ be two surfaces immersed in R³. Suppose there exists a hyperbolic linear Weingarten congruence l : M → M′ with constants (r, θ, φ, ρ) as in Definition 2.3. For any p ∈ M and p′ = l(p) ∈ M′, suppose that the normal vectors N_p and N′_{p′} and the vector v = v(p) are not coplanar. Then M and M′ are hyperbolic linear Weingarten surfaces. More precisely, the Gaussian curvature K (resp. K′) and the mean curvature H (resp. H′) of M (resp. M′) satisfy the relation (2.5). To prove this, let {e_1, e_2, e_3} and {e′_1, e′_2, e′_3} be orthonormal frames adapted to M and M′, respectively, such that, for every p ∈ M, e_3(p) = N_p, e′_3(p′) = N′_{p′}, and the sets {v, e_1, e_3} and {v, e′_1, e′_3} are each coplanar. Since the vectors e′_3, e_3, v are not coplanar, a_32 ≠ 0, and using (2.4) we obtain the combination (1/a_32){a_13 ω_1 + r sin ρ ω_13}. Therefore, it follows from the second equation of (2.5) that (2.6) holds, with the constants defined in (2.7). Differentiating (2.6), it follows from the structure equations, the definitions of the mean and Gaussian curvatures, and the Gauss equation that dω_12 can be computed explicitly. On the other hand, we know that dω_12 = −K ω_1 ∧ ω_2. Therefore, the mean and Gaussian curvatures of M satisfy a linear relation whose coefficients involve the constants c_1, c_2, c_3, c_4 defined in (2.7); this yields (2.9). In other words, M is a linear Weingarten surface satisfying 1 + 2βH + γK = 0, where β and γ are given by (2.1). Interchanging φ and ρ in the previous computations, we obtain that M′ is also a linear Weingarten surface, satisfying 1 + 2β′H′ + γ′K′ = 0, where β′ and γ′ are given by (2.2); a further identity follows from the constants a_31 and a_32 defined by (2.4). We conclude this section by establishing some notation and some identities that will be used throughout this paper. The Geometric Integrability Theorem The Geometric Integrability Theorem, which we prove below, shows that given a hyperbolic linear Weingarten surface M satisfying (2.6) there exists a family of surfaces M′ associated to M by a hyperbolic linear Weingarten congruence. Theorem 2.8. (Geometric Integrability Theorem) Let M ⊂ R³ be a hyperbolic linear Weingarten surface with Gaussian curvature K and mean curvature H satisfying 1 + 2βH + γK = 0. We consider real numbers r > 0, 0 < θ < π, and 0 < φ, ρ ≤ π/2 satisfying (2.1). Let p_0 ∈ M and let v_0 ∈ R³ be a unit vector whose angle with N_{p_0} (the normal to M at p_0) is φ. Suppose that v_0^T, the tangential component of v_0, is not a principal direction. Then there exist a hyperbolic linear Weingarten surface M′, satisfying 1 + 2β′H′ + γ′K′ = 0 with β′, γ′ given by (2.2), and a hyperbolic linear Weingarten congruence l with constants (r, θ, φ, ρ) between neighborhoods of p_0 in M and l(p_0) in M′, such that the straight line connecting p_0 to l(p_0) is in the direction of v_0. Proof: Since M is a hyperbolic linear Weingarten surface satisfying 1 + 2βH + γK = 0, taking real numbers r > 0, 0 < θ < π, and 0 < φ, ρ ≤ π/2 such that (2.1) is verified, we can consider the real constants b_1, b_2, b_3 and c_1, c_2, c_3, c_4 defined by (2.11) and (2.13), respectively. The idea is to apply the Frobenius theorem to construct an orthonormal frame {e_1, e_2, e_3} adapted to M, defined in a neighborhood of p_0, whose dual and connection forms satisfy (2.20).
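For the reader's convenience, the structure equations invoked in these proofs are the standard moving-frame identities below; the sign conventions are chosen to match dω_12 = −K ω_1 ∧ ω_2 as used above and may differ by signs from the paper's own numbered equations.

```latex
% Structure equations for an adapted orthonormal frame {e_1, e_2, e_3}
% (standard moving-frame conventions, with \omega_{ij} = -\omega_{ji}).
\[
  d\omega_1 = \omega_{12}\wedge\omega_2, \qquad
  d\omega_2 = \omega_{21}\wedge\omega_1,
\]
\[
  d\omega_{12} = -K\,\omega_1\wedge\omega_2 \quad (\text{Gauss}), \qquad
  d\omega_{13} = \omega_{12}\wedge\omega_{23}, \quad
  d\omega_{23} = \omega_{21}\wedge\omega_{13} \quad (\text{Codazzi}).
\]
```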
Let ℑ be the ideal generated by the 1-form ζ determined by these data. Differentiating and using the structure equations, we find that dζ = ζ ∧ µ: here we use that the constants r, θ, φ, ρ satisfy (2.1) and that M is a hyperbolic linear Weingarten surface with 1 + 2βH + γK = 0. Thus ℑ is closed under exterior differentiation, and by the Frobenius theorem the equation ζ = 0 is integrable. Therefore, there exists an adapted frame {e_1, e_2, e_3} such that (2.20) holds in a neighborhood of p_0, with the prescribed initial condition e_1(p_0). Since the angle between v_0 and N_{p_0} = e_3(p_0) is equal to φ, and the unit vectors e_3(p_0), e_1(p_0), and v_0 are coplanar, we have v_0 = sin φ e_1(p_0) + cos φ e_3(p_0). Define, in this neighborhood, the vector function v = sin φ e_1 + cos φ e_3. By hypothesis, e_1(p_0) is not a principal direction; hence we may assume, by continuity, that e_1 is not a principal direction on an open subset V of this neighborhood. We consider V parametrized by X : U ⊂ R² → V and set X′ = X + rv. Differentiating and using the structure equations, we obtain the tangent vectors z_1, z_2 of X′. Since e_1 is not a principal direction and r sin φ ≠ 0, we conclude that M′ = X′(U) is a regular surface and z_1, z_2 are tangent to M′. Moreover, e′_3 = b_1 e_1 + b_2 e_2 + cos θ e_3 is a unit vector normal to M′. Consequently, M′ is related to X(U) by a hyperbolic linear Weingarten congruence l with constants (r, θ, φ, ρ). Using Theorem 2.5, we conclude that M′ is a hyperbolic linear Weingarten surface satisfying 1 + 2β′H′ + γ′K′ = 0, where β′, γ′ are given by (2.2). ✷ Observe that Theorem 2.8 shows that, given a hyperbolic linear Weingarten surface M in R³, there exists a 3-parameter family of surfaces M′ associated to M by a hyperbolic linear Weingarten congruence: the three parameters are determined by the unit vector v_0 and the four constants (r, θ, φ, ρ), which satisfy the two conditions given by (2.1). The Geometric Permutability Theorem In this section, we consider the composition of Bäcklund transformations for hyperbolic linear Weingarten surfaces in R³. We observe that applying a Bäcklund transformation to a surface in R³ satisfying 1 + 2βH + γK = 0 produces new surfaces of the same type but with different constants β and γ. We will now consider a composition of such transformations chosen so that the surface obtained by the composition has the same constants as the surface we started with. This is achieved by imposing certain conditions on the parameters, and in this case the composition is commutative; this is the content of the Geometric Permutability Theorem (Theorem 2.9), whose proof follows. Proof: Let X be a local parametrization of M in a neighborhood of p.
Since l_1 : M → M′ and l_2 : M → M″ are hyperbolic linear Weingarten congruences, we have frames adapted to M′ and M″ at p_1 and p_2, respectively, and by hypothesis the parameters satisfy (2.23). Observe that finding hyperbolic linear Weingarten congruences l*_1 and l*_2 as required by the theorem is equivalent to obtaining unit vector fields u_1, u_2 satisfying (2.33). We consider new orthonormal frames {e′_1, e′_2, e′_3} adapted to M′ and {e″_1, e″_2, e″_3} adapted to M″, given by (2.34), and define the vector fields (2.35). The idea is to show that these vectors u_1, u_2 satisfy equation (2.33). Initially, using (2.27), we express u_1 in the frame adapted to M; similarly, using (2.34), (2.35), (2.28), (2.21), and the constant δ given by (2.22), we express u_2. Moreover, it follows from (2.26), (2.27), and (2.22) that r_2 v_2 = δ r_1 sin φ_1 E_11 e_1 + δ r_1 sin φ_1 E_12 e_2 + r_2 cos φ_2 e_3. Therefore, equation (2.33) is equivalent to the linear system (2.36), where δ is given by (2.22), E_11 and E_12 are given by (2.26), and the real numbers a′_ij and a″_ij defined by (2.28) are given by (2.4), taking the corresponding parameters. We observe that, as a consequence of (2.4), a′_33 = cos θ_1 and a″_33 = cos θ_2; then, using (2.25), we conclude that the third equation of the linear system (2.36) is satisfied. Substituting the expressions of F_11 and F_12 given by (2.31) and using equations (2.21)–(2.28), (2.4), and (2.30), we conclude that the first and second equations of this linear system are also satisfied. ✷ Analytic interpretation of the Bäcklund transformation In this section we present an analytic interpretation of the Geometric Integrability Theorem (Theorem 2.8) and of the Geometric Permutability Theorem (Theorem 2.9) given in the previous section. We start by recalling that, given a hyperbolic linear Weingarten surface in R³ satisfying 1 + 2βH + γK = 0, one has D = γ − β² > 0 and there exists a solution ψ of the sine-Gordon equation (3.1), where C_βγ is a real constant defined by (3.2). Conversely, given a solution ψ of equation (3.1), where C_βγ is the real constant defined by (3.2), there exists a hyperbolic linear Weingarten surface in R³ satisfying 1 + 2βH + γK = 0, parametrized by lines of curvature, whose first and second fundamental forms are given by (3.3)–(3.5). For more details, see Tenenblat [19]. Let ψ be a solution of the sine-Gordon equation (3.1), where C_βγ is the real constant defined by (3.2), and consider the hyperbolic linear Weingarten surface M ⊂ R³ satisfying 1 + 2βH + γK = 0. Let r > 0, 0 < θ < π, and 0 < φ, ρ ≤ π/2 be real numbers satisfying (2.1) and (2.12). Using the Geometric Integrability Theorem, we can construct an orthonormal frame {e_1, e_2, e_3} tangent to M, locally defined, with dual forms ω_1, ω_2 and connection forms ω_12, ω_13, ω_23 associated to this frame, satisfying the Bäcklund transformation (2.6), where c_1, c_2, c_3, c_4 are given by (2.13). Moreover, the correspondence between hyperbolic linear Weingarten surfaces and solutions of the sine-Gordon equation allows us to conclude that the Bäcklund transformation (2.6) is equivalent to the system of partial differential equations (3.6), where c_1, c_2, c_3, c_4 are given by (2.13); using these real numbers, we define the constants appearing in (3.8) and (3.9). The following theorem provides an analytic interpretation of the Geometric Integrability Theorem (Theorem 2.8).
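For orientation, it may help to recall the classical pseudo-spherical case φ = ρ = π/2, β = 0, γ = 1, in which the analogous Bäcklund system and superposition formula take their familiar form. The formulas below are the classical ones, written in light-cone coordinates where the equation reads ψ_{uv} = sin ψ; they illustrate the structure of (3.6) and (3.17) but do not reproduce the paper's hyperbolic linear Weingarten system, which involves the extra constant C_βγ.

```latex
% Classical Bäcklund transformation for \psi_{uv} = \sin\psi (pseudo-spherical
% case only; Bäcklund parameter a \neq 0).
\[
  \Bigl(\tfrac{\psi'-\psi}{2}\Bigr)_{u}
    = a\,\sin\!\Bigl(\tfrac{\psi'+\psi}{2}\Bigr),
  \qquad
  \Bigl(\tfrac{\psi'+\psi}{2}\Bigr)_{v}
    = \tfrac{1}{a}\,\sin\!\Bigl(\tfrac{\psi'-\psi}{2}\Bigr).
\]
% Bianchi's permutability theorem then gives the algebraic superposition
% of two transforms \psi_1, \psi_2 with parameters a_1 \neq a_2:
\[
  \tan\!\Bigl(\tfrac{\psi^{*}-\psi}{4}\Bigr)
  = \frac{a_1+a_2}{a_1-a_2}\,\tan\!\Bigl(\tfrac{\psi_1-\psi_2}{4}\Bigr).
\]
```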
Proof: Differentiating the first equation of the system (3.6) with respect to x_2 and subtracting the derivative of the second equation with respect to x_1, and using the fact that ψ is a solution of the sine-Gordon equation (3.1), we obtain, by (3.6) and the relations given by (3.8) and (3.14), the first compatibility condition for ψ′. Similarly, differentiating the first equation of (3.6) with respect to x_1 and subtracting the derivative of the second equation with respect to x_2, and using the differentiability of ψ, we obtain, by (3.6) and the relations given by (3.9) and (3.16), the second compatibility condition; i.e., ψ′ is a solution of the sine-Gordon equation (3.12). The functions ψ′ obtained by integrating (3.6) depend on 3 parameters, namely the initial condition ψ′(x⁰_1, x⁰_2) and the four constants (r, θ, φ, ρ), which satisfy the two equations given by (2.1). ✷ Definition 3.4. Let ψ be a solution of the sine-Gordon equation (3.1). We say that a function ψ′ is associated to ψ by a Bäcklund transformation BT(r, θ, φ, ρ) if ψ′ is a solution of the system (3.6). Analytic Interpretation of the Permutability Theorem Let ψ be a solution of the sine-Gordon equation (3.1), where C_βγ is the constant given by (3.2) and β, γ are constants such that γ − β² > 0. The Geometric Permutability Theorem (Theorem 2.9) and the correspondence between hyperbolic linear Weingarten surfaces and solutions of the sine-Gordon equation allow us to construct a new solution ψ* of the sine-Gordon equation (3.1). The analytic interpretation of the Permutability Theorem (Theorem 3.5) allows us to obtain ψ* algebraically; this is the content of our next result. However, the proof of this theorem is highly technical and is therefore presented in the Appendix. Theorem 3.5. (Analytic Permutability Theorem) Let ψ be a solution of the sine-Gordon equation (3.1), where C_βγ is the real constant given by (3.2) and the real numbers β, γ are such that γ − β² > 0. We consider real numbers r_i > 0, 0 < φ_i, ρ_i ≤ π/2, and 0 < θ_i < π (i = 1, 2), with θ_1 ≠ θ_2, satisfying (2.1) and (2.23). Let ψ_i, i = 1, 2, be solutions of equation (3.12) associated to ψ by the Bäcklund transformations BT(r_i, θ_i, φ_i, ρ_i), where C_β′γ′ is the constant given by (3.13) and β′, γ′ are given by (2.2) with r = r_i, θ = θ_i, φ = φ_i, and ρ = ρ_i. Then there exists a unique solution ψ* of the sine-Gordon equation (3.1) associated to ψ_i by BT(r_j, θ_j, ρ_j, φ_j), 1 ≤ i ≠ j ≤ 2. Moreover, ψ* is determined algebraically by (3.17). Observe that in Theorem 3.5 the constants β′ and γ′ defined by (2.2) are independent of i, since (2.23) is satisfied. The composition of Bäcklund transformations and the Ribaucour transformation for hyperbolic linear Weingarten surfaces in R³ We consider a hyperbolic linear Weingarten surface in R³, parametrized by orthogonal lines of curvature X(x_1, x_2), satisfying 1 + 2βH + γK = 0, where β and γ are real constants such that β² − γ < 0.
There are two methods that provide 4-parameter families of linear Weingarten surfaces, with the same constants β and γ, associated to the surface X(x_1, x_2): the composition of Bäcklund transformations, as described in the previous sections, and the Ribaucour transformation. In general, the surfaces obtained by these two methods are not congruent; in fact, starting with the pseudo-sphere, Goulart–Tenenblat [11] proved, with an explicit example, that a composition of Bäcklund transformations is not a Ribaucour transformation. In this section, we determine necessary and sufficient conditions for the hyperbolic linear Weingarten surfaces constructed by these two methods to be congruent. Ribaucour Transformation We state the main concepts and results of the theory of Ribaucour transformations for surfaces in R³, in particular for linear Weingarten surfaces, that will be used in the following subsections; more details of the theory can be found in [6] or [8]. Definition 4.1. Let M and M̃ be orientable surfaces in R³ and let N and Ñ be their Gauss maps. We say that M̃ is associated to M by a Ribaucour transformation if, and only if, there exist a differentiable function h defined on M and a diffeomorphism l : M → M̃ such that the normal lines of M and M̃ at corresponding points p and l(p) intersect at a point equidistant, at distance h, from p and l(p), and the diffeomorphism l preserves lines of curvature. We say that M and M̃ are locally associated by a Ribaucour transformation if for every p ∈ M there exists a neighborhood of p in M that is associated to an open subset of M̃ by a Ribaucour transformation. Similarly, we define parametrized surfaces associated by such transformations. The Ribaucour transformation is characterized in terms of a differential equation that must be satisfied by the map h of the definition (see [6] or [8]). Theorem 4.2. Let M be an orientable surface in R³ without umbilic points, and let N be its Gauss map. We consider orthonormal principal direction vector fields {e_i}, i = 1, 2, with corresponding principal curvatures −λ_i, i.e., dN(e_i) = λ_i e_i. A surface M̃ is locally associated to M by a Ribaucour transformation if, and only if, there exist parametrizations of M and M̃, together with a Gauss map Ñ of M̃, expressed in terms of the functions Z_i of (4.1), where h satisfies the differential equation (4.2) and ω_ij are the connection forms associated to {e_i}. We observe that the differential equation (4.2) is of second order and highly nonlinear; the proposition below shows how the problem of obtaining the function h can be linearized. Proposition 4.3. If h is a nonvanishing function, defined on a simply connected domain, which satisfies equation (4.2), then h = Ω/W, where Ω and W are nonvanishing functions satisfying (4.3); conversely, if Ω and W are nonvanishing functions satisfying (4.3), then h = Ω/W satisfies (4.2). Observe that Ω_i, i = 1, 2, are the covariant derivatives of Ω; moreover, considering Z_i defined by (4.1), one can show that Z_i = Ω_i/W (see [8]). The next theorem shows that, by imposing an additional condition, the Ribaucour transformation of a linear Weingarten surface satisfying α + 2βH + γK = 0 provides a family of surfaces of the same type, with the same constants α, β, γ. Theorem 4.4. (Corro–Ferreira–Tenenblat [8]) Let M be a surface in R³ without umbilic points, and let M̃ be associated to M by a Ribaucour transformation such that the normal lines at corresponding points intersect at a distance h. Assume that h = Ω/W is not constant along the lines of curvature, and suppose that the functions Ω and W satisfy the additional condition (4.4), where α, β, γ, and C_R ≠ 0 are real constants. Then M̃ is a linear Weingarten surface satisfying α + 2βH̃ + γK̃ = 0 if, and only if, M satisfies α + 2βH + γK = 0, where H̃ and K̃ (resp. H and K) are, respectively, the mean and Gaussian curvatures of M̃ (resp. M). Observe that we denote by C_R the constant of the Ribaucour transformation.
Theorem 4.5. (Corro–Ferreira–Tenenblat [8]) Let M ⊂ R³ be a linear Weingarten surface satisfying α + 2βH + γK = 0, with no umbilic points. Let e_i, i = 1, 2, be orthonormal principal direction vector fields, and let ω_i, ω_ij, and ω_i3 be the dual and connection forms. Then the system (4.5) is integrable for any constant C_R ≠ 0. On a simply connected domain, any solution whose initial conditions satisfy (4.4) satisfies (4.4) identically. If M is locally parametrized by X : U ⊂ R² → M, and Ω, W is a nontrivial solution of (4.5) satisfying (4.4), then each surface of the family is a linear Weingarten surface, locally associated to X by a Ribaucour transformation, satisfying α + 2βH̃ + γK̃ = 0, where H̃ and K̃ are the mean and Gaussian curvatures of the transformed surface X̃. Remark 4.6. Considering Z_i given by (4.1), since Z_i = Ω_i/W, we can rewrite condition (4.4) as (4.6), where i, j = 1, 2, g_i = |X_{x_i}|, −λ_i are the principal curvatures of M, C_R ≠ 0 is a real constant, and h = Ω/W. Necessary and sufficient conditions Given a hyperbolic linear Weingarten surface M in R³ satisfying 1 + 2βH + γK = 0, one can consider the surfaces M̃ associated to M by Ribaucour transformations as in Theorem 4.5 and the surfaces M* associated to M by compositions of Bäcklund transformations as in Theorem 2.9. We will determine necessary and sufficient conditions for M̃ and M* to be congruent. Proposition 4.8. Let M ⊂ R³ be a linear Weingarten surface satisfying α + 2βH + γK = 0. If M̃ is associated to M by a Ribaucour transformation as in Theorem 4.5, then the first fundamental form of M̃ is given by Ĩ in terms of the data above, with D = γ − β² (see (3.2)–(3.5)). Remark 4.9. For later use, let us establish the following notation. Substituting (4.10) into (4.11) and using (4.9), we obtain that the first fundamental form of X̃ is given by Ĩ = g̃_1²(dx_1)² + g̃_2²(dx_2)², with g̃_1, g̃_2 as in (4.12). We observe that the first fundamental form of X* is given analogously in terms of g*_1, g*_2 (see (4.15)), and we define the functions ϕ and Λ of (4.14). Using the Analytic Permutability Theorem (Theorem 3.5), we observe that ϕ = tan((ψ* − ψ)/4). Therefore, considering a hyperbolic linear Weingarten surface M immersed in R³, our next theorem establishes the necessary and sufficient conditions for a composition of Bäcklund transformations and a Ribaucour transformation of M to be congruent. Theorem 4.10. Let M ⊂ R³ be a hyperbolic linear Weingarten surface satisfying 1 + 2βH + γK = 0, parametrized by lines of curvature X(x_1, x_2). Let X*(x_1, x_2) be a surface associated to X by a composition of Bäcklund transformations as in Theorem 2.9. Let X̃(x_1, x_2) be a hyperbolic linear Weingarten surface associated to X by a Ribaucour transformation as in Theorem 4.5, such that the normal lines at corresponding points intersect at a distance h(x_1, x_2). Then, with the notation of Remark 4.9, X̃ and X* are congruent if, and only if, h is one of a pair of explicit functions of ϕ and Λ, where ϕ and Λ are given by (4.14) and D = γ − β². Proof: We observe that the first fundamental form of a linear Weingarten surface determines its second fundamental form. With the notation established in Remark 4.9, let g̃_1, g̃_2 and g*_1, g*_2 be given by (4.12) and (4.15), respectively. Since the fundamental forms of X* are determined by the solution ψ* of the sine-Gordon equation (3.1) given by (3.17), X̃ and X* are congruent if, and only if, g̃_1 = ±g*_1 and g̃_2 = ±g*_2. Observe that the equality g̃_i = ±g*_i (i = 1, 2) is a quadratic equation for h in terms of ϕ. Corollary 4.11. Under the same conditions as in Theorem 4.10, if β = 0 and γ = 1, i.e., if the surfaces X, X*, and X̃ have Gaussian curvature equal to −1, then X̃ and X* are congruent if, and only if, h is one of the corresponding pair of functions of ϕ and Λ, where ϕ and Λ are given by (4.14). Appendix We now prove the analytic version of the Permutability Theorem (Theorem 3.5) for the Bäcklund transformations BT(r_i, θ_i, φ_i, ρ_i), i = 1, 2.
In order to achieve our goal, we need to prove some lemmas. We define the real numbers L_ℓ (1 ≤ ℓ ≤ 6) used in the computations that follow. Applying some trigonometric identities, we obtain (5.5); analogously, we prove that △_3 = −△_2 and △_4 = △_1. ✷ With the aid of the lemmas above, we obtain the analytic interpretation of the permutability theorem for hyperbolic linear Weingarten surfaces in R³.
7,341.6
2018-02-19T00:00:00.000
[ "Mathematics" ]
A New Artificial Intelligence-Based Model for Amyotrophic Lateral Sclerosis Prediction Introduction Amyotrophic lateral sclerosis (ALS) occurs due to the gradual deficit of motor neurons in the brain or the spinal cord [1-4]. The development of unknown genes or pathophysiological processes is considered the main cause of the disease [3, 5, 6]. ALS is a complex disorder, since it affects the whole body and causes paralysis. The disease is very rare and, unfortunately, is often diagnosed late. Physicians rely on various syndromes, such as behavioral deficits or cognitive dysfunctions, to identify the disease in its early stages [1, 7-10]. If the disease is diagnosed late, the treatment plan can be affected negatively [2, 5]. The efficient ways to predict and diagnose ALS are to look for related biomarkers and to perform robust clinical evaluations using biological data [1, 11, 12]. Physicians have found that genes play a substantial role, since they are believed to be a cause [3, 13-16]. In addition, the disease can develop from composite interrelations between various factors, such as genes, age, and sex [3, 16, 17]. ALS affects the upper and lower motor neuron (UMN and LMN) networks, which results in dysfunctions in the bulbar, thoracic, and cervical segments [1, 3, 6]. These dysfunctions cause increasing weakness in the skeletal muscles involved in limb movements [3, 18-20]. Bulbar onset, spinal onset, and cervical onset are the multiple phenotypes of ALS [3, 5]. Patients diagnosed with ALS suffer from the loss of the ability to speak. These patients are unlikely to see a neurologist at the beginning of the diagnostic phase, since the disease is hard to predict early if no proper clinical evaluations are performed [1-4]. The clinical evaluations should spot signs of dysfunction in the bulbar, thoracic, and cervical segments [2, 3]. ALS is a rare disease that occurs globally and is most common among people aged between 40 and 70 [21]. It has been found that 5%-10% of positively diagnosed cases occurred due to mutations in the C9orf72, SOD1, and FUS genes, while the remaining cases were sporadic [21, 22]. The disease affects people of all ethnicities and races [21, 22]. Numerous signs and symptoms can be associated with ALS, such as muscle weakness, twitching, atrophy, and cramps [22]. In addition, difficulty in speaking and swallowing, hyperreflexia, emotional and cognitive changes, and respiratory symptoms are common signs of the disease [21]. Physicians use various clinical assessments to diagnose ALS: electromyography, nerve conduction analyses, magnetic resonance imaging (MRI), blood and urine tests, lumbar puncture (spinal tap), genetic testing, and muscle biopsy [22-24].
Increasing age, genetics, environmental factors such as exposure to pesticides, herbicides, lead, and mercury, tobacco smoking, physical trauma, and medical conditions such as primary lateral sclerosis, autoimmune diseases, and frontotemporal dementia are considered risk factors for ALS [23, 24]. Healthcare providers and physicians apply different methods to treat ALS, such as medications like riluzole, baclofen, and tizanidine to manage symptoms, physical and occupational therapy, speech and swallowing therapy, breathing support, nutrition support, psychological and emotional support, and hospice and palliative care [21, 22, 24]. Currently, physicians and researchers face several challenges that affect patients directly: complications in timely diagnosis, rapid progression, the lack of a cure and of adequate treatment alternatives, the difficulty of care, multifaceted genetics, inadequate research funding, and narrow access to clinical trials and rehabilitation services [24]. Table 1 provides general medical information about ALS. Currently, various articles have been published using artificial intelligence (AI) technologies to address ALS prediction and the stratification of patients [2-4]. These articles have provided favorable outcomes; however, the use of these approaches in healthcare facilities is limited due to some unresolved challenges, such as generalizing the methods to work with unseen subjects [1-5]. It is crucial to have a suitable method that can be applied or deployed on any dataset. Magnetic resonance imaging (MRI) is one of the technologies used in clinical evaluations to diagnose ALS, as stated in Table 1. The biggest challenge in diagnosing ALS is the limited availability of datasets [2]. This research offers a new deep-learning approach using a developed UNET architecture to predict ALS. Research Motivations and Contributions. Consistency with the Saudi Vision 2030 and the provision of a reliable diagnostic tool to predict ALS are the motivations of this research. This study aims to predict ALS using the UNET architecture on the utilized dataset. The following points list the contributions of this research: (1) develop a new deep-learning approach based on the UNET model to predict ALS and its development rate; (2) integrate the developed approach with data-preprocessing tools to make the outcomes robust; (3) evaluate the implemented model on a dataset using various characteristics. This article is organized as follows: the related work is given in Section 2, and the suggested approach is described in Section 3. Section 4 provides a deep evaluation and its discussion. Section 5 concludes the article. Literature Review Interested researchers have developed various solutions either to identify ALS or to estimate its progression rate. In this section, several works are covered and discussed. In [6], Pancotti et al.
explored the advantages of using deep-learning methods to predict the ALS development rate. The authors performed the investigation on a dataset using three architectures: a feed-forward neural network (FFNN), a convolutional neural network (CNN), and a recurrent neural network (RNN). In the first architecture, the authors used three hidden layers with a dropout regularization layer; the hidden layers took their inputs from selected static and longitudinal features. In addition, a linear activation function was deployed, and the mean squared error (MSE) was used as the loss function. For the second architecture, the inputs were divided into two parts, the longitudinal features and a static residual, and every input of the convolutional neural network had a size of 11 × 3. The last architecture was used for the longitudinal data only. The authors evaluated their models on a dataset from the Pooled Resource Open-Access ALS Clinical Trials (PRO-ACT) database using two parameters, the root mean squared deviation (RMSD) and the Pearson correlation coefficient (PCC). By contrast, the approach proposed here uses the same dataset with the UNET architecture to predict the disease and evaluate its progression rate, and various performance quantities are utilized for evaluation purposes; the proposed method reached an acceptable level of accuracy, found to range from 82% to 87%. Faghri et al. in [7] applied supervised, semi-supervised, and unsupervised machine-learning models to ALS patients to find the number of ALS subtypes, in order to better understand this disease and study its heterogeneity. The authors obtained data from ALS patients in Italy between 1995 and 2015; in total, 2,858 records were studied. Uniform manifold approximation and projection (UMAP) was the unsupervised model, a neural-network UMAP was the semi-supervised method, and an ensemble learner based on LightGBM was the supervised model. This method identified subtypes and provided useful insight into the ALS substructure, while the approach proposed in this study is able to determine whether or not a patient has ALS and, moreover, to estimate the ALS development rate. In [9], Huang et al.
developed a model to predict ALS using a pattern-analysis method. The model was built on comorbidities and indicators from electronic medical records (EMRs). The authors analyzed these EMRs and compared them with healthy controls to find and select the associated comorbidities. The selected comorbidities were used to build a machine-learning model and to construct a new Weighted Jaccard Index (WJI), yielding a prediction system that uses two levels of comorbidities: single disease codes and clustered codes. The authors used the WJI in four different machine-learning methods to predict ALS; these four models achieved 83.7% accuracy, and other performance indicators were evaluated as well. The authors used a dataset from the NHIRD in Taiwan, collected between 1996 and 2013, and defined two groups, positive and negative, to represent ALS patients and healthy people. The healthy people were used to select healthy-control parameters for building the prediction approach, and the negative records were collected by matching the gender and age attributes of the selected healthy controls. Various experiments were performed to select the associated comorbidities, and statistical analysis was applied to them to find the best healthy controls for implementing the prediction model. The developed model categorized 162 ALS patients accurately. The approach proposed in this article uses the UNET architecture to extract features from the utilized dataset and computes a weight for every characteristic to predict ALS and measure its progression rate. The suggested method achieved a good accuracy, between 82% and 87%, which is better than that achieved by the method in [9]. Imamura et al. in [12] implemented an artificial intelligence-based approach to diagnose ALS using induced pluripotent stem cells (iPSCs). The authors used images of spinal motor neurons (SMNs) to develop the model and analyzed them using a convolutional neural network (CNN). The method reached 97% area under the curve, which was the main performance indicator. The authors trained their model using a VGG-16 neural network, and the approach achieved an average accuracy of nearly 84%; the technique proposed in this article instead utilizes an artificial intelligence-based method to predict ALS using the UNET structure and reaches an accuracy between 82% and 87%, a range better than that reported in [12]. Problem Statement. Various artificial intelligence-based solutions have been developed to identify ALS or to predict its development rate, such as those in [6, 7, 9, 12, 13]. These works either identify the disease or predict its progression rate; none does both. In addition, some works provide no information about accuracy. For these reasons, this article proposes a model that both identifies ALS and predicts its progression rate using an artificial intelligence solution based on the UNET structure. Dataset.
Dataset. The utilized dataset in this study was obtained from the GitHub repository [25]. This dataset contains over 1,500 records of ALS patients and healthy people. These records are split into more than 30 columns, which hold various information such as the patients' IDs, gender, time of visits and diagnosis, and laboratory results. Many data were missing, so the dataset was cleaned and preprocessed before being utilized in the proposed approach. Several tables were constructed for the training, testing, and analysis stages. Table 2 provides details about the data used in this research and the number of records assigned for training, validation, and testing.

The Proposed Methodology. This part provides a full explanation of the proposed approach. The approach takes its inputs from the constructed tables and performs some preprocessing operations to prepare the data to be fully utilized. Figure 1 presents a block diagram of the proposed model. The block diagram shows that the developed method consists of three main phases: the preprocessing phase, the neural network (i.e., UNET), and the evaluation of the implemented method by computing the performance quantities. The internal architecture of the developed UNET is shown in Figures 2 and 3. Initially, input data are segmented as shown in Figure 2, and then the segmented data are processed to extract features and categorize results to produce outputs, as illustrated in Figure 3.

The Amyotrophic Lateral Sclerosis Functional Rating Scale-Revised (ALSFRS-R) is the most common technique worldwide to evaluate ALS disease. It measures 12 daily activities with scores from 0 to 4, where 0 refers to complete loss of the ability to perform an activity and 4 represents normal ability. This scale is used in this study to predict the development rate of ALS. Since the scale ranges from 0 to 4 for each activity, the maximum total value is 48; the sum over all activities represents a score on the scale. Various characteristics are used, such as the number of visits after being diagnosed with ALS, onset type, and onset age. Eighteen features, also referred to as characteristics, are utilized in this research to categorize data and determine the progression rate of ALS. These features include the twelve measured activities and other factors in the utilized dataset. Several statistical parameters are used in this study, such as maximum, minimum, mean, variance, covariance, and standard deviation. These parameters were computed for healthy people to determine the healthy controls (parameters) in the proposed method, and they are compared against ALS patients in the developed method.

As shown in Figure 1, data in the dataset go through several operations in the preprocessing phase to prepare the data for use without any issues and to avoid overfitting, which could occur due to high dimensionality. The utilized data are divided into 70% for training, 10% for validation, and the rest for testing purposes to evade unfairness (a sketch of this split is given at the end of this subsection). During the training session, the 5-fold cross-validation technique was deployed to speed up the process, confirm the model's solidity, and optimize the hyperparameters of the proposed approach. 7,500 bootstraps were applied to compute the confidence intervals. Table 3 lists the applied settings of hyperparameters in the proposed method.
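The following is a minimal sketch, in Python with scikit-learn, of the 70/10/20 split and the 5-fold cross-validation described above; the feature matrix, labels, and random seed are assumptions for illustration, not the paper's actual preprocessing code.

```python
import numpy as np
from sklearn.model_selection import train_test_split, KFold

# Toy feature matrix and labels standing in for the 18 extracted characteristics.
X = np.random.rand(1500, 18)
y = np.random.randint(0, 2, size=1500)  # 1 = ALS, 0 = healthy

# 70% training, 10% validation, 20% testing (as described in the text).
X_train, X_rest, y_train, y_rest = train_test_split(
    X, y, train_size=0.70, stratify=y, random_state=42)
X_val, X_test, y_val, y_test = train_test_split(
    X_rest, y_rest, train_size=1 / 3, stratify=y_rest, random_state=42)

# 5-fold cross-validation over the training portion for hyperparameter tuning.
kf = KFold(n_splits=5, shuffle=True, random_state=42)
for fold, (tr_idx, cv_idx) in enumerate(kf.split(X_train)):
    X_tr, X_cv = X_train[tr_idx], X_train[cv_idx]
    # ... fit the UNET (or any model) on X_tr, evaluate on X_cv ...
    print(f"fold {fold}: {len(tr_idx)} train / {len(cv_idx)} cv samples")
```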
After the required tables were constructed, the remaining clean and useful data were divided into two classes: one class for ALS patients and another class for healthy people. These two classes underwent data incorporation to produce complete sets of medical records. In addition, a statistical analysis was performed on the different disease codes, after counting them one by one, to support the development of the proposed method. A threshold representing the minimum number of ALS patients was set. During the segmentation stage, as shown in Figure 2, the proposed model measures a weight for each characteristic and feeds these weights to the categorization stage to predict ALS and compute its progression rate. Figure 4 illustrates the distribution of the ALSFRS-R score within a year of the training set. This distribution represents a slope versus the counted score; this slope is utilized to evaluate the progression rate of ALS. Features with high weights get higher attention and are inserted into a group called importance characteristics. This group is used in the validation and testing sets to predict the disease. Table 4 shows a sample of the obtained weights for 5 records; the first column refers to patients' IDs and the second column to the calculated weights.

Various performance indicators were utilized to evaluate the developed approach. These performance indicators were accuracy, precision, sensitivity, F-score, cross-entropy loss (CEL), Dice, and Jaccard. In addition, four performance metrics were required to compute these indicators: true positive (TP), true negative (TN), false positive (FP), and false negative (FN). The following equations show how the performance indicators are determined in the proposed model.

The cross-entropy loss is computed as CEL = −Σ_{i=1}^{N} q_i log(P_i), where N refers to the number of classes being evaluated (in this study, N = 2), q represents a binary indicator, which is computed in the proposed system, and P is the predicted probability. This quantity provides a clear view of how far the proposed model is from the needed results; hence, the lower the value, the better the achieved results.

(1) Precision (PRC) is computed as PRC = TP / (TP + FP).
(2) Sensitivity (SEN) is evaluated as SEN = TP / (TP + FN).
(3) Accuracy (ACR) is computed as ACR = (TP + TN) / (TP + TN + FP + FN).
(4) F-score is determined as F-score = 2 × (PRC × SEN) / (PRC + SEN).
(5) Dice (DIC) is calculated as DIC = 2TP / (2TP + FP + FN).
(6) The Jaccard Index (JI) measures the overlap between the detected region and the ground truth label and is computed as JI = |TL ∩ PL| / |TL ∪ PL|, where TL refers to the true label and PL represents the predicted label; the numerator counts the intersecting elements, while the denominator counts the union of the two groups.

Results and Discussion

This section provides an analysis of predicting ALS disease and its development rate through several experiments, together with an evaluation of any factors that affect the proposed model. To confirm the association and correlation between data and their actual classes, the data were distributed evenly into three sets, as shown in Table 1. The developed deep-learning-based approach was examined and evaluated using the MATLAB platform installed on a machine running Windows 11 Pro with an Intel Core i7 8th Gen. processor at 2 GHz, 16 GB RAM, and a 64-bit operating system.
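A minimal sketch in Python of the performance indicators defined above, computed directly from the TP/TN/FP/FN counts; these are the standard formulations of each metric, not code from the paper, and the example counts are placeholders.

```python
import math

def metrics(tp: int, tn: int, fp: int, fn: int) -> dict:
    """Standard classification metrics from a 2x2 confusion matrix."""
    prc = tp / (tp + fp)
    sen = tp / (tp + fn)
    acr = (tp + tn) / (tp + tn + fp + fn)
    f1 = 2 * prc * sen / (prc + sen)
    dice = 2 * tp / (2 * tp + fp + fn)   # equals the F-score for binary counts
    jaccard = tp / (tp + fp + fn)        # |TL ∩ PL| / |TL ∪ PL| on positives
    return {"PRC": prc, "SEN": sen, "ACR": acr,
            "F-score": f1, "DIC": dice, "JI": jaccard}

def cross_entropy(q: list, p: list) -> float:
    """CEL = -sum_i q_i * log(P_i) over the N classes (here N = 2)."""
    return -sum(qi * math.log(pi) for qi, pi in zip(q, p) if pi > 0)

print(metrics(tp=84, tn=80, fp=10, fn=12))
print(cross_entropy([1, 0], [0.9, 0.1]))  # one-hot target, binary case
```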
Predicting Results. Due to the difficulty of finding an ALS dataset, we trust the utilized data and work on them with confidence. A hundred healthy people from the training and testing sets were selected as the control group in this study, and all related data for the control group were identified and counted as well. The implemented method was trained using 1231 records, as listed in Table 2. A comparison of similarities between the two constructed groups was conducted using statistical analysis; this procedure shortened the inputs of the two groups by deleting unwanted values. The estimated average values of all considered performance indicators are shown in Table 5. The model accomplished 85.21% accuracy and an 86.05% F-score, while precision and sensitivity were 84.86% and 84.43%, respectively. These outcomes were obtained using 6500 iterations; increasing the number of iterations enhanced the model's accuracy by nearly 6.8%, but the required processing time increased significantly, which is considered a side effect. The developed approach calculated individual accuracy for the three main onsets, namely spinal, bulbar, and limb; these results are illustrated in Figure 5. The implemented approach identified the bulbar type more often than the other two types due to its data availability in the utilized dataset.

During the training stage, the running time was nearly 27 minutes, which was significantly high because the proposed model goes through three main stages: preprocessing, segmentation, and identification, with the last two stages consuming most of the running time. In order to minimize the execution time, the patch size of each segmented data block was reduced by 30%-50%, and the achieved running time was noticeably better: the execution time went down from 27 minutes to 18 minutes. Figure 6 reveals the maximum attained results of all the considered performance indicators.

To express the computational complexity of the presented model, the running time of the developed approach to categorize input data (in seconds), the number of applied variables within the method, and the number of floating-point operations per second (FLOPS) were measured and evaluated (a rough estimation scheme is sketched at the end of this subsection). Both FLOPS and the number of parameters were in millions. Table 6 shows these results. The approach produced a massive number of FLOPS and variables due to the internal structure of the network and the number of used characteristics. Nevertheless, the final outcomes were favorable and promising. The reported running time refers to the time achieved after shortening the patch size by nearly 45%. Tables 7 and 8 reveal the yielded grouping outcomes on the testing set and a comparative assessment between several developed models [6, 7, 9, 12, 13, 15, 18] and the proposed approach, respectively. The identification results are ALS and healthy. The comparative assessment covers the deployed tool, accuracy, F-score, and Dice. Table 8 shows that the presented algorithm produces promising results and surpasses some implemented methods in the literature. The attained results in Table 7 reveal that the suggested method identified nearly 84% of the data appropriately.
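As an illustration of how such complexity figures can be estimated, the sketch below counts parameters and floating-point operations for a single 2D convolution layer in Python; the layer shapes are assumptions, and the paper's own counting procedure is not specified.

```python
def conv2d_cost(c_in: int, c_out: int, k: int, h_out: int, w_out: int):
    """Parameter and FLOP estimate for one 2D conv layer (bias included).

    params = (k*k*c_in + 1) * c_out
    flops ~= 2 * k*k*c_in * c_out * h_out * w_out  (one multiply + one add)
    """
    params = (k * k * c_in + 1) * c_out
    flops = 2 * k * k * c_in * c_out * h_out * w_out
    return params, flops

# Example: a 3x3 conv mapping 64 -> 128 channels on a 56x56 feature map.
p, f = conv2d_cost(c_in=64, c_out=128, k=3, h_out=56, w_out=56)
print(f"{p / 1e6:.2f} M params, {f / 1e6:.1f} M FLOPs")
```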
Estimation of the Progression Rate. The developed approach estimates the development rate of ALS if a patient is predicted to be diagnosed with the disease. This is performed by drawing the slope of the ALSFRS-R score for the predicted ALS patients only (a minimal least-squares slope computation is sketched at the end of this section). Figure 8 illustrates the slope diagram; it shows that the survival probability decreases as time goes on. By the end of the first year, the survival rate becomes 30%, and death ensues by the end of the following years.

The effect of decreasing the number of utilized features was also explored in this research. The number of characteristics was reduced to seven features only. These features were selected based on the achieved values of the ALSFRS-R score and were Q1 speech, Q3 swallowing, Q4 handwriting, Q6 dressing, Q7 turning in bed, Q8 walking, and Q9 climbing. We noticed that the considered performance indicators went down dramatically, by more than 40%. This shows that the number of utilized characteristics plays a considerable role.

Discussion. In this research, an artificial intelligence-based solution to predict ALS disease and its development rate is presented using one dataset from the GitHub repository. It is worth mentioning that this dataset does not represent a typical distribution; nevertheless, it supported this study and provided favorable information. The presented algorithm generated promising outcomes, since its accuracy lies in an acceptable range from 82% to 87%. This range is better than what was achieved in [9, 18]. In addition, the utilized features contributed to the prediction system and the estimation of the progression rate. Explaining and interpreting AI structures is difficult; however, these methods can be deployed to support and assist physicians in their diagnosis and to provide good treatment plans. Identifying ALS disease and its progression rate were the main aims of this study. Various deep-learning technologies were applied; however, their results were undesired and thus were neglected. We believe this occurred due to the limited data availability and how those methods were deployed and interacted with the used features. To prove the efficacy of the presented approach and its suitability, several statistical tools and performance indicators were applied and evaluated. Furthermore, the prediction algorithm was analyzed using different configurations. To improve the findings, the Adam optimizer was adopted, and it played a key role in enhancing accuracy by 4.78% and reducing the execution time by less than 7%. Among the implemented works, the authors in [13] achieved the highest accuracy, while the proposed model attained moderate outputs; however, no specific solution can provide an absolute ALS diagnosis. The presented method can be deployed to identify the considered disease early, and it is cost-effective. However, the execution time is high and can be seen as a disadvantage.
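The progression rate drawn from the ALSFRS-R scores can be expressed as the slope of a least-squares line fitted to a patient's total scores over time; the sketch below shows this in Python with numpy. The monthly sampling and the example scores are assumptions for illustration, not patient data from the study.

```python
import numpy as np

def alsfrs_slope(months: np.ndarray, scores: np.ndarray) -> float:
    """Least-squares slope of the ALSFRS-R total score (points per month).

    A more negative slope indicates faster ALS progression; the maximum
    total score is 48 (12 activities, each rated 0-4).
    """
    slope, _intercept = np.polyfit(months, scores, deg=1)
    return float(slope)

# Example: one hypothetical patient's total score over five visits.
t = np.array([0.0, 3.0, 6.0, 9.0, 12.0])      # months since diagnosis
s = np.array([42.0, 39.0, 35.0, 33.0, 29.0])  # ALSFRS-R totals
print(f"progression rate: {alsfrs_slope(t, s):.2f} points/month")
```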
Conclusion

In this article, an artificial intelligence-based algorithm to predict ALS and its development rate is presented. It is evident that the system's accuracy increases if the quality of the utilized data is good enough to let the model pull out features without any issues. The quality of data can be improved by performing some required operations, such as cleaning and removing all entries associated with missing values. Increasing the number of features used in the prediction algorithm enhances its findings if these characteristics are trained well. Even though the applied dataset was small, the outputs of the prediction model are higher than 80%, which is acceptable. These outputs were compared with other AI solutions and showed promising conclusions. The presented approach is very cost-effective; however, its running time is a drawback, which can be minimized by reducing the number of utilized layers and their associated parameters in the segmentation and learning phases. Moreover, the computed false positive rate increases if the utilized dataset contains symptoms that are similar to those of ALS. The proposed algorithm shows that detection of the disease in its early stage can be realized; this detection can enable a good treatment plan and quality of life for diagnosed patients. In addition, the implemented approach can be applied by healthcare providers to support and aid physicians in diagnosing ALS properly.

Future work is projected to enhance the identification outputs and minimize the running time of the whole process. In addition, decreasing the complexity of the prediction algorithm is another intention of the projected future work.

Figure 4: The slope distribution of the ALSFRS-R score.
Figure 7: (a) The chart of the cross-entropy result. (b) The achieved ROC curve for the training, validation, and testing sets.
Table 2: Details of the utilized data.
Table 4: Sample of the calculated weights.
Table 5: The results of the performance indicators.
Table 6: The assessment results of the computation complexity.
Table 8: The conducted assessment results.
5,518.8
2023-12-31T00:00:00.000
[ "Medicine", "Computer Science" ]
High Strain Rate Quasi-Superplasticity Behavior in an Ultralight Mg-9.55Li-2.92Al-0.027Y-0.026Mn Alloy Fabricated by Multidirectional Forging and Asymmetrical Rolling

To explore new approaches to severe plastic deformation and the ductility of a multicomponent magnesium–lithium alloy, an ultralight microduplex Mg-9.55Li-2.92Al-0.027Y-0.026Mn alloy was made by novel multidirectional forging and asymmetrical rolling, and its superplasticity behavior was investigated by optical microscopy, hot tensile testing, and modeling. The average grain size of this alloy after multidirectional forging and asymmetrical rolling is 1.9 μm. Remarkable grain refinement is achieved by this forming route, which reduces the as-cast grain size of 144.68 μm to an as-rolled grain size of 1.9 μm. An elongation to failure of 228.05% is obtained at 523 K and 1 × 10−2 s−1, which demonstrates high strain rate quasi-superplasticity. The maximum elongation to failure of 287.12% was achieved in this alloy at 573 K and 5 × 10−4 s−1. It was found that strain-induced grain coarsening at 523 K is much weaker than that at 573 K; thus, the ductility of 228.05% is suitable for application in high strain rate superplastic forming. The stress exponent of 3 and the average activation energy for deformation of 50.06 kJ/mol indicate that the rate-controlling deformation mechanism is dislocation glide controlled by pipe diffusion.

Introduction

The Mg-Li alloy, the lightest nontoxic metallic alloy, has been investigated extensively in recent years [1] and has potential for application in the spaceflight, military, 3C electronics, and automobile industries on account of its low density, excellent specific strength, good damping performance, and excellent electromagnetic shielding capability. In particular, Mg-Li alloys have been used in satellites in the aerospace sector in China. When a space vehicle enters or leaves the atmosphere, it sustains high temperature [2]; when a space vehicle operates on the moon, it can sustain a severe temperature difference of 423 K [3]. Thus, it is necessary to study the high-temperature deformation behavior or superplasticity of Mg-Li alloys. In the past, studies on the superplasticity of binary Mg-Li alloys [4] and of Mg-Li-Zn system [5-7] and Mg-Li-Zn-Al system [8] alloys were reported, because Zn imparts higher plasticity than Al in such alloys. However, there are few reports on the high-temperature superplasticity behavior of Mg-Li-Al system alloys. Thus, a novel multicomponent Mg-Li-Al-Y-Mn alloy was designed, and its high-temperature behavior was studied.

Superplasticity is the capability of materials to exhibit large ductility and requires (i) a fine or ultra-fine grain size of less than 10 µm and (ii) a temperature of more than 0.5Tm, where Tm is the absolute melting temperature, together with a certain strain rate [9]. Superplastic forming is especially suitable for the manufacture of complex components such as thin-wall and high-rib components. To realize superplasticity, grain refinement is essential.

Experimental Procedures

The melting and casting process of the alloy ingot adopted Jackson's flux-argon atmosphere protection method. The analyzed composition of the ingot was Mg-9.55Li-2.92Al-0.027Y-0.026Mn. After milling of the ingot surface, the milled ingot was homogenized at 473 K for 16 h. The ingot was cut into billets with dimensions of 40 × 32 × 22 mm³.
Our previous reports on multidirectional forging are available elsewhere [8,22]. The schematic sketch of multidirectional forging and asymmetrical rolling is shown in Figure 1. The cuboid billets were forged at 523 K by alternately changing the pressing direction for six passes on a 3000 kN hydraulic press. The forged billets were then asymmetrically hot rolled at 523 K to plates 4 mm thick with a reduction of 81.82% and cold rolled to sheets 2 mm thick with a reduction of 50%. The asymmetrical speed ratio was 1.2. The multidirectional forging specimens for microstructural observation were taken from the central section of the cuboid. The asymmetrical rolling specimen for microstructural observation was taken from the rolled sheet along the longitudinal rolling direction.

Dog-bone specimens for tensile deformation and microstructural observation were taken along the longitudinal rolling direction and made by spark discharge machining. The specimen dimensions were 13 × 3 × 2 mm³. After being annealed at 448 K for 60 min and held at the designated testing temperatures for 15 min, the tensile tests were performed on a Shimadzu AG-Xplus 100 kN tester in the temperature range of 423~573 K and the strain rate range of 1 × 10−2~5 × 10−4 s−1.

Initial Thermomechanical Processing Microstructures

Optical examination specimens were cut, ground, and polished by the conventional metallographic method. The etching solution was 10 vol.% hydrochloric acid + 90 vol.% ethyl alcohol, or a solution of 5 g picric acid + 5 g acetic acid + 10 mL deionized water + 100 mL absolute alcohol. Optical microstructures were observed on an Olympus DSX500 optical microscope. Image-Pro Plus (IPP) software was used to measure the grain size. The high-temperature tensile specimens were mechanically ground and polished to 80 µm, and then discs 3 mm in diameter were punched. Chemical twin-jet polishing and liquid-nitrogen-cooled ion thinning were conducted to prepare the samples for transmission electron microscopy (TEM) observation. The electrolyte was a solution of 10% HClO₄ + 90% ethyl alcohol. The operating voltage of the twin-jet was 12 V, the current was 30 mA, and the temperature of electrolytic polishing was 233 K (−40 °C). The operating parameters of ion thinning were 4 kV ± 3 for 5 min, 3 kV ± 3 for 10 min, and 2 kV ± 2 for 10 min. The temperature of ion thinning ranged from 143 K (−130 °C) to 173 K (−100 °C). An FEI Tecnai F30 field emission transmission electron microscope was used for dislocation observation.

Figure 2 shows the optical microstructures of the present alloy under different processing states.
As shown in Figure 2a, the as-cast structure consists of the white acicular and plate-like α-Mg phase and the gray β-Li phase, where the α-Mg phase is distributed in the matrix of the β-Li phase. The average grain size is 144.68 µm; the average grain size was measured with IPP software as the linear intercept grain size. As shown in Figure 2b, after six-pass multidirectional forging, both phases are refined due to the shear stress caused by the pressing stress, which keeps changing its loading direction. The average grain size is 11.72 µm. As shown in Figure 2c, after multidirectional forging (523 K), asymmetric hot rolling (523 K), and cold rolling, the grains were greatly refined under the imposed rolling stress, and banded or elongated grains are clearly visible. The average grain size measured by IPP software is 1.9 µm along the vertical rolling direction. As shown in Figure 2d, the banded α-Mg grains shorten, and some equiaxed grains appear due to static recrystallization in the alloy sheet annealed at 448 K for 60 min. The average grain size is 4.14 µm. As shown in Figure 2e, after the sample is kept at 573 K for 15 min, pronounced grain coarsening occurs because of high-temperature grain boundary migration. The average grain size in the gripping section of the specimen is 18.26 µm.

Figure 3 shows the engineering stress-strain curves of this alloy at different temperatures and strain rates. In most cases, the engineering strain or elongation to failure increases with decreasing strain rate from 1 × 10−2 to 5 × 10−4 s−1. The engineering stress decreases with the increase in tensile temperature from 423 to 573 K. This is because, with the decrease in strain rate and the increase in temperature, the tensile time is prolonged, thermal activation accelerates, the dislocation density decreases, and the stress decreases. As shown in Figure 3b, a ductility or elongation of 146.39% is obtained at 473 K and 1 × 10−2 s−1, which demonstrates low-temperature, high strain rate quasi-superplasticity. As shown in Figure 3c, a ductility or elongation of 228.05% is obtained at 523 K and 1 × 10−2 s−1, which demonstrates high strain rate quasi-superplasticity. As shown in Figure 3d, the maximum ductility or elongation of 287.12% was obtained in this alloy at 573 K and 5 × 10−4 s−1.

Compared with Figure 2c, slight dynamic grain coarsening occurs at 523 K and 1 × 10−2 s−1 (Figure 4). In addition, compared with Figure 2d, pronounced dynamic grain coarsening occurs at 573 K and 5 × 10−4 s−1. This means that the 287.12% elongation is obtained in this coarse-grained microstructure.
Furthermore, the average grain sizes of the α-Mg and β-Li phases are 2.03 and 6.91 µm, respectively, at 523 K and 1 × 10−2 s−1, and 9.64 and 26.27 µm, respectively, at 573 K and 5 × 10−4 s−1. As the temperature increases and the strain rate decreases, the average grain sizes of the dual phases increase, indicating the occurrence of phase coarsening.

Figure 5 presents TEM images of stacking faults in the present alloy at 523 K and 1 × 10−2 s−1 and at 573 K and 5 × 10−4 s−1. Some stacking faults exist in the alloy. Since the samples for TEM examination had been exposed to the ambient environment for 365 days, the dislocations, which existed in a high-energy, non-equilibrium state, reacted and dissociated during the high-temperature tensile test into the low-energy, equilibrium configurations shown in Figure 5. That may be why the stacking faults appear. This indicates the activity of dislocation glide during the high-temperature tensile test. This discovery of stacking faults is, to our knowledge, the first such observation in a Mg-Li alloy system deformed at elevated temperature.

The m values range from 0.169 to 0.423, most of which lie between 0.2 and 0.3, indicating that the dominant deformation mechanism is dislocation creep. The m value of 0.375 (stress exponent n = 1/m = 2.66 ≈ 3) corresponds to the maximum elongation to failure of 287.12%, indicative of the occurrence of quasi-superplasticity or superplasticity-like behavior. The stress exponent n = 2.66 ≈ 3 reveals that dislocation viscous glide governs the rate-controlling process under this condition.
Establishment of the Power-Law Constitutive Equation at Elevated Temperature

The power-law constitutive equation at elevated temperature is generally expressed as [23]

ε̇ = A (D₀Gb/kT) (b/d)^p ((σ − σ₀)/G)^n exp(−Q/RT), (1)

where ε̇ is the steady-state deformation rate, A is a dimensionless constant, G is the shear modulus (a function of temperature), b is the magnitude of the Burgers vector of dislocation, k is Boltzmann's constant, T is the absolute temperature, d is the grain size, p is the grain size exponent, σ is the applied stress, σ₀ is the threshold stress, n is the stress exponent (n = 1/m, with m the strain rate sensitivity index), D₀ is the frequency factor for diffusion, Q is the activation energy for deformation, and R is the universal gas constant. Here, power-law constitutive modeling is performed to elucidate the deformation mechanism at elevated temperature and is suitable for application in superplastic forming process control. In order to determine the threshold stress, the n value, and the Q value, the true stress and true strain formulae are applied to the data of Figure 3: true stress = engineering stress × (1 + engineering strain); true strain = ln(1 + engineering strain).

Figure 7 shows the linear fitting of the σ-ε̇^(1/n) relation in this alloy to determine the threshold stress and stress exponent (this fitting procedure is sketched below). At the true strain of 0.2, the values of the threshold stress σ₀ are determined using linear fitting of the σ-ε̇^(1/n) relation. When n = 4 and 5, the threshold stresses become negative; hence, n = 4 and n = 5 are excluded. When n = 3, the determination coefficient R² is 0.9858 with the best correlation, which is higher than the determination coefficient R² of 0.9759 when n = 2. Thus, the true stress exponent is determined to be 3.

As per Equation (1), the deformation activation energy is given by

Q = nR ∂ln[(σ − σ₀)/G]/∂(1/T), (2)

evaluated at constant strain rate. As dislocation creep is predominant in this alloy, p is equal to zero [24].
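The threshold-stress extraction just described can be reproduced numerically: for each trial n, plot σ against ε̇^(1/n) at fixed temperature, fit a line, read σ₀ off the intercept, exclude negative intercepts, and keep the n with the best R². Below is a minimal Python sketch of this procedure; the sample stress/strain-rate values are placeholders, not the paper's measured data.

```python
import numpy as np

def threshold_stress(strain_rates, stresses, n_candidates=(2, 3, 4, 5)):
    """Linear fits of sigma vs. strain_rate**(1/n); returns per-n (sigma0, R^2).

    The intercept of each fit estimates the threshold stress sigma0;
    negative intercepts rule that n out, and the best R^2 selects n.
    """
    results = {}
    for n in n_candidates:
        x = np.asarray(strain_rates) ** (1.0 / n)
        y = np.asarray(stresses)
        slope, intercept = np.polyfit(x, y, 1)
        y_hat = slope * x + intercept
        ss_res = np.sum((y - y_hat) ** 2)
        ss_tot = np.sum((y - y.mean()) ** 2)
        results[n] = (intercept, 1.0 - ss_res / ss_tot)
    return results

# Placeholder data: flow stress (MPa) at four strain rates (s^-1), one temperature.
rates = [5e-4, 1e-3, 5e-3, 1e-2]
sigma = [12.0, 14.5, 21.0, 25.0]
for n, (s0, r2) in threshold_stress(rates, sigma).items():
    print(f"n={n}: sigma0={s0:.2f} MPa, R^2={r2:.4f}")
```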
Young's modulus of Mg is given by E = 48,700 − 8.59T − 0.0195T² [25]. The relationship between Young's modulus E, Poisson's ratio υ, and shear modulus G is given by

G = E / [2(1 + υ)], (3)

where Poisson's ratio υ of Mg is 0.28 [26]. Figure 8 shows the fitting curves of ln[(σ − σ₀)/G] versus 1/T at various strain rates. The activation energy for the high-temperature deformation of the present alloy is in the range of 47.13~53.68 kJ/mol, and the average experimental activation energy is 50.56 kJ/mol.

Analysis of the Processing Principle of Our MDF + Asymmetrical Rolling Approach

The method of MDF + asymmetrical rolling is proposed in this manuscript and put into effect via experimental forming. The total imposed strain during the MDF + asymmetrical rolling processing is 6.2. The principle behind the combined forming is that the accumulated strain of 6.2 through MDF + asymmetrical rolling is much larger than the accumulative strain of MDF alone, 3.6, and the strain of asymmetrical rolling alone, 2.6. Here, the strain of asymmetrical rolling, 2.6, is calculated using the true strain formula in reference [27] (the basic thickness-strain arithmetic is sketched at the end of this subsection). Thus, the grain refinement of MDF + asymmetrical rolling is superior to that of simple MDF and simple asymmetrical rolling; that is the advantage of MDF + asymmetrical rolling. Compared to the average grain size of 5.5 µm in our previous work [28] on the Mg-6.4Li-3.6Zn-0.37Al-0.36Y alloy and the average grain size of 3.75 µm in our previous work [8] on Mg-10.2Li-2.23Zn-2.1Al-0.2Sr processed by MDF + symmetrical flat rolling, the average grain size in Figure 2c is 1.9 µm in the present alloy (Mg-9.55Li-2.92Al-0.027Y-0.026Mn) fabricated by MDF + asymmetrical rolling. As shown in Figure 2c, grains are fragmented and refined after MDF + asymmetrical rolling. This reveals that remarkable grain refinement is achieved by the novel MDF + asymmetrical rolling. This is because, compared to symmetrical flat rolling, asymmetrical rolling exerts more shear action on the rolled piece during the rolling deformation process; the intense shear imposed by the asymmetrical rolling intensifies grain fragmentation and refinement. Here, it is noted that the average grain size is measured by IPP software and is a statistical result of the grain band width, because the rolling grain size is usually expressed by the average grain band width. In addition, asymmetrical rolling results in fine equiaxed grains in AZ (Mg-Al-Zn) magnesium alloys, but leads to banded or elongated grains instead of equiaxed grains in Mg-Li alloys such as the present (Mg-9.55Li-2.92Al-0.027Y-0.026Mn) alloy.
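The strain bookkeeping above can be checked with the usual true-strain relation for rolling, ε = ln(h₀/h₁) = −ln(1 − r) per pass, with r the thickness reduction, summed with the forging strain. The short Python sketch below does this arithmetic; treating the MDF contribution as the stated 3.6 and ignoring the extra shear strain from the 1.2 asymmetric speed ratio are simplifying assumptions, so the result only approximates the totals reported via the formula of [27].

```python
import math

def rolling_true_strain(reduction: float) -> float:
    """True thickness strain for one rolling step: ln(h0/h1) = -ln(1 - r)."""
    return -math.log(1.0 - reduction)

# Thickness reductions from the experimental section.
hot_roll = rolling_true_strain(0.8182)   # hot rolling to 4 mm, ~81.82%
cold_roll = rolling_true_strain(0.50)    # cold rolling 4 mm -> 2 mm, 50%
mdf = 3.6                                # accumulated MDF strain (as stated)

total = mdf + hot_roll + cold_roll
print(f"rolling strain ~ {hot_roll + cold_roll:.2f}, total ~ {total:.2f}")
# The paper's value of 2.6 for asymmetrical rolling additionally accounts
# for the shear component introduced by the 1.2 speed ratio [27].
```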
Analysis of Dynamic Grain Coarsening after Tension

In recent years, the issue of grain coarsening, including static grain coarsening [29-31] and dynamic grain coarsening [32-34], has attracted the attention of researchers studying superplastic aluminum alloys and Zn-0.8Ag alloys, but according to our survey, only a few reports are available regarding dynamic grain coarsening in superplastic magnesium alloys. In particular, little information is available on superplastic Mg-Li based alloys except for our previous work [35]. Here, dynamic grain coarsening means deformation-induced or strain-induced grain coarsening: with increasing strain, grain boundary and phase boundary migration increase, and the grains coarsen.

As shown in Figure 4, compared to the initial microstructures in Figure 2c,d, strain-induced grain coarsening appears in the tensile-deformed alloy. Due to short-time, weak coarsening (grain size 6.42 µm) at the high strain rate of 1 × 10−2 s−1 and 523 K, the elongation to failure of 228.05% is obtained at 523 K and 1 × 10−2 s−1. It is found that strain-induced grain coarsening at 523 K is much weaker than that at 573 K; thus, the ductility of 228.05% is especially suitable for application in high strain rate quasi-superplastic forming. Furthermore, due to pronounced grain coarsening (grain size 22.32 µm) at 573 K, the elongation of 287.12% is obtained in this alloy at 573 K and 5 × 10−4 s−1.

The causes of grain coarsening are analyzed as follows. Firstly, the experimental phase proportion of α-Mg phase to β-Li phase, 15.59:84.41, indicates that the present alloy is a β-Li phase-dominated alloy with a small volume fraction of α-Mg phase. As such, the capability of the hard α-Mg phase to restrict the coarsening of the soft β-Li phase is weaker at 573 K than at 523 K. Secondly, based on the report on the diffusivities of the α-Mg and β-Li phases [36], the diffusivity or mobility of the β-Li phase is much higher than that of the α-Mg phase; hence, the β-Li-dominated alloy is prone to grain coarsening at higher temperature. Thirdly, according to Jin et al.'s report [37], the product of the grain boundary width and the grain boundary diffusivity for Al follows an Arrhenius relation of the form

δD_gb = (δD_gb)₀ exp(−Q_gb/RT), (4)

where δ is the grain boundary width (= 2b), b is the magnitude of the Burgers vector of dislocation, 2.86 × 10−10 m for Al, D_gb is the grain boundary diffusivity of Al, R is the gas constant, and T is the absolute temperature. Thus, the grain boundary diffusivity of Al at 573 K is calculated to be 1.92 × 10−12 m² s−1, whereas the grain boundary diffusivity of Mg at 573 K is 2.88 × 10−11 m² s−1 [36]. This means that strain-induced grain coarsening in Mg alloys is stronger than that in Al alloys. It is therefore not surprising that strain-induced grain coarsening is a common phenomenon and feature in Mg alloys at a certain temperature, including the present Mg-9.55Li-2.92Al-0.027Y-0.026Mn alloy. This viewpoint is supported by our result in this Mg-Li-Al-Y-Mn alloy, by Kim et al.'s argument [38], and by Figueiredo and Langdon's grain coarsening evidence [39] in a ZK60 (Mg-Zn-Zr) magnesium alloy tensile-tested at 493 K.
Fourthly, as the contents of the Y and Mn elements are only 0.027 and 0.026 wt.%, respectively, Zener pinning cannot be effectively realized through the intermetallic compounds formed by Al and these two elements; grain boundary (α-Mg/α-Mg, β-Li/β-Li) migration and (α-Mg/β-Li) phase boundary migration therefore occur, and strain-induced grain coarsening results.

In addition, the superplasticity of the Mg-Li-Al system alloy and the binary Mg-Li alloy are compared. Dutkiewicz et al. [40] have recently reported the superplasticity of a Mg-9Li-2Al-0.5Sc alloy fabricated by extrusion and cyclic forging. They obtained superplastic elongations between 150 and 190% at 423 K and claimed that the superplasticity of the Mg-Li-Al system alloy is lower than that of the binary Mg-Li alloy. This is consistent with our experimental results, in which the elongation of the Mg-9.55Li-2.92Al-0.027Y-0.026Mn alloy is lower than that of the Mg-8Li alloy [36]. This is because the addition of the Al, Y, and Mn elements to the binary Mg-9.55Li alloy increases the deformation resistance of intragranular slip in the matrix, raises the flow stress, and disfavors grain boundary sliding. Meanwhile, the phase proportion in this alloy is not close to 50:50 and does not induce the superplastic crane effect. For the Mg-8Li alloy without these elements, however, the crane effect, the phenomenon whereby the maximum superplastic elongation or grain boundary sliding is achieved in a dual-phase alloy at a 50:50 phase proportion, is easily realized.

Deformation Mechanism at Elevated Temperatures

The activation energy for deformation and the stress exponents were determined in order to judge the deformation mechanism at elevated temperature. The phase proportion of the α-Mg phase to the β-Li phase is calculated to be 16.30:83.70 as per the binary Mg-Li phase diagram [41], and the experimental phase proportion is measured to be 15.59:84.41, indicating that this alloy is a β-Li phase-dominated multicomponent alloy, which is consistent with the microstructures in Figure 2. To obtain the theoretical activation energy according to our previous model [36], Arrhenius-type relations for the grain boundary diffusivity D_gb and the lattice diffusivity D_l as functions of the absolute temperature T are adopted for the Mg-9.55Li-2.92Al-0.027Y-0.026Mn alloy (Equations (6) and (7) of that model). Since D_p = D_gb [42], where D_p is the coefficient of pipe diffusion or pipe diffusivity, and in consideration of D = D₀ exp(−Q/RT), where D₀ = 1 × 10−4 m² s−1 [43], Table 1 is obtained from Equations (6) and (7). In terms of the above-mentioned grain boundary diffusivity of Mg, 2.88 × 10−11 m² s−1, and the pipe diffusivity of 5.79 × 10−11 m² s−1 at 573 K in Table 1, the grain boundary (or pipe) diffusivity or mobility of the present alloy is twice that of Mg. As is known, the grain-coarsening velocity or boundary-migration velocity is directly proportional to the mobility. This means that the grain-coarsening velocity of the present alloy is twice that of Mg at 573 K. Because of grain coarsening at 573 K, grain boundary sliding is hindered and intragranular sliding is enhanced; as a result, the ductility decreases. This indirectly indicates that the appropriate quasi-superplastic deformation temperatures for the present Mg-9.55Li-2.92Al-0.027Y-0.026Mn alloy are from 473 to 523 K, in which range strain-induced grain coarsening is not obvious.
It is noted in Table 1 that the theoretical pipe diffusion activation energy, Q_p, is 68.4 kJ/mol, and the theoretical lattice diffusion activation energy, Q_l, is 107 kJ/mol in the temperature range of 473-573 K. As shown in Section 3.3, the average experimental activation energy is 50.56 kJ/mol, which is close to the theoretical activation energy of pipe diffusion, 68.4 kJ/mol. This reveals that pipe diffusion governs the diffusion process. Meanwhile, as shown in Section 3, the stress exponent is determined to be 3, indicating that dislocation viscous glide governs the rate-controlling process. According to the available reports on dislocation viscous glide or solute drag creep in solid-solution-based aluminum alloys [44-47] and quasi-single-phase magnesium alloys [48] deformed at elevated temperature, the occurrence of dislocation viscous glide or solute drag creep results from the interaction of solutes and dislocations. However, the appearance of dislocation glide in the β-phase-dominated Mg-9.55Li-2.92Al-0.027Y-0.026Mn alloy is a new discovery. Thus, the deformation mechanism at elevated temperature is dislocation glide controlled by pipe diffusion.

Furthermore, the activation volume is estimated to judge the deformation mechanism at elevated temperature. The activation volume, V*, is given by the following formula [49]:

V* = M_T kT [ln(ε̇₂/ε̇₁)/∆σ]_T, (8)

where M_T is the Taylor factor, = 4.5 [50] for the equiaxed grain microstructure in Figure 4, k is Boltzmann's constant, T is the absolute temperature, ε̇ is the strain rate, σ is the true stress, and [ln(ε̇₂/ε̇₁)/∆σ]_T indicates the variation in logarithmic strain rate divided by the yield stress difference at constant temperature (a numerical evaluation of Equation (8) is sketched below). b³, where b is the magnitude of the Burgers vector of dislocation, is taken as a volume unit; the b value of Mg is 3.21 × 10−10 m, and V*/b³ is taken as the normalized activation volume.

Figure 10 shows the normalized activation volume as a function of temperature at different strain rates. There are three curves in Figure 10 because the calculation of the activation volume in Equation (8) involves the varying or jump strain rate. The normalized activation volume increases with increasing deformation temperature. It is reported [51] that when the grain size ranges from 6 to 40 µm and V* = 100 − 300b³, trans-granular dislocation slip occurs, whereas when V* < 1 − 10b³, grain boundary sliding occurs in nanometer materials. The experimental grain sizes of 6.42 and 22.32 µm fall into the range of 6-40 µm. Since the ratio V*/b³ is in the range of 25-366, as shown in Figure 10 at different temperatures, dislocation slip governs the deformation mechanism. From what has been described above, we can conclude that the deformation mechanism at elevated temperature is dislocation glide or slip.

To further validate the dislocation activity, an estimation was made of the number of dislocations inside a grain at 523 K and 1 × 10−2 s−1. The number of dislocations inside a grain is given by the relation of reference [52] (Equation (9)), in which N is the number of dislocations, ν is Poisson's ratio, 0.28 for Mg, d is the grain size, d = 6.42 µm (Figure 4a), σ is the true stress, 16.4 MPa, determined by the true stress (strain)-engineering stress (strain) relation in Section 3.2.1, G is the shear modulus, 15,189 MPa (Equation (3)), and b is the magnitude of the Burgers vector, 3.21 × 10−10 m for Mg. Thus, N = 28.14 ≈ 29; there are 29 dislocations inside a grain under this condition.
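Equation (8) reduces to simple arithmetic once a pair of strain rates and the corresponding flow-stress change are known. The Python sketch below evaluates the normalized activation volume V*/b³ for illustrative inputs; the strain-rate jump and Δσ are placeholders chosen only to land in the 25-366 range quoted above, not the paper's measured values.

```python
import math

K_B = 1.380649e-23   # Boltzmann constant, J/K
B = 3.21e-10         # magnitude of the Burgers vector of Mg, m
M_T = 4.5            # Taylor factor for the equiaxed microstructure

def normalized_activation_volume(T, rate1, rate2, delta_sigma_pa):
    """V*/b^3 with V* = M_T * k * T * ln(rate2/rate1) / delta_sigma (Eq. (8))."""
    v_star = M_T * K_B * T * math.log(rate2 / rate1) / delta_sigma_pa
    return v_star / B**3

# Placeholder jump test: strain rate raised 1e-3 -> 1e-2 s^-1 at 523 K with a
# 10 MPa flow-stress increase.
print(f"V*/b^3 ~ {normalized_activation_volume(523, 1e-3, 1e-2, 10e6):.0f}")
```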
In consideration of the experimental evidence of stacking faults dissociated from the dislocation reaction in Figure 4a, both the theoretical estimation and the experimental evidence support the occurrence of dislocation glide. Moreover, Equation (9) was validated in our previous report on the hot-compressed Al-Mg-Er-Zr alloy [53] and is convincing. In light of the aforementioned facts and analyses, the deformation mechanism at elevated temperature is found to be dislocation glide controlled by pipe diffusion.

Conclusions

(1) An ultralight microduplex Mg-9.55Li-2.92Al-0.027Y-0.026Mn alloy was made by novel multidirectional forging and asymmetrical rolling. The average grain size was 1.9 µm in the present alloy fabricated by multidirectional forging + asymmetrical rolling. Remarkable grain refinement was achieved by this forming route, which turned the as-cast grain size of 144.68 µm into the as-rolled grain size of 1.9 µm.

(2) An elongation to failure of 228.05% was obtained at 523 K and 1 × 10−2 s−1, which demonstrates high strain rate quasi-superplasticity. The maximum elongation to failure of 287.12% was demonstrated in this alloy at 573 K and 5 × 10−4 s−1. It was found that strain-induced grain coarsening at 523 K is much weaker than that at 573 K; thus, the ductility of 228.05% is suitable for application in high strain rate superplastic forming. Theoretical analysis of atomic diffusion shows that the grain-coarsening velocity of the present alloy was twice that of Mg at 573 K. This indicates that the appropriate quasi-superplastic deformation temperatures for the present Mg-9.55Li-2.92Al-0.027Y-0.026Mn alloy are from 473 to 523 K, in which range strain-induced grain coarsening is not obvious.

(3) The power-law constitutive equation was established. The stress exponent was determined to be 3.
The average activation energy for deformation was 50.06 kJ/mol, which is close to the theoretical activation energy of pipe diffusion, 68.4 kJ/mol. The estimated stress exponent, activation energy, and activation volume indicate that the rate-controlling deformation mechanism is dislocation glide controlled by pipe diffusion.

Data Availability Statement: The data presented in this study are available on request from the corresponding author.

Conflicts of Interest: The authors declare no conflict of interest.
8,066.8
2022-10-27T00:00:00.000
[ "Materials Science" ]
Preparation and Characterization of Biocompatible Iron/Zirconium/Polydopamine/Carboxymethyl Chitosan Hydrogel with Fenton Catalytic Properties and Photothermal Efficacy

In recent years, multifunctional hydrogel nanoplatforms for the synergistic treatment of tumors have received a great deal of attention. Here, we prepared an iron/zirconium/polydopamine/carboxymethyl chitosan hydrogel with Fenton and photothermal effects, promising for future use in the field of synergistic therapy and prevention of tumor recurrence. The iron (Fe)–zirconium (Zr)@polydopamine (PDA) nanoparticles were synthesized by a simple one-pot hydrothermal method using iron (III) chloride hexahydrate (FeCl3·6H2O), zirconium tetrachloride (ZrCl4), and dopamine, followed by activation of the carboxyl group of carboxymethyl chitosan (CMCS) using 1-(3-dimethylaminopropyl)-3-ethylcarbodiimide hydrochloride (EDC)/N-hydroxysuccinimide (NHS). Finally, the Fe–Zr@PDA nanoparticles and the activated CMCS were mixed to form a hydrogel. On the one hand, the Fe ions can use the hydrogen peroxide (H2O2) that is abundant in the tumor microenvironment (TME) to produce toxic hydroxyl radicals (•OH) and kill tumor cells, and Zr can also enhance the Fenton effect; on the other hand, the excellent photothermal conversion efficiency of the incorporated PDA is used to kill tumor cells under near-infrared irradiation. The ability of the Fe–Zr@PDA@CMCS hydrogel to produce •OH and its photothermal conversion capability were verified in vitro, and swelling and degradation experiments confirmed the effective release and good degradation of this hydrogel in an acidic environment. The multifunctional hydrogel is biologically safe at both the cellular and animal levels. Therefore, this hydrogel has a wide range of applications in the synergistic treatment of tumors and the prevention of recurrence.

Introduction

The effective treatment of tumors remains a challenge for the biomedical field today [1]. Traditional clinical treatments include surgery [2], chemotherapy [3], radiotherapy [4,5], immunotherapy [6,7], etc. However, all these methods have certain limitations, such as the risk of metastasis or infection due to surgical trauma [8] and the lack of targeting of some chemotherapeutic drugs to the lesion, which kill normal tissue cells [9,10]. These shortcomings have limited the prospects for their use in oncology treatment. Therefore, there is a pressing need to develop more effective therapeutic strategies to tackle the difficulties in tumor therapy. In recent years, studies have shown that various kinds of tumor cells exhibit elevated levels of reactive oxygen species (ROS) and an altered redox status due to genetic, metabolic, and microenvironmental alterations [11-14]. Stimulated by high ROS levels, oncogenes can induce the activation of various downstream signaling pathways to adapt to the elevated oxidative stress.
While combining the Fenton effect and photothermal efficacy, the functionalized hydrogel not only functions as a tumor suppressor but also plays an important role in preventing the recurrence of residual tumor tissue after local surgical excision (Scheme 1). The novelty of our work lies in the use of the classical Fenton reaction, combined with metal-ion properties, to complement the hydrogel and form a composite multifunctional hydrogel, circumventing the disadvantages of applying nanoparticles alone, amplifying their advantages, and uniting them with the biocompatible advantages of hydrogels; the composite system provides a new idea for broadening the application potential of hydrogels and a novel strategy in the field of tumor killing and prevention of local tumor recurrence.

Scheme 1. A practical and working schematic of a multifunctional hydrogel that can be injected for future use in killing tumors and inhibiting their postoperative recurrence (CDT: chemodynamic therapy; PTT: photothermal therapy).

Preparation and Characterization of Fe-Zr@PDA and Fe-Zr@PDA@CMCS Hydrogel

To examine the microstructural morphology, the nanoparticles were investigated with a scanning electron microscope (SEM, Zeiss Sigma 300, Oberkochen, Germany). Fe-Zr@PDA was synthesized by a hydrothermal one-pot method. The Fe-Zr@PDA NPs exhibited a homogeneous spherical morphology with a particle size of approximately 42.4 ± 9.2 nm (Figure 1a). The hydrodynamic diameters of the NPs dispersed in different media were measured using DLS. As shown in Figure 1b, the mean hydrodynamic diameter of Fe-Zr@PDA in water was 164 nm. The results of DLS also revealed that Fe-Zr@PDA tended to undergo a certain degree of aggregation.
More importantly, the hydrodynamic diameters of Fe-Zr@PDA hardly changed within 24 h. To verify the existence of iron, Fe-Zr@PDA was mixed with the o-phenanthroline solution at room temperature. It was observed that the supernatant changed swiftly from colorless to orange-red and showed the characteristic absorption at 510 nm. However, there was no detectable color change when cultured without Fe-Zr@PDA (Figure 1c). These results suggest the presence of Fe in the composite. Fe-Zr@PDA@CMCS hydrogel was prepared by mixing the Fe-Zr@PDA with the EDC/NHS-activated CMCS. SEM analysis showed that the CMCS hydrogel had a smooth surface (Figure 1d). Compared to the unactivated CMCS [48], a new peak at 1650 cm−1 assigned to the -C=N was observed in the FT-IR spectrum of the CMCS hydrogel (Figure 1e). These results suggest that CMCS forms amide bonds through the activation of EDC/NHS, which facilitated the formation of hydrogels. A digital camera was used to record the changes of the black solution (Figure 1f,g), indicating the solution state and solid state of the hydrogel. The macroscopic and microscopic appearance of the hydrogel was then recorded, which revealed that the hydrogel forms at the macroscopic scale. PDA is capable of generating thin film coatings on the surface of multiple materials. As shown in Figure 1d,h, the CMCS hydrogel showed a smooth surface, while the Fe-Zr@PDA@CMCS hydrogel showed a loose, porous structure. This may be attributed to the formation of hydrogen bonds between PDA and CMCS. The element distribution diagram clearly showed that Zr was evenly distributed in the hydrogel, and there were no excess impurity elements in the hydrogel (Figure 1i).

Photothermal Conversion Evaluation of Fe-Zr@PDA@CMCS Hydrogel

In recent years, great efforts have been made in developing photothermal agents for the therapy of various types of tumors [28]. PDA has good photothermal properties, enhances the biocompatibility of nanomaterials, improves hydrophilicity, and reduces cytotoxicity; these and other advantages make it widely used in nanomaterials research [49,50]. The doping of PDA gives the Fe-Zr@PDA@CMCS hydrogel the photothermal conversion properties to convert the absorbed NIR laser light into heat. In this study, the temperatures of the hydrogel were recorded for the different groups of treatments to determine the overall photothermal performance of the hydrogels (Figure 2a-d). As shown in Figure 2a, as the concentration of Fe-Zr@PDA in the hydrogel increased (from 2 mg/mL to 8 mg/mL), the ΔT gradually increased, reaching 24.5 °C, 30 °C, and 42.1 °C, respectively, while the temperature of the control group only increased by 4.2 °C.
The corresponding thermographs, likewise, illustrated that the concentration of PDA is closely related to the photothermal effect (Figure 2b). Further, the hydrogel was irradiated with an 808 nm NIR laser at different power densities (0.5 W/cm², 0.8 W/cm², and 1.0 W/cm²) for 5 min and the ΔT of the hydrogel was measured. The results revealed that the higher the power density of the NIR laser was, the higher the ΔT was (Figure 2c). This suggests that the power density of the NIR laser is another factor affecting its photothermal performance. The thermographic images provided further evidence of the power-density-related photothermal performance (Figure 2d). In addition, Fe-Zr@PDA@CMCS hydrogel has good photothermal stability. After being irradiated by the laser for up to 120 min (six NIR irradiation cycles), the hydrogel showed a stable temperature rise trend, and the maximum ΔT was still greater than 50 °C (Figure 2e). Based on linear regression analysis, we calculated that the τs of Fe-Zr@PDA@CMCS hydrogel was 209.22 s (Figure 2f), and the η was 35.68% (Figure 2g), which was higher than that of the Au@Bi2Se3 core-shell nanoparticle [42].
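To make the Figure 2f,g analysis concrete, the following is a minimal Python sketch of the Roper/Korgel-style calculation: the time constant τs comes from a linear fit of cooling time against −ln(θ), and η then follows from the energy balance given in the Methods. The cooling curve, sample mass, heat capacity, laser power, absorbance, and baseline heat loss below are illustrative assumptions, not measured values from this study.

import numpy as np

# Synthetic cooling-curve data (time in s, temperature in deg C); the raw
# thermograph readings are not reported, so these values are illustrative.
t = np.arange(0, 600, 30.0)
T_surr, T_max = 25.0, 68.0
T = T_surr + (T_max - T_surr) * np.exp(-t / 209.22)

# theta decays as exp(-t/tau_s), so t vs. -ln(theta) is a line of slope tau_s.
theta = (T - T_surr) / (T_max - T_surr)
tau_s, _ = np.polyfit(-np.log(theta), t, 1)

# Energy balance for eta; m (g), c (J/(g K)), I (W), A808, and Q_dis are
# assumed illustrative parameters, not the paper's values.
m, c = 0.3, 4.2                   # sample mass and water-like specific heat
I, A808, Q_dis = 0.8, 1.0, 0.002  # laser power, absorbance at 808 nm, baseline loss
hS = m * c / tau_s                # heat-transfer coefficient times surface area
eta = (hS * (T_max - T_surr) - Q_dis) / (I * (1 - 10 ** (-A808)))
print(f"tau_s = {tau_s:.2f} s, eta = {eta:.1%}")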
•OH Generating Capacity of Fe-Zr@PDA@CMCS Hydrogel

Therapeutic modalities based on reactive oxygen radicals and their derivatives have been demonstrated as emerging therapeutic strategies for tumors [43,51,52]. Among these, Fenton-based and Fenton-like reactions are considered to be potential tumor cell therapy modalities, and there are now several studies demonstrating the use of the Fenton effect to cause tumor-killing effects [44][45][46][47][53]. We first verified the ability of Fe-Zr@PDA to produce •OH; specifically, •OH can rapidly oxidize colorless TMB to a blue-green oxidized TMB compound (oxTMB), and we further scanned the absorbance of the reacted solution at 652 nm by UV spectrophotometry. The magnitude of the absorbance indirectly reflects the ability of the Fenton reaction to produce •OH. As shown in Figure 3a, as the concentration of Fe-Zr@PDA was increased from 0 µg/mL to 100 µg/mL, the absorbance of the final solution after reaction with TMB gradually increased, indicating a positive correlation between its ability to produce •OH and the concentration of Fe-Zr@PDA. The reaction of Fe-Zr@PDA with H2O2 shows that the colorless TMB is oxidized to the blue oxTMB, which further demonstrates the ability of Fe-Zr@PDA to generate •OH (Figure 3a). Further, we tested the •OH generation ability of Fe-Zr@PDA@CMCS hydrogel. The control hydrogel without Fe-Zr@PDA had an absorbance value of almost zero at 652 nm and the mixture remained transparent (Figure 3b). In contrast, the absorbance of Fe-Zr@PDA@CMCS hydrogel increased with the concentration of Fe-Zr@PDA and the color of the solution deepened, which further indicated that the catalytic ability of Fe-Zr@PDA was well-preserved after doping in the hydrogel (Figure 3b).
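Where a quantitative readout is needed, the A652 values can be converted to an approximate oxTMB concentration via the Beer-Lambert law. The sketch below assumes a commonly cited molar absorptivity for oxidized TMB at 652 nm (about 3.9 x 10^4 M^-1 cm^-1) and a 1 cm path length; the absorbance readings are illustrative, not this study's measurements.

import numpy as np

# Fe-Zr@PDA concentrations (ug/mL) and illustrative A652 readings.
conc = np.array([0, 20, 40, 60, 80, 100])
a652 = np.array([0.02, 0.11, 0.21, 0.33, 0.42, 0.55])

# Beer-Lambert: c = A / (epsilon * path); epsilon for oxTMB at 652 nm is an
# assumed literature value, path length 1 cm.
epsilon, path = 3.9e4, 1.0
ox_tmb_uM = a652 / (epsilon * path) * 1e6
print(dict(zip(conc.tolist(), np.round(ox_tmb_uM, 2).tolist())))

# A strong positive correlation supports the dose-dependent *OH generation
# described in the text.
print(f"Pearson r = {np.corrcoef(conc, a652)[0, 1]:.3f}")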
Evaluation of Hydrogel Degradation and Swelling Properties

The swelling and degradation properties are important physical parameters of hydrogels, which affect the rate of release of loaded compounds and the absorption of tissue exudate, among other things [54][55][56]. In this section, we first investigated the degradation properties of Fe-Zr@PDA@CMCS hydrogel under different pH conditions. As shown in Figure 3c, Fe-Zr@PDA@CMCS hydrogel showed a fast weight loss in both PBS and CBS at the beginning, degrading by more than 45% in the first three days. The degradation then increased gradually with time, but was slower in CBS than in PBS. It is worth mentioning that the TME is acidic due to the altered metabolism of the tumor cells; therefore, the Fe-Zr@PDA@CMCS hydrogel can be retained in the acidic TME for a longer period to achieve prolonged drug release and tumor therapy. Then, we evaluated the swelling properties of Fe-Zr@PDA@CMCS hydrogel by immersing it in ultrapure water, CBS, or PBS for 24 h. As shown in Figure 3d, the freeze-dried hydrogel immersed in PBS buffer and ultrapure water swelled rapidly within a short time upon contact with the liquid. The swelling of the hydrogel immersed in the PBS buffer solution then continued to increase slowly with time. At the same time, the swelling of the hydrogel in pure water remained almost constant, and in the final experimental phase, the freeze-dried hydrogel reached swelling equilibrium. Interestingly, compared with the hydrogel soaked in PBS and pure water, Fe-Zr@PDA@CMCS hydrogel soaked in CBS showed little change in the swelling degree. This will facilitate the persistence of Fe-Zr@PDA@CMCS hydrogel in a slightly acidic environment.

In Vitro Cytocompatibility Results of Fe-Zr@PDA@CMCS Hydrogel

Biocompatibility refers to the ability of a material to elicit appropriate host and material responses in a given application environment [57,58]. The ideal composite hydrogel requires good biocompatibility both in vitro and in vivo [59,60]. We evaluated the cell safety of Fe-Zr@PDA@CMCS hydrogel through a series of in vitro cell experiments using the mouse fibroblast L929 cell line as a model. Adherently grown L929 cells were co-cultured with 1 mg/mL, 2.5 mg/mL, and 5 mg/mL of hydrogel extracts for 24 h and 48 h, and the results are shown in Figure 4a. Even at concentrations up to 5 mg/mL, the viability of L929 cells at 24 h and 48 h was higher than 90%, indicating that Fe-Zr@PDA@CMCS hydrogel has good cytocompatibility (Figure 4a). Following the CCK-8 cell viability assay, we further stained cells in each experimental and control group using AM-PI stain and observed the distribution of live and dead cells using inverted phase contrast fluorescence microscopy. As can be seen in Figure 4b, in the 24 h and 48 h images of both the control and experimental groups, the L929 cells show the green fluorescence that marks viable cells, while the red fluorescence that marks dead cells is negligible. It is worth noting that there is no significant difference in cell morphology between the experimental group and the control group.
The above staining results were consistent with the results of the CCK-8 experiments, which further demonstrated that Fe-Zr@PDA@CMCS hydrogel has no significant effect on the cell morphology of L929 cells and has good cell safety.

In Vitro Hemocompatibility Results of Fe-Zr@PDA@CMCS Hydrogel

The validation of in vitro biocompatibility experiments is crucial to ensure the biomedical applications of the Fe-Zr@PDA@CMCS hydrogel. The hemocompatibility test was used to assess whether Fe-Zr@PDA@CMCS hydrogel would cause hemolysis. Briefly, Fe-Zr@PDA@CMCS hydrogel was co-cultured with rat red blood cells to investigate its hemocompatibility. As shown in Figure 4c, the hemolysis rates of erythrocytes were 2.56 ± 3.31%, 3.35 ± 0.41%, 3.98 ± 0.43%, and 4.61 ± 0.18% after being cultured with Fe-Zr@PDA@CMCS hydrogel at concentrations of 10, 20, 50, and 100 mg/mL, respectively, all of which were less than 5%. This result indicated that Fe-Zr@PDA@CMCS hydrogel has good hemocompatibility. In addition, we recorded digital photographs of the erythrocyte solution after co-culture and, in agreement with the previous results, the positive control group of erythrocytes co-cultured with ultrapure water appeared bright red. This demonstrated that those erythrocytes had all ruptured, whereas the supernatant of the experimental group was clearer and more transparent, indicating that most of the erythrocyte structure was still intact (Figure 4d).

In Vivo Biocompatibility Assessment Results of Fe-Zr@PDA@CMCS Hydrogel

After establishing the excellent in vitro cytocompatibility of Fe-Zr@PDA@CMCS hydrogel, the safety of the hydrogel was further assessed in vivo by subcutaneously embedding Fe-Zr@PDA@CMCS hydrogel in KM mice and recording the changes in body weight of the mice. The body weight of mice in both the experimental and control groups increased normally throughout the feeding cycle, and the body weight of mice in the experimental group fluctuated slightly for 21 days after implantation of the hydrogel; however, there was no statistical difference compared to the control group (p > 0.05) (Figure 5a). Further, we measured the blood biochemical parameters of the experimental and control mice on days 7 and 14 of the hydrogel implantation to assess whether the liver and kidney functions were affected.
The serum biochemical parameters measured included TB (total bilirubin), ALT (alanine aminotransferase), AST (aspartate aminotransferase), UREA (urea), and CREA (creatinine). As shown in Figure 5b, the experimental and control groups did not show a statistically significant difference in liver and kidney function (p > 0.05), which further confirms the safety of the hydrogel. Next, we evaluated H&E-stained sections of the major organs after 7 and 14 days of feeding, which showed no significant cellular hypertrophy, atrophy, or necrosis in any of the organ tissue sections. These results further demonstrated that the encapsulated Fe-Zr@PDA@CMCS hydrogel had no significant side effects on the normal physiological function of the major organs and had a good in vivo safety profile (Figure 5c). Finally, we examined the routine blood parameters of each group of mice, and the results are shown in Figure 6a-i. There was no statistical difference (p > 0.05) between the experimental and control groups of mice at 7 and 14 days.

In Vitro Anticancer Potential of Fe-Zr@PDA@CMCS Hydrogel

To investigate its potential effectiveness in antitumor therapy, we selected SW1990 cells for in vitro evaluation of tumor therapy. The CCK-8 results (Figure 7a) showed that there was a decrease in the survival rate of SW1990 cells in the Fe-Zr@PDA nanoparticles group, proving that Fe-Zr@PDA caused some damage to the normal growth of cancer cells. The Fe-Zr@PDA@CMCS hydrogel extract group also showed a decrease in cell survival rate, suggesting that CMCS-doped functionalized hydrogels also have some tumor-cell-killing effect. In contrast, the survival rate of SW1990 cells receiving photothermal treatment alone was significantly lower, with almost half of the cells being inactive. Most importantly, the Fe-Zr@PDA@CMCS hydrogel extract combined with photothermal treatment showed an even more pronounced reduction in cell survival, indicating that the synergy of Fe-Zr@PDA@CMCS hydrogel and photothermal treatment can kill even more tumor cells. Afterwards, we further validated the in vitro anticancer effects by live-dead cell staining.
The red fluorescence of the groups treated with Fe-Zr@PDA nanoparticles, Fe-Zr@PDA@CMCS hydrogel extract, and photothermal treatment all showed a significant increase compared to the control group (Figure 7b-e). Notably, there was a significant increase in red fluorescence and a significant decrease in green fluorescence after combining Fe-Zr@PDA@CMCS hydrogel with photothermal treatment, indicating that most of the SW1990 tumor cells were killed (Figure 7f). In conclusion, the combination of the multifunctional hydrogel and photothermal therapy can effectively kill tumor cells, which is expected to further enable tumor treatment at the in vivo level.
Conclusions

In summary, we successfully designed and prepared a multifunctional Fe-Zr@PDA@CMCS hydrogel with the Fenton effect and photothermal conversion properties using Fe, Zr, PDA, and CMCS. Fe-Zr@PDA@CMCS hydrogel combined the advantages of hydrogels and nanoparticles. The photothermal conversion, degradation, and swelling capabilities of the hydrogel and its Fenton catalytic ability under different conditions were investigated. The results showed that the composite hydrogel retains its photothermal and Fenton catalytic properties while protecting the Fe-Zr@PDA nanoparticles from degradation. In addition, the good biocompatibility of the hydrogel was demonstrated at the cellular and animal levels. Further results demonstrated good therapeutic effects at the cellular level, which will be validated at the animal level in the future. There are limitations to our study, and further validation at the animal and clinical levels is needed. In summary, multifunctional composite hydrogels can be used as carriers of drugs for multimodal tumor therapy and tissue regeneration for biomedical applications. Based on the good degradability and therapeutic effects of functionalized hydrogels, multifunctional composite hydrogels could provide novel ideas and show great promise for future clinical oncology treatments. The study is also expected to provide a reference for future clinical studies of novel hydrogels against tumor recurrence.

Preparation of Fe-Zr@PDA

The Fe-Zr@PDA nanoparticles were synthesized by a simple one-pot hydrothermal method. Specifically, 0.2 g of FeCl3·6H2O and 0.075 g of ZrCl4 were first weighed and dissolved in 15 mL of ethylene glycol with stirring at room temperature, and the dissolved solution was named solution A. Next, 0.492 g of sodium acetate and 0.2 g of dopamine were dissolved in 10 mL of ethylene glycol (to form solution B). Solution A and solution B were then mixed well and added to a 100 mL autoclave lined with polytetrafluoroethylene, which was placed in an oven set at 180 °C for 12 h. Finally, after the reaction had cooled naturally, the product was washed 3 times with deionized water and anhydrous ethanol, centrifuged (9000 rpm, 8 min) to obtain the desired product, freeze-dried in a lyophilizer, and stored for subsequent experiments.

Synthesis of CMCS Hydrogel and Fe-Zr@PDA@CMCS Hydrogel

To synthesize the CMCS hydrogel, as a first step, 0.1 g of CMCS was introduced to 2 mL of ultrapure water, then dissolved while stirring in a water bath set to 50 °C. Next, the activators EDC (0.05 g) and NHS (0.05 g) were added to 1 mL of pure water, and the solution containing the activators was added to the CMCS solution and further stirred to form a gel. To synthesize the Fe-Zr@PDA@CMCS hydrogel, in brief, 0.05 g of EDC and 0.05 g of NHS were added to a solution of Fe-Zr@PDA nanomaterials (1 mL with a concentration of 0.01 g/mL) and finally mixed with the CMCS solution and stirred to form the hydrogel.

Characterizations of Fe-Zr@PDA@CMCS and Fe-Zr@PDA

To observe the microscopic morphology and elemental distribution of the CMCS and Fe-Zr@PDA@CMCS hydrogels, firstly, conductive adhesive was carefully pasted on the sample stage, and the freeze-dried CMCS and Fe-Zr@PDA@CMCS hydrogels were cut into thin pieces and placed on the conductive adhesive. Then, an infrared baking lamp was used to dry the surface of the hydrogel.
After that, its morphology was examined using a scanning electron microscope (SEM, Zeiss Sigma 300). Similarly, the morphology and particle size of Fe-Zr@PDA were observed using SEM, and the elemental composition of Fe-Zr@PDA nanoparticles was further analyzed using an energy-dispersive X-ray spectrometer.

Characteristics of Photothermal Conversion

To explore the influence of different power densities on the temperature change, Fe-Zr@PDA@CMCS hydrogel was irradiated with a near-infrared laser at power densities of 0.5 W/cm², 0.8 W/cm², or 1 W/cm². Then, we recorded the temperature change using an FLIR E60 camera. Finally, the photothermal stability of the hydrogel was explored. Specifically, the near-infrared laser with a power density of 0.8 W/cm² was used to irradiate the hydrogel for 6 cycles, with each cycle set at 20 min. During the experiment, the thermal images and temperature changes of each cycle were recorded using a near-infrared thermal imager. The photothermal conversion efficiency (η) of Fe-Zr@PDA@CMCS was calculated by the modified Korgel formula (1):

η = [hS(Tmax − Tsurr) − Qin.surr] / [I(1 − 10^(−A(808)))]  (1)

In the above formula, η is the photothermal conversion efficiency, h is the heat transfer coefficient, I is the power of the laser, Tmax is the highest temperature of the Fe-Zr@PDA@CMCS solution under laser irradiation for 6 min, Tsurr is the ambient temperature throughout the test period, A(λ) is the absorbance of Fe-Zr@PDA@CMCS at a wavelength of 808 nm, S is the effective area of laser irradiation (hS being their product), and Qin.surr is the heat loss of Fe-Zr@PDA@CMCS during the heating process.

•OH Generating Capacity

Firstly, we explored the ability of different concentrations of Fe-Zr@PDA to produce •OH. Specifically, a 3,3′,5,5′-tetramethylbenzidine (TMB) solution with a concentration of 3.2 mM was prepared using ultrapure water, and 300 µL was mixed with different concentrations of Fe-Zr@PDA and H2O2. The total volume of the mixed solution was 1.5 mL, with the final concentration of H2O2 set at 8 mM and the final concentrations of Fe-Zr@PDA set at 0 µg/mL, 20 µg/mL, 40 µg/mL, 60 µg/mL, 80 µg/mL, and 100 µg/mL. The reaction lasted for 15 min at room temperature, followed by high-speed centrifugation (15,000 rpm, 10 min). After the reaction, the supernatant was collected and photographed with a digital camera. At the same time, the absorbance of each sample supernatant at the wavelength of 652 nm was measured with an ultraviolet spectrometer. The •OH formation ability of Fe-Zr@PDA@CMCS hydrogel was also determined by the color development properties of TMB after oxidation. Fe-Zr@PDA@CMCS hydrogel at different concentrations (1, 2, or 4 mg/mL) was mixed with TMB and H2O2 (0.3%). The group of TMB and H2O2 without Fe-Zr@PDA@CMCS hydrogel was set as the control. After being co-cultured at room temperature for 1 h, the supernatant was collected and photographed with a digital camera. The absorption value of the supernatant at λ = 652 nm was measured by UV-Vis-NIR spectroscopy (Lambda 25, Perkin Elmer, Waltham, MA, USA).

Degradation Analysis of Fe-Zr@PDA@CMCS Hydrogel

To explore the in vitro degradation ability, lyophilized Fe-Zr@PDA@CMCS hydrogel (100 mg) was immersed in 5 mL of PBS (pH = 7.4) or CBS (pH = 5.4). Then, the Fe-Zr@PDA@CMCS hydrogel samples were removed at predetermined time points (1, 3, 7, 14, and 28 days), washed with ultrapure water, and finally lyophilized and weighed.
The degradation percentage of Fe-Zr@PDA@CMCS hydrogel was calculated by the following formula:

Degradation (%) = (R0 − Rt)/R0 × 100%

where Rt represents the actual weight of Fe-Zr@PDA@CMCS hydrogel at each time point and R0 represents the original weight of Fe-Zr@PDA@CMCS hydrogel.

Swelling Analysis of Hydrogel

The swelling rate (SR) and stability of Fe-Zr@PDA@CMCS hydrogel in different solutions (H2O, PBS, and CBS) were determined by the swelling test. In short, 10 mg of Fe-Zr@PDA@CMCS hydrogel was placed into a sealed centrifuge tube (n = 3) and 20 mL of H2O, PBS, or CBS was added. The tubes were placed at 37 °C. The hydrogel was removed from the solution at predetermined time points (0.5, 1, 2, 12, and 24 h), the surface water of the hydrogel was absorbed with filter paper, and the hydrogel was weighed again. The swelling kinetics curve of Fe-Zr@PDA@CMCS hydrogel was drawn, and the SR of the hydrogel at swelling equilibrium was calculated according to the following formula:

SR (%) = (mt − m0)/m0 × 100%

where mt is the mass of the hydrogel after swelling at different points in time and m0 represents the initial mass of the hydrogel.

In Vitro Cytocompatibility of Fe-Zr@PDA@CMCS Hydrogel

The cell safety of Fe-Zr@PDA@CMCS was evaluated using mouse fibroblasts (L929 cells). First, the L929 cells were inoculated on 96-well clear cell culture plates (5000 cells per well) and cultured for 24 h in a constant temperature incubator at 37 °C and 5% CO2. At the same time, Fe-Zr@PDA@CMCS hydrogel was placed in DMEM cell medium (1, 2.5, or 5 mg hydrogel in 1 mL DMEM medium) and incubated at 37 °C overnight to obtain the extracts. On the second day, the old medium was replaced with 100 µL of the leaching solution, and the control group received fresh DMEM medium. The extract concentration in the leaching solution was 1 mg/mL, 2.5 mg/mL, or 5 mg/mL. Finally, the medium was removed after 24 h or 48 h of culture, and the cells were washed with PBS twice. Cell viability was determined with the CCK-8 kit. In addition, staining was performed using an AM-PI staining kit for both living and dead cells, and staining images were collected using an inverted phase contrast microscope (Leica DM IL, Wetzlar, Germany).

In Vitro Blood Compatibility of Fe-Zr@PDA@CMCS Hydrogel

The feeding and testing of animals were conducted at Changhai Hospital of Naval Medical University in strict accordance with the program and policies of the Ministry of Health. To test whether Fe-Zr@PDA@CMCS can cause rupture of red blood cells and hemolysis, its blood compatibility was evaluated using rat red blood cells. Firstly, blood was collected by cardiac puncture from anesthetized SD rats into anticoagulant collection vessels and centrifuged (4000 rpm, 5 min) to collect the red blood cells. The purified rat red blood cells were then diluted to 2% with PBS and stored in the refrigerator at 4 °C for subsequent use. For the hemolysis experiment, we first mixed the previously stored diluted erythrocyte suspensions with PBS buffer containing 20, 40, 100, or 200 mg of Fe-Zr@PDA@CMCS hydrogel, respectively. In addition, a negative control group (1 mL of PBS buffer mixed with 1 mL of the diluted blood cells) and a positive control group (1 mL of the diluted red cell suspension mixed with 1 mL of ultrapure water) were set up. Finally, the above mixed solutions were incubated in a water bath at 37 °C for 2 h and then centrifuged immediately to obtain the supernatant (3000 rpm, 5 min).
The absorbance at 541 nm of all the collected supernatants was measured with a UV-Vis-NIR spectrometer (Lambda 25, Perkin Elmer, Waltham, MA, USA). Then, the hemolysis percentage (HP) was calculated according to the following formula, and the supernatant was photographed with a camera.

Hemolytic ratio (%) = (Bsample − Bnegative)/(Bpositive − Bnegative) × 100%

where Bsample is the absorbance at 541 nm of the red blood cell suspension after treatment with hydrogel extract, Bnegative is the absorbance of the erythrocyte suspension treated with PBS buffer, and Bpositive is the absorbance of the blood cell suspension after treatment with ultrapure water.

In Vivo Animal Tissue Safety of Fe-Zr@PDA@CMCS Hydrogel

To explore the safety of Fe-Zr@PDA@CMCS hydrogel in animals, KM mice were used as the animal model. Fe-Zr@PDA@CMCS hydrogel was embedded under the skin of mice to observe whether it would cause damage to the mice. Specifically, KM mice (from the Laboratory in Shanghai Changhai Hospital, China) were randomly divided into control and experimental groups with three animals in each group (n = 3). Mice anesthetized with pentobarbital sodium were subcutaneously implanted with 200 mg of Fe-Zr@PDA@CMCS hydrogel in the experimental group. Normal healthy mice were used as the control group. The mice were weighed every 2 days after embedding to track the changes in body weight. The mice were sacrificed on the 7th, 14th, and 28th days, respectively. We collected blood by enucleation of the eyes of the mice, and major organs such as the lungs, spleen, liver, heart, and kidneys were collected and fixed in 4% paraformaldehyde for H&E staining. Meanwhile, the collected blood was used to measure the routine blood indexes of the two groups with a hematology analyzer. An ELISA kit was used to detect the related indexes of kidney function and liver function in the serum. H&E-stained sections of major organs were used to observe whether there were lesions such as inflammation and necrosis. All animal experimental operations were carried out in strict accordance with the protocols authorized by the hospital's Comprehensive Laboratory Animal Centre.

In Vitro Antitumor Effects of Fe-Zr@PDA@CMCS Hydrogels

We selected the human pancreatic cancer cell line SW1990 as a model to assess the in vitro therapeutic effect of Fe-Zr@PDA@CMCS. SW1990 cells were firstly inoculated into 96-well culture plates at a density of 9000 cells/well. After 24 h of incubation, the original medium was replaced with 100 µL of fresh DMEM solution containing different substances and subjected to various treatments. The treatments were as follows: (a) control group (fresh DMEM), (b) Fe-Zr@PDA group (100 µg/mL Fe-Zr@PDA), (c) Fe-Zr@PDA@CMCS group (100 µg/mL Fe-Zr@PDA hydrogel extract), (d) laser group (808 nm NIR, 1 W/cm², 5 min), (e) Fe-Zr@PDA@CMCS + laser group (100 µg/mL Fe-Zr@PDA hydrogel extract, 808 nm NIR, 1 W/cm², 5 min). The groups were treated as described above and incubation was continued for 12 h, after which the survival of SW1990 cells was assessed by the CCK-8 method and the live-dead cell assay.

Statistical Analysis

All results are expressed as mean ± standard deviation, and one-way ANOVA statistical analysis was used to assess the significance of the experimental data. A value of 0.05 was used as the significance level, with probabilities less than 0.05 (p < 0.05), 0.01 (p < 0.01), and 0.001 (p < 0.001) indicated by (*), (**), and (***), respectively. The sample size was 3 (n = 3) unless otherwise stated.
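Because the degradation, swelling, and hemolysis analyses above all reduce to normalized ratios of weights or absorbances, they can be restated as three small Python helpers; the numbers in the usage lines are illustrative, not measured values from this study.

def degradation_pct(r0, rt):
    # Weight loss after immersion: (R0 - Rt) / R0 * 100, per the Methods.
    return (r0 - rt) / r0 * 100.0

def swelling_ratio_pct(m0, mt):
    # Swelling ratio: (mt - m0) / m0 * 100, per the Methods.
    return (mt - m0) / m0 * 100.0

def hemolysis_pct(b_sample, b_negative, b_positive):
    # Hemolytic ratio from A541 readings, per the formula above.
    return (b_sample - b_negative) / (b_positive - b_negative) * 100.0

print(degradation_pct(100.0, 55.0))     # 45.0, i.e., the >45% loss scenario
print(swelling_ratio_pct(10.0, 24.0))   # 140.0
print(hemolysis_pct(0.12, 0.05, 1.60))  # ~4.5, below the 5% threshold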
9,711.8
2023-05-31T00:00:00.000
[ "Engineering", "Materials Science" ]
Negation’s Not Solved: Generalizability Versus Optimizability in Clinical Natural Language Processing

A review of published work in clinical natural language processing (NLP) may suggest that the negation detection task has been “solved.” This work proposes that an optimizable solution does not equal a generalizable solution. We introduce a new machine learning-based Polarity Module for detecting negation in clinical text, and extensively compare its performance across domains. Using four manually annotated corpora of clinical text, we show that negation detection performance suffers when there is no in-domain development (for manual methods) or training data (for machine learning-based methods). Various factors (e.g., annotation guidelines, named entity characteristics, the amount of data, and lexical and syntactic context) play a role in making generalizability difficult, but none completely explains the phenomenon. Furthermore, generalizability remains challenging because it is unclear whether to use a single source for accurate data, combine all sources into a single model, or apply domain adaptation methods. The most reliable means to improve negation detection is to manually annotate in-domain training data (or, perhaps, manually modify rules); this is a strategy for optimizing performance, rather than generalizing it. These results suggest a direction for future work in domain-adaptive and task-adaptive methods for clinical NLP.

Introduction

Negation in unstructured clinical text is a well-known phenomenon. It is crucial for any practical interpretation of clinical text, since negation is common in clinical narrative. For example, the medical significance of ''no wheezing'' is quite different from that of ''wheezing.'' With the increasingly widespread use of electronic medical records (EMRs), computational methodologies for negation detection have also become well-known, most notably the early and strikingly straightforward NegEx algorithm [1]. In NegEx, simple regular expressions yield solid performance on detecting the negation of Findings, Diseases, and Mental or Behavioral Dysfunctions from the Unified Medical Language System (UMLS). The success of NegEx (and other techniques) is attributable to the constrained pragmatics of clinical text: because physicians are writing the text in order to convey the health status of a patient, there is a limit to the ways that medically pertinent concepts can be negated. Since existing algorithms have performed well in many published studies [2][3][4][5][6][7][8], many clinical natural language processing (NLP) practitioners consider negation detection a solved problem (see Table 1's summary of Related Work) with a simple, generalizable solution. However, our present work will show that this ''solved'' designation is premature because current solutions are easily optimizable but not necessarily generalizable. Negation detection is still a challenge when considered from a practical, multi-corpus perspective, i.e., one in which an algorithm is deployed in many clinical institutions and on many sources of text. For simplicity in this article, we will consider each corpus as its own ''domain,'' though we recognize that each corpus bridges multiple medical subdomains and all sources that we consider consist only of clinical text.
As the NLP Attribute Discovery team for the Strategic Health IT Advanced Research Project on the Secondary use of the EHR (SHARPn), we attempted to detect negation in four corpora, using machine learning, rules, domain adaptation, and various evaluation scenarios. These corpora include the new SHARPn NLP Seed Corpus of clinical text with multiple layers of syntactic and semantic information, including named entities (NEs) and polarity (i.e., negation). We also used the 2010 i2b2/VA NLP Challenge corpus, the MiPACQ corpus, and the NegEx Test Set. The SHARPn Polarity Module used in our evaluation is currently available in Apache cTAKES (clinical Text Analysis and Knowledge Extraction System; ctakes.apache.org) as part of the ctakes-assertion project, including an integrated domain adaptation algorithm [9]. cTAKES is a comprehensive clinical NLP tool based on the Unstructured Information Management Architecture (UIMA), including (among other things) named entity recognition and negation detection. We conclude that practical negation detection is not reliable without in-domain training data and/or development. Thus, it can be optimized for a domain, but is difficult to generalize across domains. ''Benchmark'' gold standard data sets differed sufficiently to have a profound effect on the viability of negation detection algorithms. Furthermore, it is difficult to determine an optimal mix of training data, or to standardize a definitive ''benchmark'' metric, since both are influenced by corpus-specific annotation guidelines and data sources. The results we report here should remind users of negation detection algorithms to be vigilant in tuning systems to their data, whether by training with local data or modifying rules. We also call for future work in domain-adaptive and task-adaptive methods. After a discussion of the extensive related work in negation detection, the remainder of this article will introduce the data and methods for corpus and system comparisons of negation detection, present the resulting performance of systems on the different corpora, and discuss implications for negation detection and annotation schema in the larger picture of clinical informatics. Related Work Negation has been studied philosophically since the time of Aristotle; computational efforts addressing negation and related evidentiality/belief state issues have surfaced much more recently [10]. In the clinical domain, negation detection was a very practical early motivation for NLP adoption among the informatics community, and thus significant effort has gone into this task. While there have been many systems implementing negation detection, publicly available corpora for testing them are limited by patient privacy concerns, as is typical in clinical NLP. Negation detection systems have shown excellent performance in clinical text, beginning with the rule-based NegEx algorithm [1]. NegEx was originally evaluated on spans of text that matched UMLS Findings, Diseases, and Mental or Behavioral Dysfunctions among 1000 test sentences sampled from discharge summaries at the University of Pittsburgh Medical Center; a regression test set was released later with de-identified notes of 6 different types. NegEx has produced numerous updated and customized systems [11,12], including the updated version released with ConText [13] which performed well on a benchmark NegEx Test Set (available at https://code.google.com/p/negex/wiki/TestSet). 
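To make the flavor of these trigger-based systems concrete, here is a minimal Python sketch of a NegEx-style check, i.e., a trigger list plus a fixed token window. It is not the released NegEx or ConText implementation, and the trigger phrases and window size are illustrative.

import re

# Illustrative negation triggers and scope window (tokens after a trigger).
TRIGGERS = re.compile(r"\b(no|denies|without|negative for|ruled out)\b")
WINDOW = 5

def is_negated(sentence: str, concept: str) -> bool:
    s, c = sentence.lower(), concept.lower()
    ci = s.find(c)
    if ci < 0:
        return False
    for m in TRIGGERS.finditer(s):
        # the concept must follow the trigger within WINDOW tokens
        if ci >= m.end() and len(s[m.end():ci].split()) <= WINDOW:
            return True
    return False

print(is_negated("Patient denies chest pain or dyspnea.", "chest pain"))  # True
print(is_negated("Chest pain noted on exam.", "chest pain"))              # False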
Our tests used the YTEX (Yale cTAKES Extensions) version of NegEx [14] as a baseline and included the NegEx Test Set as a benchmark. Similar to NegEx, many other negation algorithms take a rule-based approach, with a variety of techniques: lexical scan with context-free grammar [6], negation ontology [3], or dependency parse rules [7]. Some negation algorithms treat the problem as a machine learning classification task [4] or as some hybrid between rules and machine learning [2,5]. The performance of these systems and their data sources is summarized in Table 1 below. All these general approaches were represented in the 2010 i2b2/VA NLP Challenge task on assertions [8]. In addition to catalyzing innovation from multiple systems, this shared task produced a benchmark data set that is available for research with a simple data use agreement; it interprets negation on medical problem NEs as an assertion that the problem is absent. The four corpora used in our study all annotate named entities explicitly; here, we consider named entities to be spans of text that refer to real-world entities or events that may or may not be classified or mapped to some external ontology. These corpora do not explicitly include the scope of negation indicators, i.e., the maximum span within which a negation cue word could be applicable. Some efforts have reversed this, giving an implicit notion of named entities but an explicit notion of negation scope: notably the BioScope Corpus [15], which was used as part of the CoNLL 2010 Shared Task [16]. BioScope annotates negation, uncertainty, and their scopes on de-identified clinical free text (1,954 radiology reports), biological full articles (9 articles from FlyBase and BMC Bioinformatics), and scientific abstracts (1,273 abstracts also in the GENIA corpus). This is in contrast to the work we present here, which focuses on named entities. We ignore scope for two reasons: First, the lack of gold standard named entity mentions is an additional source of error that no other corpus would have, making the comparison unfair. Second, while negation scope annotations overcome some recall issues for non-standard terminology (e.g., ''patient is not feeling as much like a pariah today'' would represent negation correctly despite finding no NE), they do not overcome issues in fine-grained annotation guideline distinctions (see Section 3.3 on Annotation Guidelines).

Methods

Here, we first describe the annotated NLP corpora used in training and testing, with salient information about the gold standard entity and negation annotation guidelines. We then describe the new SHARPn Polarity Module and the YTEX NegEx rule-based baseline.

Ethics statement

We did not seek IRB approval as all the data used in this study were collected from previous studies. While the data sets were from electronic medical records that originally included protected health information, all medical records were reliably de-identified before we had access to the data sets. Thus, none of the authors had access to any patient identifying information. Three of the corpora (the SHARPn corpus, MiPACQ corpus, and i2b2 corpus) were available to us with signed Data Use Agreements between the supplier and recipient institutions. One (the NegEx Test Set) was freely downloadable online with no restrictions.

NLP corpora with negation annotations

Our work used four clinical NLP annotation efforts: the SHARPn NLP Seed Corpus, the 2010 i2b2/VA NLP Challenge Corpus, the MiPACQ corpus, and the NegEx Test Set.
Statistics in Table 2 show their overall relative sizes, train/test splits, and proportions of negated concepts. First, the SHARPn NLP Seed Corpus consists of de-identified radiology notes related to Peripheral Arterial Disease (PAD) from Mayo Clinic, and de-identified breast oncology progress notes regarding incident breast cancer patients from Group Health Cooperative. This multi-layered annotated corpus follows community-adopted standards and conventions for the majority of annotation layers, which include syntactic trees, predicate-argument structure, coreference, UMLS named entities, UMLS relations, and Clinical Element Models (CEM) templates [17]. Negation is included in the CEM templates as an attribute of UMLS concepts. Second, the 2010 i2b2/VA NLP Challenge Corpus contained manually annotated, de-identified reports from Partners Healthcare, Beth Israel Deaconess Medical Center, and the University of Pittsburgh Medical Center. The majority of notes were discharge summaries, but the University of Pittsburgh Medical Center also contributed progress notes. Third, the MiPACQ corpus [18,19] annotates multiple syntactic and semantic layers, similar to the SHARPn NLP corpus. There are three major divisions to the sources of data: a snapshot of Medpedia articles on medical topics, written by clinicians, retrieved on April 26, 2010; clinical questions from the National Library of Medicine's Clinical Questions corpus (http://clinques.nlm.nih.gov), collected by interviews with physicians; and sentences from Mayo Clinic clinical notes and pathology notes related to colon cancer. Finally, the NegEx Test Set is a set of manually-selected sentences from 120 de-identified University of Pittsburgh Medical Center reports (20 each of radiology, emergency department, surgical pathology, echocardiogram, operative procedures, and discharge summaries). This set was used to evaluate the ConText algorithm [13], while another 120 reports of similar distribution (not publicly available) were used for the development of the negation portion of ConText (i.e., an updated NegEx).

Comparison of annotation guidelines

Manually annotated negation in one of these corpora is not strictly equivalent to that in other corpora. We cannot directly compare annotation guidelines because we do not have corpora that are multiply-annotated with different guidelines. However, we should note that all annotation projects reported high inter-annotator agreement within their respective projects. Here, we qualitatively analyze the annotation guidelines concerning the annotation of both NEs (concepts) and attributes (assertion status), hypothesizing that some differences in annotation guidelines may negatively affect the performance of negation algorithms across corpora. The primary difference between the annotation guidelines of the corpora appears to be in the definition of NEs, rather than direct indications of how negation should be handled. First, NE annotation guidelines differ in the semantic types that are allowed. The broadest is the MiPACQ corpus, which annotates 17 UMLS Semantic Groups. (However, in practice, some semantic groups have zero or negligible frequencies, and we have grouped them together in our analysis.) SHARP only annotates the 6 most clinically relevant groups, namely, Diseases and Disorders, Signs and Symptoms, Labs, Medications, Procedures, and Anatomical Sites. These semantic group divisions and their respective distributions are enumerated in Table 3 for these two corpora.
The NegEx Test Set is much narrower, including only Signs, Symptoms, Diseases, and Findings (but not differentiating between these) with qualitative values. The i2b2 corpus is similarly restrictive, only annotating ''problems,'' i.e., Diseases, Signs and Symptoms. Thus, they are excluded from Table 3. The corpora also differ in the span to consider when identifying NEs. The NegEx Test Set is the most permissive, annotating whole clinically-relevant phrases as NEs regardless of their syntactic type (e.g., the statement ''Right ventricular function is normal'' is treated as a single entity as shown by the underlining). i2b2/VA guidelines only consider whole noun and adjective phrases as possible NEs (e.g., ''her shortness of breath and coughing resolved'' includes the modifier ''her'' in the NE). Similar to i2b2/VA, MiPACQ also indicates that whole noun phrases should be candidate NEs, but smaller units are typically used in practice (e.g., ''her chest x-ray'' leaves out the modifier ''her''). SHARP predominantly annotates maximal strings that match UMLS terms as NEs, which often excludes long paraphrases and closed-class modifying adjectives (similar to MiPACQ), although there are some cases of CUI-less NEs and multi-span NEs. Another difference in NE annotation guidelines is the amount of overlap allowed between NEs. The NegEx Test Set has only one phrase annotated per sentence, hence no overlap in NEs; i2b2/VA only annotates full noun and adjective phrases, so fully subsumed NEs are not allowed. In contrast, SHARP annotates subspans as long as they are mapped from the UMLS and of a different semantic type (e.g., both ''chest'' (anatomical site) and ''chest x-ray'' (procedure) in ''her chest x-ray''). MiPACQ removes this restriction of different semantic types, but stipulates that some relationship must be shared between the subspan and the full span; this is in practice very similar to SHARP (e.g., there is a locationOf relationship between ''chest'' and ''chest x-ray''). Overall, the four guidelines are not as precise with negation annotation definitions as they are with NEs. The SHARP, MiPACQ, and NegEx Test Set representations imply a relation between an explicit negation marker and the negated term (e.g., a cue word like ''no'' would be marked, and the following term ''shortness of breath'' would then set negation_indicator = present). The i2b2/VA guideline assumes a pragmatic inference about the intent of the author in describing his/her observations (e.g., ''no shortness of breath'' would mark assertion = absent without marking the cue word). This difference does lead to some minor morphology-related annotation differences. For example, ''afebrile'' is marked as ''absent'' for i2b2, but not in SHARP, MiPACQ, or NegEx Test Set, since there is no external negation indicator.

SHARPn Polarity Module

As with many existing approaches, the SHARPn Polarity module treats negation detection as a classification problem for NEs. We engineered features that would make sense of the context surrounding an NE: A. Token in Bag-of-Words (BOW). These most basic, binary features indicated whether a given word appeared within a window (bag) from the NE. For example, one feature might be whether ''no'' occurred in the 5 preceding words. We included several different BOWs, based on directionality (preceding vs. following the NE) and size (3, 5, or 10). B. Token in positional context.
These features are similar to BOW features, but are specific to the exact position with respect to the NE of interest (e.g., ''without'' occurred 4 words preceding the NE). Windows of 4 and 5 were considered. C. Cue words. Following MITRE's successful negation detection system [2], we identified cue words, an expert-curated list of negation-related words (e.g., ''negative for''). The nearest cue word in scope and its category (a normalized word or phrase, e.g., ''negative'') were included as binary features. D. Dependency path rules. We directly utilized the rule-based DepNeg system [7] to produce binary features corresponding to whether the NE lay along a dependency path that typically specifies negation. For example, ''no evidence of coughing, rales, or wheezing'' has ''wheezing'' outside a 5-word window, but is connected by a dependency parse path to ''no.'' E. Constituency tree fragments. In addition to dependency path rules, we also used constituency tree fragments. The constituent parser within cTAKES is Ratnaparkhi's Maximum Entropy parser [20] as implemented in OpenNLP, trained on clinical treebanks. Tree fragments (partial constituency trees) can represent, for example, that the NE in question sits inside an adjective phrase ''negative for <concept>.'' Fragments are automatically extracted and defined following Pighin and Moschitti [21]; training data determines whether the features are useful or not. Examples of these features are included in a table in the Discussion section. The size of the feature set is upper-bounded by the size of the training set's vocabulary and the diversity of tree fragments; there are 12 dependency path rules. In practice, a feature vector, v, will be smaller than this upper bound, since not every dictionary word is in the context of an NE. The SHARPn Polarity Module classifies each NE based on these features. We chose to utilize classifiers via ClearTK because of its compatibility with UIMA-based systems like cTAKES [22]. After some preliminary experimentation with various classifiers, we selected linear kernel SVMs implemented with LIBLINEAR, which learn decision boundaries (negated vs. not negated) based on the distribution of features in the training data. SVMs are considered to have good generalization performance due to inherent regularization, and excel in situations (like ours) where there are a massive number of features. Since linear kernel SVMs require only one parameter to be tuned, we manually tuned it during development using cross-validation. Training data for a single model can consist of multiple corpora. In a standard setting, instances from different corpora would not be differentiated during training. Alternatively, we implemented an optional domain adaptation algorithm, frustratingly easy domain adaptation (FEDA) [9], to build some of our multi-corpus models. FEDA is a simple but effective domain adaptation technique that requires in-domain training data. If there are data from four domains a, b, c, and d, for example, a model would be trained with 5 concatenated (row) feature vectors: v = <v_all, v_a, v_b, v_c, v_d>. A training sample from domain a will be logged in v_all and v_a only, whereas a training sample from domain b will be logged in v_all and v_b only, and so forth. At test time, the domain of the test sample is supplied to the classifier, and instances are classified with a weighting of the domain-specific model in concert with the ''general'' model.
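A minimal sketch of the FEDA augmentation itself may help: each instance's feature vector is copied into a shared block plus one block per domain, and all other domain blocks are zeroed. The domain names and toy dimensions below are illustrative, not the module's actual feature space.

import numpy as np

DOMAINS = ["sharp", "i2b2", "mipacq", "negex"]

def feda_augment(v: np.ndarray, domain: str) -> np.ndarray:
    blocks = [v]  # v_all: the shared copy, populated for every instance
    for d in DOMAINS:
        # v_d: the domain-specific copy, zeroed for all other domains
        blocks.append(v if d == domain else np.zeros_like(v))
    return np.concatenate(blocks)

v = np.array([1.0, 0.0, 1.0])         # e.g., three negation-context features
print(feda_augment(v, "i2b2").shape)  # (15,): 5 concatenated copies
# A linear SVM trained on such vectors weighs the shared and in-domain
# blocks jointly; at test time each instance is augmented with its own
# domain label, as described above.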
Evaluation Setup
Our evaluations used the NegEx algorithm as a baseline, as implemented in the Yale cTAKES Extensions (YTEX) [14]. Using Named Entities discovered by the standard cTAKES pipeline, the YTEX negation module set the ''polarity'' attribute of each NE to −1 (negated) or +1 (not negated). Because NegEx is a rule-based method, we would expect it to be immune to performance improvement or degradation based on training data. However, it is well known that customization of rules is likely necessary when applying NegEx in settings other than the one in which it was initially developed [11,12]. The SHARPn Polarity module was implemented within the cTAKES system (see Figure 1), leveraging feature extraction and machine learning programming interfaces available in the ClearTK suite of tools (available at https://code.google.com/p/cleartk/). It should be noted that we did preliminary tests using χ² feature selection (filtering out features whose χ² values were too low), but the performance did not significantly improve. Thus, we have left feature selection out of the results of this study; some sample χ² values for specific features are listed in the Discussion section. The polarity module used in our tests is currently available as a tagged branch of the Apache cTAKES source code repository, and will be part of a future cTAKES release.
For both training and testing, we used gold standard NEs and negation annotations as defined in each of the corpora. System negation annotations are compared to the gold standard for precision, recall, and F-measure (the harmonic mean of precision and recall). We also used the default cTAKES pipeline to produce anything besides NEs or negation annotations (e.g., sentence annotations, tokens, POS tags, dependency parses, constituency parses, semantic role labels). While there is some risk of error propagation from these other components into negation detection, we believe this risk is minimized for the main precision, recall, and F-measure metrics, because systemic errors would appear in both training and testing data, and any impact on negation performance would be mediated through their representation in a machine learning feature vector.
We trained the SHARPn Polarity module on each of the four corpora; train/test splits were provided for the SHARPn, i2b2/VA, and MiPACQ corpora; for these three corpora, training and testing in our evaluations uniformly respected these splits (e.g., even in cases like training on SHARP data but testing on i2b2 data). Because the NegEx Test Set's corresponding development set was not available, we used the NegEx Test Set in any single evaluation as either the training data or the testing data. The tables presenting our results use parentheses to show when reusing training data invalidates the test performance measures (i.e., training and testing would have been on the same data).
Single test corpus performance
The practical question a user might ask is: ''How can I maximize negation detection performance for my data?'' Table 4 below illustrates the difficulty of answering this question by showing performance on four corpora (columns) by various systems (rows). Row 0 gives previously reported comparison statistics for i2b2 data (MITRE [2]) and the NegEx Test Set (GenNegEx 1.2.0, see https://code.google.com/p/negex/wiki/TestSet); SHARP and MiPACQ do not have previous results to compare with.
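For concreteness, here is a minimal sketch of the precision/recall/F-measure bookkeeping behind these comparisons (gold and system polarity labels per NE, with −1 denoting negated as in YTEX; the helper is illustrative):

```python
def prf(gold, system, positive=-1):
    """Precision, recall, and F1 for the negated class, where each
    polarity is -1 (negated) or +1 (not negated)."""
    tp = sum(1 for g, s in zip(gold, system) if s == positive and g == positive)
    fp = sum(1 for g, s in zip(gold, system) if s == positive and g != positive)
    fn = sum(1 for g, s in zip(gold, system) if s != positive and g == positive)
    p = tp / (tp + fp) if tp + fp else 0.0
    r = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * p * r / (p + r) if p + r else 0.0
    return p, r, f1

# Example: four NEs with one false positive and one false negative.
print(prf(gold=[-1, -1, 1, 1], system=[-1, 1, -1, 1]))  # (0.5, 0.5, 0.5)
```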
We have grouped these systems to be representative of three strategies for negation detection that are used in the community: the unedited, rule-based YTEX algorithm (row 1); machine learning classifiers when only out-of-domain (OOD) data is available (rows 2-6); and machine learning classifiers when some in-domain data is available (rows 7-9). Note that row 7 is equivalent to the diagonal from rows 2-6, namely, where the training set and test set are from (different portions of) the same corpus. Table 4 also includes significance bands down each column; pair-wise approximate randomization significance tests for F1 score, aggregated by document, are reported for p < 0.05. Values in a column labeled with different successive superscripted letters (e.g., 93.9^a and 92.6^b) indicate that there is a significant difference between two systems. These bands are further visualized in Figure 2.
First, YTEX (top row), implementing the widely used rule-based NegEx algorithm, performed quite well on the NegEx Test Set (F1 = 95.3%). When used without modification on other corpora, performance fell to unacceptable levels (e.g., F1 = 62.3% on SHARP data). As might be expected, we may conclude that widely-used rule-based algorithms need to be modified according to their target data. For situations in which only OOD data is available (common in clinical text), one strategy is to use a single OOD corpus as training data (rows 2-5). Using a single OOD corpus has widely varying results, with models ranging from 59.3% to 95.4% F-score on the NegEx Test Set. Another strategy is to ''use all the (OOD) data you have'' (row 6), but again the results are mixed. With the highest OOD models in bold, it is not clear which strategy is optimal, and it is difficult to tell what pairs of corpora yield good performance. Underlying reasons for this variability are further explored in Section 4.2.
The situation is much improved when in-domain data is available (rows 7-9, with most scores lying within the highest significance band, labeled with superscript 'a'). Only in MiPACQ data, for which the test set is small, are there OOD models in the same significance bands (i.e., superscript 'a' in rows 2-6) as the best models with in-domain data. With in-domain models, we still face the same problem of whether to use a single in-domain corpus (row 7) or to ''use all the data you have'' (row 8). Only in i2b2 data are improvements statistically significant, and which approach performs better appears to differ by corpus. It may be the case that, since i2b2 data is only on 'problems,' including training data from other sources decreases performance; the MiPACQ corpus, being the most general, appears to benefit from training on other corpora. Using domain adaptation (row 9) is also not conclusively better than a single in-domain corpus (row 7) or leaving out domain adaptation (row 8), since improvements are not statistically significant at the p < 0.05 level (all share 'a' superscripts). Recall that these ''All+FEDA'' tests (row 9) will train a model with a feature space approximately 5 times the size of the ''All'' feature spaces (row 8). Without conclusive evidence, it is difficult to say whether the additional model complexity is worth it.
Thus, whether there is in-domain data available or not, we cannot conclude a uniform policy such as ''use all available data to train your model'' or ''train a model on a single most similar corpus'' or ''always use domain adaptation if possible.'' However, we can conclude that annotating in-domain data is the best way to ensure solid performance of a machine learning system. Note that this is a method of optimizing the performance for a corpus, rather than generalizing performance between corpora.
Corpus difficulty and usefulness
Rather than trying to define an arbitrary scientific measure of corpus ''similarity,'' we consider the practical perspectives of corpus ''difficulty'' (scores on testing, down columns) and ''usefulness'' (scores on training, along rows). As evidenced by the OOD rows 2-5 of Table 4, the difficulty and usefulness of corpora seem to vary. Testing on MiPACQ data has an average F1 score of 70.9% down the column of trained systems, indicating it is probably the most difficult to test on. Training on i2b2 data (row 3) achieved a macro-averaged F1 score of 80.7% across the row of test sets, indicating its training set is perhaps the single most useful for training. Difficulty and usefulness are not symmetric: i2b2 data is clearly the best OOD training data for the NegEx Test Set (F1 = 95.4% in column 4); but NegEx is not the best OOD training data for the i2b2 test set (F1 = 81.1% in column 2; MiPACQ is significantly better with F1 = 82.6%).
These variations in difficulty and usefulness could hypothetically be explained by several factors. For example, the diversity of source data in the MiPACQ corpus (including non-clinical data such as Medpedia) may contribute to its difficulty; MiPACQ in-domain performance is loosely comparable to the OOD performance of other models. Additionally, Section 4.4 below explores differences in the annotation guidelines (as expressed in NE length and semantic group). Different corpora have fundamentally different characteristics, and more samples from one corpus are not equal to those from another.
We also sought to determine whether usefulness could be explained by corpus size, hypothesizing that more data would lead to more robust machine learning models. Thus, we performed experiments in which the amount of training data was varied. These experiments focus on the i2b2 training data, which had a small but consistent advantage in cross-domain experiments. We built learning curves in which we tested on the SHARP Seed, MiPACQ, and i2b2 test sets. We randomly sampled from 10% to 100% of the training data, at increments of 10%. For each sampled proportion size we averaged across 5 runs to compute F-scores at that point; a minimal sketch of this subsampling procedure is given below. The results are shown in Figure 3. The learning curve for the i2b2 data seems to be increasing even until the very end, as the classifier seems to be making marginal improvements with ever more data. In contrast, in both cross-domain experiments the performance levels off very early, conservatively estimated at around 20% of the i2b2 training data being used. For additional reference, we have also plotted two points taken from Table 4: the in-domain performance for SHARP and MiPACQ. The x-axis for each of these points is the size of the training data (counted as the number of instances of negation), while the y-axis is the F-score obtained on each corpus' in-domain evaluation. These experiments seem to indicate that the value of the i2b2 corpus is not simply due to its size; in fact, performance on outside corpora of a system trained on 20% of the i2b2 data is comparable to one trained on 100%.
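A minimal sketch of the subsampling procedure just described, with hypothetical `train_fn`/`eval_fn` stand-ins for the real training and evaluation steps:

```python
import random

def learning_curve(train, test, train_fn, eval_fn, runs=5):
    """Average score over `runs` random draws at each training
    proportion from 10% to 100%, as in Figure 3."""
    curve = []
    for pct in range(10, 101, 10):
        n = max(1, len(train) * pct // 100)
        scores = [eval_fn(train_fn(random.sample(train, n)), test)
                  for _ in range(runs)]
        curve.append((pct, sum(scores) / runs))
    return curve

# Toy stand-ins; in the real experiments these would train and score
# the SVM-based polarity model.
train_fn = lambda data: len(data)                  # "model" = training size
eval_fn = lambda model, test: min(1.0, model / len(test))
print(learning_curve(list(range(1000)), list(range(500)), train_fn, eval_fn))
```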
We should be careful not to overstate the distinctions of ''most difficult'' or ''most useful.'' Furthermore, overall ''usefulness'' does not necessarily imply usefulness in a specific OOD setting or corpus, for example, when supplementing in-domain training data with the ''useful'' corpus. In further testing on the SHARP corpus, we considered whether the ''useful'' i2b2 training data could augment the SHARP training data, and found that adding i2b2 training data did not improve performance on the SHARP corpus (F1 = 90.9%), whereas adding MiPACQ did improve performance (F1 = 94.6%). Though it is difficult to define ''similarity,'' it may be the case that more similar corpora can be mixed as training data more effectively.
Average performance
We considered the average performance of several models on multiple corpora. In Table 5 we include averages with and without FEDA (i.e., for rows 8-9 of Table 4), labeling pairwise statistical significance at p < 0.05 between the domain-adapted and non-domain-adapted versions with an asterisk. The NegEx Test Set is used for training rather than testing. Here, we report both macro-averages (the arithmetic mean of the three test sets) and micro-averages (weighted by the number of instances in each test set). The micro-averaged scores are heavily weighted towards the i2b2 numbers because the i2b2 test set is the largest; macro-averages, on the other hand, are much lower than has been previously reported in the literature, in large part due to the difficulty of the MiPACQ corpus. Overall, i2b2 is the only corpus on which domain-adapted models clearly outperform un-adapted models.
Named Entity characteristics
Negation predictions were further analyzed to see if the differences in NE annotation guidelines influenced performance, since resulting differences in ''gold standard'' training data could confuse machine learning systems. Because guidelines for annotating NEs differed in how much of a noun phrase to include, we examined NE length in words. Figure 4 shows that the i2b2-trained model has the best overall performance, likely due to its larger number of training samples rather than its similarity to other annotation guidelines. Underscoring this, the NegEx Test Set is the most permissive guideline (allowing whole phrases), yet it obtains similar performance to the restrictive SHARP and MiPACQ guidelines (typically short phrases). Figure 4 also shows that longer Named Entities are more difficult to negate correctly in all of the corpora; in the i2b2 corpus, single-word terms were easy to negate, whereas in other corpora single-word terms were substantially harder. One hypothesis is that this could be due to i2b2's different accounting of inherently negated terms such as ''afebrile.'' ''Afebrile'' itself accounted for 124 of 3,609 negated NEs in the i2b2 training set, and the single-term entities inherently negated by virtue of negative suffixation or a negative acronym component (e.g., ''NAD'' standing for ''no acute distress'') total 299 (8.3%). While this does not account for the total error difference in one-word NEs, it is a factor worth noting. Additional annotation differences may result from differing assumptions regarding explicit and implicit expression of negations. Further accounting of these terms may require reannotation of the corpus, which is out of the scope of this article.
Because the annotation guidelines also differed in which semantic groups to annotate, we considered the performance of each model for each specific semantic group, shown in Figure 5. Recall from Table 3 that SHARP and MiPACQ included a broad selection of semantic groups, including anatomical sites (ANAT), chemicals and drugs (CHEM), disorders (DISO), laboratories (LAB), procedures (PROC), and symptoms (SYMP). i2b2 and the NegEx Test Set only specified ''problems'' and are considered EVENT in Figure 5. Despite their annotation guideline similarity, we did not find that SHARP and MiPACQ performed similarly on individual semantic groups. Note in particular the relatively low SHARP-trained performance on ANAT, CHEM, PROC, and SYMP despite its having training data in those groups. A MiPACQ-trained model also did not outperform other models, despite the fact that most of the test set NEs of minority semantic groups came from the MiPACQ corpus. Similarly, the i2b2-trained and NegEx Test Set-trained models had similar annotation guidelines, but did not perform similarly on groups such as EVENT, DISO, or PROC. These models were not uniformly worse than SHARP or MiPACQ on the semantic groups for which they had no training data.
Salient features
From the foregoing tests, NE properties like length and semantic group (and thus, annotation guidelines) did not fully explain the discrepancy in performance between different models. Thus, we qualitatively examined the broader differences between corpora by looking at negation contexts in each corpus. We defined negation contexts as the features of the SHARPn Polarity Module, as defined in Section 3.4. Table 6 calculates and ranks the χ² statistic corresponding to each feature (i.e., on a 2×2 grid of whether the NE was negated vs. whether the feature was present) within all four sets of training data. Thus, the ranking in Table 6 corresponds to the model trained on ''All'' training sets, in row 8 of Table 4 and in the preceding section. Table 6 also compares the rank of features in the ''All'' model to salient features in each individual corpus. It is evident that the most important features were consistent across all the corpora, representing the ''easy cases'' of negation: namely, when the word ''no'' is related to a concept by proximity or by syntax. The SHARP corpus differs somewhat, likely due to the sources of data for the SHARPn Seed Corpus: Mayo Clinic radiology reports (which do not directly report a patient interaction) and Seattle Group Health breast cancer-related notes (only one example of a patient ''denying'' smoking). This distinction does not explain why MiPACQ, rather than SHARP, is a more ''difficult'' corpus.
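A minimal sketch of this per-feature χ² ranking (using `scipy.stats.chi2_contingency`; the helper name and toy data are illustrative):

```python
import numpy as np
from scipy.stats import chi2_contingency

def chi2_rank(X, y):
    """Rank binary features by the chi-square statistic of the 2x2
    contingency table (feature present/absent vs. NE negated/not).
    Assumes every feature and both classes occur at least once."""
    stats = []
    for j in range(X.shape[1]):
        table = [[np.sum((X[:, j] == 1) & (y == 1)),
                  np.sum((X[:, j] == 1) & (y == 0))],
                 [np.sum((X[:, j] == 0) & (y == 1)),
                  np.sum((X[:, j] == 0) & (y == 0))]]
        stat, _, _, _ = chi2_contingency(table, correction=False)
        stats.append(stat)
    return np.argsort(stats)[::-1]  # most salient feature indices first

# Toy data: feature 0 tracks negation perfectly, feature 1 is noise.
X = np.array([[1, 0], [1, 1], [0, 0], [0, 1], [1, 0], [0, 1]])
y = np.array([1, 1, 0, 0, 1, 0])
print(chi2_rank(X, y))  # [0 1]
```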
The Big Picture for Negation Detection
Because of the relatively constrained pragmatic uses of negation in clinical text, negation detection algorithms are easy to optimize for specific corpora, as illustrated in Table 1. However, we believe the research community has at times conflated this with being immediately effective off-the-shelf. Evaluation of systems is artificially inflated by the ad hoc development of training and testing corpora and their differing annotation guidelines. When in-domain, consistently-annotated training data is scarce or nonexistent, negation detection performance remains unimpressive (middle portion of Table 4), just as in other NLP problems like parsing or named entity recognition. Furthermore, it is difficult to simply characterize the differences between domains, e.g., by NE length (Figure 4), semantic group (Figure 5) or lexical and syntactic context (Table 6). To ensure excellent negation performance for a machine learning model, it appears that we still need to annotate examples of negation on the target corpus for fully supervised training (or domain adaptation). Similarly, rule-based methods need a development set and experts who can develop domain-specific rules. Thus, we conjecture that negation is not ''solved'' until negation detection is tailored to specific applications and use cases, or until the more general problem of semi-supervised domain adaptation is solved.
Conclusion
While a review of published work may suggest that the negation detection task in clinical NLP has been ''solved,'' our multi-corpus analysis of negation detection indicates that it is easy to optimize for a single corpus but not to generalize to arbitrary clinical text. Though negation detection can be straightforward in constrained settings, both rule-based and machine-learning approaches have mixed results on heterogeneous corpora. Furthermore, more training data was not necessarily better for the common case in which no in-domain data is available. The most significant difference in performance was the availability of in-domain training data, which is inherently a strategy for optimizing performance rather than generalizing it. Furthermore, training on all available data and using domain adaptation techniques did not uniformly benefit performance in a significant way. Future work includes task-adaptive negation detection algorithms and semi-supervised domain adaptation.
8,435
2014-11-13T00:00:00.000
[ "Computer Science" ]
Comparison between covert sound-production task (sound-imagery) vs. motor-imagery for onset detection in real-life online self-paced BCIs
Background Even though the BCI field has quickly grown in the last few years, it is still mainly investigated as a research area. Increased practicality and usability are required to move BCIs to the real world. Self-paced (SP) systems would reduce the problem, but there is still the big challenge of what is known as the 'onset detection problem'. Methods Our previous studies showed that a new sound-imagery (SI) task, high-tone covert sound production, is very effective for onset detection scenarios, and we expect it has several advantages over the most common asynchronous approach used thus far, i.e., motor-imagery (MI): 1) intuitiveness; 2) benefits to people with motor disabilities and, especially, those with lesions on cortical motor areas; and 3) no significant overlap with other common, spontaneous cognitive states, making it easier to use in daily-life situations. The approach was compared with MI tasks in online real-life scenarios, i.e., during activities such as watching videos and reading text. In our scenario, when a new message prompt from a messenger program appeared on the screen, participants watching a video (or reading text, browsing images) were asked to open the message by executing the SI or MI task, respectively, for each experimental condition. Results The results showed the SI task performing statistically significantly better than the MI approach: 84.04% (SI) vs. 66.79% (MI) True-False-Positive rate for the sliding image scenario, and 80.84% vs. 61.07% for watching video. The classification performance difference between SI and MI was found not to be significant in the text-reading scenario. Furthermore, the onset response speed showed SI (4.08 s) being significantly faster than MI (5.46 s). In terms of basic usability, 75% of subjects found SI easier to use. Conclusions Our novel SI task outperforms typical MI for SP onset detection BCIs; therefore, it would be more easily used in daily-life situations. This could be a significant step forward for the BCI field, which has so far been mainly restricted to research-oriented indoor laboratory settings.
Introduction
Even though the BCI field has quickly developed in the last few years, it is mainly investigated as a research area due to shortcomings in terms of practicality and usability. Many BCI systems employ cue-based (synchronous) approaches, where the analysis and classification of brain signals are locked to the machine's predefined timing protocol [1]. This means that users are forced to follow the computer's timing commands (locked to the machine). Event-related approaches such as P300 and SSVEP also require users to keep their mental focus and/or gaze on the computer interface for long periods of time, which is not only unnatural, but also leads to loss of both user autonomy and the ability to have a rich interaction with their environment [1][2][3][4]. These, along with still prevailing reliability limitations, are the main issues when BCIs are used outside laboratory settings. On the other hand, self-paced (asynchronous) systems enable users to control the system in a more natural way, i.e., according to their own timing and speed of communication without any computer-controlled stimulus [5]. Giving users more autonomy and flexibility in terms of system control is integral to the ultimate aim of utilizing BCIs in the real world [4].
However, self-paced approaches usually require more complex analyses and have worse correct classification rates, as well as a more complex system design compared to cue-based systems, due to the lack of knowledge about the precise time location of the user commands. That is, the problem of when a relevant spontaneous event happens supersedes the problem of what command was given by the user. The system must thus first identify a specific active task against the idle (i.e., no specific control) state [6]. However, determining the time of spontaneous command onset, the so-called 'onset detection problem', is difficult as there is no direct non-invasive means to validate the timing of the onset event. Thus, onset detection systems have an inherent timing error, which has recently been reduced to a promising few seconds [2,4]. Nonetheless, onset detection can be used as an on/off switch for self-paced BCIs [7].
In this paper, a novel sound-production related cognitive task (sound-production imagery, SI), i.e., high-tone covert sound production, which showed promising onset detection results in our previous offline setting studies [2,4,6,8], was compared with the still most common approach for online self-paced onset detection systems in real-life scenarios, i.e., motor imagery (MI). As the self-paced covert sound-production task is a new concept that we recently proposed, there is no literature (other than our own) related to onset detection in BCIs used with covert speech or sound-production related tasks. In [9][10][11] the authors investigated a somewhat similar imaginary speech case, but it was done in a cue-based system, i.e., not for onset detection. In that study the subjects were asked to think either the syllable 'ba' or 'ku' at a specific rhythm with audio cues. In addition, other speech-related EEG-based BCI studies using different syllables (or vowels), e.g., [12], are focused on the discrimination between various tasks and not on onset detection (i.e., idle versus intentional state).
In recent self-paced systems, MI is the most commonly used task (e.g., [1,3,[13][14][15][16][17]]). However, MI self-paced onset detection systems have a crucial issue when they are used outside laboratory settings: the mental procedure largely overlaps with other common, spontaneous cognitive states. For example, a classifier would not be able to reliably identify whether the onset detection was from an actual relevant command or from other daily-life gestures such as waving, head movements, etc., especially if MI is also used for multi-class control (i.e., not for onset detection) within the same BCI system. This has motivated the search for alternative cognitive tasks for self-paced BCIs. Thus, the MI vs. SI comparison in daily-life conditions will be discussed in this paper. A sound-production related cognitive task must also keep the chances of intentional command (IC) false positives low, which can be addressed by choosing cognitive tasks that do not significantly overlap with other common, spontaneous and frequent cognitive states [6]. Using specific words, syllables or letters for onset detection would likely increase both the onset false positives and the task-related false negatives, due to the large overlap with the continuous internal speech in normal thought processes. For this reason, we have chosen high-tone sound production as an onset switch, as this task is unlikely to overlap with normal thought processes.
We also expected the chosen task to be easy to produce and control voluntarily, and there is no dependence on the users' mother language or even on their language capabilities. In addition, the SI task used here is expected to be very intuitive for the vast majority of people, as we almost constantly 'speak' internally while awake. This is also a big advantage for people with severe motor disabilities, especially those with damage in motor control cortical areas, an important target population for BCIs [4]. Besides these advantages, our approach showed significantly better performance results than the MI task for self-paced onset detection BCI, which will be discussed in more detail below. Bringing BCI to the real world while maintaining user autonomy and engagement with the surroundings as much as possible is important, and this is the main novelty of the present paper.
Cognitive tasks description
In this experiment, two different cognitive tasks were tested for the sake of comparison. One was MI, which is a typically used mental task in the BCI field. It is performed by imagining limb movements such as those of the hands, feet or the tongue [5]. In our experiment, participants were instructed to imagine the movement of their primary wrist. The other task was a sound-production related cognitive task (Sound Imagery (SI), proposed in our previous studies [2,4,6,8]). The task had shown encouraging results in an offline semi-self-paced onset detection system. For this reason, we used this SI task in an online experiment in order to test it in real-life task scenarios and to compare it with a typically used MI task. In this experiment, participants were instructed to imagine producing an 'um' sound with a high tone in a covert (i.e., imaginary) manner, which necessarily overlaps with auditory recall (auditory imagery [18]). In addition, participants were told not to tense any organs related to sound production in order to ensure purely covert task execution. The high-pitch tone level was chosen by the participants based on sounds they could comfortably generate for a couple of seconds, but high enough to be considered unusual tones in a normal daily-life situation. As this experiment was about online self-paced onset detection, the idle state (i.e., non-control or null state) had to be defined for training purposes so that the Intentional-Control (IC) task state could be reliably distinguished from the idle state. To this end, participants were instructed not to think of any IC tasks and to stay calm and relaxed for the idle state recordings.
Experimental paradigm
Twelve healthy subjects (ten males and two females, aged between 19 and 27) with normal or corrected vision participated in the experiments. Four of them (P3, P4, P10 and P12) had previous experience in other BCI experiments and the remaining eight were naïve subjects. Each subject sat comfortably on a medical chair and a monitor was placed 50 cm away from the subject. A keyboard was placed on their lap so that they could give feedback to the system. The experiments were conducted in accordance with the University of Essex Ethics Committee guidelines. The experiment was designed to simulate a message opening system when a new message (prompt) arrived during realistic daily-life task situations (i.e., watching video, reading text from a book) and browsing photos. Fig. 1 shows an example of the system interface.
In the background of the screen, a video clip was playing, and the subject was watching it while panels (A), (B) and (C) were hidden from the screen. Once a new message arrived (randomly between 5 s and 15 s, in order to prevent subject anticipation), panel (A) smoothly and slowly slid into the side of the screen in order to minimise any Visual Event-related Potentials (VEPs). Then, the participants could either keep watching the video without trying to open the message, or they could open the message dialogue (B) by executing the Sound Imagery (SI) task. The participants could perform the SI task action at any point, as the system was self-paced. However, they were asked to execute their onset at least 2 s after the new message dialogue had appeared, in order to eliminate any other event-related potentials (e.g., negative potentials or P300) so that the results were entirely based on the SI tasks. While participants were executing their SI task, they could estimate how long it took them to open the message dialogue by referring to the time-keeping interface (D). This circular progress bar continuously turned from a light grey to a dark grey colour over 12 s, followed by dark to light grey again. There were small marks at each 1 s interval so participants could estimate their task execution time. As a result, the users could provide feedback to the PC on whether its response was correct (True-Positive, TP) or not (False-Positive, FP), as well as the execution time if it was a TP. After this feedback, panels (A), (B) and (C) disappeared. The process from (A) to (C) comprised a trial and each single run consisted of 15 trials. Each participant had to go through three different runs (which featured different background daily-life tasks). Background daily-life tasks were randomly ordered for each participant in order to prevent any sequence-dependent results. The block diagram in Fig. 2 summarises the experimental protocol.
Daily-life task scenarios
There were three different experimental scenarios. In the first two scenarios, the participants were instructed to open a message dialogue (as explained above) while they were engaged in two different daily-life activities (one was watching video and the other was reading text). The above message-opening onset detection system was tested separately on each of the two daily tasks. The last experimental scenario was the sliding image task. The participants were presented with an image and, if they wanted to slide the image to see the next one, they executed the mental task. In this scenario, there were no external stimuli such as a message alert. Consequently, the participants controlled the system in a 100% self-paced approach. These three different experimental scenarios were chosen because they are very common scenarios for most people in real-life situations. In terms of material, a documentary titled "BBC - The Blue Planet" [19] was used for the video watching task as it requires low cognitive load and emotional neutrality [20,21]. For the reading task, a book titled "English Fairy Tales" [22] was used as it does not have any complex text and is emotionally neutral as well. Hence, the material had reduced cognitive loads for both native and non-native English speakers. In the sliding image task, natural images (wild background scenery without animals) from [23] were used for emotional neutrality.
Signal Pre-Processing & Artefacts Handling
An Enobio (dry electrode equipment [24]) system was used for data acquisition.
Seventeen electrodes were placed on the head based on a 10-20 layout and 1 reference channel was recorded on the right-side earlobe. Three extra external channels were placed on the forehead and on both the right and left temples (anterior-most edge of the temporalis muscle) based on [25], in order to detect the Electrooculogram (EOG) and Electromyogram (EMG) for artefact removal purposes. The sample rate was 500 S/s (equipment bandwidth: 0-125 Hz) in order to ensure that all the EEG rhythms, up to some high gamma band, could be analysed. High gamma waves have not been widely used in BCIs due to concerns over contamination with EMG artefacts. However, studies have shown high gamma activity is associated with language tasks [26][27][28]. It was therefore included in the experiments, and EMG artefact handling methods were applied to avoid EMG-related classification results.
EEG data were wirelessly transferred from the Enobio to a PC via Bluetooth. These EEG data were bandpass filtered (Butterworth filter, order 5) with cut-off frequencies at 4 Hz and 100 Hz, followed by a notch filter (Butterworth filter, order 5) at 49-51 Hz in order to remove mains interference. Then, the data were segmented with a 0.5 s window length. The segmented data underwent automatic EOG detection based on [29]. A Discrete Wavelet Transform (DWT) with a Haar mother wavelet (decomposition level 6) was applied to the external channel that was placed on the forehead. If the external channel's data were detected as EOG artefacts, the data segment was rejected from further analysis. If there was no EOG artefact, the EEG data were passed on to the EMG artefact removal process. For automatic EMG removal, Blind Source Separation by Canonical Correlation Analysis (BSS-CCA) was used, which is a very common and widely used EMG removal technique in BCIs. BSS-CCA assumes mutually uncorrelated sources that are maximally auto-correlated. It can therefore be used to separate the brain signal from the muscle activity (mainly facial and body movements) sources, as the muscle artefacts have relatively low autocorrelation compared to the brain signal (please see [30,31] for a more detailed discussion of BSS-CCA). The threshold of the autocorrelation coefficient ρ was chosen at 0.35 based on [32]. The pre-processed, EOG/EMG artefact-handled EEG data were then used for feature extraction and classification.
Four feature extraction methods were used. The first was the band power: each band's FFT value was squared and used as a feature. The second was an autoregressive (AR) model of the signal. The third method was the Common Spatial Pattern (CSP): EEG source components were sorted in order to maximise the variance in one class and minimise it in the other class; then, the first three and last three EEG source component variances were taken and linear regression was applied, with the slope of the fitted line used as a feature. The last feature extraction method was the Discrete Wavelet Transform (DWT). The data were decomposed up to level 7 and the detail parts, which represent the pseudo-frequency bands of around 4-8, 8-16, 16-31, 31-62 and 62-100 Hz, were taken. From each detail part, the variance was calculated from the coefficients for dimensionality reduction. The mother wavelet 'db2' was chosen because of its common use in BCIs. These four different feature extraction techniques were chosen as, together, they cover the time, frequency, spatial and time-frequency domains; a minimal sketch of the filtering and DWT-feature steps is given below.
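A minimal sketch of the filtering, segmentation and DWT-variance steps described above (Python with scipy/pywt standing in for the original implementation; constants follow the text, the helper names are ours):

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt
import pywt

FS = 500  # sampling rate (samples per second), as in the recording setup

def preprocess(eeg):
    """Band-pass 4-100 Hz and band-stop 49-51 Hz (Butterworth, order 5),
    then segment into non-overlapping 0.5 s (250-sample) windows."""
    bp = butter(5, [4, 100], btype="bandpass", fs=FS, output="sos")
    bs = butter(5, [49, 51], btype="bandstop", fs=FS, output="sos")
    eeg = sosfiltfilt(bs, sosfiltfilt(bp, eeg))
    win = FS // 2
    n = len(eeg) // win
    return eeg[: n * win].reshape(n, win)

def dwt_variance_features(window):
    """Variance of the db2 detail coefficients at decomposition level 7,
    keeping the detail parts whose pseudo-frequency bands span roughly
    4-100 Hz (pywt may warn about boundary effects on short windows)."""
    coeffs = pywt.wavedec(window, "db2", level=7)
    # coeffs = [cA7, cD7, cD6, cD5, cD4, cD3, cD2, cD1]
    return [float(np.var(c)) for c in coeffs[2:7]]  # cD6 .. cD2

windows = preprocess(np.random.randn(10 * FS))  # 10 s of toy data
features = np.array([dwt_variance_features(w) for w in windows])
print(features.shape)  # (20, 5): five band variances per 0.5 s window
```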
Feature Extraction & Classification
These feature extraction processes generated hundreds of feature points for each channel; therefore, a feature selection method was required. In this experiment, the Davies-Bouldin Index (DBI [33]) was used. The optimal DBI threshold was calculated from the training data (training and validation set) for each subject and task: the training dataset was divided into a second training set and a validation set, and candidate threshold values were evaluated starting from 1 and increasing by 1. The validation set results showed a gradual increase followed by a decrease, and the peak-point DBI value was chosen as the DBI threshold for the testing data. Then, features that had DBIs below the threshold were used for classification. For the classification, Linear Discriminant Analysis (LDA) was used. It was chosen because of its simplicity and low computational cost [34]. It therefore suits online classification for real-time processing, as well as being a widely-used technique in BCIs.
Performance evaluation method
As the study was about an online onset detection system, performance evaluation took place with the subjects' feedback. Fig. 4 shows the feedback process. If the machine classified the incoming EEG data as an onset (intentional control) event from the user, a message window would appear on the screen with a feedback panel (A). Panel (A) could also be opened manually by pressing the 'Esc' button on the keyboard to indicate a True-Negative (TN) action. If the event was indeed an intended action, the user would choose 'YES'. Otherwise, the user would choose 'NO' for a False-Positive (FP). If the user chose 'YES', the feedback panel would change to (B) in order to clarify whether it was an actual thought command (i.e., sound production) or a manual opening of the message (TN action). Subjects were asked to press the 'Esc' button when the continuous onset command (up to 11 s) did not work. If it was an intentional thought command, users were directed to panel (C) and were asked how much time had elapsed from the start of the onset until the message dialogue was opened. This was regarded as a True-Positive (TP) with additional system response speed information (less than 3 s, 3-5 s, 5-7 s, 7-9 s or 9-11 s). Based on the numbers of TPs and FPs, the True-Positive rate and False-Positive rate were calculated as the number of TPs over the number of intended onset events and the number of FPs over the number of idle events, respectively. The definition of the number of idle events is important for the calculation of the FP rate. Firstly, the idle period was defined as: idle period = total recording time − task activation period − refractory period. The refractory period is the period during which the signal is ignored for classification after a TP or FP action (i.e., while the message is opened for user feedback). Therefore, the total number of idle events which can yield output from the classifier was idle period (s) / window length (s); in our case this was idle period / 0.5 s. In addition, the True-False-Positive score (TFP Score) [35] was also calculated in order to take the idle period length into account for the final score in the self-paced system.
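A minimal sketch of this rate bookkeeping (the assumption that each trial contributes exactly one intended onset event is ours; the TFP score of [35] is not reproduced here):

```python
def self_paced_rates(n_tp, n_fp, n_onsets, total_s, task_s, refractory_s,
                     window_s=0.5):
    """TP rate over intended onset events and FP rate over idle-period
    windows, with idle period = total recording time - task activation
    period - refractory period (one classifier output per window)."""
    tp_rate = n_tp / n_onsets
    idle_s = total_s - task_s - refractory_s
    n_idle_events = idle_s / window_s
    fp_rate = n_fp / n_idle_events
    return tp_rate, fp_rate

# Toy run: 15 trials (one intended onset each), 13 hits, 2 false alarms.
print(self_paced_rates(13, 2, 15, total_s=400.0, task_s=60.0,
                       refractory_s=40.0))  # (~0.867, ~0.0033)
```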
Feature interpretation: spatial and spectral analysis for the sound-imagery task
In this section, spatial and spectral characteristics are analysed for the sound imagery task. As the experiment was online, this analysis was carried out with the training dataset, which was recorded in an offline setting. For the spatial analysis, the Common Spatial Pattern (CSP) was found based on the Enobio 17-channel electrode placement [24]. Fig. 5 (A) shows the visual pattern for the average result. The pattern varies depending on the subject because of the characteristics of EEG. However, the average result shows that channels around F3, P3 and T7, which are located near Broca's and Wernicke's areas (related to speech), had a big pattern difference between the idle and sound imagery task periods. Similarly, channel F8 also showed some pattern difference. In addition to the CSP analysis, the channels that had the best class separability based on the feature selection method are shown in Fig. 5 (B). From each participant, the 10 best feature points were selected based on the DBI feature selection method, and their channel numbers were counted and summed over the twelve subjects. As can be seen from the Figure, channel F3 was selected the most times as the most class-separable channel, followed by channel T7. This shares some common results with the CSP spatial analysis, again highlighting the F3 and T7 channels located near Broca's and Wernicke's areas.
In terms of spectral domain analysis, the frequency band that had the most class separability was found in a similar fashion. Of the 120 feature points (the best 10 features from each of the twelve subjects), the wavelet transform feature was selected the most times (56 times), followed by the band power feature (44 times) and the autoregressive model feature (20 times). The common spatial pattern feature was not selected at all by any of the subjects. From those 56 DWT and 44 band power features, frequency bands were counted to find out which range was selected the most as the best class-separable frequency band. As can be observed from Fig. 6 (top part), the pseudo-frequency band of 16-31 Hz was selected the most times, followed by the range of 31-62 Hz, for the DWT feature. On the other hand, for the band power feature in the bottom part, the 20-30 Hz band was selected the most times. A review paper [36] reported that some studies suggested activity around 30 Hz is elicited by linguistic processing of meaningful words but not of meaningless non-words. Our high-pitch sound imagery task showed the best class separability versus the idle state in the range of around 20-30 Hz.
Table 1 shows the classification performance with the True-Positive (TP) rate and False-Positive (FP) rate on both the Sound Imagery (SI) and Motor Imagery (MI) tasks in three different scenarios. In the sliding image task scenario, the twelve subjects' average TP rate for the sound imagery task was 88.3% while the motor imagery task had a 73.3% rate. Only one out of twelve participants (P3) showed a higher TP rate for the motor imagery task than for the sound imagery task, and P5 showed the same TP rate with a lower FP rate in the sound imagery task. The Wilcoxon method was used for the statistical tests in this paper; it was chosen as it is a suitable test for our (non-parametric) data and it is widely used in BCIs (a minimal sketch is given below). In terms of the Wilcoxon test p value, the sound imagery onset detection task had a significantly higher (p value of 0.033) TP rate than the motor imagery task. Even though the average FP rate in sound imagery had a lower value of 2.6% than the motor imagery at 4.8%, there was no statistically significant difference, with a p value of 0.451.
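A minimal sketch of such a paired Wilcoxon comparison (the per-subject numbers below are illustrative, not the study's data):

```python
from scipy.stats import wilcoxon

# Paired per-subject TP rates (%) for SI vs. MI -- toy values only.
si = [92, 85, 70, 95, 88, 90, 86, 91, 84, 89, 93, 87]
mi = [75, 70, 82, 78, 66, 73, 71, 69, 74, 70, 72, 68]
stat, p = wilcoxon(si, mi)  # signed-rank test on the paired differences
print(f"W={stat}, p={p:.3f}")  # p < 0.05 -> significant SI/MI difference
```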
Sound-imagery vs. Motor-imagery for Onset Detection
In the video-watching scenario, the 12 subjects' average showed an 86.1% TP rate for the SI task and 63.9% for the MI task. All the subjects had a higher TP rate with the SI than the MI task except for participant 3. The Wilcoxon test p value was 0.031, which indicates that SI had a significantly higher TP rate than the MI task. On the other hand, the average FP rate shows that the SI task's FP rate is slightly higher than that of the MI task, but there is no significant difference, with a p value of 0.259. In the reading text scenario, the average TP rate was 81.1% for the SI task and 77.2% for the MI task. Even though the SI task showed a slightly better TP rate result, there was no statistically significant difference between them, with a p value of 0.243. The FP rate also showed that the SI task provided a slightly better result (lower FP rate), but the difference was minor. If the two different daily-task scenarios are averaged, the TP rate of the SI task is significantly higher (83.6%) than that of MI (70.6%), with a p value of 0.0106. There is, however, no significant difference in the FP rate.
In order to take into account all the true-positives, false-positives and the idle period length at the same time, as they are very important aspects of performance evaluation in self-paced BCI systems, the True-False-Positive score (TFP score) presented in [35] was calculated and is discussed here. 83.3% (10 out of 12) of the participants showed that the sound imagery onset detection task performed better in the TFP score than the motor imagery task in both the sliding image and watching video scenarios. 66.7% (8 out of 12) of the participants showed a higher TFP score with the sound imagery task for the reading text scenario. Participant 3, who had previous experience in BCIs, showed that the motor imagery task performed better than the sound imagery task in all the daily-life scenarios, but other participants, such as P4, P10 and P12, who also had BCI experience, did not follow the same pattern. Only two out of nine subjects showed that the MI task had a higher TFP score. Among the naïve subjects, 87.5% (21 out of 24 cases) showed a higher TFP score with the SI task.
Figure 7 (A) shows the twelve subjects' averaged TFP score for each daily-life task scenario. The sound imagery onset detection task produced a significantly higher TFP score than the motor imagery task, with p values of 0.035 and 0.04 for the sliding image and watching video scenarios, respectively. However, there was no statistically significant difference for the reading text scenario, even though the TFP score was higher for the SI task. The TFP score differences were very large in some cases. The image-sliding scenario had a more than 17% difference (SI: 84.04%, MI: 66.76%) while the video watching scenario had a 19.77% difference (SI: 80.84%, MI: 61.07%). The text reading scenario, on the other hand, had only a 4.56% difference (SI: 77.17%, MI: 72.61%), and this difference was found to have no statistical significance (p = 0.298). In terms of system response speed, the users' feedback from Fig. 4 (C) was used in order to calculate the onset response time. Figure 7 (B) shows the twelve subjects' averaged onset speed for the SI and MI tasks. The SI task required 3.93 s, 4.03 s and 4.28 s on average for the sliding image, watching video and reading text scenarios, respectively, while the MI one required 4.8 s, 5.83 s and 5.75 s.
In all three different daily-life task scenarios, the SI task had a significantly faster onset response than the MI task, with p values of 0.0262, 0.0119 and 0.0055, respectively.
Discussion
This experiment investigated an online onset detection method for BCIs by prompting participants to open a message when it arrived in two different daily-life task scenarios (watching video and reading text) and in the sliding image task. Our new sound imagery task and the typical MI task were tested and compared. In terms of system performance, the sound imagery task achieved 84.04%, 80.84% and 77.17% as TFP scores for the sliding image, watching video and reading text scenarios, respectively, on average for twelve subjects. In contrast, the MI task achieved values of 66.79% (significantly worse), 61.07% (significantly worse) and 72.61% (no significant difference), respectively. In addition, the system speed showed a significantly faster response with the sound imagery than with the MI task. Although it is difficult to directly compare our results with other onset detection systems, as the experiment environments and tasks are different, our SI task showed a relatively high TP rate. In [13], three subjects produced on average a classification TP accuracy of 79.67% between the motor-imagery task and the non-control state. In [37], six different mental tasks versus the idle state showed TP rates of between 55% (auditory imagery) and 72% (motor-imagery) on average over five subjects in an offline setting. Compared to these results, our 88.9% (in the video-watching case) and 78.9% (in the text-reading case) TP rates look very promising, even though our study was carried out in more realistic scenarios than the ones previously reported by others.
From a usability point of view, participants completed a short survey at the end of the experiment regarding the level of difficulty of use of the two different SI and MI tasks, where 0 indicated very easy to use and 10 very difficult to use. On average, SI received a value of 4.42 while MI received 6.42. Nine out of twelve (75%) subjects marked a lower value (easier to use) for the SI than for the MI. Only participants P3, P4 and P12 said that the MI was easier. These three participants were BCI research students, who had experience in MI but not SI. On the other hand, one BCI research student and all the other naïve subjects marked the SI as easier to use. The p value over the twelve subjects was 0.0108. Therefore, the SI task was significantly easier to use than the MI one for the onset detection of BCIs. Based on these results, our new sound imagery task outperformed the motor imagery task for the self-paced onset detection BCI system, not only in performance but also in usability and system speed. Therefore, this onset detection system prototype shows strong potential for the real-life application of BCIs (compared to the typical motor imagery task), and it will move the BCI field a significant step forward once it is developed further by improving current EEG recording issues such as practicality and usability. Contrary to the other two real-life activities, the text-reading scenario showed no significant TFP score difference between SI and MI. This may be because sound-production imagery is harder while reading, as both engage the same brain region (Broca's area), based on our spatial pattern analysis and the silent-reading literature [38,39].
With this in mind, the limitations of the proposed approach, specifically in text-reading and related activities, need to be further explored. Also, BCI researchers have discussed the so-called "BCI Illiteracy" problem, i.e., that about 15 to 30% of users are not able to use an MI BCI [40,41]. As the SI task uses different parts of the brain, it may produce different results with regard to the BCI Illiteracy problem. This would be an interesting investigation.
Conclusions
The scope of this study was to investigate how well our new sound imagery task works for a self-paced onset detection system in real-life scenarios by comparing it to a typical motor imagery task. From a performance point of view, our novel sound imagery task showed a significantly better TFP score in the sliding image (84.04%) and watching video (80.84%) scenarios (opening-message onset task) than the motor imagery task (66.79% and 61.07%, respectively). Furthermore, the reading text scenario also reported a higher performance result with our approach (77.17% SI vs. 72.61% MI). Moreover, the sound imagery task showed a significantly faster system response (4.08 s SI vs. 5.46 s MI on average for the three scenarios) and had a significantly better usability (easier to use) score than the motor imagery. Based on these results, our novel sound imagery onset detection system outperformed the motor imagery one and showed great potential. This could be a significant step forward for the BCI field, which is mainly restricted to research-oriented indoor laboratory settings with the use of motor imagery and cue-based studies.
7,541.8
2020-02-07T00:00:00.000
[ "Computer Science" ]
Spectral and Informational Analysis of Temperature and Chemical Composition of Solfatara Fumaroles (Campi Flegrei, Italy)
Temperature and composition at fumaroles are controlled by several volcanic and exogenous processes that operate on various time-space scales. Here, we analyze fluctuations of temperature and chemical composition recorded at fumarolic vents in Solfatara (Campi Flegrei caldera, Italy) from December 1997 to December 2015, in order to better understand their source(s) and driving processes. Applying singular spectral analysis, we found that the trends explain a great part of the variance of the geochemical series, but not of the temperature series. On the other hand, a common source, also shared by other geo-indicators (ground deformation, seismicity, hydrogeological and meteorological data), seems to be linked with the oscillatory structure of the investigated signals. The informational characteristics of temperature and geochemical compositions, analyzed by using the Fisher–Shannon method, appear to be a sort of fingerprint of the different periodic structure. In fact, the oscillatory components were characterized by a wide range of significant periodicities, nearly equally powerful, that show a higher degree of entropy, indicating that changes are influenced by overlapping processes occurring at different scales with rather similar intensity. The present study represents an advancement in the understanding of the dominant driving mechanisms of volcanic signals at fumaroles that might also be valid for other volcanic areas.
Introduction
Changes in temperature and composition at fumaroles are widely used to monitor volcanic activity for surveillance purposes [1][2][3]. These observations shed light on complex endogenous processes and offer the opportunity to reveal driving forces that occur at different time-space scales. However, along the path from the deep-rooted magmatic component to the surface, many processes (e.g., magmatic volatile solubility, scrubbing, permeability) control the timing and the amplitude of temperature and composition changes. From the early precursory to post-eruption stages, the geochemical signature, and thus the monitoring activity, is site-specific and requires case-by-case evaluations [4]. There are several cases in which the analysis of temperature and composition in fumaroles was useful to gain insight into the dynamical processes taking place in volcanoes. The chemical and isotopic composition of Cumbal fumaroles (Colombia) was interpreted as reflecting a long period dominated by magmatic volatile input, which ended with an increased hydrothermal signature; the differences among discharging sites suggest differences in the flow paths along which ascending gases may or may not be quenched by the hydrothermal system and/or meteoric water [5]. At the Merapi volcano (Indonesia), variations in the composition and temperature of fumarolic emissions were found to be related to atmospheric pressure and to higher water concentrations formed by intensive rainfall (>0.4 mm/5 min) [6], along with a correlation between temperature and a certain type of seismic activity (high-frequency seismic cluster and ultra-long period signal) [7]. At the Stromboli volcano, long-term (months to years) and short-term (hours to days) changes in fumarole soil temperature were typical of crustal-driven and meteorological seasonal phenomena, respectively, while abrupt changes were likely informative as precursors of explosive eruptive events [8].
The study of fumarole gas compositions and melt inclusions at the Kawah Ijen volcano (Indonesia) suggested a two-stage history implying the existence of a shallower dacitic magma reservoir and a degassing deeper mafic one, which supplies metals and is characterized by progressive breakdown of sulfide [9]. The time variation in the fumarolic compositions at the Kusatsu-Shirane volcano (Japan) suggests a close relation between the activation of seismicity and the increase of magmatic components; the compositional differences among the fumarolic gases recorded at different sites have been interpreted as reflecting the existence of three hydrothermal reservoirs formed by distinct condensation mechanisms [10]. Radon monitoring series at two sites of the Campi Flegrei caldera (Italy) were analyzed, and the results were compared with the CO/CO₂ ratio, CO₂ concentration, fumarolic tremor, ground deformation and the cumulative number of days with earthquakes [11,12]; the well-correlated time variation of the independent signals suggests a general intensification of the volcanic crisis at the caldera and that the current unrest involves an area much larger than the one characterized by seismicity and intense hydrothermal activity.
At Campi Flegrei caldera (CFc), geochemical, seismic and ground deformation measurements were carried out by the monitoring network, which was set up to manage the volcanic risk in a densely populated area. The CFc is a volcanic complex formed by two large explosive events (ca. 39 and ca. 15 ka) followed by numerous minor eruptions [13][14][15]. CFc is still active, as testified by the last eruption (Monte Nuovo, 1538 AD), the bradyseismic episodes and the intense fumarolic and hot spring activity. The Solfatara is a maar-diatreme [16,17] located in the central sector of CFc (Figure 1), close to its most uplifted part (Pozzuoli area) and characterized by sustained hydrothermal activity. The crater is roughly elliptical in shape and is crossed by ring faults related to volcanic explosions and collapse of the crater center, regional faults striking mainly NW-SE and NE-SW, and faults related to gravity instability of the volcanic rims [16]. This pervasive fault network allows the hydrothermal fluids to migrate and rise, manifesting themselves in different forms and states. The surface expression of the Fangaia mud pool, the widespread diffuse soil degassing and the intense fumarolic activity reflect the lateral and vertical zoning of heterogeneity related to gas saturation, mixing between gas and condensed steam, fluid temperature and subsoil permeability [18][19][20][21]. The Fangaia mud pool is located in the most depressed zone of the crater (Figure 1), where the water table outcrops in a CO₂-rich liquid domain and permeability is reduced by gas-steam condensation and hydrothermal alteration [22]. The monitoring of diffuse volcanic degassing shows a significant expansion of the area releasing deep-sourced CO₂ from 2003 to 2016; the amount of diffusively released CO₂, recently assumed to be at least 2000-3000 t d⁻¹, matches typical values of persistently degassing active volcanoes [23]. Within the crater, the fumarolic activity is concentrated in its southeastern part, at the Bocca Grande and Bocca Nuova fumaroles (Figure 1). The study of the temperature and of the chemical and isotopic compositions of these fumaroles dates back to the early 1980s.
The main component of the fumaroles is H2O, followed by CO2, H2S, N2, H2, CH4, He, Ar, and CO; temperatures at Bocca Grande and Bocca Nuova are well above the boiling point, at 161 ± 2.7 °C and 146.0 ± 2.0 °C, respectively [24,25]. Studies of fumarolic chemical ratios, supported by physical-numerical simulations and geophysical investigations, have been carried out to gain insight into the thermodynamic conditions and into the origin and evolution of the current unrest. Based on physical and volatile-saturation models, it has been proposed that the magma could be approaching the critical degassing pressure [26]. Accordingly, it was observed that the post-2000 background seismicity has increased at the same rate as the ground uplift and as the concentration of the fumarolic gas species most sensitive to temperature, likely because of repeated injections of magmatic gases from depth, in accordance with thermo-fluid-dynamic modeling [27]. Moreover, the current unrest is considered compatible with the ascent of CO2-rich hot gases from the deep (~8 km) magma chamber into the hydrothermal system and aquifers under nearly isenthalpic conditions, excluding both the shallow arrival of new magma and steam condensation along the fumarolic flow path [28,29]. The current debate leaves room for the further investigation needed for a better understanding of caldera dynamics. In this paper, we analyze geochemical ratios and temperatures at the Bocca Nuova and Bocca Grande fumaroles in the time span December 1997-December 2015. Our goal is to investigate the inner dynamics of the observed data and link them to the processes driving the observed changes. In this study we focus on the analysis (by using decompositional, spectral and informational statistical methodologies) of the oscillatory dynamics of the observed data, the detection of their periodicities (from months to years) and the identification of their volcanic or non-volcanic drivers.

Data

The dataset includes records of the temperature and composition of the fluids discharged at the main fumarolic vents in Solfatara, Bocca Grande and Bocca Nuova (Figure 1), from December 1997 to December 2015; the temperature and composition of the Solfatara fumaroles were obtained from the literature [26]. We analyzed the monthly mean series of temperature and of the CO/CO2, CO2/H2O, H2S/CO2, He/CH4 and N2/He geochemical ratios; each series has a length of 217 samples. The percentage of missing data ranges from ~14% to ~17% at Bocca Grande and from ~15% to ~19% at Bocca Nuova. The geochemical ratios are very important since they may provide information on specific volcanic processes. In fact, among fumarolic reactive gas species, CO is the most sensitive to temperature, and CO/CO2 is considered the best gas geothermometer in hydrothermal systems [30]. The CO2/H2O ratio reflects the magmatic component and is a useful indicator of magma degassing [31]. Since hot, oxidized magmatic gases are rich in S species, almost entirely converted into H2S when entering the hydrothermal system, H2S may reflect changes in the input of S-rich magmatic fluids and variable steam separation [32]. In hydrothermal systems, the ratio between He, a species of magmatic origin, and the slowly reacting species CH4 (He/CH4) may provide information on the input of magmatic fluids into the hydrothermal system [25,33], while the ratio between the inert gas species (N2/He) can reflect the parameters of the primary source (e.g., type of magma, pressure) of the fluids [34].
All the geochemical ratio and temperature series are characterized by a quasi-oscillating component superimposed on a trend (Figure 2). CO/CO2, CO2/H2O and He/CH4 show an increasing trend, while H2S/CO2 and N2/He show a decreasing one. Comparing the same series at the two different fumaroles, the CO/CO2, CO2/H2O and N2/He ratios can be almost overlapped; the H2S/CO2 ratio and the temperature recorded at Bocca Grande are slightly shifted upward in comparison with the same observables recorded at Bocca Nuova. While until late 2007 (sample 120) the He/CH4 ratios at the two vents almost overlap, after 2007 the ratio at Bocca Nuova is slightly higher (Figure 2).

Lomb-Scargle Periodogram

The Lomb-Scargle periodogram is used to estimate the power spectral density of unevenly sampled series, like those presenting gaps or missing data [35]. Considering a time series x_k, where each value is observed at time t_k, for k = 1, ..., N, the Lomb-Scargle periodogram of the series x_k is defined as follows:

P_x(ω) = (1/2σ²) { [Σ_k (x_k − x̄) cos ω(t_k − τ)]² / Σ_k cos² ω(t_k − τ) + [Σ_k (x_k − x̄) sin ω(t_k − τ)]² / Σ_k sin² ω(t_k − τ) },

where x̄ and σ² are respectively the mean and the variance of x_k. The time offset τ is chosen as:

tan(2ωτ) = Σ_k sin(2ωt_k) / Σ_k cos(2ωt_k).

It was demonstrated that a peak in the periodogram occurs at the same frequency which minimizes the sum of squares of the residuals of the fit of a sine wave to the data [36]. Due to the presence of missing data in the analyzed series, we used the Lomb-Scargle (LS) periodogram (we calculated the LS periodogram by using the Matlab built-in function plomb, https://it.mathworks.com/help/signal/ref/plomb.html accessed on 29 March 2021).

Singular Spectrum Analysis (SSA)

The singular spectrum analysis (SSA) [37] is a non-parametric method that extracts interpretable components such as slowly varying trends, periodic or quasi-periodic oscillations and structureless noise from short and apparently noisy time series [38]. The SSA is a principal component analysis (PCA) technique, where the input vectors comprise a time series and phase-lagged copies of itself. The method basically consists of two main steps: decomposition and reconstruction. Let y_i be a real-valued time series, where i varies from 1 to N (the length of the signal). For a lag M, the Toeplitz lagged correlation matrix, with entries

c_j = (1/(N − j)) Σ_{i=1}^{N−j} y_i y_{i+j}, 0 ≤ j ≤ M − 1,

provides the eigenvalues λ_k and eigenvectors E_j^k, sorted in decreasing order of λ_k, with j and k varying from 1 to M. The kth principal component is defined by:

a_k(i) = Σ_{j=1}^{M} y_{i+j−1} E_j^k,

and the kth reconstructed component of the signal is then given by:

r_k(i) = (1/M) Σ_{j=1}^{M} a_k(i − j + 1) E_j^k,

with a suitably adapted normalization near the endpoints of the series. Since λ_k is the fraction of the total variance of the original series contained in the kth reconstructed component, sorting the λ_k in decreasing order also orders the corresponding reconstructed components by decreasing information content about the original series [39]. For time series with missing data, Schoellhamer [39] modified the calculation of the lagged autocorrelation and of the principal components so that any pair of data points containing a missing value is ignored and the sums are normalized by the number N_l of pairs with no missing data; this provides the eigenvalues and eigenvectors with no gaps. The computation of the kth principal component likewise ignores missing data points. The reconstruction step is performed as in standard SSA.
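As an illustration of the two spectral tools just described, the following minimal Python sketch (our addition, not the authors' code; the series is a synthetic placeholder) computes the variance-normalized Lomb-Scargle periodogram of a gappy monthly series together with the shuffle-surrogate 95% confidence curve used later in the Results; scipy's lombscargle plays the role of Matlab's plomb here.

```python
# Minimal sketch of the Lomb-Scargle analysis for a gappy monthly series.
import numpy as np
from scipy.signal import lombscargle

rng = np.random.default_rng(0)

# Hypothetical monthly series (217 samples, ~15% missing), e.g. a ratio.
t = np.arange(217, dtype=float)                          # months
y = 0.5 * np.sin(2 * np.pi * t / 36) + rng.normal(0, 0.3, t.size)
y[rng.choice(t.size, size=32, replace=False)] = np.nan   # gaps

mask = ~np.isnan(y)
t_obs, y_obs = t[mask], y[mask] - np.nanmean(y)          # de-mean

# Trial periods from 2 months up to half the record length.
periods = np.linspace(2.0, t[-1] / 2, 2000)              # months
omega = 2 * np.pi / periods                              # angular frequency

pgram = lombscargle(t_obs, y_obs, omega, normalize=True)

# 95% confidence curve from 1000 random shuffles of the observed values,
# mimicking the surrogate test described in the Results section.
surr = np.empty((1000, omega.size))
for i in range(1000):
    surr[i] = lombscargle(t_obs, rng.permutation(y_obs), omega,
                          normalize=True)
conf95 = np.percentile(surr, 95, axis=0)

significant = periods[pgram > conf95]
print("significant periods (months):", np.round(significant[:10], 1))
```

Angular frequencies are passed explicitly because scipy.signal.lombscargle expects ω = 2π/T rather than periods; the surrogate loop mirrors the 1000 random shuffles used for the significance test below.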
Fisher-Shannon Method

The Fisher-Shannon (FS) method allows one to analyze complex time series by jointly using the Fisher Information Measure (FIM) and the Shannon entropy (SE). The FIM and SE describe the probability density function of a series at a local and a global level, respectively [40], and are generally employed to study the complexity of non-stationary time series in terms of order and organization (FIM) and disorder and uncertainty (SE) [41]. The FIM and SE are calculated as follows:

FIM = ∫ [∂f(x)/∂x]² / f(x) dx,

SE = −∫ f(x) ln f(x) dx,

where f(x) is the probability density function of the series x. Because the SE can also be negative, the exponential transformation of the Shannon entropy is generally applied to obtain the so-called Shannon entropy power N_X, which is commonly utilized in statistical analysis:

N_X = (1/(2πe)) exp(2·SE).

According to the isoperimetric inequality FIM · N_X ≥ D [42], where D is the dimension of the space (1 in the case of time series), the FIM and the N_X are interrelated; equality holds in the case of Gaussian processes. Due to the isoperimetric inequality, a better description of the dynamics of a time series can be obtained by using both measures jointly. It was also shown in Martin et al. (1999) [43] that the FIM allows the detection of some non-stationary behavior in situations where the Shannon entropy shows a limited detection power. As the calculation of FIM and N_X depends on the probability density function, attention has to be paid to its accurate estimation. In this study, we used the kernel-based approach to estimate f(x), which has been shown to perform better than the discrete-based approach in calculating the values of FIM and SE for Gaussian distributed series [44]. The kernel-based approach for estimating the probability density function is based on the kernel density estimator technique [45,46]:

f̂(x) = (1/(Mb)) Σ_{i=1}^{M} K((x − x_i)/b),

where b refers to the bandwidth, M represents the number of data, and K(u) is a continuous, non-negative and symmetric kernel function that satisfies the following two conditions: K(u) ≥ 0 and ∫ K(u) du = 1. The estimation of f(x) uses an optimized integration method [43] that is based on Troudi et al.'s [47] and Raykar and Duraiswami's [48] algorithms, where a Gaussian kernel with zero mean and unit variance is adopted:

K(u) = (1/√(2π)) exp(−u²/2).

The isoperimetric inequality enables the application of the Fisher-Shannon (FS) information plane to explore the dynamics of a series [49], in which the coordinate axes are N_X and FIM. For scalar signals, the line FIM · N_X = 1 divides the FS information plane into two parts, and each signal is represented by a point that lies exclusively in the half-space FIM · N_X > 1.
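To complement the definitions above, here is a minimal numerical sketch (our addition; it assumes a Gaussian KDE with scipy's default bandwidth instead of the optimized integrated method of Refs. [47,48]) of the FIM and the Shannon entropy power:

```python
# Minimal sketch of the Fisher-Shannon measures via a Gaussian KDE.
import numpy as np
from scipy.stats import gaussian_kde

def fisher_shannon(x, n_grid=2048):
    """Return (FIM, N_X) for a 1-D sample x; NaNs are ignored."""
    x = np.asarray(x, float)
    x = x[~np.isnan(x)]
    kde = gaussian_kde(x)                        # kernel density estimate
    pad = 4 * x.std()
    grid = np.linspace(x.min() - pad, x.max() + pad, n_grid)
    f = np.clip(kde(grid), 1e-300, None)         # avoid log(0), division by 0
    df = np.gradient(f, grid)

    fim = np.trapz(df**2 / f, grid)              # Fisher Information Measure
    se = -np.trapz(f * np.log(f), grid)          # Shannon (differential) entropy
    n_x = np.exp(2 * se) / (2 * np.pi * np.e)    # Shannon entropy power
    return fim, n_x

# Sanity check: for a Gaussian the isoperimetric bound FIM * N_X >= 1
# should hold with near-equality.
rng = np.random.default_rng(1)
fim, n_x = fisher_shannon(rng.normal(0, 2, 5000))
print(f"FIM*N_X = {fim * n_x:.3f} (expected ~1 for a Gaussian)")
```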
Results

We applied the Lomb-Scargle periodogram (LSP) to investigate the spectral properties of the investigated time series (Figure 3). For each spectrum we also calculated its 95% confidence curve (red), obtained as the 95th percentile of the distribution, at each frequency, of the LSP values of 1000 surrogates generated as random shuffles of the original series; thus, all the LSP peaks of the original series above the 95% confidence curve can be considered significantly non-random. This preliminary analysis shows that for most of the time series the spectral content is mainly concentrated in the region of very low frequencies, which suggests the dominance of the trend. The dominance of the trend makes the detection of the oscillatory components difficult. In order to better explore the time dynamics of the investigated series, we applied the SSA. The SSA requires the selection of a proper window length M. For relatively short series modulated by an oscillation of period T, the window length M should be proportional to T [50] and satisfy Khan and Poskitt's criterion [51]. In our case, we selected M = 36, which allows the extraction of the oscillatory components without losing much information on long-term fluctuations. We applied Schoellhamer's SSA algorithm [39] with a fraction of missing data points f = 0.5 within the window size M = 36. Thus, each series was decomposed into 36 independent components, the first one (RC1) representing the trend. The contribution of the first component (RC1) to the variance of each time series, given by its corresponding eigenvalue, is reported in Table 1. Unlike for the temperature series, the first reconstructed components of the geochemical ratios explain most of the variance of the series. Figure 4 shows for each series the trend (RC1) and the residual (RS), obtained by removing the trend from the original series. At both vents, for CO/CO2 and CO2/H2O the trend has an increasing behavior; for He/CH4, after a slight increase until early 2008 (125th month), the trend seems to be almost stable; for H2S/CO2 and N2/He the trend is characterized by a decreasing behavior. The trend of temperature at Bocca Grande and Bocca Nuova shows opposite behaviors. For all the series the RS is modulated by an oscillatory behavior. Figure 5 shows the LSP along with the 95% significance levels of the RSs. The CO/CO2 and He/CH4 ratios display significant periodicities ranging from 12 to 79 months, at both the Bocca Grande and Bocca Nuova vents. The RS of the temperature recorded at Bocca Nuova shows a periodicity of 7 months, absent in the RS of the temperature recorded at Bocca Grande, while the 32-month periodicity is found only at Bocca Grande. The RSs of CO2/H2O and H2S/CO2 are characterized by the presence of medium- to long-term periodicities ranging from 38 to 87 months. The LSP of the RS of the N2/He ratio is very different from the others: the periodogram mainly occupies the low-frequency band and only very short (2.5- and 6-month) periodicities are significant. The order/disorder pattern of the RSs (Figure 6) shows that the dominance of order or organization in the series is not strictly linked with the vent; for instance, some series recorded at Bocca Grande (N2/He, He/CH4) are characterized by a larger degree of disorder than those recorded at Bocca Nuova. However, the FS plane shows that the series are almost aligned, with N2/He occupying the top-left part of the plane and temperature occupying the bottom-right part; this indicates that the N2/He residual series at both vents is characterized by higher organization and lower disorder, while the temperature is characterized by larger disorder and lower organization. The other series are placed in the middle between the positions of N2/He and temperature.

Discussion and Conclusions

The trend, which represents the very low frequency dynamics of the series, was separated from the oscillatory residual by the SSA. Concerning the temperature, the trend does not account for most of the variability of the original series (30-39%), and this indicates that short- to medium-term processes mostly govern the time dynamics of temperature. The trend of the geochemical ratios, instead, accounts for most of their variability (from 71% to 87%), indicating a clear dominance of very low frequency fluctuations. This result suggests that the geochemical ratios mainly reflect the direction of deep large-scale processes, which may have a magmatic or hydrothermal origin (e.g., the interaction between the input of magmatic gases and the recharge-discharge dynamics of the reservoir).
This result is in agreement with [52], who developed numerical models aimed at evaluating the dynamics of the coupled water-magmatic system at CFc; the authors found decennial cycles, likely associated with heating at depth interacting with the recharge-discharge dynamics of the reservoir, where faults and permeability play a crucial role. Consistently, the pattern of fumarolic effluents and geophysical signals at CFc recorded during the period 2000-2008 was found to be interpretable as an increment of the relative amount of magmatic fluids hosted by the hydrothermal system, favored by the opening of an easy ascent pathway toward the brittle domain [24]. The pattern of geochemical records after 2005 was found to be consistent with a state of unrest related to the hydrothermal activity [53]. The long-term decreasing trend of relative velocity variations during 2010-2014, derived from noise-based seismic monitoring, has been interpreted (in light of the increased release of H2O-rich magmatic gas) as a gradual heating of the hydrothermal system, compatible with the rheological change of the upper 4 km of crust from elastic to plastic since the 1982-1984 bradyseismic crisis, inferred from the study of the temporal and spatial evolution of the 1982-2014 seismicity [54,55]. The analysis of the RSs has revealed how rich the time dynamics of the investigated signals is. The LSPs of CO2/H2O and H2S/CO2 show a similar periodic structure, with most of the power concentrated at medium-long periods (36-38 months for both, and 67 and 87 months for CO2/H2O and H2S/CO2, respectively). CO/CO2, He/CH4 and temperature share a more complex periodic structure, characterized by the presence of several significant periodicities from 7 to 79 months. Since magmatic processes can have characteristic periods of the order of several months [56], CO2/H2O and H2S/CO2 may be considered to track the timing of deep magmatic driving processes, while CO/CO2, He/CH4 and temperature could allow the recognition of recurrent processes occurring in the hydrothermal system that are also influenced by non-magmatic processes. The periodicities of 62-87 months commonly found in the series may reflect the processes that drive the ground deformation episodes widely reported in the CFc literature [57,58]. In fact, after the bradyseismic crisis that occurred in 1982-84, the pattern of ground displacement is made up of a deflation phase which reversed, in 2005, into an inflation pattern that is still ongoing. Small-scale episodes of faster uplift, named mini-uplifts, are superimposed on the general trend and have a recurrence time of 5-6 years [29,59]. Ground uplift episodes have been ascribed to hydrothermal circulation, to magma intrusion at shallow depth, or to repeated injection of magmatic fluids into the hydrothermal system [25,[60][61][62]. Chiodini et al. (2003) [61] analyzed correlations of fumarolic ratios with ground deformation and performed numerical modeling accounting for periodic injections of hot CO2-rich fluids at the base of the hydrothermal system; the good fit between the observed and simulated CO2/H2O ratio, together with the good correspondence to the pattern of the observed uplift episodes, suggested that periodic intense magmatic degassing may give rise to both the changes in the geochemical ratios and the ground deformation [61].
The periodicity of about 67 months retrieved in CO2/H2O is in good agreement with the observational evidence of the recurrence of mini-uplift episodes and with the findings of the studies mentioned above. Similar periodicities are also present in the residuals of CO/CO2, He/CH4 and temperature, though less powerful. As widely reported, the geochemical and geophysical signatures at CFc also depend on rock properties, as shown by thermo-fluid dynamical modeling at CFc. Indeed, several numerical models show that heterogeneities in hydrological and mechanical properties may influence the timing and the amplitude of the signals' changes through time [63][64][65]. The interplay between magmatic gas rising from depth and the recharge-discharge dynamics of the reservoir gives rise to periodic geophysical and chemical signatures [52]. Within this context, the relationship found between the NAO (North Atlantic Oscillation) index and the groundwater recharge of the carbonate karst aquifers of the Campania region, obtained by analyzing data from 1921 to 2010 [66], is noteworthy; in particular, the main periodogram peaks of the regional normalized indexes of precipitation and spring discharges and those of the NAO match well at 2-3, 5 and 45 years. The periodicities in the range of 29-38 months of the CO2/H2O, H2S/CO2, CO/CO2 and He/CH4 residuals at both Bocca Grande and Bocca Nuova, and of the temperature at Bocca Grande, match well the 2-3-year NAO-index periodicities found in the Campania region. This result could indicate that the geodynamics of the fumaroles may be influenced by the hydrological cycle of large regional aquifers, which in turn may be linked to the NAO. Although it is reasonable to assume a relationship between the fumaroles' activity and groundwater recharge processes, this finding deserves further investigation. The short-term periodicities found in the CO/CO2, He/CH4 and temperature residuals indicate that these variables may reflect the influence of seasonal external forces. Short-term periodicities have also been observed in the seismic activity at CFc. The analysis of the earthquakes that occurred in the 2005-2016 observational interval showed a cyclic behavior of the clustered seismicity, ranging from semidiurnal to annual [67]. Notably, the authors found a correlation with rainfall series, observing a higher occurrence of energetic swarms in the wet season. The possible influence of seasonal rainfall on the velocity variations derived from seismic noise monitoring at CFc from January 2010 to December 2014 has also been considered [54]. As reported by the authors, the annual periodicity of the velocity variations may be ascribed to changes in the permeability of the shallowest crust influenced by rainfall. Moreover, by applying the independent component analysis technique to time series recorded at 16 GPS stations during the period 2001-2007, periodicities up to one year, attributable to atmospheric and oceanic loading processes, were found [68]. Thus, it is reasonable to consider temperature, CO/CO2 and He/CH4 as the variables that most reflect the effects of exogenous processes. The temperature at the two fumaroles (Bocca Grande and Bocca Nuova) shows different trends; although the two fumaroles are very close (~25 m apart), the results indicate a warming trend at Bocca Grande and a cooling trend at Bocca Nuova.
Furthermore, assuming that long- to medium-term periodicities are related to deep- to medium-rooted processes and that short-term periodicities are related to shallow processes, Bocca Grande would better reflect magmatic and hydrothermal processes interacting with the deep groundwater recharge. In fact, a recent study showed that the different behavior of the trends could be due to the different geometry/properties of the fumarole structures and the inherent fluid flow; the cooling trend observed at Bocca Nuova could be produced by the mixing between gas and condensed steam, while Bocca Grande seems not to be affected by shallow water mixing [20]. The results obtained by applying the Fisher-Shannon method shed light on the relationship between the organization/uncertainty of the series and the scales of the different processes governing their dynamics. As a general pattern, we observe that the entropy of the series, which informs about the uncertainty and loss of order in the series, seems to be correlated with the number of modes (significant peaks in the LSP) that characterize each series (Figures 5 and 6). The LSP of the residual of N2/He is mainly concentrated in the very low frequency region and characterized by a single dominant mode of variability, and it is thus the best organized. CO2/H2O, He/CH4 and H2S/CO2 show a few significant medium-long periods (36-78 months), thus displaying a higher degree of organization than CO/CO2 and temperature, which are, instead, characterized by a wide range of significant periodicities (from short to long) of nearly equal power. In other words, the CO/CO2 and temperature residuals seem concomitantly affected by different processes acting on different scales, and thus their dynamics is characterized by a higher entropy and a greater uncertainty. The two vents Bocca Grande and Bocca Nuova show the main difference in the behavior of the temperature residuals: the temperature residual at Bocca Nuova, which is found to be more influenced by short-term processes, is characterized by a higher degree of disorder, likely indicating that the shallower processes taking place at Bocca Nuova make its residual temperature more unpredictable. In this study, the combined approach based on spectral and informational analysis contributes to the advancement of the knowledge of the dominant driving mechanisms underlying the time dynamics of volcanic signals recorded at fumaroles, an approach that might also be extended to other volcanic settings.
Simple cyclic covers of the plane and Seshadri constants of some general hypersurfaces in weighted projective space

Let $X$ be a general hypersurface of degree $md$ in the weighted projective space with weights $1,1,1,m$, for some $d\geq 2$ and $m\geq 3$. We prove that the Seshadri constant of the ample generator of the N\'eron-Severi space at a general point $x\in X$ lies in the interval $\left[\sqrt{d}- \frac d m, \sqrt{d}\right]$ and thus approaches the possibly irrational number $\sqrt d$ as $m$ grows.

Introduction

Seshadri constants measure local positivity of line bundles. In this capacity they have close ties with two major conjectures in the theory of smooth complex surfaces: the Segre-Harbourne-Gimigliano-Hirschowitz conjecture [14,11,15,16,23] and the Bounded Negativity Conjecture [3,4]. A big line bundle is locally positive at a point if the global sections of a suitable multiple of it embed an open neighbourhood of the point into projective space [6,17]. If this is the case then one can try to measure how many 'reasonably different' global sections there are around the reference point. One way to quantify local positivity is via Seshadri constants, defined for ample line bundles by Demailly [10] during his first attempt at Fujita's global generation conjecture. For a thorough introduction and references we refer the reader to [2] and [19, Chapter 5]. Rationality of Seshadri constants is a long-standing open question, and although it is widely assumed to be false, there is no counterexample even in higher dimensions. As exemplified by the volume or asymptotic cohomology of a line bundle [5] or by Newton-Okounkov bodies [18], asymptotic invariants of linear series have a tendency to be rational on surfaces and wildly irrational in higher dimensions. The existing evidence in the case of Seshadri constants is very mixed. On the one hand, all computed cases of Seshadri constants are rational, and this includes all abelian surfaces, some of which have round nef cones. At the same time there is no structural result pointing towards rationality, although the rationality of Seshadri constants would disprove Nagata's conjecture [11], which is more or less unanimously expected to be true. Seshadri constants on general surfaces of degree d ≥ 5 in projective space are expected to be integral multiples of √d, hence often irrational. There is closely related work centred on Szemberg's conjecture [13,24] on primitive solutions to Pell's equation and optimal lower bounds on Seshadri constants. Here we initiate the study of a class of surfaces that has so far avoided being studied from this point of view.

Theorem A - Fix integers d ≥ 2 and m ≥ 3. Let X be a smooth hypersurface of degree md in P(1, 1, 1, m) not passing through the singular point. Let L = O_X(1), and x a point on X. If the pair (X, x) is general then

(*) √d − d/m ≤ ε(L; x) ≤ √d.

In particular, for every integer d and every δ > 0 there exists a point x on a polarised surface (X, L) such that L² = d and ε(L; x) ∈ [√d − δ, √d].

The main step is to prove the estimate in the special case when B is a very general plane curve of degree md, π : X = X_{d,m} → P² the simple cyclic d-uple cover branched over B, and L = π*O_{P²}(1). One can then deduce (*) directly for a very general point of X using a result of Bauer [2] and the following generalisation of a result of Cox [9], which might be of independent interest.

Theorem B - A very general simple cyclic cover X of the plane is a smooth surface with Picard number ρ(X) = 1.
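To make the statement concrete, here is a small worked illustration (ours, not part of the original text) of how the interval in Theorem A closes in on √d:

```latex
% For d = 2 the estimate (*) reads
\[
  \sqrt{2} - \tfrac{2}{m} \;\le\; \varepsilon(L;x) \;\le\; \sqrt{2}.
\]
% For m = 3 this gives $\varepsilon(L;x) \in [0.7475\ldots,\ 1.4142\ldots]$,
% while already for m = 100 the interval $[1.3942\ldots,\ 1.4142\ldots]$
% pins $\varepsilon(L;x)$ down to within $0.02$ of the irrational number $\sqrt{2}$.
```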
Seshadri constants and submaximal curves

Here we collect the necessary material about Seshadri constants on surfaces. Good references for this topic are [2] and [19, Chapter 5]. Let (X, L) be a polarised algebraic surface and C ⊂ X a curve containing a point x. We then define

ε(L; C; x) = (L · C) / mult_x C.

The Seshadri constant ε(X, L; x) (often just ε(L; x)) of L at the point x ∈ X is defined as

ε(L; x) = inf { (L · C) / mult_x C : C an irreducible curve through x }.

Equivalently, upon writing π : X̃ → X for the blowing-up of X at x with exceptional divisor E, one has

(1.1) ε(L; x) = sup { t ≥ 0 : π*L − tE is nef }.

The definition gives rise to the following quick upper bound:

ε(L; x) ≤ √(L²).

It is known that if the inequality is strict then there must exist a submaximal curve C ⊆ X, that is, a curve through x with (L · C)/mult_x C < √(L²). Equivalently, submaximal means that the proper transform C̃ of C on X̃ satisfies ((π*L − ε(L; x)E) · C̃) = 0. A couple of words about the behaviour of ε(L; x) in its two arguments. As can be seen from its definition as a nef threshold on the blow-up, ε(L; x) only depends on the numerical equivalence class of L, and ε(mL; x) = m · ε(L; x) for all positive integers m. In particular, if ρ(X) = 1, then ε(L; x) for the ample generator L of X determines all Seshadri constants at x ∈ X. By [21] Seshadri constants are upper-semicontinuous in x. Consequently, for given L, the numbers ε(L; x) are constant for very general x ∈ X. It is customary to write ε(L; 1) for this common value. If ρ(X) = 1 then we will denote by ε(X; 1) the Seshadri constant of the ample generator of X at a very general point of X. In this case we will call ε(X; 1) the Seshadri constant of the surface X. Usually one is especially interested in irreducible submaximal curves, but it is sometimes convenient to allow C to be any curve containing x. Let H be the class of a line in the plane. The proof of the following is an elementary computation using the projection formula.

Lemma 1.2 - Let π : X → P² be a d-uple plane (that is, a finite surjective morphism of degree d), with polarisation L = π*H. If C is a plane curve then π*C is not submaximal for any point on X.

A key ingredient in our argument is the following result of Thomas Bauer, which controls the degree of a submaximal curve.

Simple cyclic multiple planes

A d-uple plane is a finite map π : X → P² of degree d ≥ 2. In general, the structure of such maps is quite complicated if d > 2, but we only discuss the following simple construction, where X arises by taking the d-th root of a section of a line bundle. More precisely, let m ≥ 1 and B be a plane curve of degree md defined by an equation f. Then a simple cyclic cover X is a hypersurface of degree md in P(1, 1, 1, m), with equation

(2.1) w^d + f(x, y, z) = 0.

Note that π : X → P² is a Galois ramified cover with group Z/dZ. Decomposing π_*O_X into eigensheaves with respect to this action we get a decomposition

π_*O_X = ⊕_{i=0}^{d−1} O_{P²}(−im),

first as an O_{P²}-module; the second datum carried by this decomposition is the algebra structure, which is determined by f. A local computation shows that X is smooth if and only if B is smooth. For a more complex-analytic point of view on simple cyclic covers see [1, Ch. 1]; for a more general theory of abelian covers compare [22].

Definition 2.2 - A simple cyclic d-uple plane is a ramified cover π : X = X_{d,m} → P² of degree d, branched over a curve B of degree dm arising as above. We say X is very general if B is very general.

For lack of a convenient reference we sketch a proof of the following. This is proved for the full family of weighted hypersurfaces in [9] and we follow the proof very closely. The case of degree 2 is covered in [7]. We will consider d-uple planes X_{d,m} as hypersurfaces in P(1, 1, 1, m) with equation (2.1).
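As a quick sanity check (our addition, using only the construction above and the projection formula), the self-intersection of L = π*O_{P²}(1) on X_{d,m} is indeed the d appearing in Theorem A:

```latex
% On the weighted projective space the top self-intersection of O(1) is the
% reciprocal of the product of the weights, so for X of degree md:
\[
  L^2 \;=\; \deg X \cdot \mathcal{O}_{\mathbb{P}(1,1,1,m)}(1)^3
      \;=\; md \cdot \frac{1}{1\cdot 1\cdot 1\cdot m} \;=\; d.
\]
% Equivalently, $\pi : X \to \mathbb{P}^2$ has degree $d$ and $L = \pi^*H$, hence
\[
  L^2 \;=\; (\pi^*H \cdot \pi^*H) \;=\; d\,(H\cdot H) \;=\; d.
\]
```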
As explained in the first paragraph of [9, Proof of Theorem] (see also [8, III (a)]), in order to verify that ρ(X_{d,m}) = 1 for the general hypersurface, it suffices to check that a certain map is surjective, where H¹(X, Θ_X)₀ stands for the image of H¹(X, Θ_X) under the Kodaira-Spencer map of the family of degree md hypersurfaces in P(1, 1, 1, m) with equation (2.1). The case of double covers is treated in [7]. As far as the link between the various cohomology spaces occurring above and the graded polynomial ring S̃ = C[x, y, z, w] goes, we use the standard identifications; for proofs see the references in [9].

Proof. Consider the weighted polynomial rings S' = C[x, y, z] ⊂ S̃ = C[x, y, z, w], where x, y, z have degree 1 and w has degree m. Let f be the equation defining B, so that X is cut out by F = w^d + f ∈ S̃. Consider the Milnor algebra of F, which is a homomorphic image of T = S̃/(w^{d−1}). Let V be the tangent space to the family of d-uple planes at X. Then, as in [9], the result follows if the composition of the Kodaira-Spencer map κ with the relevant multiplication map is surjective. Considering d-uple planes as particular hypersurfaces and arguing as in [9], this multiplication map fits into a commutative diagram. Now note that as a graded S'-module

T_k = ⊕_{i=0}^{d−2} w^i · S'_{k−im},

and all summands on the right-hand side are non-zero as soon as the relevant degrees k − im are non-negative. Since the multiplication maps in the standard polynomial ring S' are surjective, the surjectivity of the first row of the diagram, and hence our claim, follows if m ≥ 3.

We would like to know that the Picard group is actually generated by L, at least for large m, but we do not know a convenient reference for this. For our purpose the following simple observation is enough.

Corollary 2.4 - Let π : X → P² be a very general simple cyclic d-uple plane and L = π*O_{P²}(1). If C is any curve on X then there exists a k such that dC ∈ |kL|.

Proof. Let σ be a generator of the Galois group of π. Since the Picard rank is 1, the automorphisms act trivially on the Picard group and we have dC ∼ Σ_{j=1}^{d} (σ^j)*C = π*π_*C ∈ |π*O_{P²}(k)| = |kL| for some k.

Proof of Theorem A

We start by proving the estimate (*) for a very general simple cyclic d-uple plane π : X → P² branched over a curve of degree md. Assume C is an irreducible curve which is submaximal for a very general point x ∈ X. By Corollary 2.4 we have dC ∈ |kL|, and dC is submaximal as well. Then Bauer's bound, Theorem 1.3, yields an inequality on the degree which, via the projection formula, implies that every curve of degree at most m is the pullback of a plane curve. By Lemma 1.2 the curve dC is then not submaximal - a contradiction. Now consider the family X ⊂ |O_{P(1,1,1,m)}(md)| of smooth hypersurfaces of degree md in P(1, 1, 1, m) and the set U of pairs (Y, y) with y a point of Y ∈ X. By [20, Corollary 4] there is an open subset U' ⊂ U containing (X, x) and such that for every (Y, y) ∈ U' the line bundle L|_Y − ε(L|_X; x)E_y is nef. Using (1.1) again, we see that ε(L|_X; x) ≤ ε(L|_Y; y). Therefore the estimate (*) holds for all pairs in the open subset U'. This concludes the proof of Theorem A.
Single-atom electron paramagnetic resonance in a scanning tunneling microscope driven by a radiofrequency antenna at 4 K

Combining electron paramagnetic resonance (EPR) with scanning tunneling microscopy (STM) enables detailed insight into the interactions and magnetic properties of single atoms on surfaces. A requirement for EPR-STM is the efficient coupling of microwave excitations to the tunnel junction. Here, we achieve a coupling efficiency of the order of unity by using a radiofrequency antenna placed parallel to the STM tip, which we interpret using a simple capacitive-coupling model. We further demonstrate the possibility to perform EPR-STM routinely above 4 K using amplitude as well as frequency modulation of the radiofrequency excitation. We directly compare different acquisition modes on hydrogenated Ti atoms and highlight the advantages of frequency and magnetic field sweeps, as well as of amplitude and frequency modulation, in order to maximize the EPR signal. The possibility to tune the microwave-excitation scheme and to perform EPR-STM at relatively high temperature and high power opens this technique to a broad range of experiments, ranging from pulsed EPR spectroscopy to coherent spin manipulation of single-atom ensembles.

I. INTRODUCTION

Scanning tunneling microscopy (STM) is a unique technique to achieve subatomic spatial resolution with simultaneous local spectroscopic information [1]. The demonstration of spin sensitivity in STM experiments [2][3][4] enabled the study of single magnetic atoms on a surface and their interactions [5][6][7][8]. Despite these great advances, the energy resolution in tunneling-spectroscopy modes remains limited by the thermal energy broadening of the electronic tip and sample states (>1 meV at 4 K). This broadening limits the precise sensing of low-energy excitations, e.g., spin-flip excitations, which motivated efforts to reduce the STM operational temperature to the mK range [9][10][11][12] and to apply large magnetic fields to obtain the required sensitivity [13]. Another promising way to overcome the thermally limited energy resolution is to employ a resonance technique. In this regard, electron paramagnetic resonance (EPR) is an established method [14] that has found diverse applications such as the identification of free radicals in chemical reactions [15], the detection of spin-labeled molecules in biological systems [16], and the study of molecular nanomagnets [17]. Following early attempts [18,19], Baumann et al. [20] were the first to convincingly demonstrate EPR of single atoms on a surface using STM. In these experiments [20][21][22], the authors studied single Fe, Ti, and Cu atoms on a thin insulating MgO layer grown on Ag(100) (see Fig. 1a). In EPR-STM experiments, an external magnetic field B_ext splits the energy of the states of the magnetic atom under investigation. A resonant microwave voltage that is fed through the tip wire to the STM tunnel junction induces transitions between these Zeeman-split states. Upon resonance, the spin-dependent conductivity of the tunnel junction varies, which is sensed by a spin-polarized STM tip through a magneto-resistive effect [20,23,24]. Given the impressive results listed above, it is highly desirable to make EPR-STM accessible to a broader range of experiments. Crucial steps in this direction are maximizing the signal-to-noise ratio in EPR-STM and demonstrating routine operation at liquid-helium temperature and above.
To achieve the first goal, the EPR excitation needs to be tailored to reduce the noise and maximize the EPR signal, which involves an optimized coupling of the radiofrequency (RF) voltage to the tunnel junction. Moreover, the efficiency of different EPR detection modes, such as frequency and magnetic field sweeps as well as amplitude and frequency modulation of the RF excitation, should be compared. Eventually, with stronger EPR excitation, the Rabi rate (i.e., the rate at which the driven system undergoes population inversion) could overcome the spin-decoherence rate [33]. This strong excitation might thus enable coherent spin manipulation with EPR-STM, opening the way to performing quantum computation experiments with single atoms on surfaces [35]. In previous work, EPR-STM was achieved by amplitude modulation of the RF excitation. Furthermore, the RF voltage required to drive EPR was fed directly through the STM tip into the tunnel junction by combining the RF with the DC bias voltage outside the STM cryostat using a bias tee [34,36,37]. This approach limits the RF transmission efficiency [34,36] and the signal-to-noise ratio, and it involves complex modifications of the STM wiring. Here, we implement and characterize an indirect RF coupling via an RF antenna close to the STM tip (see Fig. 1) [38,39] that results in significantly higher RF voltages across the tunnel junction than reported so far for frequencies up to 40 GHz. Further, we analyze the RF-coupling scheme in the tunnel junction and show that the RF antenna reaches unexpectedly high coupling efficiencies, of the order of unity, which we explain using a simple equivalent circuit model. In the second part, we show that our indirect RF-coupling scheme can drive EPR of single hydrogenated Ti atoms on a surface using a broad range of power (see Fig. 1). Remarkably, even at temperatures above 4 K an energy resolution of 1 μeV can be achieved. A direct comparison of the two EPR sweep modes (sweep of the radiofrequency f or of the external field B_ext) highlights the advantage of sweeping B_ext, which significantly reduces the acquisition time of an EPR spectrum. We find that both schemes yield consistent magnetic moments of the investigated species. As far as we know, this is the first side-by-side comparison of these sweep modes with the same STM microtip and the same EPR species. Finally, we extend the repertoire of EPR-STM by directly exploring and comparing two RF-modulation schemes, namely amplitude and frequency modulation. We point out how the right choice of modulation can further maximize the signal-to-noise ratio.

A. Setup

All experiments were performed with a Joule-Thomson STM (Specs GmbH) equipped with a superconducting magnet having a maximum out-of-plane magnetic field of 3 T. To upgrade the STM for EPR capabilities, we installed an RF-transmission line consisting of a semi-rigid coaxial cable (SC-119/50-SB-B with K-type connectors assembled by Coax Co., Ltd, length of 2.5 m) going from 300 K to the bottom of the He vessel and a flexible coaxial cable (Part No. 1070551 from Elspec, SMPM connector on one side, length of 0.3 m) going from the liquid-He vessel to the STM head. The RF cables were thermally anchored at different positions in the cryostat, such that no significant change of the liquid-He hold time was detected after the upgrade. The RF antenna forms the final part of the RF-transmission line.
The antenna is made from the 5-mm-long unshielded inner conductor of the flexible coaxial cable, which is positioned as closely (~5 mm away) and as parallel (angle of ~30°) as possible to the STM tip (see Fig. 1). This geometry aims at maximizing the capacitive coupling between the STM tip and the RF antenna, which we discuss in more detail in Sect. III.B.

B. Sample and tip preparation

A clean Ag(100) surface was prepared by repeated cycles of sputtering (for 10 min in an Ar atmosphere, sputtering current of 30 μA) and annealing (for 10 min, sample temperature of 800 K). Mg was evaporated from a resistively heated crucible held at a temperature of 653 K at a growth rate of 0.2 Å/min, during which the sample was kept at a temperature of 700 K in an O2 atmosphere (1 × 10⁻⁶ mbar). After 20 min of slow cool-down in ultra-high vacuum (5 × 10⁻¹⁰ mbar), the sample was inserted directly into the STM at 4.5 K [40][41][42]. In this way, we obtain rectangular MgO islands that are tens of nanometers in size (see Fig. 2a). The MgO thickness is characterized by point-contact measurements [42]. All single-atom experiments were performed on double-layer MgO, which is the most abundant thickness on our sample. We deposited Fe and Ti atoms in situ with an electron-beam evaporator with the sample kept below 10 K (see Fig. 2b). Fe atoms were deposited to prepare spin-polarized STM tips [20]. It is known that residual hydrogen gas in the vacuum chamber tends to hydrogenate the Ti atoms, forming TiH [43]. dI/dV spectra (see Fig. 2c) were obtained at constant height with a (root-mean-square) bias-voltage modulation of 1.5 mV at 971 Hz. The STM tip was made of a chemically etched W wire that we indented into the clean Ag substrate until an atomically sharp STM-tip apex was obtained. Spin-polarized STM tips were prepared by repeatedly picking up Fe atoms from the surface (on the order of 10); the resulting spin polarization was confirmed by conductivity spectra on TiH_O showing a pronounced step around zero DC bias that is otherwise absent (see inset in Fig. 2c) [21].

C. Excitation and detection of EPR

The excitation mechanism of EPR-STM is believed to rely on a piezoelectric coupling of the RF electric field inside the tunnel junction to the EPR species, leading to a GHz mechanical oscillation in the inhomogeneous exchange field caused by the nearby magnetic STM tip. The resulting change in the transverse effective magnetic field induces transitions between the Zeeman states split by an external magnetic field [25,26]. Upon resonance, the average Zeeman-state population changes and the junction resistance oscillates at the driving frequency. The former is equivalent to a decrease of the longitudinal magnetization of the probed atom, which is sensed by a DC readout current through the magnetic tip as a tunnel magnetoresistance effect. The latter is due to the precession of the transverse magnetization, which is sensed by the homodyne current resulting from the mixing of the AC junction conductance and the driving RF voltage [24,44]. As outlined in more detail in Ref. [24], this results in an EPR line shape that contains symmetric (DC and homodyne detection) and asymmetric (only homodyne detection) components, which can be described by a Fano line shape [24,44].
Other excitation mechanisms have been proposed [25][26][27][28], including the RF magnetic field due to the tunneling and displacement currents associated with the RF electric field between tip and sample, an RF modulation of the tunnel barrier in a cotunneling picture, a spin-transfer torque induced by the RF tunnel current, and the modulation of the dipolar field between the magnetic tip and the probed atom. Such mechanisms are considered too weak to be effective; however, more work is required in order to quantify them and disentangle one from the other. In our experiments, we modulate the amplitude of the microwave excitation voltage at 971 Hz with a square wave and a modulation depth of 100%, and detect the EPR-induced change of the tunneling current using a lock-in amplifier (LIA). Alternatively, we modulate the frequency of the microwave excitation voltage at 971 Hz with a square wave and a bandwidth of 32 MHz. The STM feedback parameters are the same for constant-current images, RF-transmission characterization, and EPR-STM measurements. An atom-tracking module was used during all measurements on single atoms, which gives a spatial averaging over a circle with a radius of 10 pm set by the tracking-scheme routine. In this work, the operational temperature of the STM was restricted to the liquid-helium bath temperature of 4.5 K, which increased up to 5 K upon application of an RF signal at an RF-generator output power of P_G ≤ 30 dBm.

A. Transfer function

The coupling of the RF antenna to the tunnel junction is characterized, following the scheme presented in Ref. [36], by measuring the RF transmission function T(f) = 10 log(U_RF/U_G), where f is the frequency, U_RF is the RF-voltage amplitude at the tunnel junction, and U_G is the output voltage of the RF generator. This definition of T has a more direct interpretation than the ones previously adopted in EPR-STM studies, which were based either on U_RF relative to the generator power P_G [36] or on the RF power at the tunnel junction relative to P_G, assuming a tunnel-junction impedance of 50 Ω [34,37]. To simplify comparison with previous works, we also plot in Fig. 3a the power-based transmission related to the latter definition. We obtain U_G by converting the RF-generator output power P_G using an impedance at its output of 50 Ω. Note that here all RF-voltage amplitudes and RF powers (modulated and unmodulated) refer to zero-to-peak values. Details about the calibration procedure are given in Appendix A. Notably, once calibrated, T(f) remains approximately constant on a timescale of days, which we ascribe to the thermalization of the RF cables at fixed temperature points of the cryostat, whose temperatures change only little with the cryostat's filling level (the hold time of our system is about 100 h for liquid He). First, we discuss the general features of T(f). The data in Fig. 3a show that T(f) is close to the estimated transmission function of the RF cables up to the antenna in a broad frequency range. The RF losses up to the RF antenna were estimated from the specifications of the cables inside and outside of the STM cryostat, and of all the connectors (see Appendix C for details). Further, we accounted for the different temperature stages in the cryostat and calculated a total RF-voltage loss of (13 ± 3) dB at 40 GHz from the RF generator to the antenna. For the estimated loss function, shown as the grey-shaded area in Fig. 3a, we assume a dB-loss increasing linearly with frequency, which is mainly given by the coaxial cable.
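The following minimal Python sketch (our illustration, not the authors' calibration code; measure_u_rf is a hypothetical stand-in for the junction-side voltage readout) shows the transfer-function bookkeeping and an iterative compensation of P_G(f) of the kind described above:

```python
# Sketch of the transmission function T(f) and generator-power compensation.
import numpy as np

Z0 = 50.0                                    # generator output impedance (ohm)

def u_from_dbm(p_dbm):
    """Zero-to-peak voltage for a given generator power into 50 ohm."""
    p_watt = 1e-3 * 10 ** (p_dbm / 10)
    return np.sqrt(2 * Z0 * p_watt)          # amplitude, not RMS

def transfer_db(u_rf, p_g_dbm):
    """T(f) = 10*log10(U_RF/U_G), mirroring the definition in the text."""
    return 10 * np.log10(u_rf / u_from_dbm(p_g_dbm))

def compensate(measure_u_rf, freqs, u_target=0.070, p_start=0.0, n_iter=5):
    """Iteratively adjust P_G(f) in dBm so that U_RF(f) -> u_target in V."""
    p_g = np.full(len(freqs), p_start)
    for _ in range(n_iter):
        for i, f in enumerate(freqs):
            u = measure_u_rf(f, p_g[i])
            # Voltage scales as 10**(dBm/20), hence a 20*log10 correction.
            p_g[i] += 20 * np.log10(u_target / u)
    return p_g
```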
Upon comparison with the measured T(f), we obtain the remarkable result that the RF antenna can reach coupling efficiencies to the STM tunnel junction of the order of one. Upon closer inspection, T(f) reveals two prominent oscillatory features: a fast oscillation with a period of several hundreds of MHz and an amplitude of about 5 dB, which we ascribe to standing waves along the entire length of the RF cabling, and a stronger modulation with an amplitude of about 15 dB (see the dip in T(f) around 25 GHz in Fig. 3a). We ascribe this modulation to resonances of the RF antenna and its electromagnetic environment (compare with Fig. 1b), which includes the STM tip (length of about 5 mm), the gap between RF antenna and STM tip (also about 5 mm), and the surrounding metallic STM body (with dimensions in the centimeter range). This notion is further supported by a characterization of the entire RF-transmission line prior to installation. Measuring the RF losses with a vector network analyzer by inserting the open-ended flexible cable loosely into its input port, we observed a featureless transmission up to 40 GHz (not shown). This discussion highlights the importance of considering not only the RF cable itself but also its environment for an efficient RF coupling. With the knowledge of T(f), we can now compensate the RF losses in a broad spectral range by using an iterative optimization procedure, thereby obtaining a frequency-independent microwave excitation at the STM junction. In this way, we obtain, for instance, U_RF = (70 ± 1) mV between 33.5 and 39.5 GHz, as shown in Fig. 3b. Note that the setting accuracy of P_G of our RF generator is 0.01 dB, which implies that the theoretical accuracy of U_RF upon compensation of T(f) is limited to ΔU_RF ≈ U_RF (10^(0.01/20) − 1) ≈ 0.1 mV. This shows that an optimal RF calibration would require a significantly longer averaging time of about 100 h (see Fig. 3b). The extraordinary performance of our indirect RF-coupling scheme becomes apparent in comparison to previous reports, in which the RF excitation was fed directly through the STM-tip wiring into the tunnel junction [34,36]. Our T(f) is on average 15 dB higher than the RF transmission reported by Natterer et al. in the frequency range between 10 and 30 GHz (for comparison with our definition of T, their transmission function was divided by 2) [34]. In the frequency range from 1 to 2 GHz, Hervé and coworkers [37] report an RF transmission of about -10 dB (for comparison with our definition of T, their transmission function was divided by 2), which is comparable to or better than our T(f) in this low-frequency range. However, the RF transmission for higher frequencies was not reported in this work. For the frequency interval from 16 to 34 GHz, Paul et al. [36] find RF transmissions ranging from -10 dB to -35 dB (for comparison with our definition of T, their transmission function was reduced by 50 dB and divided by 2). In the same frequency window, our measured T(f) (see Fig. 3a) ranges between -5 dB and -23 dB. This allows us to apply a more than ten times higher constant U_RF in a frequency sweep for the same P_G, i.e., a two orders of magnitude higher RF power at the tunnel junction.

B. Equivalent circuit model

Previous EPR-STM studies focused on characterizing T(f) but did not analyze the coupling of the RF signal to the tunnel junction in detail [34,36,37]. In this regard, we now aim at analyzing the efficiency of the antenna coupling to the STM junction. For this purpose, we employ an equivalent circuit model (see Fig. 1c).
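Before going through the model, a quick back-of-the-envelope script (our addition, using only the geometry values quoted in the text) reproduces the orders of magnitude derived in the next paragraphs:

```python
# Numeric plausibility check of the capacitive-coupling estimates.
import numpy as np

eps0, mu0 = 8.854e-12, 4e-7 * np.pi
l, d, D = 5e-3, 1e-4, 5e-3        # antenna length, wire diameter, distance (m)

c_plate = 2 * np.pi * eps0 * l / np.arccosh(D / d)   # wire-plate capacitance
c_at = c_plate / 2                                   # two parallel wires
print(f"C_at ~ {c_at:.1e} F")                        # ~3e-14 F

# Voltage divider: U_RF/U_A ~ C_at/(C_at + C_J) once omega*R_J*C_at >> 1.
c_j = 1e-16                                          # assumed junction capacitance (F)
print(f"U_RF/U_A ~ {c_at / (c_at + c_j):.3f}")       # close to 1

# Displacement-current magnetic field at the junction (f = 40 GHz, U = 1 V).
f, u, r = 40e9, 1.0, 5e-3
i_disp = 2 * np.pi * f * c_at * u
b_a = mu0 * i_disp / (2 * np.pi * r)
print(f"B_A ~ {b_a * 1e3:.1e} mT")                   # ~1e-4 mT
```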
Our model considers capacitive coupling from the RF antenna to the sample and to the STM tip with capacitances C_as and C_at, respectively. The tunnel junction is modelled as a resistance R_J (typically in the GΩ range) in parallel to a capacitance C_J and, as a reference, we set the sample voltage to zero. This minimal model is sufficient to capture the impact of the electromagnetic surrounding on the RF coupling efficiency. We measure C_as and C_at by applying a kHz voltage to the RF antenna while recording with a LIA the induced currents in the sample and the STM tip, respectively, giving for C_as and C_at values on the order of 10⁻¹⁴ F. In theory, the capacitance between a wire and a parallel plate is given by C_p = 2πε₀l / arcosh(D/d), with the vacuum dielectric constant ε₀, the length l, the wire diameter d, and the distance D. The capacitance between two parallel wires is given by C_w = C_p/2. In our experiment, D ≈ l ≈ 5 mm (these are upper limits due to the angle between the antenna and the STM tip) and d ≈ 0.1 mm, which yields calculated values for the capacitances of about 3 × 10⁻¹⁴ F, in good agreement with the measured values. Moreover, reported values for C_J range between 10⁻¹⁸ F and 10⁻¹⁵ F [45], implying that C_as, C_at ≫ C_J. The equivalent-circuit model corresponds to a voltage divider along the antenna/tip path, yielding

(1) U_RF = U_A · C_at / (C_at + C_J − i/(ωR_J)),

where U_A is the voltage applied to the antenna. Note that U_RF is independent of C_as; the latter, however, determines how the electrical current, and therefore the RF power, splits among the antenna/sample and antenna/tip paths. As ω is in the GHz range, the imaginary part in the denominator of Eq. 1 is negligible. This leads to the important conclusion that U_RF ≈ U_A if C_at ≫ C_J, which explains the high coupling efficiency of the RF antenna observed experimentally (see Fig. 3a). Note that this situation is the long-wavelength/near-field analogue of the coupling of infrared radiation to a whisker diode reported previously [46], where a thin metal tip acts as an efficient long-wire receiving antenna. The electrostatic picture (i.e., the near-field coupling) employed above is justified if all the involved wavelengths (e.g., 30 cm at 1 GHz) are much larger than the typical length scales of our setup. More specifically, the Fraunhofer condition states that the far-field coupling starts dominating at distances from the antenna r ≥ 2l²/λ = 2l²f/c, where c is the speed of light [47]. We find that the far-field coupling would dominate only at distances ≥ 7 mm for the employed frequencies, which shows that it is not dominant in our experiment. Nevertheless, the wave nature of the RF signal might lead to deviations from the near-field picture outlined above, as apparent from the complex structure of the measured T(f). Understanding the latter requires a more sophisticated RF modelling, which is outside the scope of this study. Knowing U_RF also allows us to estimate the strength of the antenna's RF magnetic field B_A caused directly by the displacement current upon charging the antenna-tip capacitor (note that including the antenna-sample capacitor leads to minor corrections and, thus, to the same conclusions). According to Ampère's law, B_A = μ₀I/(2πr), where I is the displacement current, r is the distance from the axis of the antenna-tip capacitor and μ₀ is the vacuum permeability. With f = 40 GHz, U_RF = 1 V, and r ≈ 5 mm, we find that B_A ≈ 10⁻⁴ mT at the STM junction, which results in a Rabi rate Ω = gμ_B B_A/(2ħ) ≈ 10⁴ Hz, with the g-value for TiH of 2 [21], the reduced Planck constant ħ and the Bohr magneton μ_B [14].
According to Ref. [24], the maximum EPR-induced change sensed by the DC tunnel current is given by ΔI_DC = I_DC · M · Ω²T_s²/(1 + Ω²T_s²), where the homodyne contribution to the current is neglected. Here, M is the tunnel-magnetoresistance efficiency and T_s is the spin lifetime, where equal longitudinal and transverse lifetimes are assumed. We estimate an upper bound for ΔI_DC by setting I_DC = 10 pA, T_s ≈ 100 ns [24], and M = 1, and find ΔI_DC ≈ 1 aA, which is far below the detection limit of 10 fA in our setup. Therefore, B_A cannot be the EPR driving source.

C. Best frequency window for EPR-STM at 4 K

The spectral resolution, spin polarization, and sensitivity of EPR generally increase with increasing frequency or the associated static magnetic field. The best frequency window for EPR-STM is dictated by the following considerations. For a larger frequency f, the external magnetic field B_ext has to increase in order to match the resonance condition. The larger B_ext leads to a higher thermal population asymmetry, proportional to tanh[hf/(2k_B T)], between the Zeeman-split states (with the Planck constant h and the Boltzmann constant k_B). Thus, a higher frequency favors a larger EPR signal. This reasoning is further supported by the additional increase of the STM-tip spin polarization with larger B_ext. In combination with the experimental observation that the EPR-signal amplitude scales linearly with U_RF (see below), we find

(2) ΔI_pp ∝ U_RF(f) · tanh[hf/(2k_B T)].

From this, we derive (see Fig. 3a) that the best frequency window for EPR-STM for our T(f) is located above 30 GHz. Note that the minor rise in temperature of the STM body with applied microwave power has been neglected in Eq. 2. To summarize the RF characterization, we find that, in general, higher frequencies are particularly suited for EPR-STM (see Eq. 2) and, since our T(f) performs well above 30 GHz, it is exactly this previously unexplored frequency window from 30 to 40 GHz which is best suited for our experiments (see Fig. 3a).

IV. EPR OF SINGLE HYDROGENATED TI ATOMS

Knowing the RF excitation precisely, we now describe and compare different measurements of the EPR of single magnetic atoms on a surface. To record an EPR spectrum, the STM tip was positioned above an isolated TiH_B atom, at a distance larger than 2 nm from other magnetic species in order to minimize interactions. In the following, we used two different schemes for EPR sweeps: In a magnetic-field sweep (MFS), f is constant, whereas in a frequency sweep (FS), B_ext is constant. For the latter, we compensate T(f) (see Figs. 3a and b) to avoid spurious signals. Note that an MFS requires a sufficiently high mechanical stability of the STM during a ramp of B_ext. In the experiment, we measure the change in DC current ΔI_pp (peak-to-peak current) induced by the modulated microwave excitation, which is obtained from the detected lock-in voltage U_LIA through division by the gain of the transimpedance amplifier (10⁹ V/A) and multiplication by π/√2. The latter accounts for the square RF-power modulation (with sine demodulation) and for the peak-to-peak value of ΔI_pp. The FS data were recorded with an averaging time of 80 ms per frequency point (time constant of 20 ms at the LIA) and an overall averaging over ten FSs. The MFS data were acquired with a 200 ms running-average time (time constant of 50 ms at the LIA) at a magnetic-field sweep rate of 0.7 mT/s without any further averaging. These averaging schemes were chosen in order to obtain for MFS and FS a similar acquisition time of about 10 min per spectrum for the data presented in Fig. 4 while maximizing the respective RF excitation.
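A minimal sketch (our addition; hypothetical numbers, with the gain and the π/√2 factor quoted above) of the two conversions that enter the analysis below, lock-in voltage to peak-to-peak current and resonance positions to magnetic moment:

```python
# Sketch of the data conversions used in the EPR analysis.
import numpy as np

GAIN = 1e9                                   # transimpedance gain (V/A)

def delta_i_pp(u_lia):
    """Square-wave modulation, sine demodulation: multiply by pi/sqrt(2)."""
    return u_lia / GAIN * np.pi / np.sqrt(2)

print(f"{delta_i_pp(2e-3) * 1e12:.2f} pA")   # for a 2 mV lock-in signal

# Resonance condition h*f0 = mu*(B_ext + B_tip): the slope of f0 vs. B_ext
# gives the moment, the intercept the tip field. Data mimic ~1 mu_B.
h, mu_b = 6.626e-34, 9.274e-24
b_ext = np.array([2.50, 2.55, 2.60, 2.65])               # T
f0 = mu_b * (b_ext + 0.020) / h                          # Hz, B_tip = 20 mT
slope, intercept = np.polyfit(b_ext, f0, 1)
print(f"moment = {slope * h / mu_b:.2f} mu_B, "
      f"B_tip = {intercept / slope * 1e3:.0f} mT")
```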
IV. EPR OF SINGLE HYDROGENATED TI ATOMS

Knowing the RF excitation precisely, we now describe and compare different measurements of the EPR of single magnetic atoms on a surface. To record an EPR spectrum, the STM tip was positioned above an isolated TiH$_B$ atom, at a distance larger than 2 nm from other magnetic species in order to minimize interactions. In the following, we used two different schemes for EPR sweeps: in a magnetic-field sweep (MFS), $f$ is constant, whereas in a frequency sweep (FS), $B_{ext}$ is constant. For the latter, we compensate $V_{RF}(f)$ (see Figs. 3a and b) to avoid spurious signals. Note that an MFS requires a sufficiently high mechanical stability of the STM during a ramp of $B_{ext}$. In the experiment, we measure the change in DC current $\Delta I$ (peak-to-peak current) induced by the modulated microwave excitation, which is obtained from the detected lock-in voltage $V_{LIA}$ through division by the gain of the transimpedance amplifier ($10^9$ V/A) and multiplication by $\pi/\sqrt{2}$. The latter accounts for the square RF-power modulation (with sine demodulation) and for the peak-to-peak value of $\Delta I$. The FS data were recorded with an averaging time of 80 ms per frequency point (time constant of 20 ms at the LIA) and an overall averaging over ten FSs. The MFS data were acquired with a 200 ms running-average time (time constant of 50 ms at the LIA) at a magnetic-field sweep rate of 0.7 mT/s without any further averaging. These averaging schemes were chosen in order to obtain for MFS and FS a similar acquisition time of about 10 min per spectrum for the data presented in Fig. 4 while maximizing the respective RF excitation. We note that the first 10 s of each MFS are subject to strong noise due to a relative motion between STM tip and sample, which is compensated for by the atom-tracking module.

Figure 4 compares side by side the EPR spectra obtained in FS and MFS mode on the same TiH$_B$ complex with the same STM microtip. In both modes, a resonant feature clearly evolves by changing either $B_{ext}$ (see Fig. 4a) or $f$ (see Fig. 4c). To gain more insight into the MFS and FS spectra, we fit the EPR signal with a Fano function (cf. Refs. [24,44] and Sect. II.C) given by

$$\Delta I(x) = y_0 + a\,\frac{(q + \epsilon)^2}{1 + \epsilon^2}, \qquad \epsilon = \frac{2x}{\Gamma}, \qquad (3)$$

with the amplitude $a$, the offset $y_0$, the Fano factor $q$, and the line width $\Gamma$. Here, $x$ is the magnetic field or frequency relative to the resonance position $B_0$ or $f_0$, respectively, given by $x = f - f_0$ in the FS mode and by $x = B_0 - B_{ext}$ in the MFS mode. This definition takes into account that FS and MFS are inverted along the x-axis, i.e., by going higher in frequency at constant $B_{ext}$, the resonance is approached from the low-energy side, whereas for the MFS this situation is inverted. As seen in Figs. 4a and c, the EPR spectra are well described by Eq. 3, and the fit parameters contain valuable information that will be discussed in the following. First, for both measurement schemes, $a$ increases with $B_0$ and $f_0$, respectively (see Figs. 4a and c), which we ascribe to an increased thermal population asymmetry between the two Zeeman states (see Eq. 2) and an increase in STM-tip spin polarization. We find values of $a$ about twice as high for the MFS as for the FS, which is attributed to the doubled RF-voltage amplitude (70 mV for FS and 150 mV for MFS) and indicates a dominating homodyne EPR-detection mechanism [24,33]. Second, from the fit we find the values $\Gamma \approx 90$ MHz (80 MHz) and $q \approx 0.6$ (0.7) for the MFS (FS), respectively. These small variations in $\Gamma$ and $q$ are expected, since a different $V_{RF}$ was used for the MFS and FS measurements (see Fig. 4) [24]. The EPR line shapes appear similar to those reported in Ref. [32], although $\Gamma$ is not explicitly given therein. Remarkably, a line width on the order of 100 MHz corresponds to an energy resolution better than 1 μeV even for temperatures as high as 5 K. This resolution is about three orders of magnitude below the thermal limit, which clearly demonstrates the advantage of EPR-STM over conventional scanning tunneling spectroscopy in terms of resolving magnetic excitations. An important result is that $\Gamma$ is similar to that reported for TiH$_B$ species measured at 1 K [24,34], indicating that the read-out process (i.e., the interaction with tunneling electrons) still limits the spin lifetime rather than the intrinsic temperature-dependent lifetime. We additionally verified that the relatively high RF-power levels that we deliver to the tunnel junction do not broaden the EPR spectra by local heating. For this purpose, the AM depth was varied in order to alter the average local RF-induced heating, which, however, did not influence $\Gamma$. These observations have significant implications, as they show that, for the studied system, the energy resolution is not limited by temperature but only by the measurement process itself. Regarding the resonance positions, linear fits of $f_0(B_{ext})$ and $B_0(f)$ (see Figs. 4b and d) consistently yield a magnetic moment of $1.00 \pm 0.01\ \mu_B$ for TiH$_B$, in accordance with previous DFT calculations [21] and measurements [21,34].
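For readers who want to reproduce the line-shape analysis, the following Python sketch fits synthetic data with a Fano profile via scipy.optimize.curve_fit. The $(q+\epsilon)^2/(1+\epsilon^2)$ parameterization with $\epsilon = 2x/\Gamma$ is the standard Fano form assumed here; the paper's Eq. 3 may normalize the amplitude differently, and all numbers are illustrative.

```python
import numpy as np
from scipy.optimize import curve_fit

def fano(x, a, y0, q, gamma):
    """Fano line shape; x is the field/frequency relative to resonance."""
    eps = 2 * x / gamma
    return y0 + a * (q + eps) ** 2 / (1 + eps ** 2)

# Synthetic frequency-sweep data; all numbers are illustrative only
rng = np.random.default_rng(1)
x = np.linspace(-500e6, 500e6, 201)                 # detuning f - f0 (Hz)
y = fano(x, 0.5, 0.0, 0.7, 80e6) + 0.05 * rng.standard_normal(x.size)
popt, _ = curve_fit(fano, x, y, p0=[1.0, 0.0, 1.0, 100e6])
print("a, y0, q, Gamma =", popt)
```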
However, we find deviations from the magnetic moment of $0.9\ \mu_B$ reported for TiH$_B$ in Ref. [24]. Bae et al. [24] argue that this change in the measured moment arises from a finite angle between $B_{ext}$ and the tip magnetic field experienced by the atom on the surface, which is experimentally difficult to access [26]. Additionally, an STM-tip magnetic field of about 20 mT is found from the intercepts of the fits in Figs. 4b and d, which is consistent with previous results [21]. Hence, we infer that MFS and FS provide equivalent results for EPR-STM measurements. Note that we additionally found broadened EPR peaks for a minority of the investigated TiH$_B$ complexes, which we interpret as indications of hyperfine interaction as reported in Ref. [32]. However, we could not yet resolve separate hyperfine-split EPR peaks, even by reducing the setpoint current and $V_{RF}$. A likely limiting factor here is that we observe an additional extrinsic broadening of the EPR peaks, which is on the order of the hyperfine splitting of about 40 MHz for TiH$_B$ [32]. This additional broadening is attributed to the movement of the STM tip due to vibrations and the atom-tracking routine, which leads to variations of the exchange field between the STM tip and the magnetic atoms, as well as to the higher temperature of our setup.

A. Maximizing the EPR excitation in magnetic-field sweeps

Next, we demonstrate one of the advantages of the MFS mode in combination with the RF-antenna coupling, which is the possibility of applying stronger EPR excitations. Accordingly, Fig. 5 presents MFS data for $V_{RF}$ from 90 to 360 mV at 36 GHz (with all other settings kept constant), which strongly affects the EPR signal. The amplitude $a$ is found to increase linearly with $V_{RF}$ without saturation, showing how the MFS mode can increase the signal-to-noise ratio of EPR spectra by orders of magnitude. This result might be related to a dominating homodyne detection (cf. Sect. II.C), which was also reported previously for EPR of TiH$_B$ with $T \leq 1.2$ K and $V_{RF} \leq 60$ mV [21,24,48].

B. Frequency-modulated EPR-STM

All data presented up to here and all previously reported EPR-STM studies relied on amplitude modulation (AM) of $V_{RF}$. However, for certain experimental conditions it can be advantageous to use a different modulation scheme: if the $I(V)$ curve shows a strong nonlinearity in the range $V_{DC} \pm V_{RF}$, the AM leads to a large offset caused by RF rectification (see Eq. S2 in the Appendix), which can result in an increased noise level by limiting the gain settings of the transimpedance amplifier and the LIA. To circumvent this issue, we introduce here EPR-STM performed by frequency modulation (FM) of $V_{RF}$ [14]. Figure 6 compares the EPR spectra recorded on the same TiH$_B$ atom with AM and FM using the same acquisition time (note that a different STM microtip was used compared to Figs. 4 and 5). Interestingly, the measured FM EPR spectrum $\Delta I_{FM}$ is similar to a derivative of the AM EPR spectrum $\Delta I_{AM}$ (see Fig. 6), as expected from classical EPR experiments [14], and a reconstruction of $\Delta I_{FM}$ is possible by differencing the AM spectrum over the modulation bandwidth (Eq. 4), where $\Lambda$ is given by the FM bandwidth (32 MHz). The resulting reconstructed spectrum agrees well with the measured $\Delta I_{FM}$ (see Fig. 6). This demonstrates that the AM and FM modes contain the same information and can thus be chosen deliberately according to the experimental requirements. We note that the signal-to-noise ratio in the FM mode could be enhanced by choosing a modulation bandwidth on the order of the EPR peak width (about 140 MHz for the data presented in Fig. 6), which, however, was not possible due to limitations of the modulation bandwidth of our RF generator.
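A minimal Python sketch of this reconstruction is given below, assuming a square frequency modulation that toggles the carrier between $f - \Lambda/2$ and $f + \Lambda/2$, so that $\Delta I_{FM}^{rec}(f) = \Delta I_{AM}(f + \Lambda/2) - \Delta I_{AM}(f - \Lambda/2)$ (one plausible reading of Eq. 4; prefactors are not recovered from the extracted text). A Lorentzian stands in for the measured AM spectrum.

```python
import numpy as np

def reconstruct_fm(freq, dI_am, bandwidth):
    """Difference of the AM spectrum at the two FM endpoints (assumed Eq. 4)."""
    upper = np.interp(freq + bandwidth / 2, freq, dI_am)
    lower = np.interp(freq - bandwidth / 2, freq, dI_am)
    return upper - lower

# Lorentzian stand-in for a measured AM spectrum with ~140 MHz peak width
f = np.linspace(-1e9, 1e9, 2001)
dI_am = 1.0 / (1 + (2 * f / 140e6) ** 2)
dI_fm = reconstruct_fm(f, dI_am, 32e6)        # FM bandwidth of 32 MHz
print(f"max |dI_FM| = {np.abs(dI_fm).max():.3f} (derivative-like shape)")
```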
V. CONCLUSIONS

In summary, we demonstrated EPR measurements on single atoms driven by an RF antenna with strong coupling efficiency to the STM junction and an energy resolution of about 1 μeV at 4-5 K. This approach allows for applying high RF-voltage amplitudes across the tunnel junction over a broad spectral range ($V_{RF}$ reaches 350 mV even above 30 GHz) and facilitates the implementation of EPR capabilities into standard 4 K STMs. Comparing the MFS and FS modes (see Fig. 4), we conclude that the MFS mode presents several advantages: First, it saves time by not requiring a detailed characterization of $V_{RF}(f)$ (about 1 h in our case), which might be of special importance if $V_{RF}(f)$ changes on short time scales. Second, the MFS mode allows for selecting frequencies with good RF transmission, boosting the EPR-signal amplitude significantly (see Fig. 5). We further demonstrated the possibility of performing EPR-STM by modulating the RF voltage in frequency instead of amplitude. The FM mode is well established in classical EPR studies [14]. In EPR-STM, FM is advantageous compared to AM due to its weaker dependence on nonresonant background signals caused by heat modulation and its smaller sensitivity to characteristic features of $I(V)$ in the range $V_{DC} \pm V_{RF}$. Thus, by a proper choice of $\Lambda$, the FM mode can be of particular advantage in the spatial imaging of EPR of single atoms [48], as it is largely insensitive to local changes in the $I(V)$ curve within $V_{DC} \pm V_{RF}$, eliminating the need for background subtraction. Moreover, the gain of the STM preamplifier and the LIA can be increased to best match the amplitude of the resonance signal without reaching saturation, thus improving the signal-to-noise ratio for a given acquisition time. The precision in determining the position of the EPR signal can be increased further by choosing AM for asymmetric and FM for symmetric EPR spectra, respectively. Our findings clearly show how the right choice of RF coupling and measurement mode can drastically enhance the signal-to-noise ratio of EPR-STM, also allowing for tailoring the EPR excitation to specific experimental requirements. Future work might aim at even higher driving frequencies to increase the EPR-signal amplitude further by reducing the thermal population of the excited state. More sophisticated RF engineering of the entire cavity geometry might further maximize the RF-voltage amplitude, paving the way towards high-power pulsed EPR and coherent spin manipulation in STM [35].

A. Calibration of the RF voltage at the tunnel junction

STM is usually performed in the low-frequency domain due to the small amplitude of the tunneling current (~pA), whose detection requires a low-frequency cut-off of the transimpedance amplifier (see Fig. 1c in the main text). Therefore, it is important to know how the presence of an RF voltage can influence STM measurements. In the experiment, an RF voltage at a frequency $f$ in the GHz range with amplitude $V_{RF}$ is added to the DC bias voltage $V_{DC}$, yielding a total voltage of

$$V(t) = V_{DC} + V_{RF}\sin(2\pi f t). \qquad (S1)$$

In STM, as in any nonlinear circuit, a second- and any higher even-order nonlinearity in the current-voltage characteristic $I(V)$ gives rise to an additional DC current upon rectification of the RF voltage in the resulting tunnel current $I(t)$ [49]. This can be readily understood by considering a Taylor expansion of $I(V)$ around $V_{DC}$ up to second order, which gives

$$I(t) \approx I(V_{DC}) + \frac{dI}{dV}\bigg|_{V_{DC}} V_{RF}\sin(2\pi f t) + \frac{1}{2}\frac{d^2I}{dV^2}\bigg|_{V_{DC}} V_{RF}^2\sin^2(2\pi f t).$$

Considering that the transimpedance amplifier (see Fig. 1c in the main text) cannot detect a current oscillating far above its cut-off frequency of several kHz, we find

$$\Delta I_{rect} = \frac{1}{4}\frac{d^2I}{dV^2}\bigg|_{V_{DC}} V_{RF}^2. \qquad (S2)$$

Thus, an RF voltage causes an additional DC current which, to lowest order, is proportional to $V_{RF}^2$ and to the second-order nonlinearity of the $I(V)$ characteristic.
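The rectification result of Eq. S2 can be verified numerically by averaging a toy nonlinear $I(V)$ over one RF period and comparing with the second-order Taylor prediction. The exponential $I(V)$ below is an arbitrary stand-in, not a measured characteristic.

```python
import numpy as np

def rectified_current(I, V_dc, V_rf, n=4096):
    """DC shift from RF rectification: time-average of I(V_dc + V_rf*sin)
    over one period minus I(V_dc) (the arcsine-distribution picture)."""
    t = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    return np.mean(I(V_dc + V_rf * np.sin(t))) - I(V_dc)

# Toy nonlinear I(V); any curvature in I(V) rectifies the RF voltage
I = lambda V: 1e-9 * (np.exp(V / 0.1) - 1.0)
V_dc, V_rf = 0.05, 0.02
dI_exact = rectified_current(I, V_dc, V_rf)

# Second-order Taylor prediction of Eq. S2: dI = (1/4) I''(V_dc) V_rf^2
eps = 1e-4
I2 = (I(V_dc + eps) - 2 * I(V_dc) + I(V_dc - eps)) / eps ** 2
print(dI_exact, 0.25 * I2 * V_rf ** 2)   # agree for V_rf << V_dc
```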
As a consequence, a $dI/dV$ spectrum recorded at finite $V_{RF}$ is also altered [49]. However, the Taylor-expansion approach is only valid if $V_{RF} \ll V_{DC}$. In the more general case of arbitrary $V_{RF}$, and for typical EPR-STM experimental conditions, the altered $I(V)$ or $dI/dV$ curve is determined by a convolution of the corresponding characteristic in the absence of RF voltage with the arcsine-distribution function carrying the information on $V_{RF}$ [36,50]. Here, the arcsine-distribution function describes the probability of the voltage taking a specific value in the interval $V_{DC} \pm V_{RF}$ over one oscillation period $1/f$.

B. Measurement of the transfer function

In detail, $V_{RF}(f)$ is characterized in three steps [36]: First, we calibrate $V_{RF}$ for one specific pair of the RF-generator output power $P_G$ and frequency $f$. For that, we measure the $dI/dV$ spectrum of TiH$_O$, which has a strong nonlinearity due to a vibrational inelastic excitation at around -90 mV (see Fig. 2 in the main text). Upon application of $P_G = -14$ dBm (unmodulated) at 17.6 GHz, the $dI/dV$ spectrum is broadened (see Eq. S2 and Fig. 7a). This RF-voltage-induced broadening is reproduced by convoluting the $dI/dV$ spectrum measured in the absence of RF excitation with an arcsine-distribution function, fitting $V_{RF}$ in order to match the experimental broadening. Second, we sweep the generator power at fixed frequency (see Fig. 2 in the main text), modulating the RF signal in amplitude and recording the RF-induced rectified current via the first-harmonic in-phase signal $V_{LIA}$ measured by a LIA behind the transimpedance amplifier. Upon rescaling this RF-power sweep by the calibrated $V_{RF} = 15.9$ mV at $P_G = -14$ dBm and 17.6 GHz (see above and Fig. 7a), converting power to voltage, and taking the inverse function of $V_{LIA}(V_{RF})$, we can establish a polynomial relationship (of 4th order with zero offset) between the measured $V_{LIA}$ and $V_{RF}$, i.e., an analytic relation $V_{RF}(V_{LIA})$ (see Fig. 7b). Third, $f$ is swept at $P_G = 9$ dBm while we record $V_{LIA}$. Using $V_{RF}(V_{LIA})$ finally allows us to determine $V_{RF}$ from 1 to 40 GHz (see Fig. 3a in the main text). A frequency-independent RF-rectification efficiency is assumed for this calibration, which is justified for the given frequency range [49].

C. Estimate of the RF-line losses

For the semi-rigid coaxial cable (see Sect. II.A for details), the manufacturer provides the attenuation at the frequencies 0.5, 1, 5, 10 and 20 GHz for temperatures of 300 and 4 K. In our analysis, we calculated the power attenuation of the cables with an impedance of $Z_0 = 50\ \Omega$ as the sum of the conductor and dielectric contributions,

$$\alpha\ \mathrm{[dB/m]} = \frac{8.686}{2\pi Z_0}\sqrt{\pi f \mu_0}\left(\frac{\sqrt{\rho_c}}{d_c} + \frac{\sqrt{\rho_o}}{d_o}\right) + 8.686\,\frac{\pi f}{c}\sqrt{\epsilon_r}\,\tan\delta,$$

where $\epsilon_r$ is the relative permittivity of the dielectric, $\rho_c$ and $\rho_o$ the resistivities of the center and outer conductor, respectively, $d_o$ the inner diameter of the outer conductor, $d_c$ the diameter of the center conductor, $\tan\delta$ the dielectric loss tangent, and $f$ the frequency. The relative dielectric constant of the dielectric material PTFE is $\epsilon_r = 2.02$. PTFE also has a very low loss tangent, with a typical value of $\tan\delta = 4 \cdot 10^{-4}$, which decreases by a factor of 2-3 from 300 K down to 4 K [51,52]. Here, we used $\tan\delta = 2 \cdot 10^{-4}$ at 4 K. The relative magnetic permeabilities of the center and outer conductors, $\mu_c$ and $\mu_o$, respectively, were set to 1 for the employed frequency range. The values for $d_c$ and $d_o$ are tabulated in the data sheet of the cables.
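The sketch below implements the conductor-plus-dielectric attenuation formula stated above. The cable diameters are assumed values for a generic 0.086-inch semi-rigid cable (the actual values are in the manufacturer's data sheet); with these assumptions the 4 K result lands close to the quoted 3.9 dB/m, while the 300 K value comes out of the same order as the quoted 10.2 dB/m (plating thickness and surface roughness are neglected).

```python
import numpy as np

MU0, C0 = 4e-7 * np.pi, 2.998e8   # vacuum permeability, speed of light

def coax_attenuation_db_per_m(f, rho_c, rho_o, d_c, d_o, eps_r, tan_d, Z0=50.0):
    """Conductor plus dielectric power attenuation of a coaxial cable (dB/m)."""
    Rs_c = np.sqrt(np.pi * f * MU0 * rho_c)      # surface resistance, center
    Rs_o = np.sqrt(np.pi * f * MU0 * rho_o)      # surface resistance, outer
    a_cond = 8.686 * (Rs_c / d_c + Rs_o / d_o) / (2.0 * np.pi * Z0)
    a_diel = 8.686 * np.pi * f * np.sqrt(eps_r) * tan_d / C0
    return a_cond + a_diel

d_c, d_o = 0.51e-3, 1.68e-3   # assumed diameters (actual values: data sheet)
print(coax_attenuation_db_per_m(40e9, 1.55e-8, 8.15e-8, d_c, d_o, 2.02, 4e-4))  # 300 K
print(coax_attenuation_db_per_m(40e9, 5.0e-9, 4.7e-8, d_c, d_o, 2.02, 2e-4))    # 4 K
```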
The resistivities of the silver-plated CuBe center conductor and the CuBe outer conductor have been estimated as $\rho_c(300\ \mathrm{K}) = 1.55 \cdot 10^{-8}\ \Omega\mathrm{m}$, $\rho_c(4\ \mathrm{K}) = 5.0 \cdot 10^{-9}\ \Omega\mathrm{m}$, $\rho_o(300\ \mathrm{K}) = 8.15 \cdot 10^{-8}\ \Omega\mathrm{m}$, and $\rho_o(4\ \mathrm{K}) = 4.7 \cdot 10^{-8}\ \Omega\mathrm{m}$, following published data on low-temperature material properties [53]. With these values we are able to reproduce the tabulated losses from the manufacturer within an error of 0.2 dB, while being able to calculate the attenuation at an arbitrary frequency. At 40 GHz, we calculate the power attenuation of the coaxial cable to be 10.2 dB/m at 300 K and 3.9 dB/m at 4 K. Below about 40 K, the resistivity of most metals no longer changes, being dominated by the residual resistivity from material imperfections; i.e., the RF losses are expected to saturate at a minimum value below these temperatures [53]. For the semi-rigid cable, we have a section of 1 m connecting the feedthrough flange at 300 K with the radiation shield at about 40 K. For this section we used the average attenuation between 300 K and 4 K. For the last section of 1.5 m we used the attenuation at 4 K. Hence, the estimated power attenuation of the entire semi-rigid coaxial cable is 13 dB at 40 GHz. Further components to consider in the attenuation analysis are the DC block at the output of the generator (1 dB, flat), the 1.5 m long coaxial cable from the generator to the feedthrough flange (4 dB, flat), the feedthrough flange (1 dB, flat), a K-type to SMPM adapter (1 dB, flat) at the end of the semi-rigid cable, and the 30 cm long flexible coaxial cable. The latter was measured at room temperature for a length of 1 m with two SMP connectors at its ends, and the power loss was found to be 45 dB at 40 GHz. The estimated loss at 4 K (assuming half the losses compared to 300 K, as found for the semi-rigid cable), with one connector only and a length of 30 cm, is 7 dB. This adds 14 dB at 40 GHz to the 13 dB found for the semi-rigid cable. The total attenuation of 27 dB needs to be divided by 2 to be compared with the losses defined by the voltage ratio in the manuscript. Hence, we expect a voltage attenuation of 13 ± 3 dB, as quoted in the manuscript, where the uncertainty is mainly related to the losses in the flexible coaxial cable and the K-type-to-SMPM adapter.
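Summing the quoted contributions reproduces the link budget; a minimal sketch:

```python
# Rough RF power-loss budget at 40 GHz from the contributions quoted above (dB)
losses_db = {
    "semi-rigid cable (1 m warm-average + 1.5 m cold)": 13,
    "DC block": 1,
    "1.5 m cable, generator to flange": 4,
    "feedthrough flange": 1,
    "K-type to SMPM adapter": 1,
    "flexible cable, 30 cm at 4 K": 7,
}
total = sum(losses_db.values())            # ~27 dB in power
print(total, "dB power ->", total / 2, "dB as the quoted voltage attenuation")
```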
$p-$forms non-minimally coupled to gravity in Randall-Sundrum scenarios

In this paper we study the coupling of $p$-form fields with geometrical tensor fields, namely the Ricci, Einstein, Horndeski and Riemann tensors, in Randall-Sundrum scenarios with co-dimension one. We consider delta-like branes as well as branes generated by a kink and by a domain wall. We begin with a detailed study of the Kalb-Ramond (KR) field. The analysis of the KR field is very rich, since it is a tensorial object and more complex non-minimal couplings are possible. The generalization to $p$-forms can provide more information about the properties and structures that may be universal in the geometrical localization mechanism. The zero mode is treated separately, and conditions for the localization of the zero modes of $p-$forms are found for all the cases above; with this we arrive at the above conclusion about vector fields. Another property that can be tested is the absence of resonances found in the case of vector fields. For this we analyze the possible unstable massive modes for all the above cases via the transmission coefficient. Our conclusion is that unstable massive modes are more likely to be observed in the Ricci and Riemann couplings.

Introduction

Since Kaluza and Klein introduced extra dimensions in high-energy physics to unify electromagnetism and gravitation [1,2], the subject has seen many developments [3]. In order to recover four-dimensional physics, it was imposed that the extra dimension should be compact. However, at the end of the last century, Randall and Sundrum (RS) proposed an alternative to compactification using the concept of brane-worlds [4]. In this scenario the extra dimension is not compact, and gravity is trapped on the four-dimensional membrane by the introduction of a non-factorizable metric. Since the extra dimension is not compact, all the matter fields, and not only gravity, must be trapped on the brane to provide a realistic model. However, unlike gravity and the scalar field, the vector field is not trapped on the brane, which is a drawback of the RS model. To circumvent this problem, some authors introduce a dilaton coupling [5], while others propose that a strongly coupled gauge theory in five dimensions can generate a massless photon on the brane [6]. Most of these models introduce other fields or nonlinearities for the gauge field [7]. Some years ago, Ghoroku et al. proposed a mechanism that does not include new degrees of freedom and traps the gauge field on the membrane. It is based on the addition of two mass terms, one in the bulk and another on the brane [8]. Despite working, the mechanism has the undesirable feature of possessing two free parameters, of which one remains after imposing the boundary conditions. Beyond this, the mechanism is not covariant, since in principle it introduces a four-dimensional mass term that does not come from the five-dimensional bulk. An important point about the presence of more extra dimensions is that it implies the existence of many antisymmetric tensor fields. In five dimensions, for example, we have the two-, three-, four- and five-forms. From the physical viewpoint, these are of great interest because they may have the status of fields describing particles other than the usual ones. As examples we can cite the spacetime torsion [9] and the axion field [10,11], both of which admit descriptions in terms of the two-form.
Besides this, String Theory shows the naturalness of higher-rank tensor fields in its spectrum [12,13]. Other applications of these kinds of fields have been made, showing their relation to the AdS/CFT conjecture [14]. In the RS scenario, much has been considered regarding these tensors. Localization of the zero mode of $p-$forms in delta-like branes was first studied in Ref. [15], where it was claimed that, in D spacetime dimensions, only forms with $p < (D-3)/2$ have a localized zero mode. However, it is well known that, in the absence of a topological obstruction, the field strength of a $p-$form is Hodge dual to the $(D-p-2)-$form [16]. Using this property, it was shown that in fact only the $0-$form and its dual, the $(D-2)-$form, are localized [17,18]. Recently, the authors of Refs. [19,20] showed that this is also related to the gauge fixing of the form fields. This makes the localization problem worse, since the vector field is not localized in any spacetime dimension. Beyond the zero mode, massive modes are important to consider. Despite the fact that they are not localized, unstable massive modes can be found over the brane by using, for example, the transfer-matrix method [21,22,23,24]. Resonances of form fields have been found to exist for thick and thin branes [25,26,27,28,29,30,31,32,33]. Recently, the Ghoroku mechanism was used to trap the zero mode of the $q-$form field [34]. The point is that the introduction of the mass terms breaks the Hodge duality, and the argument of Refs. [17,18,19,20] is no longer valid. However, this solution keeps the above-cited undesired features of the Ghoroku mechanism. In order to solve the above issues, a new proposal called the "geometrical localization mechanism" was recently put forward [35,36]. Looking for a covariant version of the Ghoroku mechanism, some of the present authors found that both mass terms can be obtained from a bulk action if the Ricci scalar is coupled to a quadratic mass term of the gauge field [35]. Beyond solving the covariantization problem, this also eliminated from the beginning one of the free parameters. The last one is fixed by the boundary conditions, leaving no free parameters in the model. The mechanism also keeps the advantage of not adding any new degrees of freedom. Another good property is that it provides the trapping of the gauge field for any smooth version of the RS scenario [35]. Soon after, many developments of the idea were put forward. The same non-minimal coupling with the Ricci scalar was proven to work for $q-$forms and ELKO spinor fields [37,38,39]. For non-abelian gauge fields, it has been found that the non-minimal coupling with the field strength used in Refs. [40,41] should also be introduced, and the mechanism works [42]. A phenomenological prediction of the model has been found for branes with a non-vanishing cosmological constant: a precise residual mass of the photon must exist, and this is proposed as a probe of extra dimensions [43]. Recently, the same mechanism was shown to emerge from a conformal hidden symmetry of the Randall-Sundrum model [44,45]. In this new scenario, fermions are shown to be universally trapped on the membrane by adding a non-minimal coupling with torsion. A new phenomenological prediction was found: a minimum value for the torsion of the membrane [44]. Soon after, a smooth version of the model was constructed in [45]. Despite providing a solution to the localization problem, the mechanism raises some questions.
When the non-minimal couplings of gauge fields were generalized to include the Ricci and Einstein tensors [46], it was found that the latter does not provide a localized solution. The coupling with the metric tensor also does not provide a localized zero mode, and this suggests that tensors with null divergence do not provide a trapped gauge field. When massive modes are considered, a curious result is that for all smooth versions considered no resonances were found. This raises the question of whether this is a universal property of the mechanism [47]. In this paper we study the coupling of the Kalb-Ramond (KR) field with geometric tensor fields. The analysis of this field is very rich, since it is a tensorial object and more complex non-minimal couplings are possible. Beyond the above-cited importance of the KR field, this generalization can provide more information about the properties and structures that may be universal in the geometrical localization mechanism. This paper is organized as follows. In the second section we briefly review the RS scenario in co-dimension one. In section III we study the localization of the zero mode of the KR field coupled to the Ricci, Einstein, Horndeski and Riemann tensors. In section four we study the possible existence of unstable massive modes for the KR field coupled to the Ricci, Einstein, Horndeski and Riemann tensors in RS, kink and domain-wall scenarios. In section five we study the localization of the zero mode of the $p-$form field coupled to the Ricci, Einstein, Horndeski and Riemann tensors. In section six we study the possible existence of unstable massive modes of the $p-$form field coupled to the Ricci, Einstein, Horndeski and Riemann tensors in RS, kink and domain-wall scenarios. Finally, in the conclusions we discuss the results.

2 Co-dimension one Randall-Sundrum Scenario

Due to the variety of geometrical objects needed in this manuscript, in this section we briefly review the RS scenario for a co-dimension-one brane world in D-dimensional space-time and explicitly construct all the geometric tensors needed. The coordinates of the whole space-time are $x^M$, $M = 0, 1, 2, \cdots, D-1$, with $x^{D-1} \equiv z$ the coordinate transverse to the brane and $x^\mu$, $\mu = 0, 1, 2, \cdots, D-2$, the usual Minkowski coordinates. The metric is $ds^2 = e^{2A(z)}\eta_{MN}\,dx^M dx^N$, where $\eta_{MN} = \mathrm{diag}(-,+,+,\cdots,+)$, and the equations of motion are the Einstein equations [4], where $\Lambda$ is the cosmological constant and $V$ is the brane tension. The conformal form of the metric provides a simple way to obtain the needed geometrical quantities. Interestingly, this will also provide a covariant description of the model. First we must remember that, under a conformal transformation $\tilde g_{MN} = e^{2\varphi}g_{MN}$, the Christoffel symbols, the Ricci tensor and the Ricci scalar transform according to the standard conformal-transformation formulas, where $\nabla_I$ denotes the covariant derivative. From these we can obtain the transformation of the Einstein tensor. In the RS case we have $g_{MN} = \eta_{MN}$ and $\varphi = A(z)$, which gives the components of the Christoffel symbols and of the Ricci scalar, Ricci and Einstein tensors in terms of $A'(z)$ and $A''(z)$. With the above results, the Einstein equations can be solved, and we see that the solution is identical to that of the five-dimensional case, but with the brane tension and the cosmological constant depending on the spacetime dimension. It is important to point out that, in this manuscript, non-minimal couplings with quadratic terms in higher-order antisymmetric tensors will be considered.
Therefore, we must look for higher-order geometric tensors with the same symmetries. The first geometric tensor of order four with these properties is the Riemann tensor. Under a conformal transformation it changes by an expression involving the Kulkarni-Nomizu product, and when considering the RS metric its components simplify accordingly (Eq. (4)). However, the Riemann tensor has non-null divergence. As said in the introduction, we must also analyze tensors with null divergence. A fourth-order tensor with this property is the Horndeski tensor, which has been coupled to the field strength of the vector field in Refs. [40,41,42]. Curiously, this tensor has all the desired symmetries. Here we will consider the coupling of this tensor to the mass term of the form field. We should point out that, since it has null divergence in every index, after contracting two of them we must obtain a tensor proportional to the Einstein tensor; this is confirmed by a direct calculation. Its conformal transformation and its components in the RS metric follow in the same way as above. We will use these results to study a variety of geometrical couplings which can render localized modes for the fields. First we will study zero-mode localization and then the massive modes. In section 3 we restrict our analysis to the five-dimensional case.

The Kalb-Ramond zero mode case

In this section we make a direct generalization of the geometric coupling presented in Ref. [46] to the KR field. This will be a prototype for the generalization to the $p-$form field in section five. Beyond this, due to its importance, it is worthwhile to make a separate study. We will consider the coupling to tensors of order two and four.

3.1 Kalb-Ramond coupled with a rank two geometric tensor

In this subsection we consider the coupling of the KR field to rank-two geometric tensors. The action is given by Eq. (5), where $X_{MN}$ is the antisymmetric Kalb-Ramond two-form field with field strength $Y_{MNP}$. We also use $H_{M_1M_2}$ as a generic rank-two geometric tensor, and $\gamma$ is a coupling constant. The above action provides the equation of motion (6) for the KR field, and from this we get the identity (7). The analysis can be simplified if we observe that in all cases the tensors have the same form, namely $H_{\mu\nu} = H_0(z)\,\eta_{\mu\nu}$ and $H_{44} = H_1(z)$, with all other components null. For a free index equal to the extra-dimension index, i.e., $M_3 = 4$, we obtain from the equations of motion (6) a vectorial equation, where in this expression and from now on all indexes are raised with $\eta^{\mu\nu}$. Taking $M_3 = \mu_3$ in (6), we obtain a tensorial equation. Taking the free index $M_3 = 4$ in the identity (7), we obtain the gauge condition for the vector field, i.e., $\partial_{\mu_1}X^{\mu_1 4} = 0$. For the free index $M_3 = \mu_3$ we obtain a tensorial condition. Therefore, the KR field is not divergence free, and we must decompose it as $X^{\mu_1\mu_2} = X_T^{\mu_1\mu_2} + X_L^{\mu_1\mu_2}$, where $X_T^{\mu_1\mu_2}$ has null divergence, as desired. From now on, we must show that the transversal and longitudinal parts of the field decouple in the equations of motion. For this, we first derive some identities from the above definitions and use them in the field equations. From the definition of $Y_L^{\mu_1\mu_2 4}$, and using Eqs. (9), (11) and (12), we can finally show the form of the longitudinal propagator term that appears in (15). This identity can be used to decouple the fields in Eq. (15), which provides the final equation for the transversal component.
Performing the separation of variables in the form $X_T^{\mu_1\mu_2} = \tilde X_T^{\mu_1\mu_2}(x)\,\psi(z)$, we obtain the Schrödinger-like equation (20), whose potential is given by Eq. (21). To decouple the vector field and the longitudinal part of the KR field, we use the divergence equation (11) in (15). This procedure leads to Eq. (22). To write this equation in a Schrödinger-like form we must again separate variables; this splits equation (22) into the set of equations (23)-(25), where the potential of the Schrödinger equation is given by Eq. (26). The equations (20) and (25), with the potentials (21) and (26), govern the extra-dimensional components of the KR and vector fields, respectively. Now we must analyze the localization of the zero modes of the KR and vector fields. We first consider the KR case. For this we must specify the geometric tensor that couples to the KR field in the action (5). We will see that this can be achieved without any specific form for the warp factor; the only condition is that the RS model is recovered for large $z$. First of all, the potential (21) can be further simplified if we note that $H_0$ and $H_1$ are combinations of $A'(z)^2$ and $A''(z)$, which gives its final form. Supposing now a zero-mode solution $\psi(z) = e^{\sigma A(z)}$, we obtain from Eq. (20) with $m = 0$ a set of algebraic equations. The solution $\gamma = 0$ corresponds to the free KR field and gives a non-localized field, as expected. For $\lambda_0 = 0$ we have $\gamma\beta_0 = 0$, which also implies the free-field solution, which is not localized. The last possibility is $\gamma \neq 0$ and $\lambda_0 \neq 0$, for which we obtain the solutions for $\sigma$ and $\gamma$, with $\gamma$ given by Eq. (31). Therefore, to obtain a convergent solution for the zero mode of the transversal part of the KR field, a necessary condition is $\beta_0/\lambda_0 > 0$. This is valid in all brane scenarios that recover RS asymptotically. The first example is the Ricci tensor. From the last section we can see that in this case $\lambda_0 = -1$ and $\beta_0 = -3$, which gives a localized zero mode if $\gamma = -4$, with $\sigma = 7/2$. Therefore, just as in the case of the vector field [46], the KR field can be localized through the Ricci coupling. The Einstein tensor is the second example of a rank-two geometric tensor. From the last section we can see that in this case $\lambda_0 = 3$ and $\beta_0 = 3$. This provides a localized solution for the zero mode of the transversal part of the KR field for a coupling constant fixed at $\gamma = 2/3$, with $\sigma = 3/2$. Next we analyze the localization of the zero mode of the vector component of the KR field. In this case, due to the transformation (23), the integral that must be finite is given by Eq. (34), where $\psi$ is the solution of (25) with $m = 0$. Unlike the tensor case, however, it is not possible to find analytical solutions of Eq. (25). Nevertheless, a necessary condition for localizability is a convergent solution for large $z$. Since all the smooth versions considered here recover RS for large $z$, we can use this limit to test the localizability of the field. With these considerations we can use the Einstein equation (3), which is valid for large $z$, to relate $A''$ to $A'^2$, and we find that the potential (26) at large $z$ is the same potential found for the KR case, Eq. (21). Therefore the solution is the same, namely $\psi = e^{\sigma A}$, with the same $\sigma$ and $\gamma$ as before. However, since the integrand for the vector case is given by Eq. (34), the condition for localizability is different. In the limit considered here we can use (36), and the integrand of Eq. (34) reduces to $e^{2A}\psi^2$. Therefore, the condition for localization in this case is given by $\beta_0/\lambda_0 > -1$.
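As a quick consistency check, the two rank-two results quoted above satisfy $\sigma = \beta_0/\lambda_0 + 1/2$, which is the $D = 5$, $p = 2$ case of the general expression $\sigma_p = -D/2 + p + \beta_0/\lambda_0 + 1$ used later in the $p-$form analysis. A minimal Python verification:

```python
# Consistency check of the quoted zero-mode exponents (values from the text):
# sigma = beta0/lambda0 + 1/2 reproduces both rank-two results.
cases = {"Ricci": (-1.0, -3.0, 3.5), "Einstein": (3.0, 3.0, 1.5)}
for name, (lam0, beta0, sigma_quoted) in cases.items():
    sigma = beta0 / lam0 + 0.5
    print(f"{name}: beta0/lambda0 = {beta0 / lam0:g} > 0, "
          f"sigma = {sigma:g} (quoted: {sigma_quoted:g})")
```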
From this fact, we can conclude that in every case in which the KR field is localized we also have a localized vector field. With this, we show that the hypothesis of Ref. [46], that tensors with null divergence do not trap zero modes, is not valid, since the Einstein tensor can localize the KR field.

Kalb-Ramond coupled with a rank four geometric tensor

In this section we extend the coupling used in the last subsection to a rank-four geometric tensor. As said in the first section, these tensors must have the same symmetries as the Riemann tensor in order to couple to the quadratic KR term. The action is built with a generic rank-four geometric tensor $H_{M_1N_1M_2N_2}$, and from the equations of motion we get the constraint (40). Now we proceed to decompose these equations in components. As previously pointed out, we should first note that all the tensors considered by us have the structure (41). With this, we obtain the components of the equations of motion, and for the constraint (40) we get the corresponding components (43) and (44). As expected, the KR field does not have null divergence, and we now proceed to show that its longitudinal and transversal pieces, as defined in Eq. (12), decouple. For this we use the identities (13) in the equations of motion to obtain Eq. (46). Then, by using the identity (16) and Eqs. (43), (44) and (12), we can show that the longitudinal contribution decouples in Eq. (46). To decouple the vector component we just use Eq. (44) in (47). With this, we finally get the decoupled equations of motion. For the transversal part of the KR field, making the separation $X^{\mu_2\mu_3} = \tilde X^{\mu_2\mu_3}(x)\,\psi(z)$, we obtain a Schrödinger-like equation with its corresponding potential. To write the vector equation in a Schrödinger-like form we must separate the variables as $X^{\nu 4} = \tilde X^\nu(x)F(z)\psi(z)$, and we again arrive at a Schrödinger equation with its corresponding potential. At this point we must analyze the localization of the zero modes of the fields. However, we can see that the equations governing the zero modes are identical in form to those of the last section. Therefore, if $H$ decomposes as in (27), the tensor component is localized for $\lambda_0 \neq 0$ and $\beta_0/\lambda_0 > 0$, with $\gamma$ given by Eq. (31). For the Riemann tensor, by comparing Eq. (4) with Eqs. (41) and (27), we have $\lambda_0 = 0$. The conclusion is that the Riemann tensor does not trap the KR field. For the Horndeski tensor we have $\lambda_0 = -1/2$ and $\beta_0 = -1/4$, which gives a localized zero mode with $\gamma = -3$ and $\sigma = 1$. Just as before, the vector field is localized whenever the KR field is localized. At this point it is curious to note that the expected relation between null divergence and localization seems to be inverted: the Horndeski tensor has null divergence and provides a trapped field, while the Riemann tensor does not. In order to obtain a final answer to this question we have to generalize our results to $p-$form fields. We do this in the next sections, but first we analyze the possible resonances for the cases considered here.

Kalb-Ramond massive modes

In this section we study the possible resonant modes of the Kalb-Ramond field coupled to the Ricci, Einstein, Horndeski and Riemann tensors through the transmission coefficient. Resonant modes appear when the transmission coefficient $T$ is equal to 1, i.e., $\log(T) = 0$. We analyze three possible scenarios: a Randall-Sundrum delta-like brane, a brane generated by a domain wall, and a brane generated by a kink.
We first observe that for all the tensor couplings the potential of the Schrödinger equation in the conformal metric has the form

$$U(z) = \alpha A'(z)^2 + \beta A''(z),$$

where $\beta = \sigma = 7/2, 3/2, 1$ and $\alpha = \sigma^2$ for the Ricci, Einstein and Horndeski tensors, respectively, and $\alpha = 1/4 - 2\gamma$, $\beta = -1/2$ for the Riemann tensor.

In Randall-Sundrum delta like scenario

The first brane scenario that we study is the Randall-Sundrum scenario [4]. Despite the singularity, this scenario has historical importance and serves as an important paradigm in the physics of extra dimensions and field localization. The warp factor of this scenario in conformal form is given by

$$A(z) = -\ln(k|z| + 1). \qquad (60)$$

In this scenario, the potential of the Schrödinger equation is given by

$$U_T(z) = \frac{(\alpha + \beta)k^2}{(k|z| + 1)^2} - 2\beta k\,\delta(z),$$

and is illustrated in Fig. 1. The regular part of $U_T(z)$ has a maximum at $z = 0$. For the Riemann tensor, $\alpha + \beta = -1/4 - 2\gamma$; we must have $\gamma < -1/8$ in order to provide a positive maximum for the potential and a positive asymptotic behavior. For the massive case, Eq. (20) provides the solution

$$\psi(z) = \sqrt{|z| + 1/k}\left[C_1 J_\nu\big(m_T(|z| + 1/k)\big) + C_2 Y_\nu\big(m_T(|z| + 1/k)\big)\right], \qquad (62)$$

where $C_1$ and $C_2$ are constants, with $\nu = \sigma + 1/2$ for the Ricci, Einstein and Horndeski couplings and $\nu = \sqrt{-2\gamma}$ for the Riemann coupling. Since the Bessel functions decay only as $(m_T|z| + m_T/k)^{-1/2}$, no fixing of the constants $C_1$ and $C_2$ produces a convergent solution, and the massive modes are therefore not localized. To obtain more information about the massive modes we can evaluate the transmission coefficient. For this, we write the solution (62) as a combination of the Hankel functions of the first and second kind, $H_\nu^{(1)}$ and $H_\nu^{(2)}$, with constants $r$ and $t$. The boundary conditions at $z = 0$ fix these constants in terms of $W(0)$, the Wronskian at $z = 0$. Since the Wronskian is constant for the Schrödinger equation, the transmission coefficient can be written in closed form. The transmission coefficient is plotted in Fig. 2(a) as a function of $E = m_T^2$ for the KR field with the Ricci, Einstein and Horndeski couplings and does not show peaks, indicating no unstable massive modes. For the Riemann coupling the transmission coefficient is plotted in Fig. 2(b) and likewise does not show peaks, indicating no unstable massive modes. For the vector field in the Randall-Sundrum scenario, the potential of the Schrödinger equation, Eq. (26), reduces to the same potential as for the KR field, so the behavior of the modes of the vector field is the same, i.e., the zero mode is localized while the massive ones are not. The transmission coefficient for the massive modes is also the same as for the KR field, Figs. 2(a) and 2(b).

In brane scenario generated by a domain-wall

In this section we use the smooth warp factor produced by a domain wall [33,48], which recovers the Randall-Sundrum metric at large $z$ for $n \in \mathbb{N}^*$. Using this metric in Eq. (21), we obtain the Schrödinger potential for the transversal part of the KR field, which is illustrated in Fig. 3 for the Ricci coupling and several values of $n$. The massive modes of the transversal part of the KR field cannot be found analytically. To obtain information about these states we use the transfer-matrix method to evaluate the transmission coefficient. The behavior of the transmission coefficient for the Ricci, Einstein and Horndeski couplings is illustrated in Figs. 4(a), 4(b) and 4(c) for several values of the parameter $n$. As we can see, for the Ricci coupling, resonant peaks appear as the parameter $n$ increases, indicating the existence of unstable massive modes. The same occurs for the Einstein coupling. In the Horndeski coupling we observe the absence of resonant peaks.
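Since the transmission coefficients for the smooth branes are obtained numerically, a generic transfer-matrix sketch may be helpful. The Python code below slices an arbitrary potential into piecewise-constant segments and multiplies 2x2 matrices; the double-barrier potential is a toy stand-in for the volcano-like brane potentials, not the specific potentials or parameters of this paper.

```python
import numpy as np

def transmission(E, z, U):
    """Transfer-matrix transmission coefficient through a 1D potential U(z),
    discretized into slices of constant potential. Assumes the asymptotic
    regions are classically allowed (E > U at the ends)."""
    k = np.sqrt(np.asarray(E - U, dtype=complex))   # local wave numbers
    M = np.eye(2, dtype=complex)
    for i in range(len(z) - 1):
        dz = z[i + 1] - z[i]
        ki, kj = k[i], k[i + 1]
        # free propagation in slice i, then amplitude matching at interface
        P = np.array([[np.exp(1j * ki * dz), 0.0],
                      [0.0, np.exp(-1j * ki * dz)]])
        D = 0.5 * np.array([[1 + ki / kj, 1 - ki / kj],
                            [1 - ki / kj, 1 + ki / kj]])
        M = D @ P @ M
    return abs(k[-1] / k[0]) / abs(M[0, 0]) ** 2

# Toy double-barrier potential mimicking a volcano-like brane potential
z = np.linspace(-10.0, 10.0, 2000)
U = 5.0 * (np.exp(-(z - 2.0) ** 2) + np.exp(-(z + 2.0) ** 2))
for E in (1.0, 2.5, 4.0):
    print(f"E = {E}: T = {transmission(E, z, U):.3e}")
```

Scanning E on a fine grid and looking for peaks where T approaches 1 is exactly how the resonance plots discussed above are produced.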
The behavior of the transmission coefficient for the Riemann coupling is illustrated in Fig. 5(a) for several values of the parameter $n$ with $\gamma = -2$, and in Fig. 5(b) for several values of the coupling constant $\gamma$ with $n = 1$. As we can see, when we increase the values of $n$ and $|\gamma|$, resonant peaks appear, indicating the existence of unstable massive modes. For the reduced vector field the potential is given by Eq. (26). The components $H_0$ and $H_1$ vanish at regular points near the origin for all values of the parameter $n$, and the potential diverges at these same points (see Figs. 6(a) and 6(b)). This kind of divergence does not allow us to use the transfer-matrix method to compute the transmission coefficient and to evaluate the existence of unstable massive modes.

In brane scenario generated by a kink

For a four-dimensional brane generated by a kink, the warp factor is given by [21]

$$A(y) = -4\ln\cosh y - \tanh^2 y, \qquad (71)$$

where the variable $y$ is related to the conformal coordinate $z$ by $dz = e^{-A(y)}\,dy$. The behavior of the potential for the transversal part of the KR field, Eq. (21), with this warp factor is illustrated in Fig. 7 for the Riemann coupling. As in the previous case, the components $H_0$ and $H_1$ vanish at regular points near the origin, and the potential diverges at these points. This kind of divergence does not allow us to use the transfer-matrix method to compute the transmission coefficient of the vector field and to evaluate the existence of unstable massive modes.

The p−form zero mode case

In this section we further develop the previous methods in order to generalize our results to the $p-$form field on a $(D-1)$-brane. We again consider the coupling to tensors of order two and four.

The p−form coupled with a rank two geometric tensor

In this subsection we consider the coupling of the $p-$form field to rank-two geometric tensors. From the action and the corresponding equations of motion we get, similarly to the KR case, a constraint. Now we must decompose the $p-$form in $D$ dimensions into a $p-$form and a $(p-1)-$form in $(D-1)$ dimensions. For this we must expand Eq. (74) and use Eq. (8). We arrive at just two kinds of terms: one where none of the indexes is $D-1$ and another where one of the indexes is $D-1$, with $\alpha_p = D - 2(p+1)$. We should point out that for the one-form case the second of the above formulas is not valid. This is due to the fact that in this case the $H$ tensor is not antisymmetrized with any index of the field. In fact, for this case the equation takes a different form, and the vector field, in principle, should be considered separately. However, we can unify both equations by introducing a parameter $\kappa$, where $\kappa = 0$ for the gauge field and $\kappa = 1$ in the other cases. In the next section we will see that this provides a powerful simplification of the problem. Just as in the KR case, the equation (75) gives rise to two equations. For one free index equal to $D-1$ we get $\partial_{\mu_1}X^{\mu_1\ldots\mu_{p-1}\,D-1} \equiv \partial_{\mu_1}X^{\mu_1\ldots\mu_{p-1}} = 0$, where we have used our previous definitions. Therefore, we see that the null-divergence condition for our $(p-1)-$form field is naturally obtained upon dimensional reduction. For all indexes different from $D-1$ we get

$$\partial_z\left[\frac{\kappa H_0 + H_1}{\kappa + 1}\,e^{\alpha_p A}\,X^{\mu_1\ldots\mu_{p-1}}\right] + H_0\,e^{\alpha_p A}\,\partial_{\mu_p}X^{\mu_1\ldots\mu_p} = 0.$$

The above equation tells us that the $p-$form does not have null divergence. In order to obtain a consistent zero mode over the brane, we must decouple the longitudinal and transversal parts, defined in analogy with the KR case. With these definitions we can rewrite the equations (79) and (80).
Therefore, we see clearly from Eq. (86) that we have a coupling between the transversal part of the $p-$form field, the longitudinal part, and the $(p-1)-$form field. From Eq. (87) we see that the $(p-1)-$form is coupled to the longitudinal part of the $p-$form field. As in the case of the KR field, we should expect to be able to decouple the effective massive equations for the gauge fields $X_T^{\mu_1\mu_2\ldots\mu_p}$ and $X^{\mu_2\ldots\mu_p}$, since both satisfy the null-divergence condition in $(D-1)$ dimensions. Let us now prove this. First of all, note that using $\partial_{\mu_2}X^{\mu_2\ldots\mu_p} = 0$ we can derive the identities needed to decouple the system. To study the mass spectrum of the $p-$form we impose the separation of variables $X_T^{\mu_1\ldots\mu_p}(z,x) = \tilde X_T^{\mu_1\ldots\mu_p}(x)\,e^{-\alpha_p A/2}\,\psi(z)$ to obtain a Schrödinger-like equation with its corresponding potential. For the $(p-1)-$form field we must separate the variables in the form $X^{\mu_2\ldots\mu_p} = \tilde X^{\mu_2\ldots\mu_p}\,F(z)\,\psi(z)$. This procedure provides a set of equations, where the potential of the Schrödinger equation follows as before; but we should be careful, since for this case the condition for localizability is the finiteness of a different integral. In the next section we will see that we can find a master equation governing the trapping of the fields; therefore, we stop our analysis here.

The p−form coupled with a rank four geometric tensor

Finally, we study the last case of the manuscript, namely the coupling of the $p-$form field to rank-four geometric tensors. Since we have already constructed and defined all the relevant quantities, this will be very direct. From the action and the equations of motion, and again due to the antisymmetry, we get a constraint. Following the same steps as before, by using Eqs. (101) and (41), we obtain the coupled equations of motion for the $p-$form and the $(p-1)-$form in $(D-1)$ dimensions, where now $\kappa = 0$ for $p = 2$ and $\kappa = 1$ for the other cases. Since $\kappa$ depends on the degree of the form and on the rank of the tensor, from now on we use $\kappa_{p,r}$. In this notation we have $\kappa_{p,r} = 0$ for the pairs $(p,r) = (1,2)$ and $(p,r) = (2,4)$. For the constraint we get again, from the component $D-1$ of (102), $\partial_{\mu_1}X^{\mu_1\ldots\mu_{p-1}} = 0$, and for all indexes different from $D-1$ we get

$$\partial_z\left[\frac{\kappa_{p,r} H_0 + H_1}{\kappa_{p,r} + 1}\,e^{\alpha_p A}\,X^{\mu_1\ldots\mu_{p-1}}\right] + H_0\,e^{\alpha_p A}\,\partial_{\mu_p}X^{\mu_1\ldots\mu_p} = 0.$$

As said before, the use of the parameter $\kappa_{p,r}$ provides a powerful way to simplify and unify the problem of localization of $p-$form fields: the above equation has the same form as Eq. (81). Therefore, this can be seen as a master equation that governs $p-$form fields non-minimally coupled to gravity in RS scenarios. By separating the variables, we also arrive at master equations that drive the spectra of the reduced $p-$form and $(p-1)-$form fields. The analysis is identical to the one performed before. For the $p-$form we find that the field is localized if $\lambda_0 \neq 0$ and the condition (112) holds. For the $(p-1)-$form we have the condition (114). In the conclusion section we discuss the general consequences of the above results; here we stress the fact that simple formulas, depending only on a few parameters, can be obtained for all the cases considered. Now we must test whether unstable massive modes can be found over the brane with the above couplings.

6 The p−form massive modes

As we saw in the last section, the transverse potential for the Ricci, Einstein and Horndeski couplings has the form

$$U(z) = \sigma_p^2 A'(z)^2 + \sigma_p A''(z), \qquad (116)$$

with a localizable zero mode if $\sigma_p > 0$, where $\sigma_p = -D/2 + p + \beta_0/\lambda_0 + 1$. It is important to note that $\sigma_p = p - 1/2$ and $p - 1$ for the Einstein and Horndeski tensors, respectively.
This implies that the possible unstable massive modes do not depend on the dimension of the space in these cases. The Schrödinger transverse potential when the $p-$form field is coupled to the Riemann tensor has the form

$$U(z) = \left(\frac{\alpha_p^2}{4} - 2\gamma_p\right)A'(z)^2 + \frac{\alpha_p}{2}A''(z), \qquad (117)$$

where $\gamma_p < (\alpha_p^2 + 2\alpha_p)/8$ in order to have a positive asymptotic behavior. In the following sections we study the possible existence of unstable massive modes for the Ricci, Einstein, Horndeski and Riemann couplings in the Randall-Sundrum delta-like, domain-wall and kink scenarios.

In Randall-Sundrum delta like scenario

In this section we use the warp factor given by Eq. (60). For the transversal part of the $p$-form field, the potential of the Schrödinger equation, Eq. (116), is given by

$$U_T(z) = \frac{\sigma_p(\sigma_p + 1)k^2}{(k|z| + 1)^2} - 2\sigma_p k\,\delta(z).$$

By the same procedure as in the sections above, we obtain the transmission coefficient in terms of the Hankel functions $H^{(1,2)}_{\nu_p}(m_T z + m_T/k)$. We show in Fig. 10 the regular part of the Schrödinger potential in the Randall-Sundrum delta-like scenario for the Ricci, Einstein and Horndeski couplings. As we can see, the Ricci tensor gives a larger maximum of the potential, which means that only larger masses pass through the brane; we therefore expect resonant peaks, if any, only at large masses for the Ricci coupling. As we can see in Fig. 11(a), no resonant peaks appear in the Randall-Sundrum scenario for the Ricci, Einstein and Horndeski couplings. For the Riemann coupling, the potential of the Schrödinger equation for the transversal part of the $p$-form field, Eq. (117), takes the analogous form, and the transmission coefficient is again expressed in terms of the Hankel functions $H^{(1,2)}_{\nu_p}(m_T z + m_T/k)$. The maximum of the Schrödinger potential grows as $|\gamma_p|$ increases. As we can see in Fig. 11(b), resonant peaks appear in the Randall-Sundrum delta-like scenario for the Riemann coupling when $|\gamma_p|$ increases.

In brane scenario generated by a domain-wall

Now we analyze all the tensor couplings in a brane scenario generated by a domain wall. As in the Kalb-Ramond case, the warp factor is given by Eq. (69). As we can see in Fig. 12(a), no resonant peaks appear for the Einstein and Horndeski couplings. However, we have a resonant peak at $m_T^2 = 10.5$ for the Ricci coupling. For the Riemann coupling, in Fig. 12(b), we observe three resonant peaks.

In brane scenario generated by a kink

Here we analyze all the tensor couplings in a brane scenario generated by a kink. As in the Kalb-Ramond case, the warp factor is given by Eq. (71). As we can see in Fig. 13(a), no resonant peaks appear for the Einstein and Horndeski couplings. However, we have a resonant peak at $m_T^2 = 67$ for the Ricci coupling. From Fig. 13(b), we can see that resonant peaks again appear for the Riemann coupling.

Conclusion

In this paper we analyzed the localization of $p-$form fields in co-dimension-one brane scenarios, non-minimally coupled to gravity. First we considered the Kalb-Ramond field coupled to rank-two and rank-four geometric tensors. We showed that the reduced fields can be decoupled in a similar way as in the vector-field case. We analyzed the localization of the zero mode of the transversal part of the KR field for a generic geometric tensor and found the conditions to localize it. For the vector component of the KR field, the study of localization is more involved due to the form of the Schrödinger potential, and the analysis could be done only asymptotically. We found that both the reduced KR field and its vector component are localized for the Ricci, Einstein and Horndeski couplings, but not for the Riemann one. We also found that the value of the coupling constant is the same for both components, and therefore consistent.
To analyze the localizability for general $p$, we use the values of $\beta_0/\lambda_0$ and substitute them in Eqs. (112) and (114) to obtain Table 1, which summarizes the localizability conditions for the $p-$form fields. From this we can see that, for some values of the parameters, both components of the reduced $p-$form can be localized. For example, for the Ricci tensor and $D = 5$, the reduced $p-$form is localized for $p > -1$, and therefore for any value of $p$. For the $(p-1)-$form in the same case the condition is $p < 11/2$, which covers all cases, since in five dimensions the largest value of $p$ is five. However, beyond this there is a second consistency condition: the coupling constants (113) and (115) must be the same. By imposing $\gamma_p = \gamma_{(p-1)}$ we find $p = (D-1)/2$. Therefore, the condition for having both components localized is universal, independent of the kind of coupling used, and depends only on the dimension of spacetime. In five dimensions, for example, the Kalb-Ramond field generates a Kalb-Ramond field plus a vector field trapped over the membrane for any kind of coupling. Another important result of Table 1 is that, as said in the introduction, we can test some hypotheses of previous works. In Ref. [46] the coupling of the vector field with the Ricci and Einstein tensors was studied, and it was found that in the second case it is not possible to localize the field. The hypothesis was that tensors with zero divergence do not provide a localized field. However, from the above table we can see that the Einstein tensor can trap any $p-$form in any dimension with only one exception: the case $p = 1$. Therefore, the hypothesis is in some sense right, but only for the vector field. The Ricci tensor can also trap any $p-$form in any $D$ with one exception: the gauge field in $D = 2$. The Riemann tensor cannot localize any field. The Horndeski tensor can trap any field, since this coupling is possible for $p > 1$ and the localization condition is $p > 3/2$. Now we consider the massive modes. This was done for all geometric tensors and for several kinds of smooth branes, using the transmission coefficient to search for possible unstable massive modes. The emergence of resonant peaks was observed when we increased the coefficients of $A'(z)^2$ and $A''(z)$. From Figs. 2(a) and 2(b) we observed the absence of resonances for the KR field in RS delta-like branes for all tensor couplings. The same occurs for $p$-form fields in $D = 10$ for the Einstein, Horndeski and Ricci couplings, as we can see from Fig. 11(a). In the delta-like brane, we observe the appearance of resonances only for the Riemann coupling in $D = 10$ (see Fig. 11(b)). For domain-wall branes and the KR field, we conclude from Figs. 4(a), 4(b), 4(c), 5(a) and 5(b) that resonances appear only for the Ricci and Riemann couplings with increasing values of the parameter $n$. The same occurs for $p$-form fields in $D = 10$, and the conclusion is the same for kink branes. This can be explained as follows. As we can see from Eq. (116), when we increase $\sigma_p$, the $A'(z)^2$ term predominates over $A''(z)$. The same occurs for $\gamma_p < 0$ in Eq. (117). The behavior of $U(z)$ looks like a double barrier for domain-wall and kink-like branes (see Figs. 3 and 7). When we increase the values of the parameters, the width of the barrier increases and we have a higher probability of seeing resonant peaks. For the domain-wall brane, for large $n$, $A''(z) \sim |z|^{-2}$ for $|z| > 1$ and $A''(z) \approx 0$ for $|z| < 1$.
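The consistency condition $p = (D-1)/2$ quoted above is easy to tabulate; a minimal sketch:

```python
def p_for_both_components(D):
    """Consistency condition quoted above: requiring the same coupling
    constant for the p-form and the (p-1)-form fixes p = (D - 1)/2."""
    p, r = divmod(D - 1, 2)
    return p if r == 0 else None

for D in range(4, 12):
    p = p_for_both_components(D)
    print(D, "->", f"p = {p}" if p is not None else "no integer p")
```

Only odd D admit an integer solution, with D = 5 selecting p = 2, i.e., the Kalb-Ramond field.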
That is, for large $n$ and higher dimension, we have a Schrödinger potential like a double delta barrier, with the two deltas located at $z = \pm 1$. As pointed out in section 6, for the Einstein and Horndeski couplings the Schrödinger potential does not depend on the dimension of the space; consequently, resonant peaks will appear only for larger values of the form degree $p$ in the Einstein and Horndeski couplings. Since for the Ricci and Riemann couplings the potential depends on the dimension of space-time, these cases were observed to be more sensitive to the presence of unstable massive modes, as shown in Table 2.
Spherically Optimized RANSAC Aided by an IMU for Fisheye Image Matching

Fisheye cameras are widely used in visual localization due to the advantage of their wide field of view. However, the severe distortion in fisheye images leads to feature matching difficulties. This paper proposes an IMU-assisted fisheye image matching method called spherically optimized random sample consensus (So-RANSAC). We converted the putative correspondences into fisheye spherical coordinates and then used an inertial measurement unit (IMU) to provide relative rotation angles to assist fisheye image epipolar constraints and improve the accuracy of pose estimation and mismatch removal. To verify the performance of So-RANSAC, experiments were performed on fisheye images of urban drainage pipes and on public data sets. The experimental results showed that So-RANSAC can effectively improve the mismatch removal accuracy, and its performance was superior to that of commonly used fisheye image matching methods in various experimental scenarios.

Introduction

Fisheye cameras have a wide field of view (FOV). The images acquired by fisheye cameras contain richer visual information than those acquired by perspective cameras, which is conducive to extracting and tracking more visual features [1]. In infrastructure monitoring applications, such as bridge inspection, tunnel inspection, and drainage pipeline disease detection [2][3][4], the long mileage, narrow internal space, and lack of texture in the inspection scene make visual localization prone to insufficient feature extraction and error accumulation, and it is difficult to ensure the stability and reliability of localization. In this case, wide-angle fisheye images are more conducive to the matching and tracking of visual features, thereby improving the reliability of image registration and camera pose estimation and further improving the visual localization accuracy. Therefore, in a complex environment, using a fisheye camera for visual localization is more advantageous than using traditional cameras. However, the special properties of fisheye images also bring challenges to image matching: fisheye images compress visual information while acquiring a wide FOV, so objects in fisheye images undergo nonrigid deformation; at the same time, perspective projection models cannot describe the imaging process of fisheye cameras, so a specific imaging model is needed to accurately describe the two-view geometry of fisheye images. In view of these problems, the existing matching methods are not effective. On the one hand, traditional feature matching methods are sensitive to distortion, fast motion, and sparse texture, so a large number of outliers will be generated in fisheye image matching, resulting in a decline in matching accuracy. On the other hand, mismatch removal algorithms represented by random sample consensus (RANSAC) [5] usually require the establishment of specific geometric models (e.g., the essential matrix). These models are usually based on perspective projection and cannot accurately describe the two-view geometry between fisheye images, which will lead to errors in fisheye image matching. These errors may then lead to inaccurate pose estimation or localization failure. Therefore, it is of great importance to develop robust and reliable fisheye image matching methods that can lay the foundation for high-precision localization in complex scenes.
To solve the problem of poor matching accuracy and inaccurate pose estimation caused by severe fisheye image distortion, we propose an IMU-assisted fisheye image matching method called spherically optimized RANSAC (So-RANSAC). First, a fisheye spherical model was adopted to restore the epipolar geometry of fisheye images. Then, the relative rotation angle was estimated by IMU propagation. On this basis, IMU-assisted RANSAC was adopted to improve the accuracy of pose estimation, and the putative correspondence set was filtered by the estimated pose to achieve reliable fisheye image matching. The contribution of this paper is two-fold. First, we propose So-RANSAC for fisheye image matching. A fisheye spherical model is proposed to reconstruct the epipolar geometry of fisheye images by projecting the putative correspondences on the original fisheye images onto a sphere. By combining the spherical model, the IMU, and RANSAC, we propose an outlier removal method that is adaptive to fisheye distortion, thus reducing the influence of fisheye deformation on pose estimation and improving the outlier removal accuracy. Second, we performed an experiment on an infrastructure monitoring application of urban drainage pipe inspection. The fisheye images of pipes were acquired using a self-developed pipe capsule robot. The experimental results validate the superiority of our method in fisheye image matching.

Related Work

The process of fisheye image feature matching can be abstracted into two stages [6]: a feature matching stage and an outlier removal stage. In the feature matching stage, the features are extracted and described by descriptors, and then feature matching is performed based on the descriptors to obtain the putative correspondence set. However, this set contains outliers; therefore, the second stage is to filter the putative set, keep as many inliers as possible, and remove outliers. In the following, we review the related works and briefly classify and introduce them according to the above two stages. In the fisheye image feature matching stage, the main factor that affects the accuracy is severe distortion. Traditional matching methods are sensitive to distortion and cannot guarantee accuracy and robustness in fisheye image matching tasks. Therefore, the key to improving the accuracy of fisheye image matching is how to deal with distortion. According to their different principles of processing deformation, we divided the current fisheye image feature matching methods into two groups: methods based on geometric correction and methods based on distortion models. Methods based on geometric correction adopt rectification algorithms to remove distortion and then perform traditional matching. The first step is to calibrate the fisheye camera [7-9], project the fisheye image onto a new image plane through projection transformation to correct the image [10][11][12][13], and, finally, apply traditional matching methods (e.g., scale-invariant feature transform (SIFT) [14], oriented FAST and rotated BRIEF (ORB) [15]). This group of methods has a simple process flow; however, there are two shortcomings: one is that the rectification process causes a loss of information [16], which negates the advantages of the wide FOV, and the other is that reprojection leads to image artifacts [17]. The methods based on distortion models build improved feature matching descriptors that are based on fisheye distortion models to directly match fisheye images.
The affine-invariant detectors, Harris-Affine and Hessian-Affine [18], have good adaptability to local deformation, and it was verified in [19] that they can achieve good performance against fisheye distortion. Another idea has been to introduce fisheye distortion models into the scale space to improve the SIFT algorithm (e.g., Spherical SIFT [20], Omni SIFT [21], sRD-SIFT [22], and the division model [23]). This group of methods avoids the artifacts caused by reprojection; the improved feature descriptors are adaptable to distortion and effectively improve the accuracy of fisheye image feature matching. Feature matching results inevitably include outliers; thus, outlier removal is necessary. There are two difficulties in removing outliers from fisheye images. First, fisheye images do not conform to perspective geometry, so epipolar constraints in pose estimation (e.g., the essential matrix) will lead to errors; therefore, methods based on geometric models (e.g., RANSAC) become inaccurate. Second, severe distortion leads to a large number of outliers. To address the challenge of outlier removal, researchers have proposed nonparametric outlier removal methods independent of geometric models [24] that are based on the assumption of local geometric consistency. The assumption is that the geometric properties of image features remain consistent before and after deformation in a small local region. The locality preserving matching (LPM) [25] algorithm looks for adjacent points of putative correspondences in a local region and takes the distance between adjacent points as the metric standard. The locality affine-invariant matching (LAM) [26] algorithm is based on the principle that the area ratio before and after an affine transformation remains unchanged, and it uses the adjacent points in the local region of putative correspondences to construct an affine-invariant triangle, which can improve the matching accuracy in the case of deformation. The four-feature-point structure (4FP-Structure) [27] algorithm combines point and line feature constraints to construct a strict geometric relationship composed of four pairs of putative correspondences, and the matching accuracy is further improved. This kind of nonparametric method has a higher adaptability to distortion. However, if the constraints are loose, the accuracy will be low, and if the constraints are strict, the efficiency will be reduced. Moreover, these methods do not explicitly deal with fisheye distortion, so there are still challenges in fisheye image matching. Another idea is to use additional information to assist image matching. Such information is obtained by means of pre-calibration, multi-sensor fusion, or fixed connections between the camera and the vehicle platform, and it provides a reference for matching. The recursive search space method (SIFT-RFS) [28] uses global positioning system (GPS) data to provide the camera pose and improves the fisheye epipolar geometry model through spherical projection so that the essential matrix can be used for pose estimation. The inliers are selected by predicting the positions of the points after they are projected onto the adjacent images. However, the depth of the point in the pose estimation process is estimated only empirically. The one-point RANSAC method was proposed in [29]; this method affixes a fisheye camera on a vehicle platform and establishes nonholonomic constraints through the instantaneous center of rotation (ICR) of the wheel.
However, this method requires the camera to be installed along the rear axle, and the x-axis must be perpendicular to the rear axle; thus, the application scenarios are limited. A four-point RANSAC method was proposed in [30], which uses the attitude provided by an IMU affixed to a camera to assist pose estimation and does not require camera-IMU calibration. However, this method is used only for perspective cameras. The adaptive fisheye matching algorithm [31] also uses fisheye epipolar constraints, but the constraint is constantly adjusted based on the Kalman filter prediction state to improve the pose estimation accuracy. This kind of multi-sensor fusion method can effectively improve the accuracy of pose estimation and outlier removal by using additional information, and the results of RANSAC-type methods can be integrated into a visual odometry pipeline as the initial value. However, such methods are usually limited to specific hardware requirements and application scenarios or are not suitable for fisheye images. In addition to the above works, learning-based methods provide a new way to address fisheye image matching problems. In the feature matching stage, deep learning methods can significantly improve the accuracy of matching [32]. However, existing studies mainly focus on rectilinear perspective images. For fisheye images with significant distortions, deep learning methods still face challenges. Learning from images for transformation model estimation is limited when applied to images under complex and serious deformation [6]. A deep architecture was developed in [32] to learn to find good correspondences for multiple types of images, but the recall rate was low in severely distorted scenes. An end-to-end framework was introduced in [33] with the aim to enhance both precision and recall in the fisheye image matching process, but it transformed the fisheye image to a rectilinear perspective image to remove the radial distortion, which leads to the loss of image information. To develop learning-based methods for fisheye image matching, the key lies in how to model the distortion in a deep neural network, but there is no representative method at present. However, the success of learning-based methods in the field of conventional images indicates the potential of deep learning for fisheye image matching.

Spherically Optimized RANSAC Aided by an IMU

The motivation for this paper was to explore an inertial-assisted matching method suitable for fisheye images in view of the limitations of the previous multi-sensor-assisted matching methods. The So-RANSAC algorithm proposed in this paper was inspired by previous works [22,28-30]. This method can be divided into three stages: (1) fisheye model construction: the internal parameters and the distortion parameters of the fisheye camera were obtained by calibration, and the fisheye camera imaging model was recovered from the fisheye image; (2) image feature matching: feature points were extracted by an affine-invariant detector; (3) outlier removal: the So-RANSAC method was used to construct the fisheye spherical coordinates, and then outliers were removed. Compared with the previous methods, So-RANSAC not only improves the outlier removal accuracy by using the IMU, but also has good adaptability to fisheye distortion. The main innovation of this paper is the construction of a fisheye spherical model that restores the fisheye epipolar constraints and improves the outlier removal accuracy.
The overall process of our method is shown in Figure 1.

Camera Model and Feature Matching

Fisheye camera models do not conform to the perspective geometry; that is, they do not conform to the collinear condition, so the fundamental matrix or essential matrix under the perspective geometry cannot be directly applied to fisheye images. To handle this problem, the fisheye camera needs to be calibrated, and the two-view geometry needs to be restored based on a specific model. Therefore, in this stage, the fisheye camera calibration method proposed in [7] was adopted. The principle can be expressed as:

λ [u, v, f(ρ)]^T = P X, with f(ρ) = a_0 + a_1 ρ + ··· + a_n ρ^n,

where (u, v) represents the image pixel coordinates, X represents the world coordinates, ρ is the distance from the pixel to the principal point of the image, a_0, a_1, ···, a_n is the set of coefficients to be calculated (usually, n = 5), and P is the projection matrix. After calibration, a collinear relationship is established between the pixel coordinates of the points in the fisheye image and the world coordinates of the spatial points. In the feature matching stage, the Hessian-Affine detector [18] is used to extract feature points.
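As a sketch of this calibration model, the Python fragment below maps a pixel to a 3D ray through the polynomial f; the coefficient values are hypothetical placeholders for illustration, not values from the paper:

import numpy as np

def pixel_to_ray(u, v, a, principal_point=(0.0, 0.0)):
    """Map a fisheye pixel (u, v) to a ray direction (u', v', f(rho)).

    `a` holds the polynomial coefficients a_0..a_n of the calibration
    model f(rho) = a_0 + a_1*rho + ... + a_n*rho^n, where rho is the
    distance from the pixel to the principal point.
    """
    uc, vc = u - principal_point[0], v - principal_point[1]
    rho = np.hypot(uc, vc)
    f = np.polyval(a[::-1], rho)  # np.polyval expects highest power first
    return np.array([uc, vc, f])

# Hypothetical coefficients (n = 5), for illustration only.
a = [-250.0, 0.0, 1.2e-3, 0.0, 1.0e-8, 0.0]
ray = pixel_to_ray(320.0, 240.0, a)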
The principle of the Hessian-Affine algorithm is that the deformation between the local regions of fisheye images can be approximated by an affine transformation. Based on this principle, affine regularization is carried out on the local regions of extracted features to reduce the influence of deformation on feature description. Finally, the extraction and matching of feature points are completed, and the putative correspondence set is obtained.

Fisheye Image Matching Aided by an IMU

In the outlier removal stage, we propose So-RANSAC to remove outliers while correctly estimating poses. Traditional RANSAC first randomly extracts a minimum sample from the putative set to estimate a pose model and then divides inliers and outliers based on this model. The optimal pose is constructed through an iterative process of sample selection and estimation. However, traditional RANSAC is sensitive to a high proportion of outliers, and the pose estimation step is based on perspective geometry. In this paper, RANSAC is improved by projecting the fisheye image correspondences onto a sphere and restoring the epipolar geometry for pose estimation, and the matching process is assisted by an IMU. The process of So-RANSAC is shown in Figure 2. The putative correspondence set is converted into spherical coordinates in the fisheye spherical projection part; the relative rotation angle is obtained by IMU propagation in the IMU-assistance part as a constraint for pose estimation. By combining the above two parts, four-point RANSAC [30] is used to remove outliers, and, finally, the matching result is obtained.

Fisheye Spherical Projection

The fisheye images undergo severe distortions such that the perspective geometry of rectilinear perspective images cannot be directly applied to fisheye images [34]. Specific projections (e.g., panoramic or log-polar) can reduce the distortions, while the spherical projection is a non-deformed model for fisheye images. The projection of the visual information on the sphere correctly handles the information without introducing distortion [21]. Therefore, we adopted the spherical projection to restore the epipolar geometric relationship of the fisheye image and apply the RANSAC method to the fisheye image. The projection of image coordinates onto a sphere was conducted on the basis of calibration, as shown in Figure 3.
Fisheye cameras can be modeled by a unit sphere and a perspective camera [9], which means the fisheye imaging process can be performed in two steps: First, we project the world points onto the unit sphere. This step is similar to perspective cameras and conforms to the collinear condition. Then, points on the unit sphere are projected onto the fisheye image plane, and fisheye distortion is mainly produced in this step. According to this imaging process of a fisheye camera, we can also restore the fisheye epipolar geometry in two steps: First, we reproject the image points back to the unit sphere through calibration to avoid the influence of distortion. Then, we can restore the collinear relationship between the spherical points and the world points. In Figure 3, (u_1, v_1) is the image pixel coordinates of world point P. (u_1, v_1), P_1, and P do not satisfy the collinear condition due to the influence of fisheye distortion. Under the calibration model [7], f(u, v) was used to represent the image distortion. We converted an image pixel point (u, v) to the point (u, v, f(u, v)) on the sensor plane, thus taking the fisheye distortion into consideration. The point (u, v, f(u, v)) is related to a ray emanating from the viewpoint O to the world point, and this relation is expressed by Formula (1). Therefore, the points on the sensor plane and the world points conform to the perspective projection. Finally, we normalized the points on the sensor plane to the unit sphere. Through the above process, we modeled the fisheye distortion and restored the collinear condition. The details are as follows: According to Formula (1), the coordinates (u, v, f(u, v)) restore the collinear geometry by calibration [7]. They are normalized to the unit sphere and converted into spherical coordinates (Xs, Ys, Zs) for the subsequent implementation of the pose estimation algorithm. The normalization and projection are accomplished by the following formula:

[Xs, Ys, Zs]^T = [u, v, f(u, v)]^T / sqrt(u^2 + v^2 + f(u, v)^2),

where (u_1, v_1) and (u_2, v_2) are the image pixel coordinates of world point P projected onto fisheye images A and B. The fisheye camera is calibrated to obtain (u_1, v_1, f(u_1, v_1)) and (u_2, v_2, f(u_2, v_2)), and these coordinates are normalized onto the unit spheres centered at O_1 and O_2 to obtain P_1(Xs_1, Ys_1, Zs_1) and P_2(Xs_2, Ys_2, Zs_2), so the collinear relationship can be reconstructed, and the relative motion between the two fisheye images can be described by the essential matrix. By combining this spherical model and the IMU-aided RANSAC, we extend the RANSAC method for rectilinear perspective images and propose So-RANSAC with fisheye distortion adaptability.
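A minimal sketch of this normalization step, assuming the sensor-plane point (u, v, f(u, v)) is already available from a calibrated back-projection such as pixel_to_ray above:

import numpy as np

def to_unit_sphere(u, v, f_uv):
    """Normalize the sensor-plane point (u, v, f(u, v)) to spherical
    coordinates (Xs, Ys, Zs) on the unit sphere."""
    p = np.array([u, v, f_uv], dtype=float)
    return p / np.linalg.norm(p)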
Relative Rotation Angle

The IMU can obtain accurate short-term poses, so when the accuracy of visual pose estimation is not good, the pose obtained by the IMU is used as a constraint in RANSAC to improve the matching accuracy. In this paper, the theoretical basis of IMU-aided matching was the invariance of the rotation angle of a rigid body in different coordinate systems [35]. It was further proved in [30] that when the camera and IMU are fixed together, the rotation angle obtained by the IMU can be directly regarded as the rotation angle of the camera without the need for camera-IMU calibration. Based on this theory, we first used the IMU affixed to the camera to obtain the measurement data; then, Madgwick's method [36] was used to obtain the attitude; and, finally, we converted the attitude into the equivalent rotation vector and rotation angle, in which the rotation angle was used to assist with outlier removal. The complete propagation of the IMU state is complex; thus, for the sake of clarity, we used a simplified navigation equation to describe the IMU orientation propagation:

Ċ_b^n = C_b^n (ω_ib^b ×),   (5)

where C_b^n is the attitude matrix of the IMU, which represents the conversion from the body frame (b) to the navigation frame (n), ω_ib^b is the real angular velocity measured by the IMU, and (×) represents the conversion of a vector into its skew-symmetric matrix. The change in attitude Ċ_b^n over time is described by Equation (5). After the attitude at each moment of the IMU is obtained by IMU propagation, the relative transformation between the attitudes C_b^n of adjacent moments is calculated to obtain the rotation matrix R. R describes the relative motion between the attitudes of adjacent moments, and then R is expressed as a function of the equivalent rotation vector φ by Rodrigues' rotation formula (Formula (6)) [37], where φ represents the three-dimensional equivalent rotation vector, the direction of the equivalent rotation vector represents the direction of the axis of rotation, and the magnitude of the norm ‖φ‖ represents the magnitude of the rotation angle.

RANSAC Aided by the IMU

RANSAC estimates the essential matrix, E, through a five-point method. For a correspondence s and s', the two-view geometry is described by the essential matrix:

s'^T E s = 0, with E = [t]_× R,   (8)

where t represents the displacement, [t]_× represents the skew-symmetric matrix of t, and R represents the rotation matrix. The essential matrix E has 5 degrees of freedom, and at least 5 correspondences are needed to estimate it. The rotation matrix is expressed by Rodrigues' rotation formula. The direction of the equivalent rotation vector φ in Formula (6) is expressed by the unit rotation axis vector µ in three-dimensional space, the magnitude of the norm ‖φ‖ is expressed by the rotation angle θ, and the rotation matrix is described by R(θ, µ). At the same time, according to Lie algebra [38], the skew-symmetric matrix is rewritten by Formula (9), Formula (6) is transformed into Formula (10), and Formula (8) is converted into Formula (11). To estimate E(θ, µ, t), the parameters to be calculated are the rotation angle θ, the unit rotation axis vector µ = (µ_x, µ_y, µ_z)^T, and the displacement t = (t_x, t_y, t_z)^T, for a total of 7 unknowns. Since the scale is unknown, µ and t are normalized, so there are 5 degrees of freedom.
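A sketch of extracting the relative rotation angle θ from two successive attitude matrices, using the standard axis-angle relation trace(R) = 1 + 2cos(θ); the attitude inputs would come from Madgwick's filter:

import numpy as np

def relative_rotation_angle(C_prev, C_curr):
    """Relative rotation angle between two 3x3 attitude matrices.

    R = C_prev^T @ C_curr is the relative rotation; the equivalent
    rotation angle follows from trace(R) = 1 + 2*cos(theta).
    """
    R = C_prev.T @ C_curr
    cos_theta = np.clip((np.trace(R) - 1.0) / 2.0, -1.0, 1.0)
    return np.arccos(cos_theta)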
For the RANSAC task, the reduction in the degrees of freedom means a reduction in the number of points required in the minimum sample set, and the influence of the outliers is reduced, so fewer iterations are required, and the theoretical accuracy of the method will also be improved [30]. We adopted four-point RANSAC to estimate E(θ, µ, t), and we provided the rotation angle θ through the IMU to reduce the degrees of freedom of pose estimation. After introducing θ, E had only 4 degrees of freedom, and 4 correspondences were required to estimate the pose, which simplified the calculation. Subsequently, the obtained pose model was used to calculate the reprojection error of each point in the putative set, and the inliers and outliers were classified. After iterating, the optimal pose and inlier set were finally obtained. The above content is the principle of the So-RANSAC algorithm on the fisheye spherical point set S; the algorithm flow is shown in Algorithm 1 (a schematic code sketch of this loop is given after this subsection).

Algorithm 1: So-RANSAC aided by an IMU
Input: putative set M, relative rotation angle θ, fisheye camera calibration parameters
Initialization:
1. The putative set M is projected onto the sphere, and the spherical set S is obtained
2. for i = 1:N do
3.   Select a minimum sample set (4 correspondences) from S
4.   The essential matrix E is estimated, with the attitude of the model given by θ
5.   The reprojection error of S is calculated according to model E, and the number of inliers N_inliers is counted
6.   The model with the largest N_inliers is regarded as the best model
7. end for
8. Save the optimal model E_optimal and the inliers
Output: Inliers

In summary, the method proposed in this paper adopted four-point RANSAC to integrate the IMU and visual information, and we used the IMU to improve the effect of outlier removal. This did not require camera-IMU calibration, which made it convenient to apply. More importantly, the reliability of the epipolar constraint was improved by constructing the fisheye spherical coordinate model, and the accuracy of RANSAC was improved.

Experimental Data

To verify the method proposed in this paper (So-RANSAC), image matching experiments were carried out in a real application and on public data sets. The real application was the detection of urban drainage pipes. The fisheye images of the drainage pipe were acquired by a self-developed pipe capsule robot. The robot had a small, portable structure and traveled in the pipeline by drifting; it also had a high work efficiency and autonomously localized and detected pipeline issues. An IMU and a fisheye camera were integrated into the robot. The IMU was an ICM-20689 high-performance six-axis microelectromechanical system (MEMS), and the fisheye lens was a high-definition 220° wide-angle lens. The main hardware specifications of the capsule robot are shown in Table 1. The appearance of the capsule robot is shown in Figure 4. The original data taken by the robot were 1080p video at 60 frames per second. The video was sampled at 30 frames per second to obtain discrete fisheye images, as shown in Figure 5. Due to the complex internal environment of drainage pipes, the factors that affected the fisheye image matching problem included motion blur and lens occlusion caused by the water environment, fisheye distortion, and the lack of texture caused by the pipe surface material. Therefore, the fisheye image matching task in this scene was challenging.
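Returning to Algorithm 1, the following Python sketch shows its loop structure; it is a schematic, not the authors' implementation, and both solve_E (the four-point solver constrained by θ) and residuals (the reprojection-error metric) are assumed callables supplied by the caller:

import numpy as np

def so_ransac(S1, S2, theta, solve_E, residuals, n_iters=1000, thresh=3.0):
    """Schematic So-RANSAC loop over spherical correspondences S1, S2 (Nx3).

    theta is the IMU relative rotation angle; solve_E(s1, s2, theta) is an
    assumed four-point essential-matrix solver constrained by theta, and
    residuals(E, S1, S2) is an assumed per-correspondence error metric.
    """
    rng = np.random.default_rng()
    best_E, best_mask = None, np.zeros(len(S1), dtype=bool)
    for _ in range(n_iters):
        idx = rng.choice(len(S1), size=4, replace=False)   # minimum sample
        E = solve_E(S1[idx], S2[idx], theta)
        if E is None:
            continue
        mask = residuals(E, S1, S2) < thresh               # classify inliers
        if mask.sum() > best_mask.sum():                   # keep best model
            best_E, best_mask = E, mask
    return best_E, best_mask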
In addition, we used fisheye images from the Technical University of Munich's (TUM) monocular visual odometry data set [39]. The scene in the data set is an indoor environment. Compared with the pipeline data, the TUM images have richer textures, no violent motion artifacts, and better image quality, as shown in Figure 6. This data set was selected for comparison to verify the robustness of the proposed method in different scenarios. The TUM data set provided the ground truth of the camera pose, that is, the absolute pose information at each moment, and the poses are represented by quaternions. In this experiment, the quaternion data were converted into relative rotation angles, which were regarded as auxiliary information.

Image Matching Results

The ground truth of the fisheye image matching experiment was manually selected to ensure correctness. So-RANSAC was compared with RANSAC [5] and the current state-of-the-art methods: LPM [25], vector field consensus (VFC) [24], and four-point RANSAC [30]. The matching quality was evaluated through the precision, recall, and F-score, which were calculated by Formulas (12)–(14). In addition, we counted the number of correct matches (NCMs) and the success rate (SR) as evaluation metrics.
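As a sketch, the metrics of Formulas (12)–(14) reduce to counting over boolean inlier masks; the implementation below is illustrative:

import numpy as np

def matching_metrics(pred_inliers, true_inliers):
    """Precision, recall, and F-score of an outlier-removal result.

    Both arguments are boolean arrays over the putative set.
    """
    tp = np.sum(pred_inliers & true_inliers)        # true positives
    precision = tp / max(np.sum(pred_inliers), 1)
    recall = tp / max(np.sum(true_inliers), 1)
    f_score = 2 * precision * recall / max(precision + recall, 1e-12)
    return precision, recall, f_score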
For the parameter settings of each matching method, the confidence of the RANSAC-type methods was set to 0.99, and the threshold of the reprojection error of the inliers during iteration was set to 3 pixels. The parameters of the LPM and VFC algorithms were set according to the default parameters of the source code provided by the authors. The IMU measurements in the pipeline were processed in accordance with the method in Section 3.2.2, using Madgwick's method [36] to obtain the attitude at each moment, which was then converted to the relative rotation angle. The TUM data set provided the ground truth of the attitude, which was directly converted into the relative rotation angle. Detailed information on the experiment is shown in Table 2. The average number of correct matches (NCMs) and the success rate (SR) of matches were calculated on the pipeline data set, as shown in Table 3. To calculate the matching success rate, we first estimated the correct pose using the correspondences of the ground truth; then, we calculated the residuals of the putative correspondences under the estimated ground-truth pose and regarded the correspondences with residuals of less than 3 pixels as correct matches. If the number of correct matches of an image pair was less than four, we considered that the match had failed. Due to the complex characteristics of pipeline images (e.g., motion blur, fisheye distortion, and the lack of texture), the RANSAC-type methods may fail when the number of correct correspondences is less than four pairs or there are too many outliers. Therefore, the matching success rate of the three RANSAC-type methods was only 96%. The introduction of IMU assistance did not improve this situation, but it did increase the number of correct matches. The LPM and VFC methods are based on local geometric constraints, so they are more adaptable to these situations; their matching success rates were both 100%, and they retained more correct matches than the RANSAC-type methods. However, it should be noted that although the matching success rate can reflect the adaptability of the matching method under harsh conditions, if the restriction is too loose, a high matching success rate may also leave outliers undiscovered. Therefore, in order to measure the performance of the matching methods, it is also necessary to consider other evaluation metrics (e.g., precision and recall). Figure 7 shows the matching results of sample image pairs selected from the pipeline and TUM data sets. In the pipeline image matching example in Figure 7, the LPM algorithm considered only the geometric consistency in the local area (usually within a few pixels), and the precision was only 58.37% in the case of repeated texture in the pipeline image. However, due to its loose geometric restrictions, the recall reached 95.42%. The VFC algorithm assumed that the correct matching points should conform to the same vector field model, but due to the serious distortion in fisheye images, this assumption was inaccurate, so the precision was only 66.38%.
However, it is worth mentioning that although the precision of the above two methods is not high, they have a high recall and can retain most of the correct correspondences. Therefore, in tasks requiring real-time performance, they can be used as coarse matching methods to provide initial values for subsequent optimization. RANSAC, four-point RANSAC, and So-RANSAC have the same basic principles, but they differ in two points: whether IMU assistance is used and whether a fisheye spherical model is used. Neither RANSAC nor four-point RANSAC adopted the fisheye model; the precision of RANSAC was 82.69%, and the precision of four-point RANSAC improved to 89.91% after the addition of IMU assistance. The So-RANSAC method adopted fisheye spherical coordinates in the pose estimation stage, which was more adaptive to distortion, so the precision improved to 90.23%, thus indicating the feasibility of this spherical model. In the example of the TUM data set image in Figure 7, the performance of each matching method is relatively improved due to the better image quality and richer texture. By comparison, the precision values of the LPM and VFC algorithms were still relatively lower than those of the RANSAC-type methods, but the recall values were high. The So-RANSAC method still outperformed RANSAC and four-point RANSAC. Figures 8 and 9 quantitatively compare the precision, recall, and F-score of each method on five sets of experimental data (detailed information on the five sets is shown in Table 2). For each set, we calculated the mean value of the matching results of all the images (precision, recall, and F-score) and displayed them via a line chart and a histogram. Finally, all the data were integrated to calculate the overall precision, recall, and F-score, which are shown in Table 4. By comparison, it can be seen from Table 4 that the precision of the LPM algorithm was 66.49%, but the recall was 96.82%, which was the highest among the five methods. The VFC algorithm was sensitive to distortion, and its precision was the lowest (65.57%).
So-RANSAC achieved the best performance, with a precision of 92.57% and a recall of 88.41%; its F-score was 90.44%, and both its precision and F-score were the highest. The performance of four-point RANSAC ranked second; with IMU assistance, its precision and recall were better than those of RANSAC, and its F-score was 85.53%. The precision of RANSAC was 83.26%, which was better than that of the LPM and VFC algorithms, but its recall was the lowest (75.73%). In addition, the performance of the five methods on the TUM data set was superior to their performance on the pipeline data, because the image quality was better and the image textures were richer in the TUM data set, whereas the motion blur and repeated textures of the pipeline images reduced the matching accuracy. In general, the So-RANSAC method had good performance in real and experimental scenes and was robust to complex environments.

Reprojection Error

RANSAC-type methods carry out pose estimation while removing outliers, and the result can be used as the initial value in visual odometry or simultaneous localization and mapping (SLAM). To verify the pose estimation accuracy, the correspondences of the ground truth in all the image pairs were reprojected according to the estimated pose, and the reprojection error v_i was calculated. The reprojection errors of RANSAC, four-point RANSAC, and So-RANSAC were compared and evaluated by calculating the mean absolute error (MAE) and root mean square error (RMSE) over the ground-truth point set. Detailed information on the experiment is shown in Table 5. The experiment shows that the reprojection error of four-point RANSAC was smaller than that of RANSAC, indicating that IMU assistance can improve the pose estimation accuracy, while So-RANSAC achieved the optimal result, indicating that the fisheye spherical model was helpful for enhancing the adaptability to distortion and improving the pose estimation accuracy.
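These two error statistics reduce to a few lines; a minimal sketch over the ground-truth residuals v_i:

import numpy as np

def mae_rmse(errors):
    """Mean absolute error and root mean square error of the
    reprojection residuals v_i over the ground-truth point set."""
    errors = np.asarray(errors, dtype=float)
    return np.mean(np.abs(errors)), np.sqrt(np.mean(errors**2))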
Computation Time

In terms of the computation time, we recorded the runtimes of the various algorithms on 100 pipeline images and compared the average runtimes. The computational efficiency results are shown in Table 6. The time complexity of the LPM algorithm was O(N log N), which was the fastest, and the average matching time was only 0.013 s, but its accuracy was the lowest. The time complexity of the VFC algorithm was O(N^3), which ranked second in terms of efficiency. The iterative solution process of RANSAC resulted in a relatively slow computational efficiency, with a runtime more than ten times that of the LPM algorithm, but the average runtime was still within 1 s, and the matching accuracy was higher. The four-point RANSAC algorithm with the addition of IMU assistance increased the average runtime to 0.651 s, while the So-RANSAC algorithm with the addition of the spherical model increased the average runtime to 0.719 s. It is worth mentioning that the RANSAC-type methods obtain inliers and pose at the same time, which can provide an accurate initial value for the subsequent bundle adjustment of SLAM and make it converge faster. Therefore, RANSAC-type methods are more suitable for tasks requiring localization than other nonparametric methods.

Discussion

The IMU will inevitably produce errors in the measurement process. Since this paper focused on the outlier removal method using visual information, and the relative rotation angle provided by the IMU was only used as a relative restriction in the epipolar geometry, we did not analyze the IMU errors in detail. In this section, the influence of IMU error on matching accuracy is discussed theoretically and experimentally. In general, IMU translation estimation is much more sensitive to noise than rotation estimation, and the relative rotation measurements are more stable. In addition, we only used IMU information as auxiliary information to improve the accuracy of outlier removal. In the estimation of E(θ, t), although the error of θ will affect the accuracy in theory, in practice this effect will not be obvious when the error is small. Since the errors of the IMU are time dependent, long-term pose estimation will cause trajectory drift and error divergence, but the effect of the error on pose estimation over a short time is tolerable. The sampling interval between two images in this paper was only 1/30 s, and in this short time interval, the influence of the IMU error on RANSAC would not be significant. We carried out comparative experiments to show how the noise from the IMU influences the accuracy of the So-RANSAC method, and the experimental results are shown in Table 7. Due to the limitations of the pipeline environment and equipment, it was difficult to obtain the ground truth of the pipeline data at present. Therefore, we compared the matching precisions of So-RANSAC using the pose obtained by Madgwick's method (MARG) and a Kalman-based algorithm (Kalman), and we discuss the impact of IMU error on the matching results. We used the open-source code in [36] to implement the above methods. It was proved in [36] that the MARG method has higher accuracy than the Kalman-based algorithm, so the results of Kalman can be regarded as data with greater noise. In addition, we set up simulation data for comparison. We assumed that the relative rotation angle θ (obtained by MARG) was measured as (1 + e)θ by the IMU, where e reflects the noise, and we set e to 0.02.
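This simulated perturbation reduces to a one-line scaling of the rotation angle before it enters So-RANSAC; a sketch, with the angle value chosen only for illustration:

e = 0.02                # relative noise level used in the comparison
theta_marg = 0.10       # example rotation angle (rad) from Madgwick's method
theta_noisy = (1.0 + e) * theta_marg
# theta_noisy then replaces theta in the So-RANSAC call sketched earlier.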
In practice, the error of θ provided by the IMU was much smaller than e. Therefore, through this simulation experiment, the influence of the IMU noise on So-RANSAC can be better tested. As shown in Table 7, the precision of MARG was approximately 0.1% higher than that of Kalman, and the average number of correct matches was higher by 0.14 pairs, while the matching success rate remained unchanged, indicating that small IMU errors did affect the matching precision, but the impact was small. By comparing the simulation data (1 + e)θ with the MARG method, it can be seen that when the error increased, the average number of correct matches decreased by approximately 1 pair, and the matching precision decreased by approximately 7%, but the matching success rate still remained unchanged. This is because the failure of RANSAC usually occurred when the number of correct correspondences was less than four or there were too many outliers (an outlier rate higher than 90%). Therefore, the above experiments indicate that small IMU errors over a short period of time will not cause serious impacts on the matching result. How to integrate IMU errors into the matching model is a meaningful research direction. In visual-inertial SLAM, the IMU error is eliminated through a tightly coupled optimization process. In photogrammetry, the IMU error can be eliminated by bundle adjustment. However, since the focus of this paper is the matching of two images, overall optimization is not the focus of this paper. Meanwhile, due to the limitations of current equipment, data, and theory, at this stage we have not integrated the IMU error into the matching model, which is a limitation of our method. However, this limitation does not affect the innovation of our method; the above experiments and discussions are sufficient to indicate the superiority of So-RANSAC in fisheye image matching.

Conclusions

To improve the accuracy of fisheye image matching, we proposed an outlier removal method called So-RANSAC, which integrates IMU and visual information to deal with the challenge of fisheye distortion. We used the relative rotation angle of the IMU to assist in pose estimation via RANSAC. Then, we introduced fisheye spherical coordinates to reconstruct the fisheye epipolar geometry; this model can enhance the adaptability to distortion and improve the matching quality. We conducted experiments on drainage pipe fisheye images, and the experimental results show that So-RANSAC can accurately remove outliers and achieve robust fisheye image matching results for infrastructure monitoring. The limitation of So-RANSAC is that the geometric model of IMU-assisted matching can be further improved. At present, only the relative rotation angle was used to reduce the degrees of freedom in pose estimation by one, and the pose information provided by the IMU was not fully utilized. Therefore, there is still much room for improvement in IMU-aided methods for fisheye image matching.
Analysis of the performance of a coherent SAC-OCDMA–OFDM–DWDM system using a flat optical frequency comb generator for multiservice networks

This research proposes the architecture of a SAC-OCDMA–OFDM–DWDM system using a flat optical frequency comb generator and coherent communication. Its aim is to create a new OCDMA communication system that is specifically intended for multiservice networks. The suggested architectural design is numerically simulated and analyzed using OptiSystem and MATLAB. To quantify the performance of the proposed system, the SNR, EVM, and BER were used as metrics. Furthermore, a comparison between the two codes, EDW and RD, is presented. It is revealed that the RD code system has better performance in terms of robustness and number of users than the EDW code. Besides, by analyzing the BER for several symbol rates and comparing it to the pre-FEC threshold, the proposed system demonstrates its effectiveness against linear and nonlinear effects. The results also show that optical noise affects signal quality; the mathematical analysis used allowed us to determine the impact of noise on the number of potential active users.

Introduction

In recent years, and due to the high demand for bandwidth, especially for 3D/4K/HD video streaming, e-learning, and cloud computing applications, optical code-division multiple-access (OCDMA) has emerged as one of the promising solutions as a multiple access technique for high-capacity passive optical networks (PONs) (Mrabet et al. 2020; …).

Proposed system

Figure 1 depicts the block diagram of the proposed system, which is composed of a flat optical frequency comb (OFC) generator used to generate the OCDMA codes. The frequency spacing between adjacent comb lines is the same as in a DWDM de-multiplexer. The latter is used to separate each frequency or wavelength so that the power combiners may easily create the OCDMA codes afterwards. The user signal is then modulated with each coded signal. After the transmission, a second flat OFC generator is utilized as a local oscillator, whose output is precisely the frequency where no interference is present, and it is mixed with the received signal before being detected using a balanced detector. The user data signal is the baseband of each received signal; a low pass filter and an OFDM demodulator are in charge of recovering it. Figure 2 shows the SAC-OCDMA encoder for this system, which consists of a flat OFC generator, a DWDM de-multiplexer, and optical power combiners. The flat OFC signal is generated by a continuous wave (CW) laser modulated with a radio frequency (RF) signal using a dual-parallel Mach-Zehnder modulator (DP-MZM) (Shang et al. 2015; Tran et al. 2019), where the CW laser's electric field is expressed as:

E_CW(t) = E_0 cos(2π(C/λ_0)t + φ_0),   (1)

where E_0 is the amplitude of the electric field, C is the speed of light, λ_0 is the CW laser wavelength, which is 1552.52 nm in this system (equivalent to f_0 = 193.1 THz), and φ_0 is the laser phase. Moreover, two RF signals with the same frequency f_1 and different voltages are applied to the arms of the DP-MZM. The role of the DC bias voltages V_b1, V_b2, and V_b3 is to control the transmission points of the DP-MZM. The first arm is biased to the maximum transmission point by setting V_b1 = 0 V to suppress the odd sidebands.
In the second arm, V_b2 has the same voltage as the switching bias voltage of the DP-MZM; thus, the transmission point will be biased to the minimum transmission point in order to suppress the carrier and the even sidebands. Thereafter, when V_b3 is set to 0 V, the bias angle θ_3 will be zero and there will be no phase shifting at the output of the DP-MZM, so as to get seven frequencies or wavelengths with a spacing of f_1. The electric field at the DP-MZM output can be written in terms of Bessel functions, where J_i denotes the Bessel function of ith order of the first kind, m_1 and m_2 are the modulation indexes of the first and second arms of the DP-MZM, and θ_3 is the bias angle. Figure 3 represents the Bessel functions of the first kind. When the amplitude at the intersection between J_0 and J_2 is the same as that at the intersection between J_1 and J_3, the DP-MZM can generate multiple optical sidebands of equal amplitude, creating a frequency comb; this occurs for m_1 = 1.84 and m_2 = 3.05, which means that the six sidebands and the main carrier all have the same amplitude. The modulation indexes are expressed as follows:

m_1 = πV_RF1/V_π1,  m_2 = πV_RF2/V_π2,   (2)

where V_RF1 and V_RF2 are the amplitudes of the RF signals, and V_π1 and V_π2 are the half-wave voltages of each arm of the DP-MZM, respectively. Figure 4 depicts the CW laser and the flat OFC spectra (Shang et al. 2015). In this system, f_1 is set to 25 GHz, which is equivalent to Δλ_1 = 0.2 nm. In order to split each wavelength to construct the SAC-OCDMA codes, a DWDM de-multiplexer with a spacing of Δλ_1 is used. Then a set of power combiners is employed to combine the selected wavelengths and construct the SAC-OCDMA code for each user. Concerning the EDW and RD codes, the weight and the number of users have been fixed at three to get the same code length, which is six. The code length of the EDW code, N_EDW, is defined as a function of the number of users k (Menon et al. 2012; Abd El-Mottaleb et al. 2020), and the code length of the RD code is given as a function of k and the code weight W (Fadhil et al. 2009). Tables 1 and 2 show the EDW and RD codes for each user. The same number of users in this case leads to the same MAI interference, so cross-phase modulation (XPM) (Tithi and Majumder 2020) has almost the same effect on both signals during transmission. Figure 5 represents the EDW and RD spectra. Since the length of both codes is six, only six successive sidebands will be selected for the coherent modulation. After the code construction, the code's spectrum is modulated with an OFDM signal using an optical coherent modulator, as presented in Fig. 5. A pseudo-random binary sequence generator is employed as the user data, and a mapping system is used to generate the in-phase and quadrature component sequences; QPSK and 16-QAM modulation are employed. After separating the I and Q components, the OFDM modulator is applied. After modulating the two components, the general electrical OFDM signal with N_sc subcarriers in the kth symbol period can be expressed as (Chen et al. 2014):

s_k(t) = Σ_{n=1}^{N_sc} C_{k,n} exp(j2πnt/T),

where C_{k,n} designates the complex coefficient on the nth subcarrier in the kth symbol and T is the OFDM symbol time. The OFDM modulator parameters are: 512 subcarriers and 1024 IFFT points, with a cyclic prefix appended to each transmitted symbol.
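The flat-comb condition stated above (m_1 = 1.84, m_2 = 3.05) can be checked numerically; in the sketch below, scipy's jv is the Bessel function of the first kind, and the four printed amplitudes all come out close to 0.31-0.32:

from scipy.special import jv

m1, m2 = 1.84, 3.05  # modulation indexes from the flatness condition
# Arm 1 (max transmission point): carrier and even sidebands ~ J0(m1), J2(m1)
# Arm 2 (min transmission point): odd sidebands ~ J1(m2), J3(m2)
for label, val in [("J0(m1)", jv(0, m1)), ("J2(m1)", jv(2, m1)),
                   ("J1(m2)", jv(1, m2)), ("J3(m2)", jv(3, m2))]:
    print(label, round(val, 3))
# Near-equal amplitudes confirm a seven-line flat comb.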
The I(t) and Q(t) components of the electrical OFDM signal are modulated with the SAC-OCDMA user code: dual-drive Mach-Zehnder modulators (DD-MZMs) modulate each OFDM component. Both of the DD-MZMs are biased at the null transmission point to minimize the radio-to-optical up-converter nonlinearities (Shieh et al. 2008); V_b/2 is the bias voltage, where V_b is the switching bias voltage of the DD-MZMs. The optical Q(t) component does not require an optical phase shifter after modulation, since the phase is shifted in the OFDM modulator (Sheetal and Singh 2018). As a result, a power combiner is used to combine both components. The output of the coherent optical OFDM modulator represents one user, and additional power combiners are utilized to combine all the users. In the combined signal, N_u is the number of users; A_Ik and A_Qk are the amplitudes of each component for each user in the OCDMA encoded signals, which are proportional to the I and Q components, respectively, together with the modulation indexes and phase shifts of the DD-MZMs; I_k(t) and Q_k(t) are the OFDM I/Q components for each user (Shieh et al. 2008); and f_k and φ_k are the set of frequencies and the phase that identify each user. The EDW and RD output spectra are shown in Fig. 6. After transmission, the combined signals are decoded by splitting each wavelength using a DWDM de-multiplexer. Then a set of power combiners identical to that employed in the SAC-OCDMA encoder is used to reconstruct each user's spectrum. Figure 7 shows the SAC-OCDMA decoder. Coherent detection is the technique used to restore the OFDM signal, and a local oscillator (LO) is mandatory for this technique. In this proposed system, the LO part is a second flat OFC generating seven wavelengths with a spacing of Δλ_1, followed by another DWDM de-multiplexer that splits the wavelengths; the wavelengths matching the received signal that do not contain spectral interference are selected, and each selected wavelength, together with the received signal of the corresponding user, is fed to the inputs of the 90° hybrid coupler in that user's receiver. Figure 8 shows the optical coherent detector, including a 90° hybrid coupler and two balanced detectors for the recovery of the I(t) and Q(t) components (Painchaud et al. 2009). After mixing the received signal with the LO signal, the output electric fields of the 90° hybrid coupler are written in terms of E_s(t) and E_LO(t), the electric fields of the received signal and the LO signal; the received signal contains six modulated signals, three of which are interference spectra. In the corresponding expressions, A_s1(t), ..., A_s6(t) are the powers for each wavelength, and the square of A_LO is the power of the LO signal. After mixing the received signal with the LO signal and detecting the electric signal using a balanced detector, and assuming all phases are equal, the output photocurrents are obtained, where R is the photodetector responsivity. Knowing that the LO wavelength is the same as one of the six wavelengths in the selected spectrum to be demodulated, the intermediate frequency f_IF is going to be zero, and the selected spectrum will be amplified and shifted to become a baseband signal as a result of mixing with the LO signal. Figure 9 shows the balanced detector output spectra of both codes.
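As a schematic of the hybrid-plus-balanced-detector arithmetic described above, assuming complex field samples and omitting the hybrid's splitting-loss factors:

import numpy as np

def balanced_iq_detection(E_s, E_LO, R=0.9):
    """Schematic 90-degree hybrid + balanced detection.

    The four hybrid outputs are (E_s + E_LO), (E_s - E_LO),
    (E_s + j*E_LO), (E_s - j*E_LO); each balanced pair cancels the
    direct intensity terms and keeps the signal-LO beat terms,
    yielding i_I ~ R*Re(E_s*conj(E_LO)) and i_Q ~ R*Im(E_s*conj(E_LO)).
    R is the photodetector responsivity (A/W).
    """
    i_I = R * (np.abs(E_s + E_LO)**2 - np.abs(E_s - E_LO)**2) / 4
    i_Q = R * (np.abs(E_s + 1j * E_LO)**2 - np.abs(E_s - 1j * E_LO)**2) / 4
    return i_I, i_Q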
The received signal components are filtered using low-pass cosine roll-off filters with a roll-off coefficient of 0.2 in order to retain the baseband part of the received signal, eliminate all undesirable frequencies, and minimize inter-symbol interference in the OFDM signal. Next, an OFDM demodulator, a QPSK or 16-QAM decoder, and digital signal processing (DSP) are used to restore the original transmitted signal and to observe and study the system performance. The DSP is generally used together with analog-to-digital converters; its role is mainly to digitally compensate for channel impairments such as chromatic dispersion (CD) and polarization mode dispersion (PMD) (Amari et al. 2017), and for all phase noises, using adaptive equalizers that remove the rotation rate of the constellation with finite impulse response filters. The DSP employs frequency offset estimation and carrier phase recovery algorithms to recover the transmitted signal. Mathematical analysis The theoretical performance of the EDW and RD codes in the optical coherent detection system is investigated. The signal-to-noise ratio (SNR) of the received signal determines the estimated BER value; the theoretical SNR is expressed by: SNR = I^2 / σ^2 (18), where I^2 represents the power of the received photocurrent and σ^2 is the variance of the noise sources in the transmission system. Thermal noise, shot noise, and amplified spontaneous emission noise (ASE noise) are the main sources of noise in optical coherent detection. The intensity noise is neglected in this work because the intensity noise of the transmitted signal is eliminated by the high losses of the transmission medium and the LO intensity noise is eliminated by balanced detection (Buscaino et al. 2019). Before detailing each noise, the average power of the transmitted signal should be detailed in order to determine the received photocurrent and the noise power. A 90° hybrid coupler is utilized in the optical coherent detection system to split the in-phase I and quadrature Q components of the transmitted signal, which is modulated by a coherent modulator. Since the Q component is phase shifted by 90°, the I component is considered in order to simplify the analysis, because it involves no phase shifting. The root mean square of the received photocurrent of the I component at the output of the 90° hybrid coupler is expressed in terms of P_r and P_LO, the received power of the signal and the LO power at the receiver. The received signal is an OCDMA encoded signal; in order to obtain the received electrical signal using coherent detection (90° hybrid coupler and balanced detector), the mathematical properties of the signal power of the EDW and RD codes are defined as follows. From (Abdullah et al. 2008), the EDW code average power for spectral direct detection (SDD) is defined in terms of P, the effective signal power at the receiver, W, the code weight, and N, the code length. From (19) and (20), the photocurrent of the I component for the EDW code follows. Concerning the RD code, according to (Fadhil et al. 2009), the RD code's power for SDD detection is defined analogously; by substituting (22) into (19), the photocurrent of the RD code signal is obtained. Consequently, the variance of the noise sources can be determined and expressed as σ^2 = σ^2_shot + σ^2_ASE + σ^2_thermal (24), where σ^2_shot is the shot noise, σ^2_ASE is the ASE noise and σ^2_thermal is the thermal noise. Fig. 9 Received spectra of the third user for (a) the EDW code and (b) the RD code.
In this analysis, the shot noise of the LO in the receiver is the dominant noise; as described in the previous section, the LO signal contains only the selected wavelength that will be recovered from the transmitted signal. This noise can be modeled by a variance of the form σ^2_shot = 2qRP_LO B, where q is the electron charge and B is the effective bandwidth of the photodetector receiver. ASE noise refers to the noise generated by inline optical amplifiers, which degrades the quality of the received signal. In this analysis, the local oscillator-ASE beating noise is the dominant ASE contribution; it is expressed in terms of n_sp, the spontaneous emission factor, G, the amplifier gain, h, Planck's constant, and ν, the photon frequency. The thermal noise of the EDW code is the same as that of the RD code because it is independent of the received photocurrent; it is expressed as σ^2_thermal = 4K_B T_n B / R_L, where K_B is Boltzmann's constant, T_n is the absolute receiver noise temperature, and R_L is the receiver load resistor. After defining all the currents and noises, the SNR for the EDW and RD codes can be calculated. Table 3 shows the main parameters used to calculate the SNR. To obtain the most direct measure for analyzing the performance of both codes, the estimated BER is calculated from the SNR (Mrabet et al. 2018), where M is the order of the advanced modulation format (M-PSK or M-QAM), since the transmitted data use these mapping types in the OFDM modulation. Analytically, to compare the performance of the EDW and RD codes for optical coherent detection, the number of active users should be set, and it should be determined which code can support the maximum number of users. The BER is the main parameter for judging which code is best. To compute the BER, the code length is defined mainly as a function of the number of users k; it can be calculated using expressions (6) and (7). Figure 10 shows log(BER) versus the number of users of the SAC-OCDMA coherent detection system for the EDW and RD codes using 16-QAM-OFDM modulation, for different code weights and different bit rates, in the presence of ASE noise. Clearly, the RD code supports a larger number of active users for different code weights and bit rates thanks to its low cross-correlation characteristic: the MAI present in the transmitted signal spectrum is reduced compared to the EDW code, and its suppression is simpler and easier to implement. In addition, the presence of ASE noise sharply decreases the number of users because of its effect on the received signal; as a result, inline amplifiers are not recommended in this system. However, the RD code cannot yet be considered better than the EDW code in terms of robustness against linear and nonlinear impairments; the performance should be studied with equal numbers of users and equal code weights for each code, in order to find out which code performs better. Fig. 10 Log(BER) versus number of users of optical coherent detection for different code weights and different bit rates in the presence of ASE noise. Simulation results and discussion A co-simulation technique is employed using OptiSystem and MATLAB: the optical system is implemented in OptiSystem, while MATLAB executes the simulation and analyzes, processes, and presents the results. The BER is calculated from the EVM measured on the constellation diagram.
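Because the SNR above is assembled from several named noise variances, a small numeric sketch of the receiver noise budget may help; every value below is an illustrative placeholder rather than a Table 3 entry, and the ASE term is omitted since inline amplifiers are discouraged above.

```python
# Sketch: receiver noise budget and SNR using the variance expressions named
# above; all numerical values are illustrative assumptions, not Table 3.
import math

q  = 1.602e-19       # electron charge [C]
kB = 1.380649e-23    # Boltzmann constant [J/K]
R  = 0.8             # photodetector responsivity [A/W] (assumed)
B  = 10e9            # effective receiver bandwidth [Hz] (assumed)
P_lo = 1e-3          # LO power [W] (assumed)
P_r  = 1e-5          # received signal power [W] (assumed)
Tn, RL = 300.0, 50.0 # receiver noise temperature [K], load resistor [ohm]

I_sig = R * math.sqrt(P_r * P_lo)          # coherent-detection photocurrent (I arm)
var_shot    = 2 * q * (R * P_lo) * B       # LO-dominated shot noise
var_thermal = 4 * kB * Tn * B / RL         # thermal noise
snr = I_sig**2 / (var_shot + var_thermal)  # ASE term omitted (no inline amps)
print(f"SNR = {10 * math.log10(snr):.1f} dB")
```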
Moreover, the EVM provides an accurate measurement for evaluating multi-level and multi-phase modulation systems such as QPSK and 16-QAM. It is defined from the difference between the reference symbol vector and the measured symbol vector in the constellation, and can be expressed as (Schmogrow et al. 2012): EVM = sqrt(P_err / P_ref) (31), where P_err is the average power of the error vector of the received data, including all linear and nonlinear impairments, and P_ref is the average power of the reference symbol vector. In this simulation, the optimum CW laser power injected into the transmitter and into the LO at the receiver for each user is determined by sweeping the injected power and finding the value that yields the minimum EVM, in order to minimize the nonlinear effects of the optical fiber, specifically the Kerr effect and self-phase modulation (SPM), while keeping sufficient power for transmission. Figure 11 shows that, for both the EDW and RD codes, the minimum EVM corresponds to an injected power of 10 dBm at a symbol rate of 10 GBd using QPSK modulation over 60 km, comprising 50 km of single-mode fiber (SMF), 10 km of dispersion compensating fiber (DCF) and an erbium-doped fiber amplifier (EDFA) for attenuation compensation (Sheetal and Singh 2018). The figure also illustrates that, once the injected power is tuned, the RD code outperforms the EDW code. Fig. 11 The EVM versus the CW laser injected power. The influence of noise is studied by adding ASE noise to the transmitted signal and evaluating the performance of the system for both codes. The optical signal-to-noise ratio (OSNR) is the optical power divided by the ASE noise; the OSNR is measured against a reference OSNR whose ASE noise bandwidth is 0.1 nm at 1550 nm. By adjusting the ASE noise, the OSNR is varied, which influences the EVM performance; the BER can then be calculated from the EVM (Freude et al. 2012; Schmogrow et al. 2012), where the constant r depends on the modulation format: from (Freude et al. 2012), r_QPSK = 1 and r_16-QAM = sqrt(9/5). Figure 12 represents log(BER) as a function of the OSNR at a symbol rate of 10 GBd for each user over several distances, namely back-to-back (BtB) transmission, 50 km of SMF, and the 60 km link described previously, and compares the performance with the pre-forward error correction (pre-FEC) threshold of 2.17 × 10^-3 in order to determine whether the transmitted data can be recovered (Agrell and Secondini 2018). Fig. 12 Log(BER) as a function of OSNR for a symbol rate of 10 GBd, a using QPSK modulation, b using 16-QAM modulation. From Fig. 12a, all the curves are below the pre-FEC threshold with QPSK modulation once the OSNR exceeds 5 dB. For 16-QAM modulation, shown in Fig. 12b, the threshold is met beyond 9 dB for BtB transmission for both codes, beyond 10 dB at 60 km for both codes, and at almost 11 dB for both the RD and EDW codes at 50 km. Figure 13 shows log(BER) versus the OSNR for the 60 km link described above. Fig. 13a shows that, at a symbol rate of 15 GBd per user, the QPSK system is below the pre-FEC threshold for both codes starting from an OSNR of 7 dB; for 16-QAM modulation, the EDW and RD codes meet the threshold at almost 12 dB. Fig. 13 Log(BER) versus OSNR for a distance of 50 km SMF + 10 km DCF + EDFA for different symbol rates, a 15 GBd, b 20 GBd. For the symbol rate of 20 GBd per user, shown in Fig. 13b, the pre-FEC threshold is met only at higher OSNR: with QPSK modulation, the EDW code reaches the threshold at almost 7 dB while the RD code remains below the pre-FEC threshold, and with 16-QAM the threshold is reached at OSNR values of 15 dB and 17 dB for the RD and EDW codes, respectively.
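As a concrete illustration of the EVM definition in equation (31), the following minimal sketch measures EVM on synthetic QPSK constellation data; the additive-noise stand-in for channel impairments is an assumption, not the simulated link.

```python
# Sketch: EVM measured from received vs. reference constellation points,
# per EVM = sqrt(P_err / P_ref). Data here is synthetic.
import numpy as np

rng = np.random.default_rng(1)
qpsk = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]) / np.sqrt(2)

ref = qpsk[rng.integers(0, 4, size=10_000)]          # transmitted symbols
rx = ref + (rng.normal(0, 0.05, ref.shape)           # additive noise stands in
            + 1j * rng.normal(0, 0.05, ref.shape))   # for channel impairments

p_err = np.mean(np.abs(rx - ref) ** 2)   # average error-vector power
p_ref = np.mean(np.abs(ref) ** 2)        # average reference power
evm = np.sqrt(p_err / p_ref)
print(f"EVM = {100 * evm:.2f} %")
```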
Concerning the discussion of the collected results: because a DWDM system is used in this proposal, XPM is induced by adjacent DWDM wavelengths; through the cumulative intensity dependence of the refractive index of the optical fiber, this produces nonlinear phase noise that heavily distorts the received signal. It is very clear from Figs. 11, 12 and 13 that the RD code system performs better than the EDW code in all the cases studied. Furthermore, as shown in Fig. 6b, wavelength interference occurs at the extremes of the RD code spectrum, whereas for the EDW code the interference lies in the vicinity of the users' wavelengths, as shown in Fig. 6a. The overlap of adjacent spectra produces XPM with the interfering spectra, which degrades the quality of the detected signals; for the RD code, however, the overlap is limited to the interfering spectra because of their position at the extremity of the transmitted spectra. Figure 14 shows the constellation diagrams of the third user for both the EDW and RD codes at a symbol rate of 20 GBd, for BtB transmission, for 120 km using the 60 km optical link described previously twice ((50 km SMF + 10 km DCF + EDFA) × 2), and for 300 km using the same optical link five times ((50 km SMF + 10 km DCF + EDFA) × 5). The IQ skew is highly apparent in the BtB transmission, mainly because the electrically generated I(t) and Q(t) are not separated by exactly 90°, which distorts the received signal. For the 300 km QPSK system, the quality of the received constellation using the RD code is better than with the EDW code. The same is observed for the 16-QAM constellation at a distance of about 120 km, where the received symbols are clearly more spread out with the EDW code than with the RD code because of the overlap effect discussed above. The RD code outperforms the EDW code thanks to its low cross-correlation feature, which allows the maximum number of users. The localization of the interference in the received signal demonstrates the robustness of the RD code against optoelectronic noise and the nonlinear effects of the optical fiber in coherent communication systems. Conclusion We have presented the simulation results of a coherent SAC-OCDMA-OFDM-DWDM system dedicated to multiservice networks. The architecture of the proposed system employs a flat OFC generator for code generation and a DWDM system for code construction. After applying the best-known SAC-OCDMA codes, EDW and RD, to the proposed system, its detection technique proved effective against linear and nonlinear effects, as shown by analyzing the BER at several symbol rates and comparing it with the pre-FEC threshold; this is due mainly to the combination of SAC-OCDMA-OFDM with the DWDM system, despite XPM and its influence on the transmitted signals. A comparison was established between the two codes, EDW and RD. The EDW code was found to be faster than the RD code. The results also demonstrate the effect of optical noise on signal quality when transmitting data over the proposed system.
From this perspective, it is intended to complete this work with experimental data and to develop new codes for further research, with mathematical features that account for the positions of wavelength overlap in the code and for the linear and nonlinear effects, particularly XPM, so that the code design itself reduces XPM. These new codes will be used in 5G applications such as Radio-over-Fiber (RoF) and Free Space Optics (FSO) communications. Funding The authors have not disclosed any funding. Conflict of interest The authors have not disclosed any competing interests.
Development of functional in vivo imaging of cerebral lenticulostriate artery using novel synchrotron radiation angiography The lenticulostriate artery plays a vital role in the onset and development of cerebral ischemia. However, current imaging techniques cannot assess the in vivo functioning of small arteries such as the lenticulostriate artery in the rat brain. Here, we report a novel method to achieve high-resolution multi-functional imaging of the cerebrovascular system using synchrotron radiation angiography, based on spatio-temporal analysis of contrast density in the arterial cross section. This method provides a unique tool for studying sub-cortical vascular elasticity after cerebral ischemia in rats. Using this technique, we demonstrated that the vascular elasticity of the lenticulostriate artery decreased from day 1 to day 7 after transient middle cerebral artery occlusion in rats and recovered from day 7 to day 28 compared to the controls (p < 0.001), which paralleled brain edema formation and inversely correlated with blood flow velocity (p < 0.05). Our results demonstrated that the change of vascular elasticity was related to the level of brain edema and the velocity of focal blood flow, suggesting that reducing brain edema is important for improving the function of the lenticulostriate artery in the ischemic brain. Keywords: angiography, elasticity, lenticulostriate artery, synchrotron radiation S Online supplementary data available from stacks.iop.org/PMB/60/1655 Introduction Lenticulostriate arteries (LSAs), branching from the middle cerebral artery (MCA) in the human and rodent brain, are major feeding arteries for the corpus striatum in the sub-cortex and extremely important for ensuring a regular nutrient supply to the brain (Marinkovic et al 2001). The restoration of blood flow in LSAs is highly related to neural survival and functional recovery in the corpus striatum after ischemic stroke. During the acute phase of ischemia, blood flow is absent in LSAs and is then restored after reperfusion. The endothelial cells of LSAs can be damaged during the ischemic phase, which further affects the reperfusion ability of LSAs. According to our previous study (Lu et al 2012), functional angiogenesis of the micro-vasculature is a main target of ischemic neural repair, which first requires a sufficient blood supply from the major arteries. Therefore, restoration of LSAs is a key point in the therapy and rehabilitation of ischemic stroke in the sub-cortex. The development of new therapeutic methods is facilitated by an in vivo imaging technique that can monitor the restoration ability of LSAs with high spatial resolution, especially for pre-clinical small animal studies. However, current in vivo techniques for assessing cerebral vasculature and arterial functions still suffer from insufficient imaging resolution. Microscopic computed tomography (Micro-CT) provides good resolution (about 10 μm) for tissue samples but a much lower resolution (larger than 100 μm) in vivo.
Furthermore, Micro-CT imaging cannot provide blood flow or functional information (Holdsworth and Thornton 2002). Digital subtraction angiography (DSA) is another x-ray based imaging method for the assessment of vascular abnormalities in both clinical and animal studies, but with limited spatial resolution (larger than 100 μm). Magnetic resonance angiography (MRA) has better resolution (about 50 μm) and shows the cerebral vasculature and hemodynamics of small animals in vivo (Shih et al 2012, Liu et al 2014). Nevertheless, all of these techniques still do not allow direct visualization of small intracranial arteries such as penetrating arteries, LSAs and newly formed small vessels. Doppler micro-ultrasonography is the most useful method to obtain vascular hemodynamics in clinical and animal studies, but its spatial resolution decreases with increasing depth of penetration (Greco et al 2012). Laser speckle contrast imaging (LSCI) and near-infrared fluorescence imaging provide sufficient resolution (about 15 μm) and can provide both vasculature and hemodynamic information in vivo. Unfortunately, the penetration depth of optical imaging intrinsically limits their application in sub-cortical studies (Welsher et al 2011, Hong et al 2012, Lin et al 2013). For imaging vascular functions, radial artery pulse wave analysis of optical imaging data has been proposed to examine arterial elasticity (Cohn et al 1995, Zheng and Mayhew 2009). Synchrotron radiation x-ray based angiography (SRA), a state-of-the-art technique, provides a useful in vivo imaging tool for rodent cerebral vasculature (Kidoguchi et al 2006, Lu et al 2012, Lin et al 2013, Shirai et al 2013). Previous studies demonstrated that SRA is capable of investigating the morphology of LSAs in mice with a high spatial resolution of 30 μm. Here, we report the application of in vivo functional SRA (fSRA) for measuring the elasticity and blood flow velocity (BFV) of the LSAs in rats based on a transient middle cerebral artery occlusion (tMCAO) model. To explore the potential mechanism, we further analyzed the relationships among the elasticity of LSAs, BFV and brain edema after ischemic injury. fSRA An SRA experiment was conducted at the BL13W beamline of the Shanghai Synchrotron Radiation Facility (SSRF). The imaging setup is shown in figure 1(a). After monochromatization of the x-ray light emerging from a bending magnet, an x-ray energy of 33.2 keV with a flux of 2.38 × 10^10 photons s^-1 mm^-2 was obtained for imaging. To control the ionizing radiation dose, an x-ray shutter was placed before the sample stage. A PCO x-ray charge-coupled device (CCD) camera (pixel size of 9 × 9 μm, FOV of 20 × 4.5 mm, PCO-TECH Inc, Germany) was placed 65 cm away from the sample stage and used to obtain x-ray transmission images continuously at a 4 fps frame rate with an exposure time of 35 ms. For animal preparation, an angiographic tube (a PE10 tube connected to a PE50 tube) was inserted into the external carotid artery (ECA) up to the bifurcation of the common carotid artery (CCA) for contrast agent injection (figure 1(b)). Before imaging, the rat was placed on its left side, vertical to the beam path.
During imaging, 150 μl of non-ionic iodine contrast agent (Ipamiro, Shanghai, China) with a concentration of 175 mgI ml^-1 (350 mgI ml^-1, diluted to a 50% volume ratio with saline) was injected into the ECA through the angiographic tube at an injection rate of 133.3 μl s^-1, controlled by a micro-syringe pump (LSP01-1A, Longerpump, Baoding, China). Two layers of angiographic images were acquired to cover the entire hemisphere vasculature; in other words, SRA was conducted twice in each animal by moving the animal up and down. In the imaging procedure, sequential x-ray transmission images I(x, y, t) of the rat brain were recorded (figure 2(a)). Because the injected contrast agent provided sufficient absorption contrast of the blood flow and vasculature, the Beer-Lambert law was used to obtain the absorption maps I_abs(x, y, t), which are proportional to the density distribution of the contrast agent (equation (1)). In this study, relative blood flow and vascular elasticity were estimated from the absorption maps I_abs(x, y, t): I_abs(x, y, t) = ln(I(x, y, 0)) − ln(I(x, y, t)) (1), where I(x, y, t) are the recorded images (there is no contrast agent injection at t = 0). Before the estimation of relative blood flow and vascular elasticity, the image sequences I_abs(x, y, t) were first calculated. Then, based on the absorption map, the blood vessels were manually segmented. The binary images of the blood vessels were used to obtain the vessel center line (ridge) and the corresponding outlines automatically. The diameter r_0 of each vessel was calculated at each ridge point (figure 2(b)). The absorption data of the cross-section line at each ridge point in I_abs(x, y, t) were then extracted based on the perpendicular relation between ridge and cross-section, and the spatio-temporal dynamics I_rt(r, t) of each cross-section were constructed from the image sequences I_abs(x, y, t) (figure 2(c), figure S1(b)) (stacks.iop.org/PMB/60/1655). During the first pass of the contrast agent (from time = 0), I_rt(r, t) exhibited a diffusion-like shape and reached saturation after time t_0; during this period, the spatial range and density of the contrast agent spread and accumulated. In this study, the spatio-temporal dynamics of the contrast agent in I_rt(r, t) (t < t_0) were fitted to a Gaussian surface (equation (2)). As an absorption imaging technique, contrast agent at other depths along the light path may introduce noise into the I_rt(r, t) of the corresponding cross-section; however, the surface fitting technique efficiently suppresses this noise and provides a robust estimation of the fitting parameters. Based on a recent study (Hong et al 2012), the normalized slope of density versus time is proportional to the blood velocity. Therefore, after estimating the fitting parameters of equation (2), the relative velocity v of each cross-section was calculated using equation (3), and the elasticity E of each cross-section was calculated using equation (4). Experimental design The protocols of the animal experiments used in this study were approved by the Institutional Animal Care and Use Committee (IACUC), Shanghai Jiao Tong University, Shanghai, China. Thirty-six adult male Sprague-Dawley rats (Sppir-BK Inc, Shanghai, China) weighing 250-280 grams were used in this study. Animals (5 groups, n = 6 per group) underwent magnetic resonance imaging (MRI) and SRA 1, 3, 7, 14 and 28 d after tMCAO to detect brain lesions.
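A minimal sketch of the absorption-map computation of equation (1); the frame data below are synthetic and the function name is illustrative.

```python
# Sketch: absorption maps from the recorded frames via equation (1),
# I_abs(x, y, t) = ln I(x, y, 0) - ln I(x, y, t). Frames here are synthetic.
import numpy as np

def absorption_maps(frames):
    """frames: array (T, H, W) of x-ray transmission images; frame 0 is the
    pre-injection baseline (no contrast agent at t = 0)."""
    baseline = np.log(frames[0])
    return baseline[None, :, :] - np.log(frames)

# Minimal usage with made-up data: contrast uptake darkens a 'vessel' band.
frames = np.full((5, 64, 64), 1000.0)
for t in range(1, 5):
    frames[t, 30:34, :] *= np.exp(-0.1 * t)  # fake contrast accumulation
i_abs = absorption_maps(frames)
print(i_abs[4, 32, 0])  # ~0.4, proportional to contrast-agent density
```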
Animals characterized with MRI and SRA without tMCAO were used as sham controls (n = 6). The mean arterial blood pressure (MABP) was measured before MRI with an automatic sphygmomanometer (BP-98A, Softron, Tokyo, Japan) using the noninvasive tail cuff technique. Surgical procedure of transient middle cerebral artery occlusion The surgical procedure for tMCAO was described previously (Tang et al 2014). Briefly, rats were anesthetized with ketamine (100 mg kg^-1) and xylazine (10 mg kg^-1) intraperitoneally. Rats were then placed supine on a surgical board, and body temperature was maintained at 37 °C by a heating pad (RWD Life Science, Shenzhen, China). After a midline neck incision was made, the ECA, the CCA and the internal carotid artery (ICA) were isolated under an operating microscope (Leica, Wetzlar, Germany). The pterygopalatine artery (PPA) was then ligated to improve model stability and to eliminate interference. A 4-0 suture (20 mm, Dermalon, 1744-31, Covidien, OH) coated with silicone rubber (length = 3 mm, diameter = 0.4 mm, Heraeus Kulzer, Hanau, Germany) was inserted into the ECA stump, reversed into the ICA and advanced to the ostium of the MCA (until a slight resistance was felt). Successful occlusion was characterized by a reduction of cortical blood flow to 20% of baseline, as measured by laser Doppler flowmetry (Moor Instruments, Devon, England). After 90 min of occlusion with ligation of the CCA, the suture was removed and the CCA was released for reperfusion. Magnetic resonance imaging examination An MRI examination was performed before and after tMCAO using a 3T MR apparatus (Signa3T, GE Healthcare, CT) with an animal head coil and a T2-weighted fast spin-echo sequence (TR = 2000 ms, TE = 40 ms, matrix = 160 × 192, FOV = 6.0 × 6.0 cm, slice thickness = 1.0 mm, inter-slice distance = 0 mm). Rats were anesthetized with ketamine/xylazine intraperitoneally during MRI. After MRI reconstruction, brain edema volume was measured using ImageJ and calculated by subtracting the non-edema volume in the ipsilateral hemisphere from the total volume of the contralateral hemisphere and then dividing by the volume of the contralateral hemisphere. Statistical analysis Brain edema volume, MABP and the diameters of the ICA, posterior cerebral artery (PCA), MCA and anterior cerebral artery (ACA) are presented as mean ± SD. Vascular elasticity, blood flow velocity and the diameter of the LSAs are expressed as median (25th, 75th percentiles). All data were compared using a one-way ANOVA followed by Student's t-test. A probability value of less than 5% was considered statistically significant. Elasticity changes of LSAs after tMCAO We manually segmented the LSAs from the vasculature of each rat and calculated the elasticity of the LSAs using the fSRA method (equation (4)). The elasticity map of the entire vasculature and an enlarged view of the LSAs are shown in figure 3. Statistical results demonstrated that the elasticity of the LSAs was reduced from day 1 to day 7 after tMCAO (p < 0.001) and then increased from day 7 to day 28. In contrast to the LSAs, the elasticity of the MCA did not change after tMCAO (data not shown). Changes of LSAs' elasticity paralleled brain edema development after tMCAO To assess the development of brain edema after tMCAO, we calculated the brain edema volume using T2-weighted MRI. We found that brain edema occurred in the MCA territory and increased until day 3, after which the edema volume decreased at day 7.
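For reference, the edema-volume measure defined in the MRI subsection above reduces to a simple ratio; a one-function sketch with hypothetical hemisphere volumes:

```python
# Sketch: edema volume ratio as described for the T2-weighted MRI analysis,
# (contralateral volume - non-edema ipsilateral volume) / contralateral volume.
def edema_ratio(ipsi_non_edema_mm3: float, contra_total_mm3: float) -> float:
    return (contra_total_mm3 - ipsi_non_edema_mm3) / contra_total_mm3

# Hypothetical hemisphere volumes in mm^3:
print(f"edema volume: {100 * edema_ratio(520.0, 600.0):.1f} %")  # -> 13.3 %
```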
Tissue hydration then appeared at day 14 and continued to increase to day 28 (figure 4). Tissue hydration presents as a lack of tissue structure with a high-intensity signal. The changes of LSAs' elasticity paralleled the brain edema after tMCAO, gradually decreasing until day 7 and then recovering after day 7 (figure 4). MABP showed no significant changes after tMCAO (figure 4), indicating that brain edema and LSAs' elasticity were not influenced by MABP in this study. Changes of LSAs' elasticity were inversely correlated to BFV after tMCAO We also calculated the relative BFV in the LSAs using the fSRA method (equation (3)) to investigate the relationship between vascular elasticity and BFV. Interestingly, the changes in BFV were inversely correlated to vascular elasticity. The relative BFV of the LSAs was reduced at day 1, then increased from day 1 to day 7, peaked at day 7, and finally decreased and recovered from day 7 to day 28 (figure 5). Diameter of LSAs was not changed after tMCAO Using SRA, we studied the ipsilateral brain vascular morphology after tMCAO. The morphology of the ICA, the PCA, the MCA and their branches, such as the LSAs and cortical penetrating arteries, can be obtained in vivo. There were no statistically significant changes in the diameters of the LSAs, ICA, PCA and MCA after tMCAO, even though brain edema changed after tMCAO (figure 6, table 1). This result indicates that the observed changes in LSAs' elasticity were not due to morphological changes of the LSAs (figure 6). To increase image contrast, the original data of figure 2(a) were enhanced by an adaptive histogram equalization method; however, for quantitative comparison of the morphological changes under different conditions, figure 6 presents the original data. Additionally, figure 2(a) shows the entire brain vasculature while figure 6(a) covers only a small region, so the contrast in figures 6(c-h) appears reduced with respect to figure 2(a). Discussion The functional recovery of LSAs is closely related to rehabilitation after cerebral ischemia. Current imaging methods for arterial functions cannot provide both sub-cortical arterial elasticity and BFV at high resolution. Here, we report the fSRA technique as a new tool to simultaneously obtain anatomical, hemodynamic and elastic information in rodent arteries. In this study, changes in the LSAs after tMCAO were investigated using the fSRA method. After tMCAO, the changes in LSAs' elasticity demonstrated a correlation with the changes in BFV and brain edema. However, the pathogenesis of this phenomenon is still unclear. Brain edema is mainly caused by cytotoxicity in the early stage and angioedema in the later stage after stroke. The cerebral blood volume can change after stroke and is promoted by the exchange of water between capillaries and surrounding tissues (Krieger et al 2012). In consideration of the self-adjusting capacity of the brain, the arterial functions may be self-regulating to maintain the cerebral water content after stroke. Fluctuations in small arterial elasticity may also be caused by functional or structural alterations that are closely linked to endothelial dysfunction (Grey et al 2003). During brain ischemia, the LSAs lack blood supply, which seriously impacts the survival of endothelial cells (Hayashi et al 1998). Many studies have reported that protecting the endothelial cells by VEGF treatment can significantly improve blood flow after brain reperfusion (Hayashi et al 1998, Ferrara et al 2003).
The alteration of vascular functions is also related to changes in the nitric oxide-dependent vasodilatation pathway, which may be associated with endothelial dysfunction (Lamireau et al 2002). Accordingly, up-regulation of vasoconstrictor receptors (for example, endothelin type B, angiotensin type 1 and 5-hydroxytryptamine type 1B/1D receptors) in cerebral arteries after different types of stroke has been revealed in recent studies (Edvinsson and Povlsen 2011). This implies that such therapies may improve the outcome after stroke by improving vascular functions. Lin et al (2002) reported that the cerebral blood flow velocity increased in the ipsilateral cortex from day 1 to day 14 and peaked at day 7 after stroke, as monitored by MRI; however, our results showed that the BFV of the LSAs increased only from day 1 to day 7, peaking on day 7. This result suggests that the blood flow alteration may differ between cortex and sub-cortex after stroke. It is also interesting that the cerebral blood flow decreased to less than 50% of baseline between 1 and 2 d after intracranial hemorrhage (ICH), in contrast with ischemic stroke (Yang et al 1994). Given the many benefits of fSRA over other basic imaging methodologies, this imaging technique could also be utilized in other cerebrovascular and cardiovascular research, such as on intracranial aneurysms and atherosclerosis, which are often due to hemodynamic and arterial dysfunction (Cebral et al 2011, Rautou et al 2011). fSRA may also have potential applications for monitoring arterial functions in hypertension, diabetes and atherosclerosis caused by high cholesterol, which may lead to cerebrovascular and cardiovascular disease (Grey et al 2003). Furthermore, due to the similar principles of SRA and DSA, the theory of fSRA may also be used in the clinical diagnosis of hemodynamics and vascular elasticity to analyze arterial functions in patients by post-processing DSA data. However, fSRA still has some disadvantages that need to be overcome in the future. Firstly, the ionizing radiation effects caused by the synchrotron radiation x-ray are still uncertain, which is a major challenge for transferring it to clinical use, though it has already been used in clinical research (Elleaume et al 2000). Secondly, the limited frame rate of the CCD camera makes it difficult to accurately track the BFV of small arteries (Lin et al 2013). The frame rate at SPring-8 was 30 fps when small animal SRA was conducted, which makes the measurement of the BFV more accurate (Kidoguchi et al 2006, Jenkins et al 2012, Shirai et al 2013). Conclusion In summary, we reported a reliable in vivo technique, fSRA, to measure sub-cortical arterial elasticity and BFV based on synchrotron radiation angiography in small animals. Using this novel method, we found that the LSAs' elasticity fluctuated after ischemic stroke and may be related to changes in BFV and brain edema.
Ionospheric Observations of the Solar Eclipse on Oct. 24, 1995 at Chung-Li The main purpose of this paper is to illustrate the usefulness of ionosonde observations in the study of the effects due to the transit of a solar eclipse. A sequence of ionograms was obtained during the eclipse on October 24, 1995 by the Chung-Li Digisonde (situated at 24.9°N, 121.5°E, 35.2°N magnetic dip). Using fuzzy classification techniques, an algorithm was devised to automatically scale digital ionograms. At the corresponding times, significant depletions of foE and foF1 were observed and were indicative of a response to the eclipse effect. Furthermore, a search for the production of atmospheric gravity waves (AGW) induced by the solar eclipse was carried out using the iso-frequency virtual height profiles with time. Time-frequency spectral analysis using the Wigner-Ville distribution was applied and yielded an AGW with a period of 18.5 min in the F1 and the lower part of the F2 layers. The induced AGW began at the start of the solar eclipse and ended a couple of hours after the completion of the eclipse. For comparison, the iso-frequency profiles with time were also spectrally analyzed with the maximum entropy method (MEM). INTRODUCTION Ionospheric eclipse observations make valuable contributions to the study of the transient properties of ionizing radiation from the Sun and to exploring chemical and transport processes in the ionosphere. In a number of papers, experimental and theoretical eclipse results have been compared to derive the rates of electron production and the loss of ionization. It is well known that a solar eclipse decreases the electron density in the E, F1 and the lower part of the F2 layer. The critical frequency in the F2-region is more related to transport and thermal processes. More recent investigations have been concerned with the search for atmospheric gravity waves induced by a solar eclipse. Chimonas and Hines (1970) suggested that as the lunar shadow sweeps at supersonic speed across the Earth, the cooling atmosphere acts as a continuous source of gravity waves and builds up a bow wave. The first observation of an induced gravity wave signature in the form of column electron content fluctuations was reported by Davis and da Rosa (1970). Their observations indicated an oscillatory disturbance with a quasi-period of about 20 min at about the time predicted for the arrival of atmospheric gravity waves due to the solar eclipse of March 7, 1970. However, Davies (1982) reviewed the reported experimental evidence and concluded that their direct relationship had not been proven satisfactorily. Walker et al. (1991) later showed direct evidence for an atmospheric gravity wave (AGW) induced by a solar eclipse and obtained a period of 30-33 min. Using the same spectral analysis technique of the maximum entropy method (MEM) (see the review by Ulrych and Bishop, 1975) as in the work of Walker et al. (1991), Cheng (1993) also observed the production of an eclipse AGW but obtained a different period of 17-23 min at the same observatory in Chung-Li. This paper aims at illustrating the usefulness of ionosonde observations in the study of the ionospheric effects due to the transit of a solar eclipse. To obtain ionospheric parameters, an automatic ionogram scaling algorithm incorporating a fuzzy segmentation method is presented in Section 2.
This algorithm is applicable to ionograms recorded by modern digital ionosondes, using the measured echo amplitude, pulse group delay and wave polarization as the input information. The scaled parameters foE, foF1, foF2 and hpF2 on the eclipse day are shown and compared to the results obtained on the days both before and after the solar eclipse. In Sections 3 and 4, techniques of time-frequency analysis based on the Wigner-Ville distribution (WVD) are introduced for the spectral analysis of iso-frequency virtual height profiles with time. Using the non-stationary characteristics of the AGW induced by the solar eclipse, the authors have determined a period of 18.5 min for the induced AGW. IONOSONDE SOUNDING RESULTS The eclipse of October 24, 1995 at Chung-Li began at 0320 UT (1120 LT), reached a peak of 53% disk obscuration at 0433 UT (1233 LT) and ended at 0546 UT (1346 LT). Rapid-sequence ionograms above the Chung-Li Digisonde Observatory (situated at 24.9°N, 121.5°E, 35.2°N magnetic dip) were obtained at 5-min intervals. The digisonde, an advanced digital ionosonde (Bibl and Reinisch, 1978; Reinisch 1986), can measure the following parameters: (1) group travel time h'; (2) echo amplitude; (3) phase; (4) Doppler frequency; (5) angle of arrival; (6) wave polarization, that is, separation of ordinary and extraordinary waves; and (7) wave-front curvature. The range is measured by the synchronization of the transmitter pulse and sampling time and has a resolution of 2.5 or 5 km. The amplitude information was obtained by coherent integration of between 16 and 256 quadrature samples in 1/2 dB steps. To scale ionospheric parameters on the solar eclipse day, a previously developed method for the automatic identification of ionogram traces using fuzzy classification techniques was used (Tsai et al., 1996). A measure of the continuity or discontinuity between ionospherically reflected echoes can be obtained using a fuzzy relation that describes a set of rules for echo connectedness. Based on such measures, the segmentation processing of ionograms can be defined and their properties obtained. Segments representing the ordinary and extraordinary reflections from the E- and F-layers can easily be differentiated from multiple-hop echoes. One reason for the added complexity of the identification process stems from the existence of the sporadic E layer (Es-layer). As a result, the echo traces from the E-region exhibit complex structures on an ionogram when both the E- and Es-layers exist at similar heights and are 'fuzzy' connected. It is noted that the signal amplitude reflected from the Es-layer is much stronger than that of normal E-layer echoes. It is proposed that signal amplitude information be used to distinguish the normal E trace and the Es trace from the main E segment. The E- and Es-layer parameters can therefore be scaled. Furthermore, the elimination of multiple Es echoes connected to the F-layer trace can be approached by a similar process using echo amplitude information.
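Purely as an illustration of the echo-connectedness idea, the toy measure below scores how strongly two echoes belong to one trace from their separations in frequency, virtual height and amplitude. The actual membership rules follow Tsai et al. (1996); the functional form, thresholds and weights here are assumptions.

```python
# Illustrative sketch only: a toy echo-connectedness measure in the spirit of
# the fuzzy-continuity segmentation described above.
import math

def connectedness(e1, e2, df_max=0.2, dh_max=10.0, da_max=6.0):
    """e = (freq_MHz, virtual_height_km, amplitude_dB). Returns a value in
    [0, 1]: near 1 for adjacent echoes of one trace, 0 for unrelated echoes."""
    df = abs(e1[0] - e2[0]) / df_max      # frequency step between echoes
    dh = abs(e1[1] - e2[1]) / dh_max      # virtual-height jump
    da = abs(e1[2] - e2[2]) / da_max      # amplitude difference (e.g. Es vs E)
    return max(0.0, 1.0 - math.sqrt(df**2 + dh**2 + da**2) / math.sqrt(3))

print(connectedness((3.0, 110.0, 40.0), (3.05, 112.0, 41.0)))  # high: same trace
print(connectedness((3.0, 110.0, 40.0), (3.05, 180.0, 20.0)))  # 0: different
```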
The main processing procedures are as follows: (1) interference/stray signals in ionograms are identified and removed; (2) the ionograms are segmented using a fuzzy continuity technique; (3) the main segment is determined as the F-layer segment; (4) the E- and Es-layers are interpreted from the E-layer segment; (5) multiple Es echoes overlapping the F-layer signals are eliminated; (6) smoothing, extrapolation and interpolation of the F-layer trace are performed; and (7) the ionospheric parameters (h'F2, foF2, fxI, and M(3000)F2) are determined in the International Union of Radio Science (URSI) IIWG format. It is well known that in the E-region and the lower F-region, production and loss processes are thought to dominate the variations of electron density. As the disc of the Sun is covered either partially or fully by the Moon during a solar eclipse, the solar flux in each wavelength region is progressively reduced. Figure 1 gives the scaled foE and foF1 versus time plots observed on the eclipse day and on the days both before and after the solar eclipse. Significant depletions of foE and foF1 at the corresponding times of the eclipse are clearly evident. The foE (foF1) was reduced from 3.3 (4.5) MHz to a minimum value of 2.7 (3.8) MHz, and then recovered. Relative to the non-eclipse time, the associated electron density was decreased by up to 33% at the eclipse maximum. Furthermore, a time delay of the foF1 minimum with respect to the foE minimum can be observed. The lag of the electron density minimum increases with altitude, as might be expected in the E-region and the lower F-region of an eclipse ionosphere. For the F2-region, Figure 2 shows foF2 and hpF2 profiles with time on the eclipse day and the previous and next 'control' days. It can be deduced that the reduction in foF2 (relative to the two control days) occurred after the start of the solar eclipse; the maximum reduction in foF2 was observed not at the time of the eclipse maximum but somewhat later. Additionally, the value of foF2 decreased in the first half of the eclipse period but increased in the other half; hpF2 fell when foF2 increased and rose when foF2 dropped. In the early eclipse observations, there were no coherent results showing a consistent pattern in the behaviour of foF2. Certainly, the effects of transport are important in the ionospheric F2-region. The Wigner-Ville distribution (WVD) (Wigner, 1932), the origins of which lie in quantum statistical mechanics to describe the position and momentum of a particle, is the basis for all of the contemporary developments in time-frequency spectral analysis. Following the notation of Boudreaux-Bartels (1985), the WVD of an analytic signal z(t) is expressed as: W_z(t, f) = ∫ z(t + τ/2) z*(t − τ/2) e^{−j2πfτ} dτ. The properties of the WVD that distinguish it from other signal representations are summarized in Table 1. These properties are discussed in most overviews by Boudreaux-Bartels (1985). It is noted here that, unlike the Fourier transform, which determines the energy distribution of a signal only as a function of frequency, time-frequency representations can be applied to describe both the time and frequency characteristics of a signal. The WVD has been adapted to the case of discrete-time signals.
One of the numerical procedures to evaluate the WVD uses equally-spaced signal samples z(nT) and FFT techniques to compute the value of the WVD along a discrete time-frequency grid (Boudreaux-Bartels, 1985): W_z(nT, f) = 2T Σ_k z((n + k)T) z*((n − k)T) e^{−j4πfkT}. Most of the properties of the WVD carry directly over to the discrete-time case, but some cause problems associated with aliasing. It has been found that these aliasing components are not present if the signal is either oversampled by a factor of at least two or is analytic. However, the observed virtual heights are real and not analytic. To approach the Nyquist rate in the time-frequency spectral analysis of iso-frequency virtual height profiles with time, a transformation from real signals to analytic ones is required (Boashash, 1988). Furthermore, besides eliminating the problem of aliasing contributions, the Wigner-Ville distribution of an analytic signal avoids the interaction terms between positive and negative frequencies. The analytic signal z(n) corresponding to a real signal s(n) is defined in the time domain as z(n) = s(n) + jH[s(n)], where H[s(n)] represents the Hilbert transform of s(n). Alternatively, the analytic signal can be defined in the frequency domain as Z(f) = 2S(f) for f > 0, S(f) for f = 0, and 0 for f < 0. A SEARCH FOR INDUCED ATMOSPHERIC GRAVITY WAVES Chimonas and Hines (1970) suggested a theoretical concept for the production of atmospheric gravity waves induced by a solar eclipse. According to them, 'as the lunar shadow sweeps at supersonic speed across the Earth, the cooling spot acts as a continuous source of gravity waves that build up into a bow wave, much as a rapidly moving boat produces a bow wave on the surface of the water it crosses.' In the early eclipse observations, Walker et al. (1991) and Cheng (1993) observed at the same observatory in Chung-Li and applied the same maximum entropy method to the spectral analysis of iso-frequency virtual height profiles with time but obtained different periods of 30-33 min and 17-23 min, respectively, for the induced AGWs. Though the observed eclipses occurred on different days, namely on March 18, 1988 and September 23, 1987, the induced AGWs should have been consistent in period, as discussed in the work of Chimonas and Hines (1971). In contrast to the results analyzed by the WVD, Figures 3 and 4 first show the power wave spectra obtained using the maximum entropy method for various iso-frequency virtual height profiles with time on the eclipse day and the two control days. As shown in Figure 3, three spectrum curves at 4.5, 4.6 and 4.7 MHz sounding frequencies indicate three main waves obtained on the eclipse day with periods of 27, 18 and 11 min, the first two of which are close to the wave periods observed by Walker et al. (1991) and Cheng (1993), respectively. Furthermore, the three wave amplitudes decrease with sounding frequency or altitude. Figure 4 shows that all three waves at 27, 18 and 11 min are much stronger on the eclipse day when compared with those on the two control days, but it is difficult to determine which is the induced AGW. From the suggestion of Chimonas and Hines (1970 and 1971), it is assumed that the induced AGW occurs during the eclipse period and continues to propagate for a few hours. Using this non-stationary characteristic, a period of 18.5 min for the induced AGW was obtained in this study from the time-frequency spectral analysis based on the Wigner-Ville distribution. Figure 5 shows the WVD spectrum of the virtual height profile with time at a sounding frequency of 4.5 MHz.
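A minimal sketch of this discrete-time evaluation, assuming SciPy for the Hilbert transform: the analytic signal suppresses aliasing and negative-frequency cross-terms, and the FFT runs over the lag index. The window length is arbitrary and the constant 2T scaling is dropped.

```python
# Sketch: discrete Wigner-Ville distribution of an analytic signal, formed by
# FFT of the instantaneous autocorrelation z((n+k)T) z*((n-k)T) over lag k.
import numpy as np
from scipy.signal import hilbert

def wvd(s, half_window=64):
    z = hilbert(s)  # analytic signal, as described above
    n_t = len(z)
    out = np.zeros((n_t, 2 * half_window), dtype=float)
    k = np.arange(-half_window, half_window)
    for n in range(n_t):
        lo, hi = n + k, n - k
        valid = (lo >= 0) & (lo < n_t) & (hi >= 0) & (hi < n_t)
        acf = np.zeros(2 * half_window, dtype=complex)
        acf[valid] = z[lo[valid]] * np.conj(z[hi[valid]])
        # ifftshift puts lag k = 0 at index 0 before the FFT over k:
        out[n] = np.abs(np.fft.fft(np.fft.ifftshift(acf)))
    return out  # rows: time samples, columns: frequency bins (up to scaling)

# e.g. a chirp concentrates along its instantaneous frequency in the WVD:
t = np.linspace(0, 1, 512)
print(wvd(np.cos(2 * np.pi * (20 + 30 * t) * t)).shape)  # (512, 128)
```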
The strongest wave was obtained at a wave period of 27 min but occurred during a non-predicted time from 1 UT to 6 UT. A weak wave with a period of 18.5 min can be identified from the start of the solar eclipse to a couple of hours after the end of the eclipse and is expected to be the induced AGW. The third wave, with a period of 11 min, also occurred during a non-predicted time from 1 UT to 6 UT. In comparison, Figures 6 and 7 show the WVD spectra of the virtual height profiles with time at 4.5 MHz on the two control days separately. These plots clearly describe both the time and frequency characteristics of a signal spectrum. Figure 8 shows the corresponding three-dimensional perspectives of the 'windowed' WVD spectrum of Figure 5, obtained with a narrow period pass filter at 18.5 min. This 18.5-min period AGW induced by the solar eclipse is consistent with the observations by Cheng (1993) and the early work reported by Davis and da Rosa (1970). Further evidence for the induced wave is shown by the foE and foF1 profiles in Figure 1: clearly, there are short ripples with ~20-min periods occurring after the eclipse maximum. DISCUSSION AND CONCLUSIONS In this study an automatic ionogram scaling algorithm incorporating fuzzy classification techniques has been developed and applied to a sequence of ionograms recorded on the eclipse day of October 24, 1995 at Chung-Li. The scaling results indicate significant depletions of foE and foF1 during the eclipse time. Compared to the non-eclipse time, the associated electron density was decreased by up to 33% at the eclipse maximum. Furthermore, to search for the AGWs induced by the solar eclipse, the maximum entropy method was first used to derive the associated wave frequencies. The maximum entropy method, like the traditional Fourier transform, determines the energy distribution of a signal only as a function of frequency and, unfortunately, obscures the associated time characteristics. In contrast, the Wigner-Ville distribution yields both temporal and frequency information. In the F1 and the lower part of the F2 layer, the time-frequency spectrum of the iso-frequency virtual height profiles in this study shows a weak wave with an 18.5-min period obtained at the predicted time, from the start of the solar eclipse to a couple of hours after the end of the eclipse. This weak wave may be masked by other strong waves caused by other transport or thermal processes in the ionosphere, but it is suggested to be the corresponding AGW induced by the solar eclipse. However, it is realized that the present results, particularly those obtained by observing the ionosphere with a single instrument at a particular observatory, cannot offer an ultimate confirmation. Future studies of the AGWs induced by a solar eclipse should be conducted in greater detail utilizing two or more instruments at a particular site or, even better, at various sites.
ANALYSIS OF INTERNET DATA CUSTOMER DISRUPTION SERVICES TARGET COMPLETION TIME AT THE BUSINESS GOVERNMENT ENTERPRISE & WIFI OPERATION UNIT PT. TELECOMMUNICATIONS INDONESIA. TBK (PERSERO) AREA OF NORTH SUMATRA The rapid, continuous development of technology and information around the world has driven the growth of internet use worldwide. The internet connects people to one another, provides information, and serves as a means of entertainment and communication. The internet has become part of human life, a basic need alongside food, clothing and shelter. Knowing no boundaries of space and time, the internet provides various conveniences for daily activities, which has led to a high number of internet users in Indonesia. To gather information on customer satisfaction and customer complaints regarding the resolution time for internet data network disruptions, the company has set a target troubleshooting time of 3.2 hours, counted from the moment a report is received from the customer by an officer via a communication device, through the officer's on-site check, until normal internet data service is restored. It is very important that customers experience fast and high-quality service when repairs are carried out, but in reality there are still variances in complaint resolution, as can be seen in Table 1.1. The target completion time was not reached from July to December 2021, throughout 2022, or in 2023; this recurring shortfall calls for a study of why the disruption-resolution target is not achieved, in order to improve internet data customer service in the North Sumatra region. According to monitoring results from the telkomcare application, the standard completion time has never been achieved 100% in the last three years (2021-2023). 1.2. Formulation of the Problem Based on the background above, the problem addressed in this study is the failure to meet the target time for resolving internet data customer service interruptions; to answer this problem, there are several research questions. 1. How are internet data customer service disruptions handled? 2. What causes the time standard for resolving internet data disturbances not to be met? 3. What solutions can be given to the problem of internet data interference? Research Objectives This research has objectives that refer to the formulation of the research problems, namely the following. 1. To identify the actual conditions in resolving internet data disturbances. 2. To determine the causes of the non-achievement of time standards in the completion of internet data disruption services. 3. To provide solutions to the causes of failure to resolve internet data disruption services within the time standards. LITERATURE REVIEW 2.1. Services Service means helping and providing everything that is needed by other people, such as guests or buyers. In the field of management, some experts describe the word 'Service' as follows. Self Awareness and Self Esteem: instilling self-awareness that serving is a duty that must be carried out while maintaining one's own dignity and that of the party being served. Empathy and Enthusiasm: showing empathy and serving customers with passion.
Reform: striving to always improve service. Vision and Victory: looking to the future and providing good service so that all sides win. Initiative and Impressive: providing service with full initiative and making an impression on the parties served. Care and Cooperative: showing concern for customers and fostering good cooperation. Empowerment and Evaluation: empowering oneself in a directed manner and always evaluating every action taken. 2.2. Customer A customer is someone who continuously and repeatedly comes to the same place to satisfy their desires by purchasing a product or obtaining a service and paying for that product or service. Customer service Customer service is the ability of knowledgeable, capable and enthusiastic employees to deliver products and services to internal and external customers in a way that satisfies their needs, both identified and unidentified, with a positive end result. Customer service comprises the various activities across all business areas, from ordering and processing through to delivering results, that use communication to strengthen cooperation with consumers. Customer perceptions of value and quality are often determined by the customer service that accompanies a company's main product. Customer service can even become the main weapon in winning the competition, since many companies offer the same product to customers. Customers need complete and clear information, faster service, service convenience, and so on. Customer Satisfaction Definition of Customer Satisfaction Satisfaction is a person's feeling of pleasure or disappointment resulting from a comparison between the perceived performance (or outcome) of a product and his or her expectations. Kotler and Anderson state that customer satisfaction contributes to a number of crucial aspects, such as creating customer loyalty, increasing company reputation, reducing price elasticity, reducing future transaction costs, and increasing employee efficiency and productivity. There are five principles of customer satisfaction, as follows: 1. Customer satisfaction is a strategic and critical weapon that results in increased market share and increased profits. Customer Satisfaction Factors There are five main customer satisfaction factors that must be considered, namely the following. 1. Product quality: customers will be satisfied if their evaluation shows that the products they use are of high quality. 2. Quality of service: customers will be satisfied if they receive good service or service that meets expectations. 3. Emotional factors: customers feel proud and confident that other people will admire them when they use products of certain brands, and such customers tend to have a higher level of satisfaction; this satisfaction derives not from product quality but from the social value that makes the customer satisfied. 4. Price: products of the same quality that are priced relatively low provide higher value to customers. 5. Cost: customers who do not need to incur additional costs or waste time to obtain a product or service tend to be satisfied with that product or service. 2.4. Quality Definition of Quality Quality refers to the level of goodness or badness of something, its degree or grade. To have quality means to be of a good standard. The international definition of quality (BS EN ISO 9000:2000) is the degree to which a set of inherent characteristics fulfils certain standards.
Several experts have also defined quality. Juran (1962) says that "quality is conformity with the purpose or benefits," while Deming (1982) says that "quality must aim at meeting customer needs now and in the future," meaning that quality must be grounded in customer satisfaction itself. Service Quality Service quality is the size of the gap between customers' expectations and the reality of the service they receive. It can be identified by comparing customers' perceptions of the service they actually receive with the service they expect. According to Tjiptono, service quality is the expected level of excellence, and the control over that excellence, needed to fulfil customer desires. A company's ability to provide quality service to consumers is one of its success factors. Service quality is consumers' cognitive evaluation of the delivery of a company's product or service: if the service a company provides is good, the process results in high consumer satisfaction and a high tendency to repurchase. Product Quality Product quality is an important factor for consumers in choosing a company's products. The products a company offers must be well tested and meet at least the minimum standards required by regulation. Consumers naturally prefer products of good quality that meet their needs and desires. If a company wants to maintain its competitive advantage in the market, it must understand what consumers want in order to differentiate its products from competitors' products. Product quality dimensions include performance, durability, conformance to specifications, features, reliability, aesthetics, and perceived quality. According to Philip Kotler, product quality also covers reliability, accuracy, ease of operation and repair, and other valued attributes. Quality is the main tool for achieving product positioning; it shows how well a brand or product performs its role and meets expectations, and it can be judged by how durable the product is, so that consumers can trust it. Research Thinking Framework A research thinking framework is an abstract construct used to build a study. The theoretical framework explains the relationships between the variables in the study and provides a basis for developing hypotheses. It can also serve as a guideline for data collection and analysis. A theoretical framework is usually built on theory and literature relevant to the research topic and can contain concepts, operational definitions, and the relationships between research variables. Constructing a theoretical framework is very important because it helps clarify the research focus, directs data collection, and assists in interpreting and analyzing the data obtained. In addition, it can be used to compare the research results with the results of earlier studies. The research framework can be seen in Figure 2.
3.1. Type of Research The research used in this study is qualitative research. Qualitative research does not involve calculations and numbers; its focus is on providing an actual and systematic description of the factors, characteristics, and relationships between particular phenomena. The aim is to explore and strengthen predictions about an observed phenomenon based on data obtained from the field. In this paper, qualitative research is used to uncover facts and provide a systematic picture of the reality on the ground. Bogdan and Taylor (1975) state that qualitative research produces descriptive data in the form of written or spoken words from people and observable behavior. Here, the researchers use qualitative research to identify the actual conditions in resolving internet data disturbances and the causes of the failure to meet the time standard for completing internet data disturbance services. In qualitative research, the researcher analyzes the situation, reports it as an analysis result, and provides recommendations that will be used to improve the resolution time for internet data disturbances. PT. Telekomunikasi Indonesia, Tbk (Persero) North Sumatra region has three customer segments that serve as the research sample, namely the Business, Enterprise, and Government segments. 3.2. Research Location and Research Time The location of this study is PT. Telekomunikasi Indonesia, Tbk (Persero) North Sumatra region, based at Jl. WR Supratman No. 11, Proklamasi, Kec. Siantar Barat, Pematang Siantar City, North Sumatra, with service areas in Kabanjahe, Kisaran, Padang Sidempuan, Pematang Siantar, Rantau Perapat, and Sibolga. The research was carried out from March until completion. 3.3. Types and Sources of Data The data sources used in this study are as follows. Primary Data Primary data are collected directly from the source through research or observations conducted by the researchers. Primary data are usually new data that have never been collected before, gathered for the specific purpose of answering the research question at hand. The primary data in this study are direct observations and interviews related to the handling of customer complaints at PT. Telekomunikasi Indonesia, Tbk (Persero) North Sumatra region. Secondary Data Secondary data are data that have previously been collected and processed by other parties and can be reused by researchers to answer the research questions; in this study they were obtained from books, the internet, and other literature. 3.4. Data Collection Method Data collection techniques are used to help the researchers obtain valid and reliable data. In this study, the respondents are helpdesk staff who are directly involved in handling customer complaints. The data collection methods are as follows. 1. Observation. Direct observation in the field of the research object at the Business Government Enterprise & WIFI Operation Unit of PT. Telekomunikasi Indonesia, Tbk (Persero) North Sumatra region, regarding the handling of customer complaints. 2. Interview. Interviews were conducted to obtain information from sources; the data were gathered through direct interviews with employees in the divisions related to complaint handling at PT.
Telekomunikasi Indonesia, Tbk (Persero) North Sumatra region, concerning the research variable, namely the handling of customer complaints. 3. Questionnaire. Questionnaires are one of the most popular data collection methods among social and business researchers because of their ability to gather diverse and relevant information. With a questionnaire, the researchers can collect data on topics such as the causes of internet data disturbances and the completion of internet data disturbance reports at PT. Telekomunikasi Indonesia, Tbk (Persero) North Sumatra region. 3.5. Data Processing Method A data processing method is a process for converting raw data into useful information. It includes a variety of techniques and tools to clean, analyze, and interpret data, with the goal of gaining a deeper understanding of the phenomenon or problem being researched and making better-informed, more effective decisions. This research processes data using a fishbone diagram. This method is used to identify the factors that contribute to a failure or problem and to analyze the relationships between those factors, so that the research produces more comprehensive results and more effective solutions to the problems at hand. The fishbone diagram used in this study can be seen in the accompanying figure. 3.6. Data Analysis Method Data analysis techniques are used by the researchers to solve the problems that arise in the company. After the data are processed, it is important to carry out an in-depth analysis and to design recommended solutions to overcome the problems or improve business performance. One way to obtain more detailed and in-depth information is through in-depth interviews with relevant stakeholders or employees; these give the researchers deeper insight into the situation being studied and valuable input from people with extensive experience and knowledge. RESEARCH RESULTS The PT. Telekomunikasi Indonesia, Tbk (Persero) North Sumatra Business Government Enterprise & WIFI Operation Unit is a business unit operating in North Sumatra, Indonesia. It is part of PT. Telekomunikasi Indonesia, Tbk (Persero), the largest telecommunications company in Indonesia. The unit serves three main customer segments, namely Business, Enterprise, and Government, providing telecommunications and internet services to business, corporate, and government customers in the North Sumatra region. The unit's mission is to manage the BGES & WIFI Operation function to support the attainment of quality information exchange; that is, the company aims to manage BGES (Business Government Enterprise & WIFI) operations effectively and efficiently to support a quality exchange of information for its customers. The unit's job responsibility is to achieve expansive unit performance by disseminating work programs to staff, that is, to achieve the set performance targets by communicating the established work programs to all staff of the unit. Respondent Profile The population in this study consists of the staff and technicians at the Business Government Enterprise & WIFI Operation Unit, totaling 67 people. This study used purposive sampling to determine the respondents.
The respondents selected were those directly involved in handling customer complaints at PT. Telekomunikasi Indonesia, Tbk (Persero) North Sumatra region, based at Jl. WR Supratman No. 11, Proklamasi, Kec. Siantar Barat, Pematang Siantar City, North Sumatra, with service areas in Kabanjahe, Kisaran, Padang Sidempuan, Pematang Siantar, Rantau Perapat, and Sibolga. Direct interviews were conducted and questionnaires were distributed to the staff and technicians of the Business Government Enterprise & WIFI Operation Unit. The sample size was determined using the Slovin formula: n = N / (1 + N e^2), where n is the sample size, N is the population size, and e is the margin of error. With N = 67 and the commonly used 5% margin of error, the formula gives about 57.4, so 58 respondents were included in this study. The respondent profile consists of the respondent's name, age, gender, and job placement. 4.3. Identification of the Customer Complaint Handling Business Process Based on the field observations, this research reveals that the business process for handling customer complaints consists of several steps. The first step is receiving the complaint, in which customers express their complaints through the various available communication channels. These complaints are then systematically recorded for management and further analysis. The next step is complaint analysis, in which the complaint handling team analyzes the root cause of the problem underlying the complaint. After that, corrective actions aimed at resolving the complaint are determined. The solution is then implemented by carrying out the predetermined corrective actions, followed by follow-up with the customer to ensure that they are satisfied. These results provide insight into an effective business process for handling customer complaints, which is expected to help the company improve its service and customer relationships. The resulting complaint handling business process at PT. Telekomunikasi Indonesia, Tbk (Persero) North Sumatra region is presented below. To understand and improve the customer experience, it is important for the company to track and analyze the number of complaints received from each customer segment. This study uses detailed documented data to describe the number of complaints received from the Business, Enterprise, and Government segments at PT. Telekomunikasi Indonesia, Tbk (Persero) North Sumatra region, as shown in Table 4.2. To gain a deeper understanding of the pattern of complaints from each customer segment within a specific timeframe, complaint data from January 2021 to February 2023 were analyzed. The results are presented as a histogram, which provides a clear, easy-to-understand visualization of the number of complaints received from the different customer segments during the period. The histogram of untimely complaints for the period January 2021-February 2023 by customer segment can be seen in Figure 4.2. From the histogram, the customer segment that most often experiences untimely complaint handling is Enterprise, with 54 cases, compared with 6 in the Business segment and 3 in the Government segment.
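For readers who want to reproduce the Slovin sample-size computation used above, here is a minimal sketch; the 5% margin of error is an assumption inferred from the reported population (67) and sample (58), not a value stated explicitly in the study:

```python
import math

def slovin_sample_size(population: int, margin_of_error: float) -> int:
    """Slovin's formula: n = N / (1 + N * e^2), rounded up to a whole respondent."""
    return math.ceil(population / (1 + population * margin_of_error ** 2))

# 67 staff and technicians, assumed 5% margin of error
print(slovin_sample_size(67, 0.05))  # -> 58
```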
From the histogram in Figure 4.2 it can be seen that the data are not evenly distributed: delays in handling occur most often in the Enterprise segment. In addition to analyzing customer segments, this study analyzes the documented complaint data in detail with a focus on work areas, in order to identify possible patterns and trends and to provide more comprehensive insight for improving service quality. To gain a more in-depth understanding of the pattern of complaints from each work area within a specific timeframe, the complaint data from January 2021 to February 2023 were analyzed. The results of this analysis are presented as a pie chart of the number of untimely complaints by work area, which can be seen in Figure 4.3. Based on Figure 4.3, the area that most often experiences untimely complaint handling is Kisaran, where 41% of the cases occurred, followed by 21% in Rantau Perapat and 17% in Pematang Siantar. The diagram shows that the data are not evenly distributed: most of the untimely complaints occur in Kisaran. CONCLUSIONS AND SUGGESTIONS 5.1. CONCLUSION Based on the analysis and discussion of the research on the untimely resolution of internet disturbance complaints at PT. Telekomunikasi Indonesia, Tbk (Persero) North Sumatra region, it can be concluded that several causative factors prevent the standard complaint-handling time from being achieved, summarized as follows. The PT. Telekomunikasi Indonesia, Tbk (Persero) North Sumatra Business Government Enterprise & WIFI Operation Unit is a business unit responsible for providing telecommunications and internet services to business, corporate, and government customers in North Sumatra, Indonesia. As part of PT. Telekomunikasi Indonesia, Tbk (Persero), the largest telecommunications company in Indonesia, the unit has a mission to manage the BGES (Business Government Enterprise & WIFI Operation) function to support quality information exchange. The unit's main objective is to achieve expansive unit performance by disseminating work programs to staff. This is done by communicating the established work programs to all unit staff members, so that every staff member has a clear understanding of the work program that must be carried out to achieve optimal unit performance.
5,023.2
2023-06-28T00:00:00.000
[ "Computer Science" ]
Derivation of feline vaccine-associated fibrosarcoma cell line and its growth on chick embryo chorioallantoic membrane – a new in vivo model for veterinary oncological studies Feline vaccine-associated fibrosarcomas are the second most common skin tumor in cats. The methods of treatment are surgery, chemotherapy, and radiotherapy. Nevertheless, the use of cytostatics in feline vaccine-associated sarcoma therapy is limited by their adverse side effects, high toxicity, and low biodistribution after i.v. injection; therefore, much research on new therapeutic drugs is being conducted. In human medicine, the chick embryo chorioallantoic membrane (CAM) model is used as a cheap and easy-to-perform assay to assess the effectiveness of new drugs in cancer treatment, and various human cell lines grow different tumors on the CAM. In veterinary medicine, such a model has not been described yet. In the present article, the derivation of a feline vaccine-associated fibrosarcoma cell line and its growth on the CAM are described. The cell line and the grown tumor were confirmed by histopathological and immunohistochemical examination. To the best of our knowledge, this is the first attempt to create such a model, which may be used for further in vivo studies in veterinary oncology. Introduction Feline vaccine-associated fibrosarcomas are the second most common kind of skin tumor found in cats. Predisposing factors are vaccination against rabies and feline leukemia virus (FeLV), owing to the aluminum adjuvant ions contained in these vaccines (Madwell et al. 2001; Couto et al. 2002). The exact mechanism of fibrosarcoma formation in cats is unknown, but it is suspected that excessive inflammation of the skin at the injection site transforms into neoplasia. According to the latest reports, genetic predisposition may also be important (Nambiar et al. 2001). It has been proven that cats injected many times in the same place (especially in the interscapular region) are more likely to develop tumors. Thus, according to the newest guidelines published by the Vaccine-Associated Feline Sarcoma Task Force in the U.S., cats should be injected near the pelvic limbs, alternating sides, because in severe cases leg amputation can be performed. The initial diagnosis is made based on the history, clinical examination of the patient, and the results of a fine-needle aspiration biopsy of the tumor; the definitive diagnosis is based on a histopathological examination of the tumor after its surgical resection. The methods of treatment are surgery, chemotherapy, and radiotherapy. Surgery is the first-line treatment; however, not all cancers can be removed surgically, depending on the type, the level of malignancy, the localization and size of the tumor, and above all the condition of the patient. Chemotherapy and radiotherapy serve as additional treatments. Chemotherapeutic agents such as doxorubicin, cyclophosphamide, and vincristine have been shown to be highly effective against soft-tissue sarcomas, including feline vaccine-associated fibrosarcomas. Unfortunately, the use of cytostatics in adjuvant therapy of feline vaccine-associated fibrosarcomas is limited by their adverse side effects, high toxicity, and low biodistribution after intravenous (i.v.) injection (MacDonald 2009). New drugs, e.g., nanoparticles conjugated with cytostatics, are currently being tested (Brigger et al. 2002). Animal models (most commonly rodent models) are used during preclinical in vivo studies.
In human medicine, the chick embryo chorioallantoic membrane (CAM) model is also used as a cheap and easy-to-perform assay to assess a cancer's ability to metastasize, its pro- and anti-angiogenic potential, and the effectiveness of new drugs (Dagg et al. 1956; Armstrong et al. 1982; Strum 1983; Ribatti et al. 2000; Vargas et al. 2007). As it has many advantages compared with the mouse model, we decided to create the first CAM model in veterinary medicine. The main advantages of this model are the ease of implementation and the faster tumor growth compared with other animal models (e.g., 5-7 days on the CAM versus about 3-6 weeks in rodent models). Moreover, it is a more ethical and relatively cheap model, and there is no need to keep animals. If experiments finish before hatching, then according to European law (Directive 2010/63/EU of the European Parliament and of the Council of 22 September 2010 on the protection of animals used for scientific purposes) the approval of a bioethical commission is not necessary. The only disadvantage is that the tumors exist for only 7-10 days, because chick embryo development lasts 21 days and the tumor cells are usually implanted between the 6th and the 8th day of the chick embryo cycle. This model is well described for the human neuroblastoma cell line (IMR 32) (Balke et al. 2010) and human osteosarcoma cell lines (HOS, MG63, MNNG-HOS, OST, SAOS, SJSA1, U2OS, ZK58) (Mangieri et al. 2009). It has been approved by the United States Food and Drug Administration (FDA) as an alternative method for preclinical testing. In veterinary medicine, such a model may simplify the conduct of in vivo studies, enable better knowledge of the biology of various tumors, and allow the effectiveness of different novel drugs to be assessed. Thus, the present study aimed to isolate a feline vaccine-associated sarcoma cell line and to cultivate it on the CAM, in order to create a new model for further in vivo veterinary oncologic studies. To the best of our knowledge, this is the first study concerning the use of the CAM in veterinary medicine. Tumor sample The tumor sample was obtained from an 8-year-old feline patient with a history and clinical signs typical of vaccine-associated feline fibrosarcoma. The patient had been diagnosed with a recurrence of vaccine-associated feline fibrosarcoma in the interscapular region - the same place where the primary tumor had been surgically removed 2 years before. A fine-needle aspiration biopsy confirmed the vaccine-associated fibrosarcoma. Blood tests and a chest radiograph were performed; the blood results were within reference values, and no metastases to the lungs were found. The patient was qualified for surgery, and a surgical resection was performed according to the standard procedure. The histopathological examination of the surgically removed tumor sample confirmed the previous diagnosis (feline vaccine-associated fibrosarcoma). Isolation of the feline vaccine-associated fibrosarcoma cell line (FVAF1) The procedure for isolating cells from tumor tissue has been described previously (Pawłowski et al. 2009; Król et al. 2010). Immediately after resection, the tumor was aseptically collected into a flask containing Dulbecco's Modified Eagle Medium (DMEM) and transported to the cell culture laboratory. The tumor sample was then sliced and cultured overnight in collagenase-containing DMEM according to the protocol of Limon et al. (1986), modified by Dr Eva Hellmen (Swedish University of Agricultural Sciences, Sweden).
The following day, the medium was centrifuged and the pellet was resuspended in fresh culture medium. The cells were cultured under optimal conditions - DMEM enriched with 10% (v/v) heat-inactivated fetal bovine serum (FBS), penicillin-streptomycin (50 IU mL−1), and fungizone (2.5 mg mL−1) (reagents obtained from Sigma Aldrich, USA), in an atmosphere of 5% CO2 and 95% humidified air at 37°C - and routinely sub-cultured every other day. Histopathological and immunohistochemical examination The tissue sample embedded in a paraffin block was cut into 5 μm sections and baked at 37°C overnight. After dewaxing in xylene and rehydration in ethanol, the slides were placed in 0.02 M citrate buffer, pH 6.0, and boiled in a decloaking chamber for antigen retrieval. The tumor type was established based on the histopathological features of vaccine-associated feline sarcomas described previously in the literature (Goldschmidt and Hendrick 2002; Madwell et al. 2001; Vascellari et al. 2003), whereas the tumor grading was based on the criteria proposed by Couto et al. (2002). The feline vaccine-associated fibrosarcoma cells (FVAF1) were cultured on Lab-Tek (Nunc Inc., USA) 4-chamber culture slides and fixed with ethanol after 24 h. Immunohistochemical examination of the expression of vimentin, smooth muscle actin, desmin, and cytokeratin was performed on the tissue sample as well as on the FVAF1 cell line, to confirm that the tumor cells expressed the same antigens as the tissue sample. The use of these primary antibodies for different fibrosarcomas, including feline fibrosarcomas, both in cell lines and in tissue samples, was described in many previous studies (Vascellari et al. 2003; Madwell et al. 2001; Goldschmidt and Hendrick 2002) and in PhD theses ("Untersuchungen zur Transkription von Wachstumsfaktoren und Zytokinen an felinen Vakzinationsstellen", Löhberg-Grüne, 2009; "Klonierung feliner Fibrosarkomzelllinien und deren zytogenetische Charakterisierung", Wasieri, 2009). The samples were incubated in Peroxidase Blocking Reagent (Dako, Denmark) for 10 min at room temperature prior to antibody incubation. After a 30 min incubation in 5% bovine serum albumin (Sigma Aldrich, Germany), the following primary antibodies were used (diluted in 1% bovine serum): monoclonal mouse anti-human cytokeratin (Clone MNF116); monoclonal mouse anti-human vimentin (Clone Vim 3B4); monoclonal mouse anti-human actin (smooth muscle) (Clone 1A4); and monoclonal mouse anti-human desmin (Clone D33), all at a concentration of 1:50 and obtained from Dako (Denmark). According to the manufacturer's instructions, the slides were incubated with the antibodies at +4°C overnight or for 1 h at room temperature. For the staining, the anti-mouse EnVision kit (labelled polymers consisting of secondary antibodies conjugated with the HRP enzyme complex, obtained from Dako) was used. To develop the coloured product, the 3,3′-diaminobenzidine (DAB) substrate was used (Dako). Finally, haematoxylin was used for nuclear counterstaining. Staining without the use of primary antibodies was performed as a negative control for each immunohistochemical analysis. The pictures were taken using an Olympus BX60 microscope (Olympus, Germany). The tumor grade was established according to the grading scheme previously adapted to the dog (Powers et al. 1995) and recently applied to feline vaccine-associated fibrosarcomas (Couto et al. 2002; Vascellari et al.
2003), based on cellular differentiation, the presence and extent of necrosis within the neoplasm, and the mitotic rate. FVAF1 cell line growth on CAM The method for implanting feline tumor cell cultures was developed by modifying the method used to implant human tumors (e.g., neuroma, osteosarcoma) on the chick embryo chorioallantoic membrane (Balke et al. 2010; Mangieri et al. 2009). Thirty chick embryos (Ross 308 line, Pankowski Jan Poultry Hatchery, Poland) were held in a CO2 incubator (SMA Coudelou ZA 37210, France) under standard conditions (65% humidity, 5% CO2, and 37.5°C) from the start of embryogenesis. On the 5th day of incubation, a 5 mm × 5 mm 'window' was made in the eggshell at the blunt end of each egg. The parchment-like membrane was carefully removed and a sterile silicone ring was placed on the CAM of each egg under aseptic conditions. Medium-hard silicone rings - 7 mm in external diameter, 5 mm in internal diameter, and 2 mm thick - were specially designed for this experiment, produced by Zegir PTHU (Poland), and sterilized before use. The 'window' in the eggshell was closed using Polopor (Viscoplast, Poland), a special adhesive tape with high air and water-vapor permeability. After 24 and 48 h, the eggs were candled to check viability and to estimate the mortality associated with manual manipulation. The silicone rings were placed on the CAM 2 days before tumor cell inoculation in order to exclude the mortality of chick embryos caused by blood vessel damage and a possible excessive inflammatory reaction due to placing the rings. Moreover, after those 2 days the silicone rings were more stable on the CAM, as blood vessels had spread into them. On the 7th day of incubation, FVAF1 cells (5 × 10^6 cells in 25 μl of medium per egg) were administered into 20 chick embryos; 7 chick embryos were used as a negative control (inoculated only with 25 μl of medium (DMEM) per egg). On that day, chick embryo viability was 85% (17 out of 20 chick embryos were alive; 3 were dead, probably owing to the mechanical manipulations connected with removing the parchment-like membrane and placing the silicone rings on the CAM). Both the tumor cells and the medium were injected exactly into the silicone rings. Then the 'window' in the eggshell was closed with the adhesive tape again. Eggs were candled 24 and 48 h later to check their survival. On the 12th day of incubation, 17 of the 20 inoculated chick embryos were alive and were examined with a video otoscope (Welch Allyn MacroView™ Veterinary Otoscope 71032, USA) for tumor growth; however, no tumor growth was observed (photo 3). On the 16th day of incubation, 16 chick embryos were alive and tumor growth was observed. To confirm the growth of the injected tumor cells, a histopathological examination of the tumors was performed on the 18th day of incubation, according to the procedures given above. We performed the histopathological examination on the 18th day because the CAM starts to degenerate on days 18-19. Until the end of the experiment, 5 of the 6 surviving chick embryos used as a negative control with medium alone were alive and did not show any tumor growth. Histopathological and immunohistochemical analysis of the cell line FVAF1 The slides stained with haematoxylin-eosin showed the presence of multinucleated giant cells, pleomorphic cells with mild to marked atypia, and mitotic figures. These findings are most closely related to vaccine-associated feline sarcoma (VAFS) (Fig. 1a and b).
Cell lines derived from the vaccine-associated feline sarcoma were strongly positive for vimentin (+++) (Fig. 2a); some cells were positive for smooth muscle actin (+, +/−) (Fig. 2b); and single cells were positive for desmin (+, +/−) (Fig. 2c) and for cytokeratin (+/−) (Fig. 2d). The strong positive staining for vimentin confirms the mesenchymal origin of the neoplastic cells. Feline vaccine-associated fibrosarcoma growth on the chick embryo chorioallantoic membrane Tumor growth was checked every 2 days after inoculation. Positive tumor development was recorded when tumor angiogenesis was visible and when the tumor was at least 2 mm in diameter. On the 16th day of incubation, tumors of 5 mm (on average) in diameter were observed (Fig. 3a, b, c and d), with capillary vessels visibly growing into the tumor (Fig. 3c). The tumor had a smooth surface and an ovoid to spherical shape and was strongly attached to the CAM. It was located not exactly within the silicone ring but a few millimeters outside of it; either the tumor cells had spread through the vessels, or the position of the silicone ring had shifted slightly during embryogenesis and the growth of the CAM and chick embryo. Histopathological examination The histopathological assessment of the tissue sample of the small solid mass within the CAM showed that the tumor was an undifferentiated sarcoma (Fig. 4a and b). The mass was well demarcated but non-encapsulated. Histologically, the sarcoma consisted of pleomorphic, polyhedral, round, or spindle-shaped cells with variation in the size and shape of the nuclei, nuclear hyperchromasia, prominent eosinophilic nucleoli, scant to abundant eosinophilic cytoplasm, and indistinct cell borders. Some cells were bi- or trinucleated, and some had eccentrically placed nuclei resembling vacuolated cells. The neoplastic cells were mainly arranged without a specific pattern, and the mitotic rate was low. In addition, numerous newly formed vessels were present. No areas of necrosis within the tumor and no lymphocyte aggregates at the periphery of the mass were visible. Masson's trichrome staining confirmed the mesenchymal components of this tumor and allowed us to evaluate the amount and distribution of collagen (Fig. 4c). According to the grading system based on cellular differentiation, mitotic rate, and the presence and extent of necrosis, the sarcoma was classified as grade II. Immunohistochemical examination Immunohistochemical examination delineated the mesenchymal components of the tumor. The samples were strongly positive for vimentin (Fig. 5a). Some neoplastic cells were stained with the smooth muscle actin antibody; additionally, the anti-actin and anti-desmin antibodies labelled the walls of blood vessels (Fig. 5b and c). The neoplastic cells did not react with cytokeratin, which confirmed their non-epithelial origin (Fig. 5d); only some foci of presumptive squamous epithelial cells showed cytokeratin expression. According to the Couto et al. (2002) grading system, the tumor was classified as grade II malignancy. Discussion The only commercially available feline fibrosarcoma cell line is from the NBL Cell Line Collection, part of the American Type Culture Collection (ATCC, designation FC 77.T), and cannot be delivered to European countries (it is distributed only within the United States). That is the reason why the authors decided to isolate their own feline vaccine-associated fibrosarcoma cell line (FVAF1), which makes further investigation at the molecular level and the search for new methods of treatment possible.
Compared with the rodent model, the main advantage of the CAM model is that the experiment can be repeated many times and a large number of chick embryos can be used at one time, which gives more reliable results. This follows from the fact that it is an in vivo animal model that does not require the approval of a bioethics committee in European countries; performing the same number of experiments on the same number of rats would be practically impossible, not only because of animal-law regulations but also for ethical reasons. Moreover, tumor growth in the presented model takes 10 days, compared with 3 to 6 weeks in rodent models; this relatively fast tumor growth makes the model easier to use. Furthermore, the model has cognitive value in demonstrating that feline vaccine-associated fibrosarcoma can grow on the CAM. According to the available data in the field, human cancer cell growth on the CAM is usually visible 5-7 days after inoculation (on the 12th-14th day of incubation). In our animal model, tumors were first visible after 9 days (on the 16th day of incubation) (Balke et al. 2010; Mangieri et al. 2009), which may suggest slower growth of this particular cell line. A new element of the procedure (our modification) was the use of a video otoscope, which allowed relatively noninvasive visualization of tumor growth without enlarging the 'windows' made in the eggshells and without excessive embryo manipulation. This technique was developed by our team, and we consider it a highly effective and minimally invasive method compared with the methods described in the literature for checking the growth of human cell lines on the chick embryo chorioallantoic membrane using a microscope with an external light source (Balke et al. 2010). The literature on the CAM model reports many limitations in observing and photographically documenting tumor growth with a standard microscope and an external light source. In many cases it is impossible to take photographs without enlarging the 'window' in the eggshell, because during embryogenesis the silicone rings and tumors are not always visible in the middle of the window. On the other hand, in screening experiments we found that enlarging the 'window' in the eggshell increases the mortality of chick embryos, probably because the higher water permeability makes the chick embryos dry out. The video otoscope enables the observation of all tumors regardless of their localization, without enlarging the 'window' in the eggshell. There are some problems with taking high-quality photographs of living 3D organisms, but these do not depend on the kind of equipment used. The biggest difficulty we had to deal with while performing the screening studies was the high mortality of chick embryos associated with removing the parchment-like membrane. Silicone rings were placed on the CAM on different days of embryogenesis (5th, 6th, 7th, and 8th); the lowest mortality (around 10%) was noted on the 5th day, and the highest mortality, reaching nearly 60%, on the 8th day, probably because the more developed blood vessels of the CAM are more easily damaged during mechanical manipulation.
This model, when used before the 11th day of chick embryo incubation, is suitable for xenografts, because until that day the immune system of the chick embryo is immature. To the best of our knowledge, our implantation of an animal cell line on the CAM was the first such study in the field of veterinary oncology. However, further studies using different cell lines are required before this simple model can be used to investigate cancer biology and to assess the effectiveness of new anticancer drugs. Animal models are essential for bringing new therapeutic agents to market and cannot be avoided. We believe that demonstrating, for the first time, the growth of feline fibrosarcoma cells on the CAM will encourage other veterinary scientists to investigate further using other animal cell lines, and that it may serve as the basis for an inexpensive and relatively easy-to-use new experimental model. Open Access This article is distributed under the terms of the Creative Commons Attribution License which permits any use, distribution, and reproduction in any medium, provided the original author(s) and the source are credited.
4,737.6
2012-08-15T00:00:00.000
[ "Medicine", "Biology" ]
Crosstalk Analysis and Suppression of Single Chip SiC MOSFET Half-Bridge Circuit For hard-switched applications of the half-bridge circuit based on SiC MOSFETs, the crosstalk generation mechanism of the half-bridge circuit during the turn-on and turn-off processes is analysed. The influence of the driving resistance and the stray inductance of the driving circuit on the crosstalk is simulated and analysed. In the single-chip half-bridge circuit, the Miller clamp method with BJT + diode is used. By increasing the driving resistance and reducing the clamp-circuit inductance, the crosstalk suppression effect can be greatly improved. Optimizing the clamping circuit and reducing its inductance is an effective way to suppress crosstalk without affecting the switching speed. Introduction The half-bridge circuit is one of the basic circuit topologies widely used in synchronous rectification, full-bridge circuits, inverters, and other power applications. The application of wide-bandgap power electronic devices, represented by the SiC MOSFET, has further improved the performance of the half-bridge circuit [1,2]. Compared with a circuit built around a traditional Si IGBT, a half-bridge circuit based on SiC MOSFETs has lower switching loss and conduction loss at the same rated current. Thanks to the body diode structure of the SiC MOSFET, an external anti-parallel diode is no longer needed, which means that the half-bridge circuit based on SiC MOSFETs offers higher efficiency and higher power density [3][4][5]. However, there is always the problem of crosstalk between the upper and lower switches in hard-switched half-bridge circuits. SiC MOSFET devices with faster switching speeds produce larger dv/dt during the switching process, which makes the problem more serious [6,7]. Owing to gate-processing limitations of SiC MOSFETs, the threshold voltage and the negative safe gate voltage are small, so the positive and negative voltage spikes caused by crosstalk can easily lead to unintended turn-on of the device and can even lead to gate breakdown and damage to the device [8][9][10]. Therefore, it is necessary to analyse the crosstalk mechanism and suppress it to improve the reliability of the circuit. This paper introduces the crosstalk generation mechanism and the influencing factors of the half-bridge circuit based on SiC MOSFETs, proposes the Miller clamp method for crosstalk suppression, and analyses the crosstalk suppression of the single-chip SiC MOSFET parallel half-bridge circuit. Crosstalk Generation Mechanism of the Half-Bridge Circuit A typical half-bridge circuit and its driving circuit are shown in Figure 1. The upper and lower switches (M1, M2) share the bus voltage Vbus in series; therefore, when the drain-source voltage (denoted Vds) of one SiC MOSFET changes, the Vds of the other device also changes. For convenience of discussion, this paper assumes that M1 is the switch that actively turns on and off, while M2 performs no active switching. In typical applications, to avoid shoot-through failures, there is a dead time between the gate signals of the upper and lower switches. When the upper switch turns on and off, the lower switch is always in the off-state; therefore, in the following discussion, the lower switch is assumed to be in the off-state. Because the half-bridge circuit is usually operated with the upper and lower switches working symmetrically, the device parameters and driving parameters of the upper and lower switches are set to be the same.
Crosstalk during Turn-on The turn-on process of M1 can be divided into two stages. In the first stage, the channel current of M1 rises, with a current rise rate denoted did1/dt. The changing current generates a forward voltage drop VL across the line stray inductance Lloop. In the second stage, Vds1 begins to fall and Vgs1 enters the Miller plateau. As Vds1 drops, the voltage across the Miller capacitor Cgd drops and a discharge current is generated through Cgd. The magnitude of the discharge current is determined by the driving resistance, the positive driving voltage Vcc, and the Miller plateau voltage Vgsm. Assuming that Vgsm does not change during this process, the rate of change of Vds1 can be expressed by formula (1):

dVds1/dt = -(Vcc - Vgsm) / (Rg * Cgd1)    (1)

The junction capacitance Cgd1 represents the value of Cgd when the drain-source voltage is Vds1. The behaviour of Vds2 of M2 is correspondingly divided into two stages. In the first stage, the body diode (or anti-parallel diode) of M2 conducts and Vds2 = 0 V; in the second stage, the minority-carrier recombination of the diode ends, the junction capacitance of M2 begins to charge, and Vds2 rises, with Vds2 = Vbus - VL - Vds1, where Vbus is the DC voltage between the positive and negative buses of the half-bridge circuit. At the beginning of the second stage, Vds2 is very low and the junction capacitance Cgd2 of M2 is relatively large, so the variation of Vds2 is shared between Vdg2 and Vgs2. But this period is very short and Vgs2 is far from reaching its peak value, so the period of low Vds2 is not analysed, and only the situation when Vds2 is higher is considered. When Vds2 is high, Cgd2 is very small relative to Cgs2 and the change of Vds2 appears mainly across Vdg2; the rate of change of VL is negligible. Therefore, the rate of change of the drain-gate voltage of M2 can be expressed as:

dVdg2/dt ≈ dVds2/dt = -dVds1/dt = (Vcc - Vgsm) / (Rg * Cgd1)    (2)

As Vdg2 of M2 increases, a charging (displacement) current is generated through Cgd2. Part of this current flows into Cgs2 and charges it, causing the gate-source voltage of M2 to rise, while the other part flows into the loop containing Rg. The charging current of Cgd2 is denoted im, the charging current of Cgs2 is denoted igs, and the current flowing through Rg is denoted ig, which is assumed to be unchanged. According to circuit principles, the current relations (3)-(5) can be obtained:

im = Cgd2 * dVdg2/dt = (Cgd2 / Cgd1) * (Vcc - Vgsm) / Rg    (3)
igs = Cgs2 * dVgs2/dt    (4)
im = igs + ig    (5)

Figure 2 shows the waveform diagram of crosstalk during the switching of the half-bridge circuit. It can be seen from the figure that at time t0, Vds2 starts to rise and Vds1 falls accordingly. At this moment, the value of Cgd2/Cgd1 is largest; from (3), im is therefore also at its maximum at this moment. After that, Cgd2/Cgd1 decreases rapidly and im falls with it. From t0 to t2, the junction capacitance Cgs2 is charged by igs and Vgs2 rises; meanwhile, ig rises and igs falls. At t2, igs drops to zero and Vgs2 reaches its positive maximum value (Vgpm). If Vgpm is less than the threshold voltage Vth, M2 does not turn on and the positive effect of crosstalk on the Vgs of M2 can be ignored; if Vgpm exceeds Vth, the phase where Vgs2 exceeds Vth is marked as t1 to t3. During this interval (t2 to t4 in Figure 2), M2 turns on and its current id2 is superimposed on the current id1 of M1, producing a higher turn-on current spike. After t2, im is no longer sufficient to support Vgs2 and Cgs2 starts to discharge; Vgs2 drops until both Vgs2 and im reach zero at t4, and the crosstalk activation process ends.
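To make the coupling mechanism above concrete, the following sketch numerically integrates the gate-node relation implied by (3)-(5) for the passive switch. All component values here are illustrative placeholders (they are not the parameters of the devices used in this paper), and fixed capacitances are assumed even though Cgd is really voltage-dependent:

```python
# Hypothetical parameters, order-of-magnitude only (not this paper's device values)
Cgd, Cgs = 20e-12, 2e-9   # Miller and gate-source capacitances of M2, F
Rg = 10.0                 # total gate resistance, ohm
Voff = -1.8               # off-state gate bias from the auxiliary supply, V
dv_dt = 50e9              # drain-source slew rate seen by M2, V/s
t_ramp = 16e-9            # ramp duration, s (~800 V bus swing)

dt = 1e-11
vgs, vgpm = Voff, Voff
for step in range(int(60e-9 / dt)):
    slew = dv_dt if step * dt < t_ramp else 0.0
    # Gate-node KCL from (3)-(5): Cgd*(dVds/dt - dVgs/dt) = Cgs*dVgs/dt + (Vgs - Voff)/Rg
    vgs += (Cgd * slew - (vgs - Voff) / Rg) / (Cgd + Cgs) * dt
    vgpm = max(vgpm, vgs)

print(f"peak induced Vgs (Vgpm) ~ {vgpm:.2f} V")  # compare against the device Vth
```

In this lumped model the spike magnitude is set by the race between the voltage ramp and the gate time constant Rg(Cgd + Cgs); if the Vgpm computed this way exceeds Vth, the passive switch conducts briefly, which is exactly the turn-on current spike discussed above.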
Crosstalk during Turn-off As shown in Figure 2, the crosstalk of the turn-off process starts at t5. Vds1 rises and Vds2 falls. A discharge current flows through the Miller capacitor Cgd2 of M2, in the direction opposite to the positive direction of im defined in Figure 1. As Vds1 rises, Cgd1 falls and Cgd2 rises, so Cgd2/Cgd1 increases and im rises as well. At t6, Vds1 reaches its maximum value (neglecting the influence of Lloop) and im reaches its peak value. Almost at the same time, Vgs2 reaches its negative peak, denoted Vgnm. The change of Vds2 then ends and Cgs starts to discharge; Vgs2 decays, and im rapidly drops from its peak value to a fraction of the Cgs discharge current and continues to fall. At t8, Vgs2 and im drop to zero and the turn-off crosstalk process ends. The turn-off crosstalk can in principle be analysed with the same current relations (3)-(5); however, Vgnm of the turn-off crosstalk must take the entire turn-off process into account, so the relationship between Cgd and Vds becomes very complicated. Single-Chip Half-Bridge Miller Clamp Circuit The main influences on crosstalk are the driving resistance Rg, the junction capacitances Cgs and Cgd, the driving-loop stray inductance Lg, the load current Iload, and the driving voltage Vcc. The junction capacitances are determined by the selected MOSFET, and the load current and driving voltage usually cannot be changed for a given application. Therefore, the only quantities that can be optimized are the driving resistance and the stray inductance of the driving loop. We chose the Miller clamp method based on BJT + diode to analyse the crosstalk suppression performance in hard-switched half-bridge circuits. The half-bridge circuit with the SiC MOSFET Miller clamp is shown in Figure 3. The selected SiC MOSFET is Rohm's SCT3022KL, the selected BJT is Infineon's BCW68G, and the diode is ST Microelectronics' BAR42. The bus voltage is 800 V. In order to better suppress the positive fluctuation of Vgs, a stabilizing capacitor connected to an external 1.8 V auxiliary power supply is added to the driving circuit so that Vgs is held at -1.8 V when the switch is off. The Influence of Driving Resistance on Crosstalk in the Actual Circuit The driving resistance Rg is composed of the gate resistance Rgout and the internal driving resistance of the device. Different driving resistances mean different switching speeds, and the amplitude of the Vgs crosstalk fluctuation also differs. After the Miller clamp circuit is connected in parallel between the gate and source of the switch, the driving current essentially does not flow through the clamp circuit; the switching speed is still determined by the loop composed of the driving resistance Rg and the driving-loop stray inductance Lg. The crosstalk process under different driving resistances was tested (the load current is 68 A). The test circuits include the simple-drive half-bridge circuit without a Miller clamp shown in Figure 1 and the clamp-drive half-bridge circuit with the BJT + diode clamp shown in Figure 3. Since the real Vgs inside the device cannot be measured directly, the drain-source current is measured to reflect the crosstalk and its suppression. To improve the measurement accuracy, the current is measured with a high-precision shunt resistor. The test results are shown in Figure 4 and Figure 5. Comparing Figure 4 and Figure 5, it can be seen that as the driving resistance increases, the turn-on current overshoot of both test circuits decreases. The decrease in the turn-on current overshoot means that Vgpm decreases.
This result is consistent with the simulation result in Figure 3. Comparing the drain-source current and voltage waveforms of the switch with and without the clamp circuit under the same driving resistance, the voltage slew rates of the two are not much different, which shows that their switching speeds are the same; the turn-on current overshoot, however, differs considerably because of the Miller clamp. Figure 6 shows the turn-on loss of the SiC MOSFET with and without the Miller clamp. It can also be seen from the two figures that the current overshoot amplitude of the switch driven with the Miller clamp is lower and the overshoot time is shorter, so its turn-on loss is smaller; as the driving resistance increases, the difference between the two grows. The Effect of Clamp-Loop Stray Inductance on Crosstalk in the Actual Circuit The forward Vgs crosstalk fluctuation of the half-bridge circuit driven with the Miller clamp is difficult to suppress completely. Therefore, the clamping loop needs to be optimized as far as possible to reduce its stray inductance without affecting the switching speed. To understand the influence of the stray inductance Lgclamp on the crosstalk, simulations and actual measurements of the BJT-clamp-driven half-bridge circuit were carried out for different values of Lgclamp. The measurement circuit built according to the schematic is shown in Figure 7. The DUT is a switch that is always in the off-state. Each bridge arm has three SiC MOSFETs connected in parallel. The dashed boxes A, B, and C correspond to the clamping circuits. The stray inductance values extracted with the Q3D software between each clamp circuit and the corresponding switch are about 6 nH, 10 nH, and 14 nH, respectively. The bus voltage is 800 V, the driving resistance is 10 Ω, and the load current is 30 A. It can be seen that as Lgclamp increases, both the peak value of the unintended turn-on current in the crosstalk switch and the unintended turn-on time increase. Figure 9 shows the measured Vgs voltage waveform and drain-source current waveform of the crosstalk switch for the different clamp-loop inductances. The results show that as Lgclamp increases, the forward fluctuation amplitude of Vgs of the crosstalk switch, the peak value of the unintended turn-on current, and the unintended turn-on time all increase, consistent with the simulation results. In addition, as Lgclamp increases, the oscillation peak after Vgs falls and crosses zero also increases. These test results verify the earlier simulation analysis. Conclusions This paper focuses on hard-switched applications of the half-bridge circuit based on SiC MOSFETs and analyses the crosstalk generation mechanism during the turn-on and turn-off processes. The influence of the driving resistance and the driving-loop stray inductance on the crosstalk is analysed and simulated. In the single-chip half-bridge circuit, the Miller clamp method with BJT + diode is adopted, and the crosstalk suppression effect is improved by increasing the driving resistance and reducing the clamp-loop inductance. Of these, optimizing the clamp loop and reducing its inductance is the more effective approach because it does not affect the switching speed.
3,192
2021-02-01T00:00:00.000
[ "Engineering", "Physics" ]
Electro-Optical Imaging Technology Based on Microlens Array and Fiber Interferometer To reduce the volume and weight of traditional optical telescopes effectively, this article proposes an electro-optical imaging technology based on a microlens array and fiber interferometers. Pairs of microlenses in the microlens array collect light and couple it into a fiber interferometer to form interference fringes. The amplitude and phase of a large number of interferometer baselines are then analyzed to generate images. In this work, the principle of the electro-optical imaging technology is analyzed according to partially coherent light theory, and the microlens-array arrangement and baseline pairing methods are optimized for arbitrary targets. The simulation results show that the imaging resolution depends on the maximum baseline length and that the imaging quality can be effectively improved by adjusting the Nyquist sampling density and the baseline pairing method. This technology can provide an important reference for the miniaturization and planarization of imaging systems. Introduction Leaving aside the conditions of astronomical observation, the imaging resolution of traditional optical telescopes based on the principle of refraction can only be improved by increasing the aperture. However, the manufacture of large-aperture optical telescopes is an enormously expensive undertaking that also requires significant technical support. To overcome the limitation of image resolution by the size of a single aperture, many novel optical telescopes based on interference imaging have been designed in recent years. For example, the Cambridge Optical Aperture Synthesis Telescope (COAST) has achieved imaging [1], and J. A. Benson et al. completed the Naval Prototype Optical Interferometer (NPOI) optical synthetic aperture telescope array and successfully performed observation and image reconstruction [2,3]. In addition, several cutting-edge electro-optical imaging systems have been designed to greatly reduce the volume and weight of traditional optical telescopes, such as the geosynchronous earth orbit (GEO) telescopes, an interferometric imaging system built from arrays of telescopes. Other approaches combine observations from a separated-element interferometer with interferometric data obtained by optical masking of a single-dish telescope [4]. The most representative cutting-edge electro-optical imaging system is the Segmented Planar Imaging Detector for Electro-Optical Reconnaissance (SPIDER), developed by Lockheed Martin, which can realize a 10× to 100× reduction in size and weight by replacing traditional lens stacks with silicon-chip photonic integrated circuits (PICs) [5][6][7].
Most of the above telescopes are based on astronomical light interference and optical synthetic aperture imaging [8][9][10]. The optical synthetic aperture imaging technique improves on single-aperture resolution by measuring the interference between aperture pairs on the observation plane (i.e., the spectral values of the observed target). In this technique, the interference fringe of a pair of apertures corresponds to one spectral value of the target, so the complete spectrum of the target is usually obtained by changing the relative positions of the aperture pairs or by increasing the number of apertures. Optical fiber light paths are flexible, support long transmission distances, and are strongly resistant to interference. Using optical fiber to realize light interference is an important application of the interference phenomenon and greatly reduces the working distance of the interference system [11,12]. To reduce the volume and weight of traditional optical telescopes effectively, we propose in this paper a cutting-edge electro-optical imaging technique based on a microlens array and fiber interference. In this technique, the light from a scene is first collected by the microlens array and then coupled into the fiber interferometers. Each microlens pair is coupled by fiber to form an interference baseline, which has a significant effect on the imaging resolution. Thus, through an optimal microlens arrangement and pairing method, we can complete the spectrum sampling of the scene and generate high-resolution images. To establish the optical simulation model, the imaging principle based on partial coherence theory was studied first. Then, a complex rectangular sub-aperture arrangement and a symmetrical baseline pairing method were proposed to further improve the imaging resolution and quality. Moreover, we investigated the impact of the spatial frequency distribution (u, v) on imaging quality, including the resolution, field of view, and depth of focus of the imaging system. The simulation results provide a valid reference for the design of an actual system. Structure Design The structure of the electro-optical imaging system, which consists of a microlens array, fiber interferometers, and a processing module, is shown in Figure 1. The light information is collected and coupled into the fiber interferometers by the microlens array. The interference fringes are then detected and analyzed by the processing module. Finally, a computer generates an image based on the amplitude and phase information extracted from the interference fringes. Each fiber interferometer includes a pair of microlenses for measuring the spectrum of one baseline. The spectral distribution measurement is determined by the arrangement and pairing of the microlenses. As shown in Figure 1a, the rectangular microlens array uses a centrally symmetric baseline pairing method (e.g., pentagram with pentagram, diamond with diamond, etc.). We also propose a complex rectangular sub-aperture arrangement method to optimize the microlens-array arrangement (Figure 1b). The complex rectangular sub-aperture arrangement achieves different sampling intervals at high, intermediate, and low frequencies.
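As a rough illustration of how a centrally symmetric pairing maps lenslet positions to sampled spatial frequencies, consider the sketch below; the grid size, pitch, wavelength, and object distance are all assumed placeholder values rather than the paper's design parameters:

```python
import numpy as np

# Assumed geometry: N x N square lenslet grid with pitch p (placeholder values)
N, p = 8, 1e-3            # 8 x 8 lenslets, 1 mm pitch
lam, z = 633e-9, 100.0    # wavelength (m) and object distance (m)

xy = np.array([(i * p, j * p) for i in range(N) for j in range(N)])
xy -= xy.mean(axis=0)     # center the grid so pairing is symmetric about the origin

# Centrally symmetric pairing: the lenslet at +r pairs with the one at -r, so the
# baseline vector is 2r and the sampled spatial frequency is (u, v) = 2r / (lam * z).
pairs = [(k, len(xy) - 1 - k) for k in range(len(xy) // 2)]
uv = np.array([(xy[b] - xy[a]) / (lam * z) for a, b in pairs])

print(f"{len(pairs)} baselines, max |u| = {np.abs(uv).max():.3e} cycles/m")
```

Because each baseline vector is the lenslet separation divided by λz, stretching the array or re-pairing the lenslets re-tiles the (u, v) plane, which is the degree of freedom the complex rectangular arrangement exploits to mix sampling densities.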
The performance of the fiber interferometer is closely related to the quality of the interference fringes, and its core device is a 3-dB fiber coupler. Because a fiber coupler cannot achieve a coupling efficiency of 100%, the split ratio cannot be exactly 1:1 [13]. Differences in the interference arm lengths of the fiber-optic interferometer also introduce additional phase differences, which ultimately affect how faithfully the fringe contrast and phase are reproduced. Theory and Algorithm Figure 2 shows the principle and processes of the electro-optical imaging technology, which consist of four parts: interference between microlens pairs, calculation of the complex degree of coherence from the fringes, spectral coverage, and image reconstruction. Using the Van Cittert-Zernike theorem of partial coherence theory [8], this study analyzes the mutual intensity of a light field illuminated by an incoherent source and reaches an important conclusion: when the linear extents of the source and the observation area are much smaller than the distance between them, the complex coherence coefficient over the observation area is proportional to the normalized Fourier transform of the source intensity. According to the principle shown in Figure 2, the complex coherence coefficient between Q1 and Q2 in the aperture plane can be expressed as [14]: μ(Q1, Q2) = ∬ I(ξ, η) exp[−i2π(uξ + vη)] dξ dη / ∬ I(ξ, η) dξ dη, (1) where I(ξ, η) is the source intensity distribution and the spatial frequencies are (u, v) = ((x2 − x1)/(λz), (y2 − y1)/(λz)), (2) with λ the mean wavelength, z the source-to-aperture distance, and (x1, y1), (x2, y2) the positions of Q1 and Q2. According to Equation (2), the distance between the sub-apertures Q1 and Q2 in a pair is called the interferometer baseline, which is closely related to the spatial frequency (u, v) of the detected scene. Therefore, we can optimally arrange the microlens array and use a better baseline-pairing method to realize the desired spatial frequency coverage. Equations (1) and (2) provide a theoretical basis for interference imaging, which differs from aberration imaging theory and the diffraction imaging principle.
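The Fourier relation in Equations (1) and (2) is straightforward to evaluate numerically. The sketch below is our own illustration, not the paper's program; all names are ours, and it assumes a source intensity already discretized on a square grid. It samples the complex coherence coefficient at the spatial frequencies set by a list of microlens-pair baselines by direct evaluation of the normalized Fourier transform, which keeps the nonuniform (u, v) sampling explicit.

```python
import numpy as np

def coherence_coefficients(intensity, dx, baselines, wavelength, z):
    """Van Cittert-Zernike sampling: for each baseline (Bx, By), return the
    normalized Fourier transform of the source intensity at the spatial
    frequency (u, v) = (Bx, By) / (wavelength * z)."""
    ny, nx = intensity.shape
    ys, xs = np.mgrid[0:ny, 0:nx].astype(float)
    xs = (xs - nx / 2.0) * dx            # source-plane coordinates
    ys = (ys - ny / 2.0) * dx
    total = intensity.sum()
    mu = []
    for bx, by in baselines:
        u, v = bx / (wavelength * z), by / (wavelength * z)
        kernel = np.exp(-2j * np.pi * (u * xs + v * ys))
        mu.append((intensity * kernel).sum() / total)
    return np.asarray(mu)
```

For a point source, the returned coefficients all have modulus close to one, which is consistent with the flat spectrum recovered in the point-source simulation of Figure 3.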
The first step in reconstructing the image is to solve the complex coherence coefficient of a pair of lenses. According to partial coherence theory, the complex coherence coefficient of two apertures on the observation plane can be determined by measuring the interference fringes. The second step is to change the interference baseline and repeat the first step to achieve spectral coverage. Finally, the solution is substituted into Equations (1) and (2), and the intensity of the target scene is obtained by using the inverse Fourier transform. The detailed process and a simplified version of the program are shown in Algorithm 1. If the interference fringe coordinate system is omitted, the fiber interference fringe intensity of any interference baseline can be expressed as [15]: I = I1 + I2 + 2√(I1 I2) |μ12| cos(φ12 + δ), where I1 and I2 are the intensities of the two arms, |μ12| and φ12 are the modulus and phase of the complex coherence coefficient, and δ is the applied phase shift. In current interferometric measurements, the most commonly used phase calculation method is the four-step phase-shift algorithm. This type of algorithm has extremely high measurement accuracy, strong anti-interference, and high calculation speed [16-18]. In this study, the visibility and phase of the interference fringes are calculated by the four-step phase-shift formulas, taking δ = 0, π/2, π, 3π/2. The result can be expressed as [17,18]: V = 2√[(I4 − I2)² + (I1 − I3)²] / (I1 + I2 + I3 + I4), φ = arctan[(I4 − I2)/(I1 − I3)]. We adopt the parameters in Table 1 to simulate the point-source imaging process of a rectangular microlens array, and the results are shown in Figure 3. For the numerical simulation, we used a square-arranged microlens array and a centrally symmetric baseline pairing. Figure 3a presents the light intensity distribution of the target source, Figure 3b shows the detected spatial frequency distribution, and Figure 3c shows the image obtained by an inverse Fourier transform. The results validate the imaging principle and the simulation model. Numerical Simulation of the Imaging System Compared with the target source (Figure 3a), the reconstructed image from the point-source simulation (Figure 3c) is obviously poor. Therefore, we perform further simulations, changing the system parameters in order to evaluate the imaging resolution and quality. First, we change only N_ob, to simulate a resolution board. With the same sampling interval, N_ob determines the maximum interference baseline length (B_max). According to imaging Equations (1) and (2), the larger the interference baseline length, the richer the high-frequency information contained, and the higher the imaging resolution. The images achieved by the system when the maximum baseline B_max is 1.5, 1.7, and 1.9 m are shown in Figure 4a-c. It is evident that the image resolution depends on the length of the maximum baseline. This result also provides a basis for improving the resolution of the imaging system.
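The fringe-analysis step above is compact enough to show in full. The sketch below is our own illustration (the names are ours) and assumes the standard four-step phase shifts δ = 0, π/2, π, 3π/2 applied to the fringe intensity of a single baseline.

```python
import numpy as np

def four_step_phase_shift(i1, i2, i3, i4):
    """Recover fringe visibility and phase from four intensity measurements
    taken at phase shifts 0, pi/2, pi and 3*pi/2."""
    total = i1 + i2 + i3 + i4
    visibility = 2.0 * np.hypot(i4 - i2, i1 - i3) / total
    phase = np.arctan2(i4 - i2, i1 - i3)
    return visibility, phase

# Quick check: frames synthesized with visibility 0.8 and phase 0.6 rad.
deltas = np.array([0.0, np.pi / 2, np.pi, 3 * np.pi / 2])
frames = 1.0 + 0.8 * np.cos(0.6 + deltas)
print(four_step_phase_shift(*frames))    # -> (0.8, 0.6) up to rounding
```

The recovered (visibility, phase) pair is exactly the modulus and argument of the complex coherence coefficient needed for one (u, v) sample.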
The Nyquist sampling density affects the detected spatial frequency distribution. An extremely low sampling density can result in the loss of target information, whereas an excessively high sampling density leads to information redundancy. However, the minimum sampling interval is limited by the sub-aperture size (r). Therefore, we performed several repeated digital simulation experiments on the relationship between sampling density and imaging quality. In this regard, we introduce sharpness evaluation functions to assess image quality. The commonly used spatial-domain sharpness evaluation functions are listed in the literature, among which those represented by the variance function and the entropy function perform best [19]. We therefore use two imaging quality evaluation methods, based on the gray-level variance and the entropy function. We use the resolution board as the source target and change the sampling interval. The relationship between the gray-level variance and entropy values and the sampling interval is shown in Figure 5. We see that the imaging quality can be improved by increasing the sampling density, although these two evaluation functions are not sensitive to small local changes. In addition, the imaging system has no oversampling when the minimum sampling interval equals the sub-aperture size (Δu = 0.33). Thereafter, we used a photograph of an airplane (Figure 6) as the original target to perform the same simulation. Based on the results (Figure 7), we draw the same conclusion, although in this case we compare the image quality visually.
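A minimal sketch of the two evaluation functions follows (ours; we assume the common definitions of gray-level variance and histogram entropy, which appear to be what Figure 5 and Table 2 report).

```python
import numpy as np

def gray_variance(img):
    """Gray-level variance: larger values indicate higher overall contrast."""
    return float(np.var(img))

def gray_entropy(img, bins=256):
    """Shannon entropy of the gray-level histogram: larger values indicate
    that more gray levels are used, i.e., richer detail."""
    hist, _ = np.histogram(img, bins=bins,
                           range=(float(img.min()), float(img.max()) + 1e-12))
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())
```

Both metrics are global statistics of the image, which is consistent with the observation above that they are insensitive to small local changes.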
It is well known that different sources carry different frequency information. For some sources, the frequency information is concentrated in the low-frequency part; for these, the sampling density must be increased to obtain a high-quality image, which results in a corresponding increase in system complexity. A satellite was used as the detection target; the simulation results (Figure 8) show that the spectral information of the satellite image is concentrated in the low-frequency part. We attempted to improve the image quality without increasing the number of microlenses, i.e., without increasing the complexity of the system. The complex rectangular sub-aperture arrangement method was therefore adopted to adjust the sampling density at high and low frequencies separately. In the results presented in Figure 8, the sampling interval is uniformly set to Δu = 0.33. In Figure 9, the sampled spectrum is divided into two parts, a high-frequency and a low-frequency band, with the sampling intervals set to Δu1 = 0.19 and Δu2 = 0.38, respectively. Similarly, the sampling intervals are set to Δu1 = 0.33, Δu2 = 0.66, and Δu3 = 1.3 in Figure 10. These settings ensure that the number of microlenses used in the three sampling modes is nearly equal.
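One way to realize such banded sampling is sketched below. This is our own illustration: the band boundaries are hypothetical choices, while the three sampling intervals follow the values quoted above for Figure 10.

```python
import numpy as np

def banded_uv_samples(u_max, bands):
    """(u, v) samples for a 'complex rectangular' arrangement: each band
    (r_lo, r_hi, du) is sampled on its own rectangular grid, dense at low
    frequencies and coarse at high frequencies."""
    pts = []
    for r_lo, r_hi, du in bands:
        g = np.arange(-u_max, u_max + du / 2.0, du)
        uu, vv = np.meshgrid(g, g)
        r = np.maximum(np.abs(uu), np.abs(vv))   # square 'radius'
        keep = (r >= r_lo) & (r < r_hi)
        pts.append(np.column_stack([uu[keep], vv[keep]]))
    return np.vstack(pts)

# Three bands with the intervals of Figure 10 (band edges are our choice):
uv = banded_uv_samples(u_max=5.0,
                       bands=[(0.0, 1.5, 0.33), (1.5, 3.0, 0.66), (3.0, 5.0, 1.3)])
```

Counting the rows of uv for different band edges makes it easy to match the number of microlenses across sampling modes, as is done in the comparison above.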
The simulation results are shown in Figures 8-10. Visual comparison shows no difference in image quality. This shows that the complex rectangular sub-aperture arrangement method proposed in this paper is feasible. We use variance and entropy to evaluate the subtle differences between the images. The results are presented in Table 2, which proves that higher-quality images could be obtained with the same number of microlenses.
Conclusions This paper proposes a cutting-edge electro-optical imaging technology based on a microlens array and fiber interferometer, a centrally symmetric baseline pairing method, and a complex rectangular sub-aperture arrangement method. We then briefly analyze the theoretical basis of the imaging system and the method of analyzing the fringes. Finally, the imaging system was verified by digital simulation. The main conclusions are as follows: (1) the resolution of the imaging system can only be improved by increasing the maximum baseline length; (2) increasing the Nyquist sampling density increases the density of the spatial frequency points and improves the imaging quality; (3) by adopting the complex rectangular sub-aperture arrangement, better image quality can be obtained without increasing the complexity of the imaging system. Figure 1. (a) Rectangular microlens array using a symmetric pairing method; (b) microlens array using a complex rectangular sub-aperture arrangement method; (c) structure of the fiber interferometer. Figure 2. Schematic of the cutting-edge electro-optical imaging technology. Figure 3. (a) Light intensity distribution of target source; (b) detected spatial frequency distribution; (c) image obtained by inverse Fourier transform. Figure 4. (a) Image obtained when B_max = 1.5 m; (b) image obtained when B_max = 1.7 m; (c) image obtained when B_max = 1.9 m. Figure 5. Relationship between the variance and entropy of the imaging result and the sampling interval.
Figure 6. (a) Light intensity distribution of the source; (b) airplane image spectrum center slice. Figure 8. (a) Light intensity distribution of the source; (b) mutual intensity spectrum detected with the rectangular arrangement (Δu = 0.33, N = 200); (c) actual imaging of the system. Table 1. Simulation parameters of the imaging system. Table 2. Imaging process and reconstruction algorithm.
5,265.4
2019-03-29T00:00:00.000
[ "Physics", "Engineering" ]
Time-Inconsistent Optimal Control Problems and the Equilibrium HJB Equation A general time-inconsistent optimal control problem is considered for stochastic differential equations with deterministic coefficients. Under suitable conditions, a Hamilton-Jacobi-Bellman type equation is derived for the equilibrium value function of the problem. Well-posedness and some properties of such an equation are studied, and time-consistent equilibrium strategies are constructed. As special cases, the linear-quadratic problem and a generalized Merton's portfolio problem are investigated. The state equation (1.1) is paired with a cost functional (1.8) in which the maps g(·) and h(·) explicitly depend on the initial time t in some general way. The optimal control problem associated with (1.1) and (1.8), called Problem (N), will in general not be time-consistent; it is time-inconsistent, meaning that the restriction of an optimal control for a specific initial pair to a later time interval might not be optimal for the corresponding initial pair at that later time. Some concrete examples will be presented in the next section. The purpose of this paper is to obtain time-consistent optimal controls (which should more properly be called equilibrium controls) for Problem (N). Let us now briefly describe our approach. Inspired by [27,28], we introduce a sequence of multi-person hierarchical differential games as follows. For any N > 1, let Π be a partition of the time interval [0, T] defined by Π : 0 = t_0 < t_1 < · · · < t_N = T, with the mesh size ‖Π‖ given by ‖Π‖ = max_{1≤k≤N}(t_k − t_{k−1}). The differential game associated with the partition Π, denoted by Problem (G_Π), consists of N players. The k-th player controls the system on [t_{k−1}, t_k) by taking his/her control u_k(·) ∈ U[t_{k−1}, t_k]. The cost functional is constructed in a sophisticated way, using techniques from forward-backward stochastic differential equations (FBSDEs, for short) found in [16,17]. The interaction among the players is as follows: (i) the terminal pair (t_k, X(t_k)) of Player k is the initial pair of Player (k + 1); (ii) all the players know that each player tries to find an optimal control for his/her own problem; and (iii) each player discounts the future costs in his/her own way, regardless of the fact that the later players will control the system. Under certain conditions, each player has an optimal control, denoted by ū_k(·) ∈ U[t_{k−1}, t_k], for his/her own problem, as well as his/her own value function V_k(·, ·) defined on [t_{k−1}, t_k] × ℝ^n. Define ū^Π(·) and V^Π(·, ·) by (1.9) and (1.10). We may call ū^Π(·) and V^Π(·, ·) the Nash equilibrium control and the Nash equilibrium value function of Problem (G_Π), respectively. When, as ‖Π‖ → 0, the corresponding limits exist for some ū(·) ∈ U[0, T] and V : [0, T] × ℝ^n → ℝ, we call them a time-consistent equilibrium control and a time-consistent equilibrium value function of Problem (N), respectively. As a major contribution of this paper, we derive the equilibrium Hamilton-Jacobi-Bellman equation, which can be used to characterize the equilibrium value function V(·, ·) and which recovers the result for the time-inconsistent deterministic linear-quadratic problem presented in [27]. The well-posedness of this HJB equation is established for the case in which the diffusion of the state equation does not contain the control. The general case is open at the moment, and we expect to present more complete results in future publications.
As important and interesting special cases, we construct equilibrium controls for the stochastic LQ problem with general discounting and for a generalized Merton's portfolio problem. Two Examples of Time-Inconsistent Optimal Control Problems In this section, we present two interesting examples of optimal control problems which are time-inconsistent. Interestingly, in the case that (2.9) holds, (2.13) does not hold. As a matter of fact, the problem is then time-consistent and (2.13) should not be true. Some Preliminaries For convenience, let us rewrite the state equation and the cost functional as (3.1) and (3.2). Clearly, our cost functional covers the non-exponential/hyperbolic discounting situations. By comparing the above state equation and cost functional, it might seem that we should consider a slightly more general state equation of the form dX(s) = b(t, x, s, X(s), u(s))ds + σ(t, x, s, X(s), u(s))dW(s), s ∈ [t, T]. However, according to [28] (see also [27]), we know that when an equilibrium pair (see below for the definition) is constructed, the eventual effective state equation takes the form (3.1). Therefore, it suffices to consider the state equation of form (3.1). In what follows, we let T > 0 be a fixed time horizon and U ⊆ ℝ^m a closed subset, which could be either bounded or unbounded (U = ℝ^m is allowed). We will use K > 0 as a generic constant which can be different from line to line. Let S^n be the set of all (n × n) symmetric real matrices. Recall from Section 1 the definition of U_q[t, T], and note that in the case U is bounded, for different q ≥ 1, all the U_q[t, T] coincide with U[t, T]. We introduce the following standing assumptions. (H1) The maps b : [0, T] × ℝ^n × U → ℝ^n and σ : [0, T] × ℝ^n × U → ℝ^{n×d} are continuous, and there exist constants L > 0 and k ≥ 0 such that the conditions in (3.5) hold, where |x| ∨ |y| = max{|x|, |y|}. (H2) The maps g and h, with h : [0, T] × ℝ^n → ℝ, are continuous and nonnegative, and there exist constants L > 0 and q ≥ 0 such that the growth condition (3.6) holds. Let us make a couple of remarks on (H1). First of all, if x → b(t, x, u) is uniformly Lipschitz, then the first two conditions of (3.5) hold. On the other hand, we point out that the first condition in (3.5) merely implies that x → b(t, x, u) is locally Lipschitz, and the second condition in (3.5) alone does not imply the global Lipschitz condition for the map x → b(t, x, u). A simple example in which the first and second conditions in (3.5) are satisfied but x → b(t, x, u) is not uniformly Lipschitz can be given; for such a map, the first condition in (3.5) holds with k = 2. Note that under (3.5), one has |σ(t, x, u)|² ≤ L²(1 + |x| + |u|)² ≤ 3L²(1 + |x|² + |u|²). Now, for (H2), we note that the nonnegativity of g(·) and h(·) can be replaced by the condition that both g(·) and h(·) are bounded from below. The following result concerns the well-posedness of the state equation. Problem (N). For any given initial pair (t, x) ∈ [0, T) × ℝ^n, find a ū(·) ∈ U[t, T] such that J(t, x; ū(·)) = inf_{u(·)∈U[t,T]} J(t, x; u(·)). (3.11) From the examples presented in the previous section, we know that Problem (N) is, in general, time-inconsistent. Our goal is to find time-consistent equilibrium controls and characterize the equilibrium value function, which will be made precise below. The map in (3.15) is multi-valued. Suppose we can define a map ψ : D(ψ) ⊆ D(H) → U such that H(τ, t, x, p, P) ≡ ℍ(τ, t, x, ψ(τ, t, x, p, P), p, P) = inf_{u∈U} ℍ(τ, t, x, u, p, P) > −∞, for all (τ, t, x, p, P) ∈ D(ψ). (3.16)
The set D(ψ) is called the domain of ψ; it consists of all points (τ, t, x, p, P) ∈ D(H) such that the infimum in (3.16) is achieved at ψ(τ, t, x, p, P). It is clear that ψ(·) is actually a selection of arg min ℍ(·), i.e., ψ(τ, t, x, p, P) ∈ arg min ℍ(τ, t, x, ·, p, P) for all (τ, t, x, p, P) ∈ D(ψ). (3.17) The map ψ(·) will play an important role later, so let us say a little more about it. Note that when U is bounded (since it is assumed to be closed, it is compact in this case), the infimum is always attained. However, when U is unbounded, say U = ℝ^m, the attainment may fail, as in (3.18). To say something about the case of (3.18), let us present the following simple lemma. For any ε > 0, let f_ε(·) be defined as above. Then there exists a u_ε ∈ U attaining the infimum of f_ε(·). Proof. First of all, fix a u_0 ∈ U. For any minimizing sequence u_k ∈ U of f_ε(·), the values f_ε(u_k) may be assumed bounded; thus, u_k is bounded. Consequently, by the closedness of U, we may assume that u_k → u_ε ∈ U, which attains the infimum of f_ε(·). Next, it is clear that f_ε(·) decreases as ε decreases. Now, for any δ > 0, there exists a u_δ ∈ U whose value is within δ of the infimum f̄; hence, letting ε → 0, we get f̄_0 ≤ f̄ + δ. The following example shows that sometimes ψ can also be continuous. Suppose σ(·, ·) and R(·, ·) are continuous, with R(τ, t) > 0 for all (τ, t) ∈ D[0, T]. Then ℍ(τ, t, x, u, p, P) = pu + ½[σ(τ, t)²P + R(τ, t)u²], so that ψ(τ, t, x, p, P) = −p/R(τ, t). Clearly, both H and ψ are continuous. We also see that even if all the coefficients are very smooth, we cannot guarantee that H and ψ are as smooth as the coefficients, in general. Here is another example, which shows that H and ψ could be as smooth as the coefficients. From the above discussion, we see that the situation concerning the map ψ(·) is quite complicated. For simplicity of presentation, we adopt the following assumption. (H3) The map ψ : D[0, T] × ℝ^n × ℝ^n × S^n → U is well-defined and has the needed regularity. We will address more general situations concerning ψ(·) in future publications. Now, let us recall a standard verification theorem for Problem (C) stated in Section 1, which will be used below. For our later purposes, it suffices to consider Problem (C) with the discount rate δ = 0. The proof of the following result can be found in [8]. Suppose V_0(·, ·) is a classical solution to the Hamilton-Jacobi-Bellman equation (3.21), where ℍ_0(t, x, u, p, P) = ⟨b(t, x, u), p⟩ + tr[a(t, x, u)P] + g_0(t, x, u). We make some remarks on the above verification theorem. First of all, to guarantee that (3.21) has a classical solution V_0(·, ·), one may pose different conditions. A typical one is the uniform ellipticity condition, for some δ > 0. This condition implies that n ≤ d and that σ(t, x, u) stays of full rank for all (t, x, u). Thus, it does not include the case in which (x, u) → σ(t, x, u) is linear, which is the case for LQ problems. On the other hand, for a standard LQ problem with deterministic coefficients, when the Riccati equation admits a solution P(·), the function V_0(t, x) = ⟨P(t)x, x⟩ is a classical solution to the corresponding HJB equation, for which the uniform ellipticity condition fails. A similar situation occurs for the classical Merton's portfolio problem. This observation shows that there are quite different conditions under which the corresponding HJB equation admits a classical solution. In the following sections, from time to time, we will simply say that the relevant HJB equation has a classical solution without getting into detailed sufficient conditions for that.
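Under (H3), ψ is a measurable selection of the argmin of the Hamiltonian. When U is compact, such a selection can be approximated by brute force; the sketch below is our own toy illustration (the scalar Hamiltonian shown is a hypothetical example, not one taken from the paper).

```python
import numpy as np

def psi(tau, t, x, p, P, hamiltonian, U_grid):
    """Approximate selection psi(tau, t, x, p, P) in
    argmin_{u in U} H(tau, t, x, u, p, P) by grid search over U_grid."""
    values = np.array([hamiltonian(tau, t, x, u, p, P) for u in U_grid])
    return U_grid[int(np.argmin(values))]

def H(tau, t, x, u, p, P):
    # Hypothetical scalar Hamiltonian <p, b> + (1/2) a P + g with b = u,
    # constant diffusion, and a 'discounted' quadratic running cost.
    return p * u + 0.5 * 1.0 * P + np.exp(-0.1 * (t - tau)) * u ** 2

u_star = psi(0.0, 0.5, 1.0, p=2.0, P=0.3,
             hamiltonian=H, U_grid=np.linspace(-1.0, 1.0, 201))
# Here the unconstrained minimizer -p / (2 e^{-0.1 (t - tau)}) lies outside
# U = [-1, 1], so the grid search correctly returns the boundary point -1.
```

This also illustrates the discussion above: for bounded U the infimum is attained at some point of U, while for unbounded U attainment may fail.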
Likewise, suppose there exists a map ψ_0 : [0, T] × ℝ^n × ℝ^n × S^n → U attaining the infimum of ℍ_0. Then the pair (X̄(·), ū(·)) appearing in Proposition 3.6 is just a state-control pair satisfying the corresponding closed-loop system, and we do not need the Lipschitz continuity of the resulting feedback map. In the following section, from time to time, we will simply say that some process is a solution to the relevant closed-loop system without checking whether the drift and diffusion are Lipschitz continuous, etc. Time-Consistent Equilibria via Multi-Person Differential Games In this section, we are going to search for time-consistent solutions to Problem (N). Inspired by [27,28], we take an approach via multi-person differential games. To begin with, let us first introduce some necessary notions. Let P[0, T] be the set of all partitions of [0, T], with the mesh size ‖Π‖ defined as above. We introduce the following important definition. In what follows, the following definition, which is equivalent to the above, is more convenient to use. Multi-Person Differential Games. We now consider an N-person differential game, called Problem (G_Π), as briefly described in Section 1. Throughout this section, we assume that (H1)-(H3) hold. Let us start with Player N, who controls the system on [t_{N−1}, t_N). More precisely, for each (t, x) ∈ [t_{N−1}, t_N] × ℝ^n, consider the corresponding controlled SDE, and pose the optimal control problem for Player N. The above defines the value function V^Π(·, ·) on [t_{N−1}, t_N] × ℝ^n, and in the case that ū_N(·) exists, by (4.12), we have the corresponding representation. Under proper conditions, V^Π(·, ·) is the classical solution to the HJB equation (4.15). By the definition of ψ (see (3.16)-(3.17)), we may also write (4.15) as (4.16), x ∈ ℝ^n. With such a solution V^Π(·, ·) of (4.15) (or (4.16)), let us assume that the closed-loop system (4.17) admits a unique solution X̄_N(·). Then, under (H3) and by Proposition 3.6, an optimal control ū_N(·) of Problem (C_N) for the initial pair (t_{N−1}, x) admits the feedback representation (4.18), and X̄_N(·) ≡ X̄_N(·; t_{N−1}, x) is the corresponding optimal state process. Next, we consider an optimal control problem for Player (N − 1) on [t_{N−2}, t_{N−1}). Naturally, the state equation is (4.19), for any initial pair. To determine a suitable cost functional, we note that Player (N − 1) can only control the system on [t_{N−2}, t_{N−1}), and Player N will take over at t_{N−1} to control the system thereafter. Moreover, Player (N − 1) knows that Player N will play optimally based on the initial pair (t_{N−1}, X_{N−1}(t_{N−1})) of Player N, which is the terminal pair of Player (N − 1). Hence, the sophisticated cost functional of Player (N − 1) should be taken as (4.20). Note that although Player (N − 1) knows that Player N will control the system on [t_{N−1}, t_N], he/she still "discounts" the future costs in his/her own way (see t_{N−2} appearing in the running cost on [t_{N−1}, t_N] and in the terminal cost at t_N). Now, with suitable notation, the cost functional (4.20) can be written as (4.21). We see that the optimal control problem associated with the state equation (4.19) and the cost functional (4.21) looks like a standard one. But the map x → h_{N−1}(x) seems a little too implicit, which makes it difficult for us to pass to the limit later on. We now would like to make it more explicit in some sense. Inspired by the idea of the Four Step Scheme introduced in [16,17] for FBSDEs with deterministic coefficients, we proceed as follows. For the optimal state process X̄_N, we introduce the BSDE (4.23), which can be put in an equivalent form. Note that t_{N−2} appears in the drift of the BSDE and in the terminal condition.
This BSDE admits a unique adapted solution (Y_N(·), Z_N(·)) ≡ (Y_N(·; x), Z_N(·; x)) ([17,29]), depending uniquely on x ∈ ℝ^n. It is seen that (4.17) and (4.23) form an FBSDE. By [16] (see also [17,29]), we have the corresponding representation of Y_N, as long as Θ_N(·, ·) is a classical solution to the PDE (4.25), or equivalently (4.26), x ∈ ℝ^n. We point out that, in general, (4.27) holds. With the above representation Θ_N(·, ·) of Y_N(·), we can rewrite the cost functional (4.21) as (4.28). We now pose the corresponding problem for Player (N − 1), which defines the value function on [t_{N−2}, t_{N−1}] × ℝ^n. By the definition of the map ψ(·) again (see (3.16)-(3.17)), we may also write the above as (4.31), x ∈ ℝ^n. From (4.27), we see that, in general, a corresponding discrepancy holds. For any x ∈ ℝ^n, suppose the closed-loop system (4.33) admits a unique solution X̄_{N−1}(·); the induced feedback control, again by Proposition 3.6, is an optimal control of Problem (C_{N−1}) with the initial pair (t_{N−2}, x). Now, for the optimal pair, we make a natural extension to [t_{N−1}, t_N] as in (4.34); we refer to such a pair as a sophisticated equilibrium pair. Then (4.34) can be written compactly as (4.37), and one has (4.38), x ∈ ℝ^n. We point out that, in general, (4.39) may happen, which means that the sophisticated equilibrium pair might not be an optimal pair (for the given initial pair). Similar to the above, in order to state an optimal control problem for Player (N − 2) on [t_{N−3}, t_{N−2}], we introduce the BSDE (4.40) on [t_{N−2}, t_N]; let (Y_{N−1}(·; x), Z_{N−1}(·; x)) be the adapted solution of this BSDE. Then (4.37) and (4.40) form an FBSDE. Similar to the above, we have the analogous representation, as long as Θ_{N−1}(·, ·) is the solution to the PDE (4.42), x ∈ ℝ^n. Having the above preparation, we now consider, for any (t, x) ∈ [t_{N−3}, t_{N−2}) × ℝ^n, the state equation and the (sophisticated) cost functional, and we pose the corresponding problem for Player (N − 2). Together with the previous definitions, we see that V^Π(·, ·) is now well-defined on [t_{N−3}, t_N] × ℝ^n. Under proper conditions, V^Π(·, ·) is a classical solution to the corresponding HJB equation; further, by the definition of the map ψ(·), we may also write it as (4.47), x ∈ ℝ^n. Also, similar to (4.27), an analogous discrepancy holds in general. The above procedure can be continued recursively. By induction, we can construct the sophisticated cost functional J_k(t, x; u_k(·)) for Player k, with the value function V^Π(·, ·) satisfying HJB equations on the time intervals associated with the partition Π: the terminal-interval equation, and, for k = 1, 2, · · ·, N − 1, the equations (4.51), x ∈ ℝ^n, where, for k = 1, 2, · · ·, N − 1, Θ_{k+1}(·, ·) is the solution to the (linear) PDE (4.52), x ∈ ℝ^n. Then, for any given x ∈ ℝ^n, let X̄^Π(·) be the solution to the corresponding closed-loop system (4.53), and define ū^Π(·) accordingly. According to our construction, we have (4.56), where u_k(·) ⊕ Ψ^Π(·)|_{[t_k, T]} is defined in the same way as in (4.8)-(4.9). Similar to (4.39), for k = 1, 2, · · ·, N − 1, an analogous statement holds in general. Since the N players involved in Problem (G_Π) interact through the initial/terminal pairs (t_k, X(t_k)), k = 1, 2, · · ·, N − 1, one should actually denote the cost functionals by J_k(x; u_1(·), · · ·, u_N(·)). Hence, (4.56) means that with ū(·) defined as above, (ū_1(·), · · ·, ū_N(·)) is a Nash equilibrium of the N-person non-cooperative differential game associated with J_k(x; u_1(·), · · ·, u_N(·)), 1 ≤ k ≤ N (defined in (4.58)). The formal limits. We now would like to look at the situation when ‖Π‖ → 0. Suppose the limit of V^Π exists, uniformly for (t, x) in any compact set, for some V(·, ·).
Under (H3), we also have the convergence of the corresponding feedback strategies, uniformly for (t, x) in any compact set. By (4.56) and passing to the limit, we obtain (4.5). Moreover, one has an estimate with a modulus R(r) satisfying R(r) → 0 as r → 0. Hence, by Definition 4.1, Ψ(·, ·) is a time-consistent equilibrium strategy, and V(·, ·) is a time-consistent equilibrium value function of Problem (N). In the rest of this subsection, we formally pass to the limit to find the equations that can be used to characterize the equilibrium value function V(·, ·). To this end, let us first write the equations for Θ_{k+1}(·, ·) in integral form: for k = 1, 2, · · ·, N − 1, one has the corresponding integral identities. Looking at V^Π(·, ·) and passing to the limit leads to the equilibrium HJB equation (4.77). Let us make some remarks on (4.77). (i) It is an interesting feature of (4.77) that both Θ(τ, t, x) and Θ(t, t, x) appear in the equation, where the latter is the restriction of the former to τ = t. On the one hand, although the equation is fully nonlinear, because Θ(t, t, x) differs from Θ(τ, t, x), the existing theory for fully nonlinear parabolic equations cannot be applied directly. On the other hand, if Θ(t, t, x) is obtained in an independent way, then (4.77) is actually a linear equation for Θ(τ, t, x), with τ regarded purely as a parameter. (ii) In the case that D(ψ) is not equal to D[0, T] × ℝ^n × ℝ^n × S^n, the constraint that the relevant arguments stay in D(ψ) has to be regarded as a part of the solution. We will see that for some interesting special cases, this condition comes automatically. (iii) More generally, we may also write (4.75) as an inclusion, since the set arg min ℍ(t, t, x, ·, Θ_x(t, t, x), Θ_xx(t, t, x)) might contain more than one point. It is not hard to see that a natural way to avoid this amounts to defining g_ε(τ, t, x, u) = g(τ, t, x, u) + ε|u|². We may refer to the corresponding problem as a regularized problem. If the corresponding equilibrium value function is denoted by V_ε(·, ·), then it is expected that V_ε(·, ·) converges to V(·, ·) as ε → 0. However, in general, if (X̄_ε(·), ū_ε(·)) is an equilibrium pair for the regularized problem, we might not have the convergence of ū_ε(·) as ε → 0. In that case, we should be satisfied with the above characterization of the equilibrium value function V(·, ·), and ū_ε(·) can be regarded as a kind of "near-equilibrium control". (iv) For the case σ(t, x, u) = σ(t, x), i.e., when the control does not enter the diffusion of the state equation, ψ(·) is independent of P and ψ(τ, t, x, p) ∈ arg min [⟨p, b(t, x, ·)⟩ + g(τ, t, x, ·)]. (4.82) Then the equilibrium HJB equation can be written as (4.83). We will discuss this case carefully in the next section. Note that for a deterministic problem, namely Problem (N) for an ordinary differential equation system, we may add a small diffusion of size ε > 0 to regularize the problem. Then the corresponding equilibrium HJB equation reads (4.84). It is expected that Θ_ε(·, ·, ·) → Θ(·, ·, ·) in some sense as ε → 0, with the limit satisfying (4.85). At the moment, it is not clear to us how one can define a viscosity solution to the above equation. With the notation of (5.2), consider the corresponding linear abstract backward evolution equation. Under some mild conditions, it is well-posed, and we have a variation-of-constants formula in which E(·, ·; v(·)) is called the backward evolution operator generated by L(·, v(·)). Consequently, the (time-consistent) equilibrium value function V(t, ·) = Θ(t, t, ·) should be the solution to the following nonlinear functional integral equation: V(t, ·) = E(T, t; V(·))h(t, ·) + ∫_t^T E(s, t; V(·))G(t, s, V(s))ds, t ∈ [0, T]. (5.5) We call (5.5) the equilibrium HJB integral equation for Problem (N).
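To make the construction behind (5.5) concrete, the following toy sketch is entirely ours: the dynamics x' = u, the quadratic costs, and the hyperbolic discount 1/(1 + κ(s − τ)) are hypothetical choices, and the scheme is a crude first-order discretization of the deterministic case. It runs the multi-person game of Section 4 on a partition: each "player" minimizes with his own discount while the feedback of later players is frozen, and the diagonal Θ(t, t, ·) then approximates the equilibrium value function V.

```python
import numpy as np

# Toy deterministic problem: dynamics x' = u, running cost
# d(tau, s) * u(s)^2 / 2, terminal cost d(tau, T) * x(T)^2 / 2, with the
# hyperbolic discount d(tau, s) = 1 / (1 + kappa * (s - tau)), which makes
# the problem time-inconsistent.  All of these choices are hypothetical.
T, N, kappa = 1.0, 100, 2.0
ts = np.linspace(0.0, T, N + 1)
dt = T / N
xs = np.linspace(-3.0, 3.0, 121)     # state grid
us = np.linspace(-4.0, 4.0, 161)     # control grid

def d(tau, s):
    return 1.0 / (1.0 + kappa * (s - tau))

# Theta[j, i] approximates Theta(ts[j], t_k, xs[i]): the remaining cost of
# the equilibrium play from t_k, discounted from the earlier time ts[j].
Theta = d(ts[:, None], T) * xs[None, :] ** 2 / 2.0   # values at t_N = T
feedback = np.zeros((N, xs.size))                    # equilibrium feedback

for k in range(N - 1, -1, -1):
    x_next = xs[None, :] + us[:, None] * dt          # candidate moves
    # Player k minimizes with HIS OWN discount, i.e. row j = k of Theta
    # (np.interp clamps at the grid edges, acceptable for a sketch):
    cost = us[:, None] ** 2 / 2.0 * dt + np.interp(x_next, xs, Theta[k])
    best = np.argmin(cost, axis=0)
    feedback[k] = us[best]
    x_new = xs + feedback[k] * dt
    # Roll Theta back from t_{k+1} to t_k for every tau <= t_k, with the
    # control frozen at the equilibrium feedback just computed:
    for j in range(k + 1):
        Theta[j] = d(ts[j], ts[k]) * feedback[k] ** 2 / 2.0 * dt \
                   + np.interp(x_new, xs, Theta[j])

print(np.interp(1.0, xs, Theta[0]))   # Theta[0] now holds V(0, .): value at x = 1
```

Refining the partition (increasing N) plays the role of letting ‖Π‖ → 0 in the formal limit above.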
Once a solution V(·, ·) of (5.5) is found, we can, in principle, construct a (time-consistent) equilibrium control and an equilibrium pair for Problem (N). Of course, if we like, we may also solve the equilibrium HJB equation (4.77), although this is not necessary as far as the construction of a time-consistent equilibrium pair is concerned. The well-posedness of (5.5) seems difficult in the general case. At the moment, we do not have a complete solution for it; hopefully, we can present some satisfactory results for the equilibrium HJB integral equation (5.5) in future publications. On the other hand, in the rest of this section, we present a well-posedness result for an interesting special case of (5.5), from which one can get some taste of the problem. The main hypothesis that we will assume below is σ(t, x, u) = σ(t, x), (5.6) namely, the control does not enter the diffusion of the state equation. As we discussed in Section 4, in this case our equilibrium HJB equation reads as (5.7). The essential feature of (5.7) is that Θ_xx(t, t, x) does not appear in the equation (although Θ_x(t, t, x) still appears there). This makes the well-posedness problem much more accessible. Further, from Example 3.5, we see that there are cases for which ψ is as smooth as the coefficients and b(t, x, ψ(t, t, x, p)) is bounded. Therefore, the case that we consider below, although very special, includes a large class of problems. To avoid heavy notation, let us consider the equation (5.8). To investigate the well-posedness of (5.8), let us make some preparations. Let C^α(ℝ^n) be the space of all continuous functions φ : ℝ^n → ℝ with finite α-Hölder norm. Further, let C^{1+α}(ℝ^n) and C^{2+α}(ℝ^n) be the spaces of all functions φ : ℝ^n → ℝ whose derivatives up to order one and order two, respectively, have finite α-Hölder norms. Next, let B([0, T]; C^α(ℝ^n)) be the set of all measurable functions f : [0, T] × ℝ^n → ℝ such that for each t ∈ [0, T], f(t, ·) ∈ C^α(ℝ^n), with the C^α-norm bounded uniformly in t. Also, we let C([0, T]; C^α(ℝ^n)) be the set of all continuous functions that are also in B([0, T]; C^α(ℝ^n)). Similarly, we define B([0, T]; C^{k+α}(ℝ^n)) and C([0, T]; C^{k+α}(ℝ^n)), respectively, for k = 1, 2. We introduce the hypotheses (P) for the above equation (5.8); in particular, a(t, x)^{-1} exists for all (t, x) ∈ [0, T] × ℝ^n and there exist constants λ_0, λ_1 > 0 such that (5.11) holds. We point out here that some of the conditions assumed in (P) can be substantially relaxed. However, we prefer not to get into those generalities, for the sake of simplicity in our presentation. Note also that, typically, the ellipticity condition on a(t, x) reads ⟨a(t, x)ξ, ξ⟩ ≥ δ|ξ|² for all ξ ∈ ℝ^n and some δ > 0. It is clear that when a(·, ·) is assumed to be bounded, this is equivalent to (5.11). The number λ_0 in (5.11) will be used below. For any v(·, ·) ∈ C([0, T]; C^{1+α}(ℝ^n)), we consider a linear PDE, parameterized by τ ∈ [0, T), in which the differential operator L[t, v(·)] is defined accordingly. We have the following result, whose proof follows a relevant one found in [9], with some minor modifications. From the proof, we see that the constant K > 0 above is absolute, independent of (τ, t) ∈ D[0, T]. Recall that in Section 4, by assuming the convergence of Θ^Π(·, ·, ·) and V^Π(·, ·), we obtained the equilibrium HJB equation for Θ(·, ·, ·); V(·, ·) is then characterized by an equilibrium HJB integral equation.
We now want to show that, under conditions ensuring (P), we do have the expected convergence. This makes our whole procedure satisfactorily complete, at least for certain cases. For the sake of simplicity, we assume that all the involved functions are bounded and continuously differentiable up to the needed order with bounded derivatives. When (5.6) holds, for k = 0, 1, · · ·, N − 1, we have the corresponding representations. Since (5.38) is well-posed, and by the assumed uniform Lipschitz continuity of τ → (h(τ, x), h_y(τ, y), g(τ, t, x, u)), together with Proposition 5.1, we obtain the estimate (5.44). This yields the desired bound, from which our expected convergence follows. Some Special Cases In this section, we are going to look at several important special cases, mainly at the corresponding forms of our equilibrium HJB equations. A Linear-Quadratic Problem Let us look at the LQ problem. In this case, ℍ(τ, t, x, u, p, P) is linear-quadratic in (x, u), which yields ψ(τ, t, x, p, P) = −R(τ, t)^{-1}B(t)ᵀp; substituting this back into ℍ, the equilibrium HJB equation takes the form (6.3). Although it looks a little complicated, the above has a quadratic structure which can help us study its well-posedness. To see this, let Θ(τ, t, x) = ⟨P(τ, t)x, x⟩ with some undetermined map P : D[0, T] → S^n. Plugging this into (6.3), we see that the map P(·, ·) should satisfy the equation (6.5), with the terminal condition P(τ, T) = G(τ). Note that if P(t, t) and R(t, t) were replaced by P(τ, t) and R(τ, t), respectively, the above would become a standard Riccati equation with a parameter τ; the appearance of P(t, t) and R(t, t) makes the above non-standard. We may rewrite the above (suppressing t in P(τ, t), etc., for simplicity), so that the equation for P(·, ·) takes an equivalent form. Applying Itô's formula to s → ⟨P(τ, s)Φ(s, t)x, Φ(s, t)x⟩ on [t, T], we obtain a representation which leads to (6.8). Note that although P(τ, t) is a deterministic function, this representation is stochastic. From the above, taking in particular τ = t and denoting P(t) = P(t, t), and combining the resulting identities, we end up with the system (6.11) for the function P(·). We refer to (6.11) as a Riccati-Volterra integral equation system for the corresponding (time-inconsistent) LQ problem. Note that it is actually a coupled forward-backward stochastic Volterra integral equation system (FBSVIE, for short). Some relevant results concerning backward stochastic Volterra integral equations (BSVIEs, for short) can be found in [25,26]. If (Φ(·, ·), P(·)) is a solution to the above, then the time-consistent equilibrium control is given by (6.12). In the case that A_1(·) = 0, B_1(·) = 0, (6.13) the system (6.11) reduces to a Riccati-Volterra integral equation system for a deterministic time-inconsistent LQ problem, (6.14), and the time-consistent equilibrium control is given by (6.12) with a simpler Γ(·). This recovers the case presented in [27], where the well-posedness of (6.14) was established. For (6.11), we have the following result: suppose, in addition, that (6.16) holds; then (6.11) admits a unique solution. The proof proceeds by direct estimates (suppressing t): with K > 0 an absolute constant, one shows that the map p(·) → P(·) is contractive on X[τ, T] as long as T − τ > 0 is small. A usual argument then applies to obtain a unique fixed point of p(·) → P(·) on X[0, T]. This proves the well-posedness of (6.11).
To conclude this section, let us make a remark on the condition (6.16). It is not hard to show that when m = n and B_1(·)^{-1} exists and is bounded, then (6.16) holds. Apparently, this is a restrictive condition; we hope that in future publications such a condition can be removed. For the generalized Merton's portfolio problem, the maximum of (u, c) → ℍ(t, s, x, u, c, p, P) is attained at an explicitly computable point. Appendix In this appendix, we present some detailed calculations. Example 2.1. Recall that we are considering a one-dimensional controlled linear SDE of the form dX(s) = u(s)ds + σX(s)dW(s), s ∈ [t, T], X(t) = x, with a quadratic cost functional of the form J(t, x; u(·)) = (1/2)E[∫_t^T u(s)²ds + g(t)X(T)²], where σ > 0 is a constant and g(t) is a deterministic, non-constant, continuous, and positive function. For such a linear-quadratic optimal control problem on [t, T] (with deterministic coefficients) and with t ∈ [0, T) fixed, the Riccati equation takes the following form (note that t ∈ [0, T) is a parameter): P_s(s, t) = P(s, t)² − σ²P(s, t), s ∈ [t, T], P(T, t) = g(t). Let us solve this Riccati equation. By separation of variables and integrating from s to T, one obtains P(s, t) = σ²g(t)e^{σ²(T−s)} / [σ² + g(t)(e^{σ²(T−s)} − 1)], s ∈ [t, T], and the optimal control is given by ū(s) = −P(s, t)X̄(s). Thus, the closed-loop SDE reads dX̄(s) = −P(s, t)X̄(s)ds + σX̄(s)dW(s), s ∈ [t, T]. Consequently, the optimal state process is given by the solution of this linear SDE, and the optimal control also admits an open-loop form. Thus, (7.31) will not be true. When (7.34) holds, the problem is referred to as the (classical) Merton's portfolio problem. In this case, (with a = λ−δ
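The closed-form solution above is easy to verify numerically. The sketch below is ours; the particular g(·) is a hypothetical choice satisfying the stated positivity assumption, and the final comment records the diagonal feedback that the equilibrium construction of Section 4 would use (taking τ = t), which is our reading rather than a formula displayed in the source.

```python
import numpy as np

# Riccati equation of Example 2.1: P_s = P^2 - sigma^2 P on [t, T],
# P(T, t) = g(t); closed form:
# P(s, t) = sigma^2 g(t) e^{sigma^2 (T-s)} / (sigma^2 + g(t)(e^{sigma^2 (T-s)} - 1)).
sigma, T, t = 0.8, 1.0, 0.2
g = lambda r: 1.0 + 0.5 * np.sin(2.0 * np.pi * r)   # hypothetical g > 0

def P_closed(s, t):
    E = np.exp(sigma ** 2 * (T - s))
    return sigma ** 2 * g(t) * E / (sigma ** 2 + g(t) * (E - 1.0))

# Integrate backward from s = T with explicit Euler steps:
n = 20000
ds = (T - t) / n
P = g(t)
for _ in range(n):
    P -= ds * (P ** 2 - sigma ** 2 * P)   # step from s down to s - ds
print(P, P_closed(t, t))                  # the two values agree closely

# Pre-committed optimal feedback at time s uses P(s, t) with t fixed; the
# time-consistent equilibrium strategy would instead use the diagonal
# P(s, s), i.e., u(s) = -P(s, s) X(s).
```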
8,104.6
2012-04-03T00:00:00.000
[ "Mathematics" ]
Teaching English in an engineering international branch campus: a collaborative autoethnography of our emotion labor: While a number of studies have documented the significant role of emotions and the emotion labor produced in English language teaching, research exploring English instructors' emotion labor in transnational higher education contexts such as international branch campuses (IBCs) and within Science, Technology, Engineering, and Math (STEM) programs is lacking. Arguably, these neoliberally-driven and educational neocolonialist endeavors can produce intense emotion labor for English instructors. This study employs a collaborative autoethnography (CAE) methodology to investigate what provoked emotion labor for expatriate instructors who teach English courses to Qatari national students at an IBC in Qatar. Taking a poststructural approach to emotion labor as our theoretical framing, we collaboratively examined our emotion labor in audio-recorded weekly meetings and then engaged in further dialogues and writings about our emotion labor. We reflect on two themes that produced emotion labor as well as emotional capital for us: 1) navigating our purpose teaching English to engineering majors and 2) confronting our roles as English instructors within a context of educational neocolonialism. Our study adds to the knowledge base of English teachers' emotion labor in transnational and STEM spaces, while also showcasing CAE as a transformative methodology to explore language teachers' emotion labor. Introduction While a number of studies have documented the significant role of emotions and the emotion labor produced in English language teaching (Benesch 2017; De Costa et al. 2020; Nazari and Karimpour 2022; Song 2016), research exploring English teachers' emotion labor in transnational higher education contexts such as international branch campuses (IBCs) and within Science, Technology, Engineering, and Math (STEM) campuses is limited. An IBC is defined as "an entity that is owned, at least in part, by a foreign higher education provider; operated in the name of the foreign education provider; and provides an entire academic program, substantially on site, leading to a degree awarded by the foreign education provider" (Cross-Border Education Research Team 2017: para. 2). The largest exporters of academic programs are the United States and the United Kingdom, and numerous IBCs have been established in the Middle East and Asia. Arguably, these "neoliberally-driven" (De Costa et al. 2022a: 81) and educational neocolonialist endeavors (Romanowski 2022) can produce intense emotion labor for English instructors (e.g., Rudd 2018). English instructors in IBCs face a complex environment characterized by a "hybrid scene of institutional cultures and practices" (Small 2022: 21). Instructors are required to navigate between the policies, attitudes, and ethics of the home campus, the branch campus, and the host country. This navigation is often complicated by the sense of contributing to the neocolonialism of education, or the one-way transfer of educational theories and practices from the global north to the global south (Romanowski 2022). As Rudd (2018) argues, IBCs are often more international than transnational in the sense of a "one-way movement from home campus to satellite school rather than a back-and-forth movement of information and decision-making" (658). IBCs have been critically analyzed for the ways in which they accommodate and normalize U.S.
imperialism (Al-Saleh and Vora 2020) and contribute to the neocolonization of the education sector. Romanowski argues that "under the guise of globalization…educational neocolonialism is a form of domination that propagates Western ways of thinking and forms of knowledge" (201). Rottleb and Jana (2022) address the potential for conflict to arise in IBCs due to the fact that importing Western campuses also means importing institutional, social, and cultural identities. These points of tension are exemplified in Rudd's (2018) examination of what happens when a Western-centric honor code, originally developed at a U.S. home campus, is introduced to an engineering IBC located in Qatar. In Rudd's (2018) discussion of teaching at an IBC, she asserts that IBC students "are expected to take up and adhere to the traditions" (658) of the home institution, and that she is still uncovering ways that her teaching, research, and service "do more to colonize than to liberate" (656). Moreover, English instructors at IBCs, often expatriate residents in the host country, may experience a sense of both privilege and disadvantage in these transnational higher education contexts. Alshakhi and Phan's (2020) study found that expatriate teachers of English in Saudi Arabia "felt 'uncomfortable' about being 'privileged' as White native teachers of English yet 'inadequate' and 'monolingual' at the same time" (314). They also experienced emotion labor in response to social, religious, and cultural differences. Similarly, Hopkyns and Gkonou (2023) found that expatriate English instructors working in an English-medium instruction (EMI) university in Dubai recognized that they benefited from the phenomenon of EMI, but also experienced guilt that their livelihood depended on a system that did not provide students the choice to pursue a college degree in their first language (Arabic). Putting the IBC within a larger context of cultures being contested and concerns about the erosion of local values and practices in the Gulf (Hillman 2022), this creates a very emotion-laden environment for teaching the English language, but also one that can arguably be rich for developing "new agencies and belongings" (Vora 2019: 29). Recognizing these various tensions, we employed a collaborative autoethnography (CAE) methodology (Chang et al. 2013; De Costa et al. 2022b; Yazan et al. 2023) to investigate what provoked emotion labor for us, expatriate instructors from both the global north and south, who teach English courses to Qatari national students at an IBC in Qatar. In this article, we first theorize emotions, emotion labor, emotional capital, and transformative CAE. We then discuss our process of engaging in CAE work. In our findings section, we reflect on two themes that provoked emotion labor for us within our specific context, and on how we managed our emotions. We also consider how engaging in CAE work was transformative for us and gave us emotional capital. We believe that our study adds to the knowledge base of English language teachers' emotion labor in transnational and STEM spaces, while also showcasing CAE as a useful and transformative methodology to explore English language teachers' emotion labor (Yazan et al. 2023). We are heeding the call of Cowie (2011) for teachers to talk collaboratively about the emotional landscape of teaching, in a way that we hope is accessible to readers.
Theorizing emotions and emotion labor Emotions play an important role in how instructors navigate and interact with colleagues, students, and their institution, influencing thoughts, actions, and decisions. According to Miller and Gkonou (2022), the substantial growth in studies related to language teacher emotions in the last decade highlights the crucial function that emotions play in shaping teacher practices and their career longevity. While much of the research on language teachers' emotions has been cognitively oriented (Her and De Costa 2022), there is a growing number of studies focused on language teacher emotions and emotion labor from poststructural and critical perspectives (e.g., Alshakhi and Phan 2020; Benesch 2017; De Costa et al. 2018, 2020; Her and De Costa 2022; Hopkyns and Gkonou 2023; Hillman et al. 2023; Miller and Gkonou 2022). Although there is no agreed-upon definition of emotion, we view emotions as responses to an event or phenomenon that can lead to feelings (Barrett 2017), and feelings can lead to action (Her and De Costa 2022). Coming from a sociocultural and poststructuralist perspective, we are less interested in what emotions are and whether, for example, they are negative or positive, but rather in what emotions do socially (Ahmed 2004; Benesch 2017) and how we manage them. We do not view emotions as simply the product of individual psychological processes, but as influenced by the larger social, political, and cultural context in which they are experienced. Emotions may be influenced by individuals' personal histories, social identities, collective cultural contexts, and power dynamics (Benesch 2019). In this view, emotions are also not fixed entities, but are constantly changing and evolving. Benesch and Prior (2023) argue that "emotions are discourses that circulate in societies; those supportive of the status quo are elevated to dominant status and rewarded in various ways" (5). The emotions that are favored and rewarded as a teacher in a university context, for example, are often those that align with or serve the interests of those in power, such as the administration. Emotions that challenge or question the status quo may be marginalized or devalued. In other words, emotions have social meanings and consequences, and these meanings and consequences can reflect and reinforce power structures in society.
The theoretical concept of emotion labor refers to the emotional work that individuals, such as teachers, are expected to perform as part of their professional role. Teachers "actively negotiate the relationship between how they feel in particular work situations and how they are supposed to feel, according to social expectations" (Benesch 2017: 37-38). How they are supposed to feel, or what is considered appropriate and professional, has been referred to as feeling rules (Zembylas 2006, 2007). For example, Benesch (2017) describes how universities often have plagiarism policies or honor codes with guidelines for identifying and reporting instances of plagiarism committed by students. Teachers are expected to be constantly aware and on the lookout for instances of plagiarism, and if they find it, they are supposed to feel indignant and punish students. While feeling rules often affect teachers' responses to and decisions about pedagogical matters, teachers' feelings are often not in line with institutional expectations. Teachers may feel differently or even resist imposed rules by finding alternative solutions, such as mentoring students who plagiarize instead of punishing them. This tension or conflict between implicit institutional feeling rules and teachers' professional training, experience, or ethics creates emotion labor.

While exhibiting various forms of care toward students as a teacher can be a significant source of emotion labor (Isenbarger and Zembylas 2006), teachers can also accrue emotional capital, or resources that they can use to manage their emotions, through "teacher caring" or an "ethos of caring" (Benesch 2017; Miller and Gkonou 2022; Nazari and Karimpour 2022). Teachers may also draw on values, such as religious values (Her and De Costa 2022), to mitigate emotion labor, or try to "listen rhetorically" (Rudd 2018) by actively and empathetically engaging with their students' perspectives and experiences. In this study, we use Benesch's conceptualization of emotion labor to guide our understanding of how we negotiate teaching English in a context where the ideologies, values, and mission of our institution, the home campus, and our host country (in other words, the feeling rules) do not always align with our own.

Transformative collaborative autoethnography

Examining our emotion labor formed the basis of our CAE. As described by Chang (2008) and elaborated on by De Costa et al. (2022b), autoethnography is a qualitative research method in which researchers collect, analyze, and interpret their personal experiences as part of an interrogation of self in relation to others. In other words, individuals "attempt to gain a sociocultural understanding of their lived experiences in relation to others and their contexts" (Hernandez et al. 2022: 3). Self-reflexivity is the central process of investigation in autoethnography. Autoethnography has grown in applied linguistics over the past decade, partly "as an act of epistemological resistance," and has established itself as a credible methodology (De Costa et al. 2022b: 550; Keleş 2022; Yazan et al. 2023).
While autoethnography involves a single individual as both researcher and participant using their autobiographical experiences as their data, CAE does the same but in the company of others. It is "a multivocal approach in which two or more researchers work together to share personal stories and interpret the pooled autoethnographic data" (Lapadat 2017: 590-591). It is common to mix styles or elements of writing in presenting a CAE. For example, in our findings, we mix an "authoritative researcher's voice" with our own "everyday 'feeling' voices" (Choi 2017: xxv), which includes excerpts of oral and written narratives.

Much has been written about autoethnography as a process that is critical, vulnerable, agentive, emotionally engaged, self-reflective, and transformative (Hernandez et al. 2022; Yazan 2020; Yazan et al. 2023). In interrogating our emotion labor collaboratively, our goal was not only to understand what provokes emotion labor for us and how we manage our emotions, but also for this experience to be transformative for our own self-discovery, for our scholarship and practice at our institution, and for readers of this article to "experience the transformation of their own perspectives, insights, knowledge, and behaviors" (Hernandez et al. 2022: 18). We wanted to understand our own teaching ideologies and identities better, enhance our teaching and our connection with our students, and help bridge the disciplinary and researcher-practitioner divides among the four of us. As Hernandez et al. (2022) note, "the result [of collaborative autoethnography] can be profound transformation for individual participants, the inquiry group, and ultimately the contexts they inhabit" (133). As such, we desire for our research to be accessible and to make a difference (Lapadat 2017). To delve deeper into our emotion labor and the transformative nature of collaborative autoethnography, our methodology led us to explore two primary research questions: (1) What provokes emotion labor for us, and how do we manage our emotions? (2) How was engaging in collaborative autoethnography to understand our emotion labor transformative for us?

Introducing ourselves

All four of us came to Qatar, a Muslim country on the Arabian Peninsula, because we were offered jobs teaching English at an engineering IBC. We are part of an influx of expatriate workers to Qatar who have been recruited to work in sectors such as energy, education, healthcare, and infrastructure. Our residence and employment are in a country where Arabic is the only official language, though English is widely spoken as a lingua franca. This is largely because foreign nationals in Qatar constitute approximately 89.5% of the population (Snoj 2019). Thus, we teach in a context where Qatar's demographic imbalance contributes to "real and imagined threats" to Qatari citizens' national identity and Arabic language (Belkhiria et al. 2021: 126).

We are sponsored by a state-led non-profit organization in Qatar that has partnered with a number of American universities. Each of these English-medium IBCs is recognized for an academic specialization and degree. Our university, for example, offers only engineering degrees. These IBCs confer the same degree in Qatar as a student would receive if they studied at the home campus. Thus, our professional practice is situated within a transnational, liminal space. In what follows, we describe our individual positionalities in more detail.

Sara

Sara is originally from the United States, where she also completed her Ph.D.
in Second Language Studies with a focus on Arabic heritage language learners. She is White, of Mediterranean, English, and Eastern European roots, and grew up in a middle-class, Christian family. She speaks English and has learned Arabic as a second language (Egyptian dialect) as an adult. Sara has been working in Qatar for almost nine years and previously lived and studied in Egypt for approximately two years. She is an associate professor of English at the IBC, where she teaches communication, composition, and developmental English courses and is actively engaged in applied linguistics research.

Aymen

Aymen is originally from Sudan and from a Muslim family. He completed his Ph.D. in Literacy, Culture, and Language Education at Indiana University-Bloomington with a focus on teacher identity. He speaks Arabic (Sudanese dialect) and English and has worked in the Gulf, in both the UAE and Qatar, for over 14 years. He is an instructional associate professor of English at the IBC where, like Sara, he mainly teaches communication, composition, and developmental English courses. Aymen has experienced different educational systems as both a student and a teacher and is active in developing language teacher associations in Africa and in teacher educator research more generally.

Naqaa

Naqaa immigrated to Canada when she was nine years old and grew up in an Iraqi, Muslim immigrant family. She completed her Ph.D. in Comparative Literature at Western University with a focus on representations of Islam and the Other in nineteenth-century European Romanticism. She speaks English and Arabic (Iraqi dialect), and is proficient in German and French. Naqaa has a rich background in international study and teaching, having lived in England, Quebec (Canada), Germany, and Switzerland prior to relocating to Qatar six years ago. She is an instructional assistant professor of English at the IBC and teaches composition, technical writing, and developmental English courses. As a teaching professor, she has not been able to pursue research as much as she would like, but is interested in writing communities and writing pedagogy.

Bryant

Bryant is originally from the United States, where he completed his Ph.D. in English Literature at the University of Miami. He grew up in a working-class family, though he acknowledges his Whiteness and privilege. He speaks English and has studied a little Arabic, French, and Mandarin Chinese. He has been working in Qatar for three years, but also previously spent a year teaching English in Saudi Arabia. He is an instructional assistant professor of English at the IBC, where he teaches literature, film, writing, and developmental English courses. His main research interests include the role humanitarianism and human rights play in contemporary literature, media, and film. Bryant describes himself as someone who initially found navigating college life difficult due to a lack of structural and parental support, and so he finds it particularly important to reach students who feel marginalized and/or excluded.

In terms of intragroup characteristics, we would describe two of us (Sara and Aymen) as coming from "lang" backgrounds (applied linguistics/language education) and two of us (Naqaa and Bryant) as coming from "lit" (English and comparative literature) backgrounds. We also bring a mix of global north and peripheral global south (Heugh et al.
2021) perspectives and racialized experiences. In terms of our teaching philosophies, our practices vary, but we are all in support of multilingual approaches and culturally sustaining pedagogies (Paris and Alim 2017; Raza et al. 2021). We try to provide a supportive space where our students' cultural and linguistic identities are valued, validated, and positively shaped and advanced by their learning experience at the IBC.

Engaging in collaborative autoethnography work

Our reflections on our emotion labor occurred over the course of approximately one year, from June 2022 through May 2023, including five months of writing and revising this manuscript. Before we started working on this research project, we were meeting weekly to conduct a program evaluation of our developmental English courses. Sara was audio-recording our face-to-face (and sometimes Zoom) conversations with a digital recorder during these weekly meetings. While emotions about teaching English were not the focus of these meetings, she observed that emotion labor came up regularly during our informal conversations. When Sara was invited to contribute an article to this special journal issue, she suggested that our group could collaboratively examine our emotion labor further. Thus, we became a research team with a particular research focus, as we continued to work on the program evaluation.

The audio recordings and transcriptions of our meetings were put into a shared Google document, as were our e-mail exchanges and WhatsApp conversations related to issues that were happening in our classrooms (see Figure 1 for an example). We also uploaded relevant articles about emotion labor and CAE and spent time discussing these theoretical and methodological approaches, as they were newer concepts for most of us. Taking a poststructural and discursive approach to emotion labor (Benesch 2017; De Costa et al. 2018), we individually and collaboratively examined the transcripts of our conversations, and then, through thematic analysis and group reflective dialogues, we identified nine broad areas that provoked emotion labor for us. These related to: (1) teaching English to engineering majors; (2) students' motivations; (3) students' attendance; (4) students' negotiation of grades; (5) students' college readiness skills; (6) giving feedback on student writing; (7) navigating educational neocolonialism; (8) our positionalities as expatriate professors; and (9) monolingual biases and language policies.

We then engaged in further reflective dialogues related to these themes, and each of us wrote a summary (500-1,000 words) of our emotion labor in January 2023 and shared it with the others through e-mail. The process was iterative between data generation and analysis and involved much agency. As Yazan (2020) states, "selecting what to include in the autoethnography and determining how to recount and analyze various elements involve series of other acts of agency that require continuing commitment/investment" (para 8). Ultimately, considering the space constraints of the manuscript and what unique insight we could add to the scholarship of teachers' emotion labor, we chose to focus on only two sources of our emotion labor that we felt were salient: (1) navigating our purpose teaching English to engineering majors and (2) confronting our roles as English instructors within a context of educational neocolonialism. It was at this point that we generated the two previously mentioned analytical questions to approach our data: (1) What provokes emotion labor for us, and how do we manage our emotions? (2)
How was engaging in collaborative autoethnography to understand our emotion labor transformative for us?

We further refined our summaries, and we also returned to the transcripts of our meetings to select relevant excerpts on these themes that we could interweave with our written summaries. Importantly, we viewed the process of shaping our manuscript as data collection and data analysis too, each informing the other (Keleş 2022), and we continued to gain insights into ourselves and each other as we completed the initial draft of the manuscript and the revisions. It should be noted that while all of our voices are heard in the findings, we decided that we did not need to include excerpts from all four of us in every section of our findings. As we crafted the manuscript and confronted the challenge of limited space, we decided to carefully choose a limited number of narratives that we felt best exemplified the two salient areas of our emotion labor and the transformative nature of our engagement in CAE.

Findings and discussion

In this section, we weave together a researcher's voice and our own everyday voices (Choi 2017) in excerpts of our oral and written narratives. We encourage readers not to skip over the excerpts in search of analysis in the researcher's-voice sections alone. The narratives provide important insights as well as analysis about our emotion labor.

The emotion labor of teaching English to engineering students

One of the main themes related to emotion labor that continually came up in our discussions, and later in our written reflections, was the tension between how our institution viewed our purpose as teachers of English for engineering students and how we viewed our purpose. According to our institution, we are there to "educate exemplary engineers" and "develop engineering leaders in Qatar." These discourses are in our university's mission and vision and are also regularly repeated by our administration. The engineering education is also tailored to meet the market needs of the host country. As Al-Saleh (2022) states, producing engineers is "central to the development of oil and gas, the military and logistics infrastructures across the Gulf" (2). Our institution's mission is "to be the premier provider of engineering education in the region…an essential resource to the State of Qatar." As faculty at this institution, the mandated feeling rule is that we are supposed to feel proud of developing great engineering leaders for Qatar, and this is the emotion that is valued by the power dynamics (Her and De Costa 2022); however, the constant focus on engineering has generated much emotion labor for us.

As English faculty, we provide English language development support and teach core writing, communication, and other elective courses that our students need for their engineering majors. While we teach many students who are enthusiastic about studying engineering and becoming engineers, we also teach many students who are less enthusiastic, whom we would describe as apathetic. They may have been pressured by family or their education sponsors to study engineering, or they may simply want the high status that comes with having an engineering degree in the region (Hillman and Salama 2018).
As we were working on our program evaluation, we discussed how much we should focus on engineering topics and genres in our English courses, and this came up as an area of emotion labor for us. For example, one discussion point was whether we should move to a language for specific purposes (e.g., English for engineering purposes) model or continue to have diverse themes and topic units in our English classes. Bryant found it emotionally taxing that his purpose, according to the institution, was to serve the engineering programs and help students succeed in their engineering majors, when, as disclosed in Excerpt 1, he did not feel that this was necessarily in all of the students' best interests.

Excerpt 1 (Bryant, E-mail exchange, 21 August 2022)

I'm still trying to understand our students' lives and where they're coming from, but I know that there seems to be more pressure from a range of influences to become engineers. Even in my short time here, I've had many students tell me they aren't sure about engineering, and that they've had teachers and family members and parents who pressured them into enrolling. I have a female student currently in English whom I'm having trouble motivating. I've talked with her a bit, and I got a similar story from her last week. Her dad wants her to be an engineer because she can have a good salary and so on. I'm not sure she's into it; she may have other ambitions, goals, desires, interests that are suffering. So, in my class, I want to get her through and see her do well, but also: do I want to push her to get motivated to do something that, maybe for her own happiness and wellbeing, she would be better off not doing? How do you figure out how to motivate a student while also completely understanding the reasons why they may not be motivated to succeed (and understanding that maybe "success" here isn't what they need or want)? It's been emotionally taxing in the work I'm trying to do and in trying to figure out how to meet my students' needs and best interests beyond engineering.

For Naqaa, meeting the needs of students who do not necessarily want to be engineers was also an emotionally draining process, but she did not feel it was her place to challenge cultural perceptions about obtaining an engineering degree as a status symbol or to please one's family. As an Iraqi, she understood the cultural and family pressures to study engineering or medicine, even though she herself chose a different academic path. She felt that her purpose was simply to "motivate, care, mentor," and that in the end students would figure it out. In Excerpt 2, Naqaa responded to Bryant's e-mail from Excerpt 1 and described the emotion labor of caring (Isenbarger and Zembylas 2006) for a student at her previous university who had no interest in studying English. She used this anecdote to illustrate how she currently manages her negative emotions while teaching students at the IBC who may lack intrinsic motivation to study engineering. For Naqaa, this happens through expressing care as an educator, or "teacher caring" (Miller and Gkonou 2022).
Excerpt 2 (Naqaa, E-mail exchange, 21 August 2022)

Three years ago, I taught English literature and writing classes to female students at the national university. One student, let's call her Maha, took a writing class and did really well. Then, she took English literature and just didn't seem to care or lost interest in her studies. She came to my office one day upset. It turns out that Maha got accepted to an IBC in Qatar but was not allowed to go. Maha hates English and doesn't want to become an English teacher. She loves art and drawing, and she wants to pursue her studies in visual art and design. She couldn't go to her dream university because of cultural restrictions and family expectations; they did not want her in a co-educational environment. If some students are forced to study engineering at an IBC because 'that's where the money is,' Maha had to study English and education at the national university because that's where good, dutiful daughters go to get a safe and respectful job (no need to mix with men, no need to work long hours). What to do? This student is crying in my office because she believes her life is being dictated to her. I cannot ask this student to go against her parents' will. I understand the parameters with which many Muslim Arab females have to work. I understand the baggage. But, it is not my job to start a revolution. It certainly isn't my job to get her to 'rebel' against her parents. But, I regularly talked to this student and suggested 'safe' ways in which she could still pursue her interests. A year later Maha comes into my office looking radiant. Maha is a full-time teacher now AND she is a part-time art instructor at an art gallery. She says "the most important thing is that everyone is happy. My mom is happy about my career and I get to do what I want in the end, and the brainstorming sessions I had with you helped me a lot." I'm not sure how to answer your question Bryant, but I've learned to remember Maha as an example for whenever I feel emotionally taxed. She figured it out, and so will our students forced to be engineers. I haven't met that many who are really passionate about engineering but our job is to try to work with the parameters specific to culture, tradition, and gender. The process is draining, of course, and I feel sad or bad or angry about this or that student's story, but, for my mental wellbeing, I have to keep saying, I'll do my best to motivate, care, mentor, but in the end, they will figure it out.

When our group met face-to-face a week later, we found ourselves further discussing the two e-mails that Bryant and Naqaa had exchanged and the emotion labor of motivating our students within the context of an engineering school. It is also a context in which most of our students receive government or industry sponsorships for their education and are then required to work for their sponsors for a period of time upon graduation. Aymen commented that he felt many of his students were not motivated to pursue their studies due to a passion for engineering, but, like Naqaa, he understood the cultural pressures to pursue engineering. He managed his emotions by trying to empower students to understand the importance of education in general and to have a goal in mind to work towards. As an immigrant from Sudan who moved to the United States after completing a degree in teaching English, he could not find a job teaching because his Sudanese degree was not recognized or valued in the United States. Eventually, he decided to pursue graduate studies and got his Ph.D. He felt proud of his accomplishment, and he wanted his students to understand the privilege they have to receive an education and not waste it. In Excerpt 3, Aymen discusses how he wants students to value the importance of being educated in general.

Excerpt 3 (Aymen, Face-to-face meeting, 28 August 2022)

Undergraduate education for me in Sudan meant moving up the socioeconomic ladder because it was viewed that the only way to get out of poverty was to have an education; you graduate and then that will facilitate employment. It was also about status. This has implications for what I do with my students in the class to try and motivate them, because I know their background and that they're not motivated simply because of a love of engineering. I don't even think excelling in order to get a job is that big of a concern to them. So, I try to talk to them about the importance of being educated in general. Although I went to the United States after graduating in Sudan, I couldn't find any job teaching English even though that was my training, so I assumed different jobs. I was there thinking, how long will I be doing this as an immigrant in the United States, and what can I do in order to elevate myself? Education was the answer. When I started doing my graduate studies, I changed professions and started working as an ESL instructor and then immediately thought about a Ph.D. and so on and so forth. Sometimes I share that with my students: that this is my background, that this is what shaped me as a person, and it's important, no matter which context you are in, to have a goal, and to work hard towards achieving that goal. I feel a sense of pride because I worked hard. I achieved the goals that I set for myself, but how that makes my students feel, I don't really know. Maybe they feel sympathetic towards me, or maybe they feel empowered that they have to follow this example.

Thus, while the IBC emphasized the development of exemplary engineers and engineering leaders, we recognized the diverse motivations and pressures faced by our students, and grappled with balancing institutional expectations and student well-being. We also recognized the importance of empathy, mentorship, and helping students discover their own paths within the parameters of culture, tradition, and gender; this seemed to be our way of combating negative emotions and wrestling with our emotion labor.
The emotion labor of confronting our roles in educational neocolonialism

In addition to the emotion labor of navigating our purpose teaching English to engineering majors, another area of emotion labor for us was confronting our role in educational neocolonialism (Romanowski 2022), or how we are shaping and influencing students' education and thinking in ways that are largely based on Western paradigms. In Excerpt 4, Bryant describes the tension of the premise, the implicit feeling rules, that we are delivering unbiased knowledge in our English classes, and the emotion labor that is provoked in knowing that there are biases and uneven balances of power. Bryant also expresses how he feels that he should know more about his students' cultures than he does; Bryant's expressions of his "shortcomings" are similar to sentiments expressed by the transnational teachers in Alshakhi and Phan's (2020) study in Saudi Arabia. While this lack of knowledge causes Bryant to feel guilty, he manages his emotions through agentive reflexivity and critical awareness to change the pedagogical norms.

Excerpt 4 (Bryant, E-mail exchange, 16 January 2023)

I don't believe you can delink this branch from the histories and legacies of colonialism and neocolonialism, and the power/knowledge imbalances contained within those systems. We have inherited these imbalances, which are very much alive and well today, as many of our students know, even if they can't necessarily articulate them in terms we would recognize. For me, feeling rules is the underlying assumption that we are transmitting universal, unbiased "knowledge," and emotion labor is understanding that there are clear biases, inherited and uneven imbalances in knowledge and power contained within the transmission of "knowledge." I think students are often skeptical because they know, at some level, something is amiss. They know we are teaching very specific behaviors, expectations, ways of knowing, ideas about knowledge and knowledge production, and so on. I also think there are ways around this, ways to opt out, ways to grapple with it, and ways to give students the language and tools to think about some of the things that they seem to sense and some of the things that I think cause them emotion labor. I also believe that I owe it to my students to know more about their culture than I do. I should know about them, about their lives, about their languages. Our students have so much knowledge of culture and literature, and often, in their classes, this goes unacknowledged and unappreciated, and, as a result, undervalued, devalued (I think insultingly so, and I think they feel it). I feel guilt not knowing more about their worlds, so I build lots of stuff into my syllabus that I don't know about, that gives them a chance to teach me. I use it as a place where students can teach me about their worlds, cultures, languages, families (often), and lives. They're the experts, not me. I think building in that sort of strategic vulnerability into a classroom gives them ownership, defuses the artificial power imbalance in a classroom, and shows them that this, I hope, can be a collective, collaborative experience where their knowledge and expertise builds a new environment that is multisided and has much less to do with me or the peripheral knowledge that I bring to the class. I think that's, in part, how I deal with some of the specific emotion labor inherent in our system here, in our unique position here in a strange "Western" campus bubble in a rich yet developing multicultural
environment.

Similar to Bryant, in Excerpt 5, Sara also expresses the emotion of guilt about participating in "cultural and linguistic imperialism" and tries to manage and mitigate these tensions by exercising agentive powers to change the pedagogical norms.

Excerpt 5 (Sara, E-mail exchange, 16 January 2023)

I often feel guilty that I am participating in cultural and linguistic imperialism as a White, American instructor at a branch campus of an American university in an Arab country. As someone who researches the social and ideological aspects of English as a medium of instruction and transnational higher education in Qatar, I know that IBCs are contested spaces and that they can be viewed as an extension of an imperialistic agenda and as something that perpetuates cultural biases and dominant Western educational models. I'm acutely aware of a broader debate about the loss of language, values, and practices in the region, and how some of my students navigate shaming from family and community members for using English too much or presenting undesirable Westernized identities in their communities. As an instructor at this institution, I often feel that I am expected to immerse our students in English and socialize them into "Aggie" values and traditions, a culture that comes from a home campus that is largely White, conservative, and Christian. I often reflect on what is being lost when academic spheres of knowledge are only produced in English. I try to mitigate this tension I feel in all sorts of ways. I try to create many opportunities to rhetorically listen to my students. I try to design assignments that help me better understand my students' beliefs, values, worldviews, identities, emotions. I am also very committed to using multilingual approaches in both my language and content courses. For example, I show videos in class that are in Arabic with English subtitles and have them discuss the video in English, or I encourage them to use Arabic or other language resources for their papers and presentations. I now do a semester-long translingual literacy narrative project in my developmental English class. The project encourages my students to reflect on the emotional impact of global English and language policies and of being an IBC student. It encourages them to reflect on their emotions related to their linguistic and cultural identities (such as sometimes being called a "chicken nugget" for using English more than Arabic) and to think critically about the macro-, meso-, and micro-contexts in which they are situated and how these contexts and the power dynamics shape or underlie their emotions. Toward the end of the semester, students transform their literacy narratives into a multilingual, multimodal product of their literacy journeys, which I think captures a much more holistic view of their lived experiences, and teaches me a lot about their lives. My goal is not to help students be more monolingual or more Western, but to expand their communicative and cultural repertoires.

In an earlier face-to-face meeting in which this topic of neocolonialism had also come up, Aymen had discussed (see Excerpt 6) how he managed his emotions about teaching English as "an arm for imperialism" by countering "native-speakerism" ideologies (Holliday 2006) and the underlying hegemonic discourses in his classroom; he also doesn't police translanguaging (García and Li 2014), or learners shuttling between English and Arabic or other languages, in his classrooms.
Excerpt 6 (Aymen, Face-to-face meeting, 11 September 2022)

For me, what makes me feel better that we might be an arm for imperialism is raising students' awareness that they don't have to master English as "native speakers." They don't need to feel emotionally burdened themselves that they must be like native English speakers. I tell them that comprehensibility is really what matters and that they can also use their first language; it's not something that I ask them to keep out of the door. I tell them that it's important, and that it can help them with learning the second language. This reassurance is solidified by the translanguaging activities we conduct in class. It makes me feel better when they see their first language as valued in the learning process, and it hopefully makes them also feel better about their development in the language. I think it creates a more positive environment in the classroom.

Similar to Aymen, Naqaa discussed with us how she has begun to embrace translanguaging and more culturally sustaining pedagogies in her language classrooms since she moved to Qatar and started to work at an IBC. Previously, language immersion and Eurocentric teaching materials had been an important part of her language teaching philosophy. This change was a result of her managing the emotions that were provoked by teaching English in a neocolonialist educational context. In Excerpt 7, Naqaa reflects on how her language teaching philosophy shifted.

Excerpt 7 (Naqaa, E-mail exchange, 16 January 2023)

When I was a nine-year-old immigrant trying to integrate and settle in a small Waspy city in Canada, I did everything I could to accelerate my English so as not to be labeled an ESL student. This meant that I needed to think, dream, breathe English. I didn't even speak my mother tongue at home with my siblings; I am still called a bad influence to this day because of that. As I got older, I took classes, traveled for immersion programs, and lived in remote parts where I was forced to practice only that language that I was learning at the time. For me, that was the best way to learn a language. I immersed myself in the culture, tradition, and history of the language I was learning. I lived with exchange families, made local friends, traveled around the country, and took everything in. Friends and colleagues called me the most 'non-Arab' Arab they knew. To me, that was a good thing! I then became a language teacher of English, Arabic, French, and German. And, throughout my teaching career, my pedagogical practices recreated my own past experiences: immersing oneself into a discourse community. In my classes, only one language could be spoken, and if students tried to express themselves in their mother tongue, I would force them to find the words in the language we were learning. My assignments, in-class activities, artefacts, videos, songs, and art were Eurocentric for the most part.
The composition of my students and the context changed when I moved to the Middle East and started teaching students from my own religious and linguistic background in an English-medium instruction context. Here in Qatar, I do use a bit of Arabic in the class, and I see that my students feel a little bit relaxed, as it breaks that barrier between me and them. When they're comfortable with me, they'll say a few Arabic words here and there, or when I'm explaining something, I'll explain it to them in Arabic to make sure that they understand. I would say definitely a mix of English and Arabic is helping us get through the class and taking away some of their discomfort. It has brought me closer to my students, and it also makes me feel better, like I'm acknowledging their full identities as well as my own.

Although we experienced emotions such as guilt, and emotion labor, in confronting our roles in educational neocolonialism, we found that we actively managed and mitigated these tensions through rhetorical listening, multilingual approaches, and culturally sustaining pedagogies. We saw the "agentive power of teacher emotions" (Benesch 2018; Her and De Costa 2022: 2) and how our emotions influenced our actions, decisions, and interactions.

How was exploring emotion labor together transformative for us?

In thinking about what emotions do socially and what emotion labor does, a CAE focus on emotion labor allowed us to critically reflect on our own ideologies, identities, and practices, both individually and as an inquiry group in a shared context. We came to view both our methodology and our exploration of emotion labor as "resource[s] for reflection and ethical agency" (Miller and Gkonou 2022).

We have become more aware of our own positionalities as expatriate instructors, the biases we bring, and how we each manage these in our classrooms. As Her and De Costa (2022) argue, emotion labor can be a site where teachers "develop, reuse and strengthen their emotional capital" (9). We came to an important realization that addressing what is often left unsaid (the feeling rules and our emotion labor) can be transformative for our students and us, and can help us thrive in this liminal space we share.

As a group, we came to see how emotion labor is not only something burdensome or a problem to overcome, but also something engaging and enriching for us that helps us accrue emotional capital and "explore and negotiate new dimensions of our identity as we work toward resolving these tensions" (Yazan et al. 2023: 142). Teachers and students may have deep emotions and feelings about teaching and learning English, about being in an IBC, or about language policies in the classroom. These tensions can be critical sites for students and instructors to think through complex and important issues, to face them together, and to figure them out together (Pentón Herrera and Martínez-Alba 2022). Benesch (2017) suggests that "English language teachers can draw upon emotion labor as a signal of disharmony and therefore a tool for collaborating with students and colleagues to achieve greater understanding and possible reform" (182). Similarly, Alshakhi and Phan (2020), writing about transnational teachers' emotions in the context of Saudi Arabia, note: "through engaging with these dialogues and conversations, hidden nuances of emotion(al) labor in transnational mobilities [can] be revealed" (323).
We found that we manage our emotions through agentive reflexivity, honest engagement with students, and "teacher caring" (Miller and Gkonou 2018; Nazari and Molana 2022). In exploring the transcripts of our conversations, we realized that each of us engages in critical discussions with our students that relate to our own emotion labor. Naqaa explores with her students the emotion labor of teaching English to engineering students: What motivates me/you to be here? Do you want to be an engineering leader? What do I/you see as the purpose of an undergraduate degree? Of studying English? How important are individual happiness and wellbeing to me/you? Aymen explores with his students the emotion labor of the underlying hegemonic discourses of English: Why are we teaching/learning English? Which English are we teaching/learning? Whose English are we teaching/learning? Does English belong to all of us? What does English mean to you? Similarly, Bryant explores with his students the emotion labor of giving feedback on student writing: What is corrective feedback? What is correct? Where does correct come from? Who says what's correct? Sara explores the emotion labor of monolingual biases in IBCs: How do we feel when we mix languages in different contexts? How do others react when we mix languages? How do we use different languages to express ourselves and make sense of the world? We felt we were able to effectively manage our emotions and gain emotional capital (Her and De Costa 2022) through these critical discussions with students. Bryant reflects on the emotional capital gained from emotion labor in Excerpt 8.

Excerpt 8 (Bryant)

They assume that what I am doing and what they are close to culturally, emotionally, and linguistically are disconnected. I find this not only sad but a huge failure. But what I think that I can do, which I think is a good marker of critical thinking in general, is to try and talk about the friction. In writing classes, we discuss: "Is this a writing class or a writing in English class? Or a writing in college English class? Do the skills transfer to Arabic or other languages? Why or why not?" It's also a good exercise in critical thinking to discuss these things, and a place where students can discuss, challenge, and re-envision everything about the course with the instructor: "This is a problem? Okay, what's the solution? If we imagine this course as your course, not mine (because it's not), then what can I do for you? What can you do for you?" I feel lucky to be able to have these sites of friction and to be able to confront them with a diverse body of students. I really appreciate this friction as a space of personal development and critical inquiry into my own practices and assumptions about what I'm doing. I think these contentious and conflicting sites can actually be positive and rewarding sites to dwell in, and they can be opportunities for real, critical, deep learning in a classroom.

Another transformative aspect of our CAE work was "bridging the researcher-practitioner" divide (De Costa et al. 2022b; Yazan et al. 2023). Naqaa expressed that she had not been able to pursue research since joining the IBC and missed being part of an intellectual community. She also became aware that she was already enacting pedagogical practices recommended by applied linguistics researchers, even if she could not "pin a theoretically oriented label" to them (De Costa et al. 2022b). In Excerpt 9, Naqaa discusses how engaging in CAE work was transformative for her.

Excerpt 9 (Naqaa, E-mail exchange, 16 January 2023)

When we are so focused on our teaching and classroom management, we often lose touch with current research. Having a working group of teachers and researchers has been a very enriching experience, one that is greatly contributing to my professional development. I realized I have already been applying some theoretical concepts with which I was not that familiar before, like translanguaging, in my classroom. I have become a better instructor, and a more conscientious teacher, since I joined this discourse community. As we meet regularly throughout the term, and as we talk through issues and challenges, share our thoughts or vent on WhatsApp, and collaborate on conference presentations and research papers, this community has not only directed my research and readings, but has helped me become better towards my students, and towards myself.
Thus, Naqaa's reflection highlights the significance of having a community of teachers and researchers who collaborate and share their experiences, thoughts, and research findings. This is similar to De Costa et al.'s (2022b: 13) findings, in which they discovered a "collective power of collaboration" between researchers and teachers in their study. Naqaa's reflection emphasizes how being a part of such a community can enrich professional development by helping teachers stay current with research and apply theoretical concepts in their teaching. When Naqaa submitted her edits on our second revision of this article, she expressed over e-mail to Sara how grateful she was to be included in research projects (Excerpt 10). Being part of a collaborative and communal research environment invigorated and energized her personally and professionally. It should be noted that the experience equally enriched those in our group who have been more involved in research projects and, as emphasized by De Costa et al. (2022b), there was "knowledge sharing and not unidirectional knowledge transfer" (13) between us.

Excerpt 10 (Naqaa, E-mail exchange, 29 May 2023)

…thank you for including me in your research projects; this collaboration and community is what keeps me invigorated and excited about my teaching and pedagogy!

In considering overall how exploring emotion labor together was transformative for us, we found, similar to Yazan et al. (2023), that engaging in CAE allowed us to develop much more emotional awareness and to identify some of the often unspoken and "invisible" sites of friction that we all feel working in a transnational higher education landscape and an engineering context. We agree that we would not have gained the same insights working on this project in isolation. During the presentation of an earlier iteration of our CAE work at a conference, a member of the audience remarked that the collaborative aspect of our study was its true strength, and we indeed felt this.

Conclusions

In this CAE study, we have collectively examined our emotion labor as expatriate English instructors in an engineering transnational university context in Qatar. One area of emotion labor for us was our institution's discourse about developing and producing engineers and our own encounters as English instructors with students who were not that motivated to study engineering. We had to constantly mediate the institutional discourse with our own experiences with students, and this led to emotion labor (Her and De Costa 2022). Another area of emotion labor was confronting our role in contributing to the neocolonization of the education sector in Qatar and the tensions of knowing there are biases and uneven balances of power. We also developed emotional capital, though, in relation to feeling rules and emotion labor (Her and De Costa 2022). Moreover, the methodology of CAE helped us to critically reflect on our own ideologies, identities, and practices and to view emotion labor as a resource for reflection and ethical agency (Yazan et al.
2023). We hope our readers are inspired by our narratives and our accessible methodology to examine their emotion labor collaboratively with their colleagues, students, or other stakeholders in their contexts, and to consider how their emotion labor is shaped by their own social, political, and cultural contexts. We believe that addressing unsaid feelings, emotions, and emotion labor can be transformative for language teachers and their students by leading to a more vulnerable, collaborative, and equitable classroom space.

Figure 1: Example of our WhatsApp correspondence related to an issue of suspected student plagiarism.
12,278
2024-04-04T00:00:00.000
[ "Engineering", "Education" ]
Small Flux Superpotentials for Type IIB Flux Vacua Close to a Conifold

We generalize the recently proposed mechanism by Demirtas, Kim, McAllister and Moritz arXiv:1912.10047 for the explicit construction of type IIB flux vacua with $|W_0|\ll 1$ to the region close to the conifold locus in the complex structure moduli space. For that purpose tools are developed to determine the periods and the resulting prepotential close to such a codimension one locus with all the remaining moduli still in the large complex structure regime. As a proof of principle we present a working example for the Calabi-Yau manifold $\mathbb{P}_{1,1,2,8,12}[24]$.

Introduction

In view of the recent swampland conjectures, one has to revisit the standard constructions of dS vacua in string theory. The most recognized approach is the mechanism of KKLT [2], in which an initial non-perturbative AdS vacuum is uplifted by anti-D3-branes placed at the tip of a strongly warped throat. This construction has been scrutinized from various points of view. First, the dS uplift mechanism was questioned, namely whether a D3-brane at the tip of a warped throat is really a stable configuration (see [3] for a review). Moreover, it has been questioned whether the 4D description of the KKLT AdS minimum really uplifts to a full 10D solution of string theory [4-13]. Another important point is whether the effective action that is presumed to describe the strongly warped regime is really under control. Based on earlier work [14], this question has been addressed recently [15-19].

However, there is one even more basic assumption in the KKLT construction, which is that the three-form flux induced no-scale potential admits (Minkowski) vacua with an exponentially small value of $W_0$. There exist statistical arguments (see [21] for a review) that this should be the case. However, based on an older proposal [22,23], only very recently Demirtas, Kim, McAllister and Moritz (DKMM) [1] formulated a concrete two-step mechanism for the explicit construction of such vacua. Working in the large complex structure regime of a Calabi-Yau (CY) manifold, one first considers only the leading order terms in the periods and dials the fluxes such that one gets a supersymmetric minimum with $W_0 = 0$. In fact, this leaves at least one complex structure modulus unstabilized. Taking in the second step also the non-perturbative corrections to the periods into account, this final modulus also gets frozen in a race-track manner, generically giving an exponentially small $|W_0| \ll 1$ as a potential starting point for KKLT.

However, for the final uplift one actually needs a similar controllable mechanism close to a conifold point in the complex structure moduli space, where large warping can occur [24]. It is the objective of this paper to analyze whether such a generalization of the DKMM mechanism can indeed be found. For that purpose, one first needs to know the explicit form of the periods close to a conifold locus in the complex structure moduli space. The best studied example is the quintic, for which the periods in this regime have been determined explicitly [25-27] and for which there have already been studies of moduli stabilization [28,29]. As we will describe, in this case finding models with $|W_0| \ll 1$ appears to be a number theoretic problem, i.e. there is no generic algorithm where at leading order one starts with $W_0 = 0$ and then subleading instanton corrections provide the exponentially small corrections.
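For orientation, we recall the schematic race-track form of the effective superpotential generated in the second step of [1] for the leftover modulus $\tau$; here $A$, $B$ and the exponents $a > b > 0$ stand in for flux- and geometry-dependent data, and the expression is only illustrative of the mechanism, not a statement about any particular model:

$$W_{\rm eff}(\tau) \simeq A\, e^{2\pi i a \tau} + B\, e^{2\pi i b \tau}\,, \qquad \partial_\tau W_{\rm eff} = 0 \;\;\Rightarrow\;\; e^{2\pi i (a-b)\,\tau_*} = -\frac{b\,B}{a\,A}\,.$$

Inserting the extremum back into $W_{\rm eff}$ gives

$$W_{\rm eff}(\tau_*) = B\left(1-\frac{b}{a}\right)\left(-\frac{b\,B}{a\,A}\right)^{\frac{b}{a-b}}\,,$$

which is exponentially small whenever the race-track stabilizes $\tau_*$ at large imaginary part, i.e. for $|bB/(aA)|$ sufficiently small. It is this structure that one would like to reproduce in the vicinity of the conifold.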
For the quintic, the obstruction is that, besides the conifold modulus, there are no other complex structure moduli that can still take values in their large complex structure regime. Thus, the generalization of the DKMM mechanism requires some remaining complex structure moduli to still be in their large complex structure regime. It turns out that this regime, which lies at the tangency of the conifold and the large complex structure locus, has been poorly studied so far. While the periods in the large complex structure (LCS) regime and deep in the non-geometric regimes are well studied, an equally satisfying method for points close to the conifold in models with more than one modulus is still lacking. Therefore, a large portion of this paper deals with the development of such methods to compute the relevant periods.

Describing this in more detail, in the one-modulus case a possibility is to compute local solutions of the Picard-Fuchs (PF) equations and to transform them into the symplectic basis analytically continued from the LCS region. This approach was used e.g. in [26-28,30-32]. While it is in principle applicable in the multi-modulus case, the computations become very tedious and have to be performed separately for every point one is interested in. Another approach is to work with the ω periods of [25,33-35] determined deep in the LCS/non-geometric phases. Unfortunately, these converge badly at the conifold. Nevertheless, it is possible to extract the periods using very high orders, as is done for example in [36]. Methods based on gauged linear sigma models (GLSM) [37] face the same problem of slow convergence at the phase boundaries. Recently, the Mellin-Barnes representations arising from the GLSM were used in a recursive construction, resulting in infinite sum expressions for the entries of the transition matrix [38,39].

Instead of a single approach, we will apply a combination of various methods. We will still try to find the transition matrix from a local solution to the symplectic basis. To improve the numerics, monodromy considerations are used, as well as a symplectic form on the solution space developed in [40], where it was applied to the Seiberg-Witten point of the $\mathbb{P}_{1,1,2,2,6}[12]$ model. In [30] the same method was applied to the $\mathbb{P}^2$-fibration phase of the $\mathbb{P}_{1,1,1,6,9}[18]$ CY. We then obtain an analytic solution for $\mathbb{P}^1$-fibrations and compare it to the numerical results. We find good agreement between the two methods.

This paper is structured as follows. Before we delve into the problem of moduli stabilization, in section 2 we start with the mathematically rather involved description of a systematic way to analytically compute the periods of a multi-parameter Calabi-Yau manifold close to the point of tangency of the conifold with the LCS regime. The less mathematically inclined reader may essentially skip this section after noticing the result for the prepotential shown in equation (2.45), which we will use for our working example. In section 3 we present a three-step mechanism to generate small $W_0$ vacua with all complex structure moduli as well as the axio-dilaton stabilized. Here we first discuss the example of the quintic and the obstructions that appear when formulating such a mechanism for too simple models, and then show that more involved examples behave much better. Finally, we demonstrate the mechanism by constructing an explicit example for the $\mathbb{P}_{1,1,2,8,12}[24]$ Calabi-Yau.
Note added: While finishing this work we became aware of an upcoming paper [41] by Demirtas, Kim, McAllister and Moritz which approaches the same question.

Periods at special points in moduli space

In this section we present the tools that we employed in order to compute a symplectic basis of periods close to the conifold locus, with the remaining moduli in their large complex structure regime. This involves quite some mathematical machinery. For readers not so interested in the technical details, we note that the main result is the prepotential shown in equation (2.45). This will be employed in the upcoming section on moduli stabilization.

We will mainly focus on hypersurfaces (or complete intersections thereof) in weighted projective spaces $\mathbb{P}^n(\vec{w})$, defined by the zero locus of $m$ quasihomogeneous polynomials $P_i$ of degree $d_i$ satisfying the Calabi-Yau condition

$$\sum_{i=1}^{m} d_i = \sum_{j=1}^{n+1} w_j\,. \qquad (2.1)$$

These can be described by means of toric geometry through an $n$-dimensional convex integral reflexive polyhedron. For the detailed construction and the topological properties of the resulting varieties see [42] and references therein. The integral points $\nu_i$ of the polyhedron are embedded into $\mathbb{R}^{n+1}$ as $\bar\nu_i = (1, \nu_i)$. These are not linearly independent, and their dependencies can be described by a lattice whose basis $\{l^{(i)}\}$ can be chosen to be the one for the Mori cone (cf. [42]). This basis represents the charge matrix of the associated GLSM and directly relates to the Gel'fand-Kapranov-Zelevinski (GKZ) hypergeometric system [43], generated by operators of the form

$$\mathcal{D}_{l} = \prod_{l_i>0} \left(\frac{\partial}{\partial a_i}\right)^{l_i} - \prod_{l_i<0} \left(\frac{\partial}{\partial a_i}\right)^{-l_i}\,, \qquad (2.3)$$

$$\mathcal{Z}_j = \sum_{i} \bar\nu_{i,j}\, a_i\, \frac{\partial}{\partial a_i} - \beta_j\,, \qquad (2.4)$$

which in turn annihilates the period integrals (2.6) we shall define in a moment. The $a_i$ are coordinates of an affine $\mathbb{C}^{p+1}$, which is larger than the physical complex structure moduli space. $\beta$ is a constant vector with $\beta_0 = -1$ and $\beta_j = 0$ for $j \neq 0$. The relevant coordinates around the LCS point are also given by the Mori cone basis as (up to a conventional sign)

$$x_i = \prod_{j=0}^{s} a_j^{\,l_j^{(i)}}\,,$$

where $s$ is the number of vertices in the polyhedron. These are chosen such that any function of them is automatically annihilated by the $\mathcal{Z}_j$ of (2.4). In the appendix of [42] the $l^{(i)}$-vectors and resulting Picard-Fuchs (PF) operators $\mathcal{D}_i$, $i = 1, \ldots, h^{2,1}$, are listed for many examples. Note that for some models the PF operators obtained from the GKZ system (2.3) are not the complete PF system, requiring an extension which was also worked out in [42]. For the example we will consider this is not necessary. The periods we are ultimately interested in are defined by integrals of the unique holomorphic $(3,0)$-form $\Omega$ over a basis of three-cycles,

$$\varpi_i = \int_{\Gamma_i} \Omega\,, \qquad \Gamma_i \in H_3(X)\,. \qquad (2.6)$$

Here $X$ denotes the CY we are investigating. The periods are annihilated by the PF operators.

Computing the local periods

For cleaner notation we use multi-indices $i = \{i_l\}_{l \in \{1,\ldots,h^{2,1}\}}$, $\alpha_{j,k} = \{\alpha_{j,k,l}\}_l$, and $\beta_j = \{\beta_{j,l}\}_l$ in the following, and employ the shorthand notations

$$x^{i} = \prod_{l} x_l^{\,i_l}\,, \qquad \log(x)^{\alpha_{j,k}} = \prod_{l} \log(x_l)^{\alpha_{j,k,l}}\,.$$

As the periods are annihilated by the PF operators, local solutions can be obtained by inserting the ansatz

$$\omega_j = \sum_{k} \sum_{i} c_{i,j,k}\; x^{\,i+\beta_j}\, \log(x)^{\alpha_{j,k}}\,, \qquad (2.7)$$

expressed in coordinates centered around the point of interest, into the PF system

$$\mathcal{D}_i\, \omega_j = 0\,, \qquad i = 1, \ldots, h^{2,1}\,. \qquad (2.8)$$

The sum over $k$ runs from 1 to $4^{h^{2,1}}$ in order to capture terms with $h^{2,1}$ individual log factors, each with powers ranging from 0 to 3. The sum over $i$ runs over the integer lattice $i_l \in \{0, \ldots, m\}$, $l \in \{1, \ldots, h^{2,1}\}$, with $m$ the order up to which we are computing. The multi-indices $\alpha_{j,k}$ and $\beta_j$ are made up of non-negative constants and can be fixed as follows. The fundamental period $\omega_0$ with $\alpha_{0,k} = \beta_0 = 0$ is always present. Moreover, for each distinct $\beta_j$ there is one period with $\alpha_{j,k} = 0$ for all $k$, i.e. a pure power series. A toy version of this Frobenius-type procedure in a single variable is sketched below.
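The following minimal Python sketch (our own toy example, not the code used for the actual computation) applies the power-series ansatz to the familiar one-parameter PF operator of the mirror quintic, $\mathcal{L} = \Theta^4 - 5^5\, x \prod_{a=1}^{4} (\Theta + a/5)$ with $\Theta = x\,\partial_x$: it solves the indicial equation for $\beta$ and then fixes the series coefficients order by order.

```python
# Toy Frobenius-type solution of a one-parameter Picard-Fuchs operator
# (mirror quintic), illustrating the ansatz  w = sum_i c_i x^(i+beta).
import sympy as sp

x, beta = sp.symbols('x beta')
theta = lambda f: x * sp.diff(f, x)               # logarithmic derivative

def PF(f):
    # L = Theta^4 - 5^5 x (Theta+1/5)(Theta+2/5)(Theta+3/5)(Theta+4/5)
    t4 = f
    for _ in range(4):
        t4 = theta(t4)
    prod = f
    for a in (sp.Rational(1, 5), sp.Rational(2, 5),
              sp.Rational(3, 5), sp.Rational(4, 5)):
        prod = theta(prod) + a * prod
    return sp.expand(t4 - 5**5 * x * prod)

# indicial equation: act on x^beta and keep the leading power as x -> 0
indicial = sp.limit(sp.simplify(PF(x**beta) / x**beta), x, 0)
print('indicial equation:', indicial, '= 0  =>  beta =', sp.solve(indicial, beta))

# pure power-series branch (beta = 0): fix c_1, ..., c_m order by order
m = 4
c = [sp.Symbol(f'c{i}') for i in range(m + 1)]
w = sum(c[i] * x**i for i in range(m + 1))
eqs = sp.Poly(PF(w), x).all_coeffs()[::-1][:m + 1]   # coefficients of x^0..x^m
sol = sp.solve(eqs, c[1:], dict=True)[0]
print('c_i / c_0 =', [sp.simplify(sol[ci] / c[0]) for ci in c[1:]])
# -> [120, 113400, 168168000, 305540235000], i.e. c_i = (5i)!/(i!)^5
```

In the multi-parameter case of interest the same logic applies, with the single recursion replaced by coupled recursions over the lattice of multi-indices and with the additional logarithmic branches of (2.7).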
Thus one makes the ansatz (2.9). The allowed β_j are fixed by demanding the vanishing of the coefficients of c_{0,j} in the constraints D_i w_j = 0. The remaining periods have α_{j,k,l} ∈ {0, 1, 2, 3}. For one-parameter models the α_{j,k} associated with each distinct β_j range from 0 to the degeneracy of β_j as a solution of the indicial equations. For multivariate cases it is no longer clear how to count the degeneracies, as there is more than one PF operator. In that case the α_{j,k} can be determined by making the most general ansatz as in (2.7). To reduce the computation time one can restrict the α_{j,k} further by using the following observation. The logarithmic structure of the periods with β_j = 0 is completely fixed by the logarithmic structure at the LCS point. Performing on the LCS periods the coordinate transformation to coordinates centered around the point one is interested in, and expanding them in a series around the origin in the new coordinates, gives exactly the needed structure. For the resulting expressions in a three-parameter example see appendices A.1 and A.2.

In [44] a solution of (2.8) around the LCS point in terms of the Mori-cone basis {l^{(i)}} and the triple intersection numbers K_{ijk} was given for any CICY; the fundamental period is given in (2.10). In this expression ρ_i ∈ R are introduced, extending the summation variables n_i. Defining then the derivative operators with respect to them, the period vector ω is computed at the natural indices ρ_i = 0 as in (2.12), with i = 1, …, h^{2,1}.

2.2 An integral symplectic basis

The local periods need to be combined into an integer symplectic basis Π. The two bases are related by a linear transformation Π = m · ω (2.13). At the LCS and orbifold point the (2h^{2,1}+2) × (2h^{2,1}+2) matrix m can be determined purely from monodromy arguments. For the conifold point the monodromies constrain the form of the transition matrix but do not completely fix it. Moreover, the monodromy calculations can become very involved for models with many moduli, especially when the moduli space needs to be blown up and monodromies around exceptional divisors are needed. In principle one could simply use a general ansatz for m and determine it numerically in the overlap of the regions of convergence of Π and ω. But this turns out to be numerically unstable. For one-parameter models it is possible to go to very high order and obtain reasonable results, but already for two-parameter models the convergence of the values is extremely slow. Thus, a systematic method to constrain the transition matrix m is needed.

An alternative to monodromy arguments is the use of the symplectic form on the solution space of the PF operators. This symplectic form was first introduced in [40], and in [30] the same method was applied to the P^2-fibration phase of the P_{1,1,1,6,9}[18] CY. Since the space of periods and the space of solutions of the PF system can be identified, one can represent the symplectic form pairing the periods as a bilinear differential operator acting on the space of PF solutions. The symplectic form Q is then given by (2.14), where the Q_{k,l}(x) are functions of the coordinates and k and l range over a basis of the ring of differential operators modulo the PF ideal generated by the D_j. One imposes the conditions (2.15) enforcing constancy over the moduli space, which leads to a system of coupled linear differential equations for the Q_{k,l}(x). This system allows for the determination of the symplectic form up to an overall normalization η.
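As a concrete illustration of evaluating such a local series numerically, the sketch below sums the well-known fundamental period of the (mirror) quintic, ω_0(x) = Σ_n (5n)!/(n!)^5 x^n, to finite order. This one-parameter toy is only meant to show the mechanics of truncated period evaluation, not the multi-parameter computation of the text:

from math import factorial

def quintic_fundamental_period(x, order=60):
    """Truncated fundamental period of the mirror quintic:
    omega_0(x) = sum_n (5n)! / (n!)^5 * x^n."""
    total = 0.0 + 0.0j
    for n in range(order + 1):
        total += factorial(5 * n) / factorial(n) ** 5 * x ** n
    return total

# The series converges for |x| < 1/5^5; sample a point well inside.
print(quintic_fundamental_period(1e-5))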
The symplectic form in turn enables us to systematically constrain the transition matrix m by demanding that the resulting periods have the desired intersections. This does not fix it completely, as there are combinations of the periods that leave the intersection matrix invariant. For example, given a certain α-cycle that has vanishing intersections with all other cycles except for its β-cycle companion, we could add multiples of the α-cycle to the β-cycle without changing the value of the intersection. Therefore we still need to perform the numerical matching as a final step. However, for our later construction it will be important that the coefficients of certain moduli in the prepotential are rational numbers. As it is not possible to prove this property by a numerical approach, we describe in a later section a way to compute the coefficients analytically, thereby proving that they are indeed rational numbers.

2.3 A (not so) special example: P_{1,1,2,8,12}[24]

As a concrete geometry on which to explore the above constructions we choose (the mirror dual to) the three-parameter (h^{1,1} = 3, h^{2,1} = 243) CY P_{1,1,2,8,12}[24], given by the vanishing locus of its defining polynomial. The hypersurface can be seen as a fibration of a K3 surface, and contains a singular Z_2 curve C, with an exceptional divisor corresponding to a C × P^1 surface, and an exceptional Z_4 point, which is blown up to a Hirzebruch surface Σ_2. It has been studied e.g. in [42,[45][46][47][48]. The extended set of integral points in its toric construction is given in [42]. The Picard-Fuchs operators in these coordinates are given in (2.20), where we have employed logarithmic derivatives Θ_i. We will also make use of rescaled coordinates.

Figure 1: A simplified depiction of the moduli space of P_{1,1,2,8,12}[24], showing only the divisors, intersections and tangencies of interest, including the LCS point and the (x, y, z) = (0, 0, 1) conifold.

The two points of interest in the following are the LCS point (x, y, z) = (0, 0, 0) and the conifold point (x, y, z) = (0, 0, 1). We blow up the conifold point by introducing exceptional divisors, as schematically represented in figure 1, and we define coordinates at the LCS side of the blow-up and at the conifold side of the blow-up. We will work on the LCS side of the blow-up, but the final results are independent of the side chosen.

An integral symplectic basis at the LCS

We would like to obtain an integer symplectic basis at the LCS from ω_LCS, the local basis of periods obtained from (2.12) and printed explicitly in appendix A.1. In practice, we need to calculate a transition matrix m_1 such that Π = m_1 · ω_LCS is an integer symplectic basis. To this end we start by writing the prepotential at the LCS for P_{1,1,2,8,12}[24]. The general expression for such a prepotential is given in [44] (see the schematic form below), where c = −ζ(3)χ/(2πi)^3 with χ the Euler number of the manifold. The classical triple intersection numbers K^0_{ijk} are given in [42]. The b_i are related to the intersections of the second Chern class and the Kähler forms. Both the b_i and χ can be calculated from the Mori-cone basis and the classical triple intersection numbers through explicit expressions given in [44]. The a_i are fixed, modulo an irrelevant integer part, by demanding that the prepotential gives periods with integer monodromies. From the resulting prepotential at the LCS for P_{1,1,2,8,12}[24] we obtain an integer symplectic basis of periods. To calculate m_1 we match the leading behavior of Π and ω_LCS.
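Before turning to the matching, it is useful to record the schematic shape of such an LCS prepotential. The following is a reconstruction of the standard expression of [44] from the quantities named above; index and normalization conventions may differ from those of the text:

F(t) \;=\; -\frac{1}{3!}\, K^0_{ijk}\, t^i t^j t^k \;+\; \frac{1}{2}\, a_{ij}\, t^i t^j \;+\; b_i\, t^i \;+\; \frac{c}{2} \;+\; F_{\rm inst}(q)\,, \qquad c = -\frac{\zeta(3)\,\chi}{(2\pi i)^3}\,,

with t^i the mirror coordinates and F_{\rm inst} the instanton sum.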
To this purpose we insert the leading terms of the mirror maps into Π and work with a general ansatz for m_1. The latter is constrained by demanding that the monodromies M_{x_i} around the LCS divisors are compatible in both bases. The resulting matrix is given in (2.28).

Symplectic form

To calculate the symplectic form (2.14) for P_{1,1,2,8,12}[24] we start by writing the most general ansatz taking into account the order of the PF operators. Imposing then the constraints (2.15), i.e. that the symplectic form is constant throughout the moduli space when evaluated on solutions of the PF system, we arrive at expressions for the coefficients A_i. Their exact form is listed in appendix A.3. Inserting the integer symplectic basis of periods at the LCS that we obtained above into the symplectic form yields the intersection matrix. This allows us to fix η = −16iπ^3 so that the symplectic form returns the correctly normalized intersections.

A (numerical) integral symplectic basis at the conifold

As described in section 2.1, we can generate a local basis of periods at the (x, y, z) = (0, 0, 1) conifold by expressing the PF system in the coordinates (2.22) and finding solutions to it order by order. A set of solutions ω_c is given in appendix A.2. To transform ω_c into an integer symplectic basis at the conifold we need to determine a suitable transition matrix, as explained in section 2.2. Instead of working with a fully general ansatz for this matrix, we restrict from the start the candidates among which the combinations giving the α-cycles will be chosen, such that they do not mix with the would-be β-cycles, i.e. (m_2)_{1-4,5-8} = 0. Inserting Π = m_2 · ω_c into the symplectic form and demanding the correct intersections, we find the relations that have to hold between the entries of m_2.

To proceed with the numerical matching we need to select points in the overlap of the regions of convergence of ω_LCS and ω_c. Given the conditions (2.15) imposed on the symplectic form, the intersection of two periods computed with it is a constant. Taking mixed intersections between the periods in ω_LCS and in ω_c, we obtain functions that plateau in the region where the series expansions still correctly capture the behavior of both periods, thereby guiding us in the choice of points. In this way we obtain the transition matrix transforming ω_c into an integer symplectic basis. Both ω_LCS and ω_c were expanded to O(x^{11}) to perform the matching, but in spite of this the convergence of the numerical values is still not sufficient, as some of the entries show an error of a few percent when compared to the exact result (2.40) that we calculate below. On top of this, the result is very sensitive to slight changes in the choice of points: shifting one of the points from (x, y, z) to (x, y − 10^{−5}, z) changes the result noticeably. These problems can be solved by an analytic determination of the transition matrix m_2.

Analytic transition matrix

In this section we provide an analytic solution for the transition matrix to the conifold in the P_{1,1,2,8,12}[24] CY, which is an example of a P^1-fibration. This leads to expressions for the periods in terms of hypergeometric 3F2 functions and derivatives thereof, which can be evaluated analytically, allowing us to give an exact expression for the prepotential at the conifold that does not involve any factors determined only numerically.
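The numerical matching step can be phrased as an ordinary least-squares problem: evaluate both truncated bases at several points in the overlap region and solve for the matrix relating them. The sketch below shows this idea with hypothetical callables Pi and omega_c returning truncated period vectors; it is a schematic of the procedure, not the code actually used:

import numpy as np

def fit_transition_matrix(Pi, omega_c, points):
    """Least-squares fit of m2 in Pi(p) = m2 @ omega_c(p) over sample points.

    Pi, omega_c: hypothetical callables mapping a moduli-space point to a
    length-8 complex period vector (truncated series evaluations)."""
    A = np.array([omega_c(p) for p in points])   # shape (npts, 8)
    B = np.array([Pi(p) for p in points])        # shape (npts, 8)
    # B = A @ m2.T row-wise, so solve A @ X = B in the least-squares sense.
    X, *_ = np.linalg.lstsq(A, B, rcond=None)
    return X.T

# With well-chosen points inside both regions of convergence the fitted
# entries should plateau; the instability of this plateau is precisely the
# problem the analytic computation described next resolves.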
This also shows that all factors in the prepotential are rational numbers, a fact important for our algorithm described in the next section. To determine the prepotential to high orders we introduce several bases:
• The symplectic basis Π.
• The local PF basis around the conifold, ω_c.
• The hypergeometric basis.
The PF basis ω_c has the advantage that it is easy to evaluate to high order, as described in the previous section. The hypergeometric basis can be related to the symplectic basis exactly. Moreover, it can be expanded around the conifold in terms of derivatives of hypergeometric functions, which allows us to match it exactly to the local basis. Combining these two transformations gives the relation between the local basis and the symplectic basis. The relations between the different bases are shown in figure 2. The hypergeometric basis ω is the local basis around the LCS (2.12), rewritten in closed form. The transition matrix between this basis and the symplectic basis, m_1, can be determined purely from monodromy considerations around the LCS, with the result (2.28).

For the analytic continuation to the conifold we rewrite the fundamental period in terms of a hypergeometric function. One can perform this sum for any coordinate; without loss of generality we choose the z direction. We denote the l-vector corresponding to this direction by l^{(z)}. The fundamental period then takes the form (2.35), schematically a sum of terms x^{n_1+ρ_1} y^{n_2+ρ_2} z^{ρ_z} f(n_1, n_2, ρ_1, ρ_2) multiplying a hypergeometric function, where f denotes a complicated combination of Γ-functions independent of the coordinates and a, b are parameter vectors of length p and q depending on the l-vectors. Here p is the sum of the negative entries of the charge row vector l^{(z)} (plus 1), and q is the sum of the positive entries of the same row. Due to the CY condition these sums have to be equal, and p = q + 1. The entries of l^{(z)} appear inversely in the parameters a of the hypergeometric function. The exact form of the hypergeometric function is model dependent. The need to compute derivatives with respect to the parameters later on imposes the computational constraint that no entry in l^{(z)} may have an absolute value larger than 2, as otherwise rational parameters beyond 1/2 appear.

We now specialize to a particular charge row, for which the ordering and the number of zeros do not matter. This structure appears quite commonly, e.g. in the hypersurfaces P_{1,1,2,2,2}[8], P_{1,1,2,2,6}[12] and P_{1,1,2,8,12}[24] as well as in the complete intersections X(2|4)(11|1111), X(2|2|3)(11|11|111) and X(2|2|2|2)(11|11|11|11) [42]. The hypergeometric functions appearing in these manifolds when summing over the z coordinate have only integer and half-integer parameters. To avoid cluttering the formulas we now specialize again to our example P_{1,1,2,8,12}[24], but the computation is similar for all such models. In this case the hypergeometric function in (2.35) takes the form (2.36). The periods are now given by up to third-order derivatives of this hypergeometric function with respect to its parameters. These can be evaluated e.g. using the HypExp2 package [49]. It has been proven in general that it is always possible to rewrite the generalized hypergeometric functions in terms of multiple polylogarithms [50], allowing us to express the derivatives in terms of harmonic polylogarithms (HPLs). These can then be expanded around the conifold coordinates x_i to any necessary order. The expansion in terms of the x_1 and x_2 coordinates has to be calculated term by term.
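Where the continuation requires derivatives of 3F2 with respect to its parameters, these can at least be checked numerically. A minimal sketch using mpmath with central finite differences in one parameter; the parameter values below are placeholders, not those of the model:

import mpmath as mp

mp.mp.dps = 30  # working precision

def f(a1, z):
    # Generalized hypergeometric 3F2(a1, 1/2, 1/2; 1, 1; z)
    return mp.hyper([a1, mp.mpf(1)/2, mp.mpf(1)/2], [1, 1], z)

def d_da1(a1, z, h=mp.mpf('1e-10')):
    """Central finite-difference derivative of 3F2 w.r.t. its first parameter."""
    return (f(a1 + h, z) - f(a1 - h, z)) / (2 * h)

print(d_da1(mp.mpf(1)/2, mp.mpf('0.3')))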
While these can in principle be calculated to arbitrary order, the time needed to evaluate the derivatives becomes impractical already at low orders. Thus, we found it easier to calculate the transition matrix m_2 to the local basis and to compute the instanton corrections in these coordinates. Transforming the PF operators into these coordinates and using the ansatz (2.7) gives the local periods. Matching the coefficients in the expansion around the conifold uniquely fixes the transition matrix, whose entries involve combinations such as a_6 = 4π^2 + 25 log^2(2) + 9 log^2(3) + 30 log(2) ⋯. While these expressions are rather long and non-rational, the important point is that all entries are known analytically, such that the cancellation of the irrational factors in the following steps is manifest.

Applying this matrix to the local solution around the conifold gives an expression for the periods in the symplectic basis to arbitrary order. The periods themselves, especially those corresponding to the β-cycles, are too long to be presented here. After dividing by the fundamental period, the α-periods, which represent the mirror map, take the form (2.41). Changing the x_3 coordinate to x̃_3 = x_3^2 and defining suitable variables allows us to invert the mirror map order by order. The numerical factors in the expressions for q_{U_1} and q_{U_2} arise from the chosen coordinates; if we had used the x, y and z coordinates instead, these would have been absent. The resulting mirror map is given in (2.42) and (2.43). Moreover, the hypergeometric representation allows us to compute the mirror maps around the conifold exactly in the conifold coordinate, evaluated at x_1 = x_2 = 0. In this case the harmonic polylogarithm (HPL) actually reduces to a simple logarithm, but at higher orders more complicated HPLs of higher weight appear. In appendix B.1 we give the basic definitions of harmonic polylogarithms.

Finally, inserting the mirror map into the periods allows us to write down the prepotential around the conifold, equation (2.45). Note that all polynomial terms involving U_1 or U_2 are rational. The only non-rational terms are the quadratic Z^2 term and the constant ζ(3) term shown in the last row. We also observe that the linear terms related to the U_i are all given by the same topological numbers as in the LCS regime. The same holds for the manifold P_{1,1,2,2,6}[12]. Together with the observation that the topological numbers at a conifold transition are given by sums of the LCS topological numbers [51], this gives rise to the conjecture that all coefficients in the prepotential around the conifold, except the quadratic terms, are rational numbers. While we cannot prove this for the general case, it seems to be a rather frequent property.

Let us close this elaborate mathematical section with a comment on possible generalizations. If one wants to go beyond P^1-fibrations, expansions of either hypergeometric 3F2 functions with rational parameters or 4F3 functions evaluated at 1 are needed. These lead to much more complicated expressions in the transition matrices. For example, in the one-parameter complete intersection of four quadrics in P^7 the fundamental period takes the form (2.46), whose value at 1 is given in [52] in terms of the critical L-value of a weight-four Hecke eigenform built from the Dedekind η-function η(τ). This value will appear in the entries of the transition matrix. In appendix B.2 we provide the basic definitions of critical L-values.
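The order-by-order inversion of a mirror map of the form q = z + a_2 z^2 + a_3 z^3 + … mentioned above is a standard series reversion. A small sympy sketch with illustrative coefficients (not those of the model) shows the mechanics:

import sympy as sp

q = sp.symbols('q')
# Illustrative coefficients of a truncated mirror map q(z) = z + a2*z**2 + a3*z**3.
a2, a3 = sp.Rational(3, 2), sp.Rational(-5, 4)

# Invert iteratively: z = q - a2*z**2 - a3*z**3, truncating at O(q**4) each step.
z_of_q = q
for _ in range(3):
    z_of_q = sp.expand(q - a2*z_of_q**2 - a3*z_of_q**3)
    z_of_q = (z_of_q + sp.O(q**4)).removeO()

print(sp.expand(z_of_q))   # z(q) = q - 3*q**2/2 + 23*q**3/4, correct to O(q**4)

Each iteration gains one correct order, so three passes suffice for a cubic truncation; the same fixed-point scheme extends to the multivariate maps used in the text.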
L-values are highly non-rational, and for the given example even expressions in terms of Γ-functions are unknown [52]. But many identities for ratios of L-values are known, giving surprisingly simple, often rational, results. As an example, consider the weight-4 form of [52]; then the identities listed there hold. Moreover, for critical values of a modular form g it holds that L(g, 2k)/L(g, 2k − 2) = (algebraic number) · π^s (2.51) as well as L(g, 2k + 1)/L(g, 2k − 1) = (algebraic number) · π^s (2.52) for some integers k and s [53]. The algebraic numbers turn out to be rational in many cases, as e.g. in the example above. Thus our construction could also work in these cases, but this would require much more mathematical machinery, which is beyond the scope of this paper.

3 The quest for |W_0| ≪ 1

Now that we have developed the tools to calculate periods close to the conifold, we can continue towards the goal of this paper. We want to investigate whether a method similar to that proposed by DKMM can be established in a region of moduli space that is close to a conifold point. After reviewing the construction at the LCS point as described by DKMM, we will start the conifold discussion by considering the one-parameter model of the quintic (or rather its mirror). Since this model has only one complex structure modulus, there is no direct way of generalizing the DKMM construction; instead it turns out to be rather a tuning problem whether fluxes can be chosen such that in the minimum |W_0| ≪ 1. Indeed, one can find fluxes such that W_0 ≈ 10^{−4}, but to formulate a general mechanism, geometries with more complex structure moduli are needed. As explicitly elaborated in the previous section, in such multi-parameter models there exists a regime, called Coni-LCS in the following, which lies at the tangency between the conifold and the LCS locus. We will extend the DKMM construction to vacua close to the Coni-LCS regime and explicitly demonstrate the procedure using the P_{1,1,2,8,12}[24] example from section 2.3.

3.1 Review of the DKMM construction

Let us first briefly review the construction of Demirtas, Kim, McAllister and Moritz [1]. The authors propose a two-step procedure to generate exponentially small W_0 at weak string coupling and large complex structure. When using mirror variables, the prepotential splits into classical and non-perturbative terms. Initially neglecting the non-perturbative terms, the first step is to find quantized fluxes for which the F-terms and the superpotential vanish perturbatively. DKMM formulate a lemma which gives a sufficient condition for the construction of such solutions and directly determines the flat direction. In the second step, the previously neglected non-perturbative terms generate a potential along the flat direction, which can generically be stabilized at an exponentially small value by a racetrack-like procedure.

Let X be an orientifold of a Calabi-Yau 3-fold with O3-planes and wrapped by D7-branes, carrying D3-brane charge −Q_{D3}. With {A^a, B_b} a symplectic basis, the period vectors are defined in the standard way. The X^a are projective coordinates on the complex structure moduli space, and F is the prepotential with F_a = ∂_{X^a} F. We continue to work in a gauge where U^0 = 1, so F_0 = 2F − U^i F_i.
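For completeness, the period vector just referred to has the standard schematic form; this is a reconstruction, with ordering conventions that vary between references, and the superpotential normalization suppressed:

\Pi \;=\; \begin{pmatrix} \int_{B_b} \Omega \\ \int_{A^a} \Omega \end{pmatrix} \;=\; \begin{pmatrix} F_b \\ X^a \end{pmatrix}, \qquad W \;\propto\; (f - S\,h)^T \,\Sigma\, \Pi\,,

with f, h the flux vectors introduced below and Σ the symplectic matrix.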
From the 3-form field strengths F_3, H_3 one similarly obtains the flux vectors f and h. With the symplectic matrix Σ = (0, −1; 1, 0) and S the axio-dilaton, the Kähler and superpotential take the standard form (cf. the schematic expression above). When written in terms of the mirror variables, the tree-level prepotential F can be separated into a classical, perturbative part F_pert and a non-perturbative instanton part F_inst. The expressions refer to the mirror CY, so the K_abc are the triple intersection numbers of the mirror, and the sum runs over effective curves of the mirror. The constants a_ab, b_a are rational numbers, and ξ = −ζ(3)χ/(2(2πi)^3), with χ the Euler number of the CY. The contributions to the superpotential stemming from F_pert and F_inst are denoted W_pert and W_inst respectively, such that W = W_pert + W_inst.

Since the axionic real parts of U do not appear in the perturbative Kähler potential, they enjoy a discrete Z^n shift symmetry which is broken by generic fluxes. The shift symmetry generates a monodromy transformation on the flux vectors, and only if such a monodromy, combined with an appropriate SL(2, Z) transformation (H, F) → (H, F + rH), r ∈ Z, leaves the flux vectors invariant can there be an unbroken remaining shift symmetry. The first step then amounts to finding fluxes that allow for an unbroken shift symmetry, and finding moduli values that satisfy the F-flatness conditions with W_pert = 0. The following is a sufficient condition for the existence of such a perturbatively flat vacuum. If a pair of Z^n vectors M, K exists such that
• p = N^{−1} K lies in the Kähler cone of the mirror CY,
• and a · M and b · M are integer-valued,
then the fluxes are compatible with the Q_{D3} tadpole bound, and the potential is perturbatively flat along U = p S with W_pert|_U = 0.

The non-perturbative contributions can now stabilize the remaining flat direction. The effective superpotential along U in terms of the axio-dilaton S is given at weak coupling by an instanton sum. The final idea is to find flux quanta that stabilize S via a racetrack scenario, balancing the two most relevant instantons q_1, q_2 against each other. This is achieved when p · q_1 ≈ p · q_2. The conditions indicate that h^{2,1} ≥ 2 is necessary in order to apply this mechanism. For a one-parameter model, the vectors and matrices are just numbers, and the condition K^2 N^{−1} = 0 forces K = 0. But then the perturbative vacuum found by the mechanism is U = N^{−1} K S = 0, which is both outside the LCS regime of validity and has no flat direction along which the non-perturbative terms could generate a small |W_0|.

For a complete stabilization of all moduli, the hope is to continue with a KKLT-like procedure starting from this small W_0. Unfortunately it is not quite so straightforward, as examples show that the perturbatively flat direction produces a mass scale of order |W_0|, which coincides with the mass scale of the Kähler moduli in the KKLT scenario. The low energy theory must then contain not only the Kähler moduli but also the axio-dilaton, and the Pfaffian prefactors which appear in the non-perturbative superpotential cannot be treated as constants. DKMM argue that, under some assumptions, the unbroken shift symmetry of the perturbatively flat vacuum guarantees that the contributions of the axio-dilaton to the Pfaffian factors are exponentially small. Then one could reasonably approximate the Pfaffians by constants. Showing this explicitly is however left open, and will also not be treated in our work.
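The racetrack stabilization can be made concrete with a toy two-instanton effective superpotential W(S) = A_1 e^{2πi c_1 S} + A_2 e^{2πi c_2 S} with c_1 ≈ c_2; solving ∂_S W = 0 then gives an exponentially small |W|. All numbers below are illustrative, not a flux choice from any model:

import numpy as np

# Toy racetrack: W(S) = A1*exp(2j*pi*c1*S) + A2*exp(2j*pi*c2*S), c1 close to c2.
A1, A2 = 1.0, -0.9
c1, c2 = 0.101, 0.100

# dW/dS = 0  =>  exp(2j*pi*(c1 - c2)*S) = -A2*c2 / (A1*c1)
S0 = np.log(-A2 * c2 / (A1 * c1) + 0j) / (2j * np.pi * (c1 - c2))
W0 = A1 * np.exp(2j * np.pi * c1 * S0) + A2 * np.exp(2j * np.pi * c2 * S0)

print("S0 =", S0)          # Im(S0) grows as c1 - c2 shrinks (weak coupling)
print("|W0| =", abs(W0))   # exponentially small, here of order 1e-7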
3.2 |W_0| ≪ 1 in the conifold regime

For really getting the uplifted dS minimum in the last step of KKLT, one needs a strongly warped throat. Thus, one needs a similar construction in the region close to a conifold point. This is not straightforward, as the periods take a completely different form when expanded around such a point. To set the stage, let us consider the simplest model, namely the (mirror of the) quintic, which has just a single complex structure modulus. Close to the conifold point the period vector Π^T = (X^0, X^1, F_0, F_1) can be expressed in terms of constants [25][26][27][28][29] that are only known numerically. Note that these are in general irrational numbers, though featuring certain correlations and rationality properties. The relation B = C is a consequence of the existence of a prepotential for these periods. The corresponding Kähler potential for the complex structure modulus is given in (3.10). This will be the leading-order Kähler potential in the volume-dominated regime, i.e. for V|Z|^2 ≫ 1. Including also the overall Kähler modulus V and the axio-dilaton S, the total unwarped Kähler potential becomes (3.11). For the strongly warped, throat-dominated regime V|Z|^2 ≪ 1, the effective action was derived in [14][15][16]. Here the warping backreacts non-trivially, so that the Kähler potential takes the different form (3.12), with ξ = c M^2, c an order-one parameter and M denoting the F_3 flux along the conifold A-cycle. This Kähler potential features a warped no-scale structure (3.13), where the sum runs over I, J ∈ {T, Z}. Thus, precisely for the Kähler potential (3.12) the term of order O(ξ) vanishes.

Moduli stabilization

A general flux-induced superpotential leading to the stabilization of the conifold modulus at exponentially small values can be expanded as in (3.16). Here we have chosen h̃_1 = 0 in order to avoid (S Z log Z)-terms. Note that while the quantized fluxes are integers, the coefficients M_n and K_n are in general complex numbers. Next we have to solve the minimum conditions D_Z W = D_S W = 0. Using the Kähler potential (3.11), one finds the condition (3.17) for the volume-dominated case. As shown in [16], in the warped, throat-dominated case the warped no-scale structure (3.13) implies that the minimum of the scalar potential is at ∂_Z W ≈ 0. This gives the same result as (3.17) once we formally set the B-term to zero. Solving (3.17), in both cases at leading order the Z modulus can be written as (3.20). For K̃_1 > M and Re(S) > 1 the value of the conifold modulus can be guaranteed to be exponentially small, hence making our expansion in orders of Z self-consistent. Looking at the axio-dilaton condition D_S W = 0, at leading order we find (3.21), where D_Z W = 0 was invoked. As in [29], for the stabilization of the axio-dilaton we now distinguish the two cases K_0 ≠ 0 and K_0 = 0.

Case A: K_0 ≠ 0. In this case, the terms linear in Z in (3.21) can be neglected, so that one gets a simple solution. For Re(S) ≫ 1 we need to require the condition (3.23). For the resulting value of the superpotential in the minimum one obtains (3.24). Thus, in order to have an exponentially small value of the superpotential in the minimum, the leading-order term w_0 in (3.24) must vanish or at least be very tiny. Thinking of M_0 and K_0 as two-dimensional vectors, the superpotential w_0 vanishes if M_0 and K_0 are collinear. Since M_0 and K_0 generically contain model-dependent complex-valued parameters, solving this condition for the fluxes becomes a number-theoretic question. Let us analyze this in more detail using the concrete values for the (mirror of the) quintic.
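For this analysis it helps to keep in mind the characteristic local structure of the conifold periods. Schematically, and as a standard reconstruction rather than the exact expansion of the text, with Z the conifold coordinate:

X^1 = Z\,, \qquad F_1(Z) \;=\; \frac{Z}{2\pi i}\,\log Z \;+\; A \;+\; B\,Z \;+\; \mathcal{O}(Z^2)\,,

while X^0 and F_0 are analytic in Z; the constants A, B (and the analogous ones in X^0, F_0) are the numerically known parameters referred to above.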
First one realizes that, due to (3.23), w_0 = 0 implies Re(S) = 0, which means the string coupling is infinitely large and thus outside the regime of validity. Moreover, using w_0 = 2i Re(S) K_0 and Re(S) > 1, one can derive a lower bound, where we used that due to K_0 ≠ 0 not both h_0 and h̃_0 are allowed to vanish. Thus, at least for the specific case of the quintic, in Case A the superpotential in the minimum is bounded from below by |w_0| > O(10^{−1}).

Case B: K_0 = 0. This means that we have h_0 = h̃_0 = 0, so that K̃_1 = K_1 = h_1 and M = −f_1 are both integers. Now, up to order O(Z), the condition (3.21) reads (3.26), where Z is related to S as Z = ζ_0 exp(−(2πK_1/M) S). We observe that (3.26) is nothing other than the vanishing F-term condition F_S = 0 for an effective superpotential. This is very reminiscent of the KKLT superpotential, although here we are dealing with a no-scale potential. Writing S = s + ic, one obtains an explicit expression for the C_0 axion, and the dilaton is given by the solution of a transcendental equation. As in KKLT, this only admits a solution in the controllable regime if the left-hand side is very tiny, M_0 ≪ 1. Whether the flux landscape admits such values is a model-dependent number-theoretic question. Let us recall the parameters of a concrete flux choice, which for small enough K_1 lies in the perturbative regime. For the value of the conifold modulus we find |Z_0| ∼ 4 · 10^{−7}, and the value of the superpotential in the minimum is of the order of M_0. Therefore, Case B provides a controlled KKLT-like stabilization of the complex structure and axio-dilaton moduli, giving for the quintic a Minkowski minimum of the no-scale scalar potential with a small value of |W_0|. This value was dialed by a suitable choice of flux quantum numbers. In our case these were of order O(10^2), so that there is the concern of overshooting some tadpole cancellation conditions; in the example, there will be a contribution to the D3-brane tadpole.

Moduli masses

The latter result is encouraging for extending the model à la KKLT by adding a non-perturbative contribution to the superpotential that depends on the Kähler modulus T. Recall that in the DKMM construction the issue arises that the mass of the lightest complex structure modulus is of the same order as the mass of the Kähler modulus, calling for a more detailed analysis. Let us see how the situation looks in the conifold regime. To estimate the masses, we compute the Hessian V_ab = ∂_a ∂_b V in the minimum, which for a no-scale model simplifies considerably. Since F_I = 0 in the minimum, the only non-vanishing contributions can come from (3.34). The masses in the canonically normalized field basis are the eigenvalues of the matrix K^{ac} V_cb, where K^{ac} denotes the inverse Kähler metric. In the volume-dominated regime, we find mass eigenvalues with definite scalings in V and |Z|. In Case B we also have the relation |Z| ∼ M_0/s. The expression for the mass m_Z makes it evident that the expressions in this regime can only be valid for V|Z|^2 ≫ 1, because otherwise the mass of the conifold modulus would come out larger than the string scale. Moreover, one always finds the hierarchy m_Z ≫ m_S. Extending this model to KKLT by also including a non-perturbative contribution A exp(−aT) depending on the overall Kähler modulus, the mass of the latter scales as in (3.36), which for small M_0 can be kept much smaller than the masses of the complex structure and axio-dilaton moduli.
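Returning for a moment to the Case B stabilization above, the transcendental dilaton condition can be illustrated numerically. The following is a minimal sketch, assuming the KKLT-like form W(S) = M_0 + A exp(−2πK_1 S/M) described in the text together with a no-scale Kähler potential K = −log(S + S̄); all flux values are illustrative, with signs chosen so that a solution exists:

import mpmath as mp

M0, A = mp.mpf('-1e-4'), mp.mpf('1')   # |M0| << 1 is required for control
K1, M = 8, 16
a = 2 * mp.pi * K1 / M

def F_S(s):
    # F-term along the real dilaton direction S = s:
    # F_S = W'(s) + K_S W = W'(s) - W(s)/(2s)
    W  = M0 + A * mp.exp(-a * s)
    dW = -a * A * mp.exp(-a * s)
    return dW - W / (2 * s)

s0 = mp.findroot(F_S, 4.0)       # dilaton value in the minimum
Z0 = mp.exp(-a * s0)             # conifold modulus, with zeta_0 set to 1
print(s0, Z0)                    # here s0 is about 4 and |Z0| about 4e-6

The qualitative features match the discussion above: the solution only lands at weak coupling, with exponentially small |Z_0|, when |M_0| is tiny.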
Next consider the throat-dominated regime, where for Case A we find the corresponding mass eigenvalues. The expression for m_Z nicely shows that we need V|Z|^2 ≪ 1 in order for the mass to be smaller than the string scale. Moreover, one has the hierarchy m_S ≫ m_Z. However, at least for the concrete example of the quintic, we do not get |W_0| ≪ 1 in Case A. For Case B there is an important change in the mass scales, so that now we have the inverted hierarchy m_Z ≫ m_S. (In the more precise relations also factors of the dilaton and the fluxes appear, but they do not change our conclusions.) In addition, taking into account (3.36), for sufficiently small |Z| the Kähler modulus can be kept lighter than the axio-dilaton, i.e. m_S ≫ m_τ.

This looks very promising, so let us summarize our findings: in Case B, by a suitably tuned choice of fluxes one can stabilize the conifold modulus and the axio-dilaton in the controlled regime such that |W_0| ∼ O(10^{−4}) and their masses are hierarchically larger than the mass of the Kähler modulus. Thus, the AdS KKLT minimum seems to exist. In the throat-dominated regime, there is also a tiny warp factor that in principle could allow uplifting the minimum to dS. However, in this case other issues might appear, like the appearance of light KK modes localized at the tip of the long throat, whose mass has been shown [16] to scale like the mass of the Z modulus. This might spoil the validity of the employed effective action of just the conifold modulus and the axio-dilaton.

While in the simple one-parameter model we could explore the stabilization of the conifold modulus, generalizing the DKMM procedure requires more moduli to work with. That Case B with h_0 = h̃_0 = 0 showed more promise is convenient, since these fluxes are also suggested by the procedure of DKMM. In the following we shall propose a general algorithm which extends the work of DKMM to the Coni-LCS regime of a multi-parameter CY.

3.3 |W_0| ≪ 1 in the Coni-LCS regime

Consider an n-parameter CY with one modulus close to the conifold, described in terms of the perturbative prepotential and instanton series (3.39), with q_i the coordinates used to invert the mirror map (since we are close to the conifold, these coordinates are not simply exponentials of the moduli as in the LCS regime; rather, the conifold modulus enters linearly, cf. (2.41)) and c running over effective curves. To simplify notation, we use Latin indices to denote all moduli X^i = (U, Z)^T, i = 1, …, n, and Greek indices to denote only the LCS moduli U^α, α = 1, …, n − 1. If a pair of Z^n flux vectors f̃, h̃ exists such that
• p^α = (N^{−1})^{αβ} h̃_β lies in the Kähler cone of the mirror CY,
then the fluxes are compatible with the Q_{D3} tadpole bound, and there is a perturbatively flat vacuum, along which W_pert|_{U,Z} ≈ ZM/(2πi) is exponentially small in Re(S). As before, the conditions imply that too few moduli break the mechanism; here h^{2,1} ≥ 3 is necessary.

Following a three-step procedure, let us outline in more detail how this works. The periods are computed from the prepotential as in (3.43). By restricting our choice of fluxes appropriately, we obtain a superpotential which, similar to the DKMM case, is homogeneous of order two at Z = 0. Note that for this to work, B_i f̃^i and A_{αi} f̃^i must be integer-valued, which calls for the parameters A_ij and B_i in the prepotential (3.39) to be rational numbers.
The resulting superpotential can be expanded as in (3.45). To proceed, at zeroth order in Z we first stabilize the U^α moduli in a supersymmetric minimum with vanishing superpotential (3.46), with N_αβ = K_{iαβ} f̃^i. Provided N_αβ is invertible, the minimum is located at (3.47). Demanding that W = 0 results in a condition on the fluxes, (N^{−1})^{αβ} h_α h_β = 0. Integrating out the moduli U^α, and since we invoked a vanishing superpotential at zeroth order in Z, the remaining terms of the superpotential are at least of order Z, with the parameters given in (3.42). For the F-term we find that the Kähler potential contribution to D_Z W is of subleading order. Thus, the conifold modulus is stabilized at (3.50). What we have found is a perturbatively flat vacuum extending the lemma of DKMM, where the complex structure moduli are stabilized in terms of the axio-dilaton as log(Z) ∼ U^α ∼ S. The final step is to integrate out Z, resulting in an effective superpotential (3.51) composed of the instanton superpotential W_inst = −f̃^i ∂_i F_inst as well as the linear corrections in Z resulting from W_Z = W_pert|_{Z=Z_0} = Z_0 M/(2πi). Similar to DKMM, such an effective non-perturbative superpotential has the potential to stabilize the axio-dilaton by choosing fluxes that balance the leading terms against each other in a racetrack-like way. As long as the approximations we made along the way hold true in the minimum, the resulting W_0 can be stabilized at exponentially small values. Here it is important to keep the instanton series under control, as the conditions |q_i| < 1 result in non-trivial constraints on the fluxes we may choose.

3.4 Example: P_{1,1,2,8,12}[24]

Now let us apply this generic algorithm to the example P_{1,1,2,8,12}[24] worked out in detail in section 2.3. Recall the form of the prepotential (2.45), from which one can read off the data for the perturbative part: K_111 = 8, K_112 = 2, K_113 = 4, K_123 = 1, K_133 = 2 (3.52). Moreover, the leading instanton contributions are given in (3.53). The generic relation (3.47) provides a minimum at U^α ∼ S which is flat along S as long as the condition (3.54) on the fluxes is satisfied. Additionally, the conifold modulus is stabilized by (3.50) with the parameters (3.55). Note that, with the exception of M_1, the parameters are real, and |ζ_0| = 1/(2π) is independent of the fluxes. Hence, the conifold modulus is guaranteed to be small for Re(S) ≫ 1, and our trusted regimes overlap. So to first order in Z, which we can trust if we can stabilize at Re(S) ≫ 1, we have a "perturbatively flat vacuum". The final step is to realize a racetrack-like vacuum for S with Re(S) ≫ 1, resulting in |W_0| ≪ 1. The effective superpotential (3.51) evaluates to (3.56).

By now we have several constraints on the fluxes. Besides the original choices and the condition we get from the U^α minimization, we need Re(S) ≫ 1. The instanton expansion is under control if |q_i| < 1, with the q_i given in (2.41). Altogether this gives a set of inequalities, plus further ones from the instanton series. Also, it is assumed that f̃_3 ≠ 0 and 2f̃_1 + f̃_3 ≠ 0, in order to be able to invert the relations of steps 1 and 2. It is straightforward to find flux combinations that fulfill these requirements without going to very large flux numbers, e.g. (3.59). The final step is to search for a racetrack-type Minkowski minimum close to the perturbatively flat minimum.
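The first two steps of the procedure amount to elementary linear algebra and can be sketched in a few lines. In the snippet below the intersection numbers and flux vectors are hypothetical placeholders, chosen only so that N is invertible and p lands inside a putative Kähler cone; they are not the fluxes (3.59) of the text:

import numpy as np

# K[i, a, b] plays the role of K_{i,alpha,beta}; entries are illustrative.
K = np.zeros((3, 2, 2))
K[0] = [[8, 2], [2, 0]]
K[1] = [[2, 0], [0, 0]]
K[2] = [[4, 1], [1, 0]]

f = np.array([1, -2, 2])     # hypothetical flux vector f~
h = np.array([7, 2])         # hypothetical flux vector h~

N = np.einsum('iab,i->ab', K, f)   # step 1: N_ab = K_{iab} f^i
p = np.linalg.solve(N, h)          # step 2: flat direction U_alpha = p_alpha * S
print("p =", p)                    # both components positive here, i.e. inside
                                   # the (assumed) Kaehler cone

Step 3, the racetrack stabilization of S along this valley, then proceeds as in the toy example shown earlier.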
Semi-analytically minimizing the effective scalar potential for S, with the superpotential (3.56) evaluated along the perturbatively flat valley, we find approximate positions for the axio-dilaton (see figure 3) that lie close to the minimum of the full scalar potential depending on all eight real scalar fields. This true Minkowski vacuum can then be found by a numerical search using those starting points. We have checked that in this example, for the specific choice of fluxes (3.59), such a numerical minimum indeed exists. With these values we observe that the instanton series is nicely under control, with |q_i| ≈ (2 · 10^{−5}, 0.2, 4 · 10^{−6}). The superpotential in this minimum is very well approximated by (3.56) and evaluates to W_0 = −3.10 · 10^{−6} (3.61). Sections through the full potential are shown in figure 4. Computing the mass eigenvalues for our example, we find a very heavy eigenvalue corresponding to the conifold modulus, two less heavy directions which mix the complex structure moduli U_i with the axio-dilaton, and a very light direction along the perturbatively flat vacuum: {m^2} = {6 · 10^{14}, 1 · 10^{3}, 3 · 10^{2}, 2 · 10^{−11}} M_pl^2 (3.62). The smallest value is approximately |W_0|^2, which also corresponds to the mass scale of the Kähler modulus in the KKLT scenario. The challenge of further stabilizing the remaining moduli thus persists from the LCS point.

In an inexhaustive search over fluxes, performing the semi-analytic minimization of the effective potential for S to keep the computation tractable, we find more than 10^4 (approximate) vacua for which |W_0|^2 ≤ 10^{−6}, with values like |W_0| ≈ 10^{−12} being commonplace. Indeed it seems that arbitrarily small values of W_0 can be reached with reasonably small fluxes; however, it is not clear whether those minima are true vacua or whether the approximations and numerics break down at such small values. This has to be tested case by case using the full potential without approximations, as has been done in the example above. The search suggests that examples with reasonably small W_0, such as the one discussed, are nonetheless plentiful.

4 Conclusions

In this paper we have extended the DKMM construction of minima with small values of |W_0| to the point of tangency between the conifold and the LCS regime. We found that it is possible to construct vacua with arbitrarily small values of W_0 for reasonable values of the fluxes. As a proof of principle, the proposed construction was successfully applied to an explicit example of a CY 3-fold. With O(10^2) fluxes we explicitly found a minimum with |W_0| ≈ 10^{−6}, while a broad search revealed that values of the superpotential can easily be as small as 10^{−12}. These examples seem to be good candidates for use in a KKLT-like construction. The inclusion of the Kähler moduli and their explicit stabilization à la KKLT was not considered in detail. The potential issue of DKMM concerning the masses of the Kähler moduli and the lightest complex structure moduli remains for future investigation.

Let us emphasize again that rational coefficients in the scalar potential are a necessary requirement for the mechanism to work. An exact computation of these values requires analytic knowledge of the transition matrix of the periods to the conifold. We have shown that for a certain class of models these can be calculated analytically using expressions for the periods in terms of harmonic polylogarithms. Moreover, we expect this rationality property to hold in more general models as well.
The computation in these more general cases requires evaluations of, or identities between, L-values of (twisted) Hecke eigenforms, which are currently being developed [55] but are beyond the scope of this paper.
Hypertension in a patient with medullary sponge kidney

Abstract

Rationale: Medullary sponge kidney (MSK) is a congenital renal disorder characterized by recurrent nephrolithiasis or nephrocalcinosis. Recently, it has been found that MSK can also be combined with other diseases, such as primary aldosteronism and Beckwith-Wiedemann syndrome, but whether it is associated with secondary hypertension remains unknown. Patient concerns: A 22-year-old hypertensive female presented to our hospital with hypokalemia and hypertension. Diagnosis: The laboratory examination showed secondary aldosteronism. The common causes of secondary aldosteronism include renal artery stenosis, glomerulonephritis, lupus nephropathy, and diabetic nephropathy, all of which were excluded except MSK. Interventions: She was treated with angiotensin-converting enzyme inhibitors. Outcomes: Her blood pressure, serum potassium, and plasma renin levels normalized after treatment with angiotensin-converting enzyme inhibitors. Lessons: We presume that MSK may be associated with secondary hypertension, and that the mechanism may be activation of the renin-angiotensin-aldosterone system.

Introduction

Secondary arterial hypertension accounts for about 5% to 10% of the general hypertensive population. [1] Secondary hypertension should be screened for if a hypertensive patient shows early onset of hypertension, resistant hypertension, severe hypertension, or hypertensive emergencies. [1] Hypokalemia occurs in about 15.8% of patients with hypertension, and, of these patients, secondary hypertension accounts for about 30.6%. [2] The known etiologies of hypokalemia accompanying hypertension include primary aldosteronism, hypercortisolism, hyperthyroidism, and Liddle syndrome. Medullary sponge kidney (MSK) is a renal malformation known to be a benign disease. The estimated prevalence of MSK in the general population is between 1 in 5,000 and 1 in 20,000, [3] while in patients with renal stones the incidence increases to 3%-5%. [4] Whether MSK is an etiological factor for secondary hypertension has not been determined, although there are a few cases in which MSK is combined with hypertension. Here, we present a case of suspected secondary hypertension in a young patient without any etiological factor other than MSK.

Case presentation

A 22-year-old female was suspected to have polycystic ovary syndrome (PCOS) after experiencing menopause-like symptoms for two months. However, her menstrual cycle recovered after treatment with ethinylestradiol and cyproterone acetate tablets (Dine-35) for 21 days. Two months later, during a resection of a fibroadenoma in the left breast, she was diagnosed with hypertension, and her blood pressure (BP) ranged between 150/110 and 180/110 mmHg. Meanwhile, her potassium concentration was 2.75 mmol/L. The patient had no symptoms, even when her BP was as high as 182/122 mmHg. There was no history of vomiting, diarrhea, or treatment with a diuretic. She was admitted for secondary hypertension screening due to hypertension combined with hypokalemia. Diltiazem hydrochloride sustained-release capsules and terazosin hydrochloride tablets were used to control the patient's BP. Physical examination revealed that her temperature was 36.0°C, her pulse rate was 70 bpm, her respiratory rate was 18 bpm, her BP was 171/104 mmHg, her body mass index was 18 kg/m^2, her waistline was 70 cm, and the BP of her bilateral limbs was symmetrical. No clinical signs, such as a buffalo hump, moon face, or striae, were observed.
There were no vascular bruits in the carotid and subclavian arteries, and there were no abnormal signs in the heart and lungs. No renal vascular bruits were found upon abdominal examination. The pulsation of the radial arteries and the dorsal arteries of the feet was regular and symmetrical. Despite continuous oral potassium supplementation (3000 mg per day), her blood potassium remained low (3.33-3.53 mmol/L), and her 24-hour urine potassium was increased (72 mmol/24 h). Primary aldosteronism, hypercortisolism, hyperthyroidism, secondary aldosteronism (e.g., from renal artery stenosis), and certain genetic diseases, like Liddle syndrome, are the possible causes of hypertension combined with hypokalemia. [5] However, hypercortisolism was excluded because her serum cortisol levels and the circadian rhythm of her cortisol secretion were normal. Hyperthyroidism was also excluded due to normal thyroid function. Plasma aldosterone and renin concentrations were tested, which indicated that the patient's renin levels were increased (119.3 mIU/ml, reference range 4.4-46.1 mIU/ml), while her aldosterone levels were in the normal range (104 pg/ml, reference range 30-353 pg/ml). Additionally, the aldosterone-renin ratio (ARR) was 0.87 (normal range < 24). Primary aldosteronism and Liddle syndrome were excluded due to the increased renin activity. The reason for the hyperreninemia needed to be investigated. One study indicated that hypokalemia may result in depression of aldosterone secretion. [6] When this was considered along with the patient's elevated plasma renin levels, the picture was deemed to be secondary aldosteronism. Renal parenchymal and vascular diseases are the common causes of secondary aldosteronism. While computed tomography angiography (CTA) of the bilateral renal arteries confirmed that there was no renal artery stenosis, urinalysis indicated two to four red blood cells and three to five white blood cells per high-power field. The urinary albumin-creatinine ratio was 238.73 mg/mg, and the urinary protein volume was 415 mg/24 h. Renal parenchymal diseases, such as glomerulonephritis, diabetic nephropathy, and lupus nephropathy, were therefore suspected. Serum complement, serum anti-streptolysin O (ASO), plasma glucose, anti-dsDNA antibody, and anti-SM antibody levels were tested, yet all were either normal or negative. However, an abdominal ultrasound revealed multiple renal stones in the bilateral kidneys, right hydronephrosis, and an enlarged right kidney. Computed tomography urography (CTU) was performed in order to determine the reason for the hydronephrosis, and it revealed a dilated collecting system in the right kidney, a "papillary brush" appearance in the bilateral kidneys (Fig. 1A), medullary cysts (Fig. 1B), and multiple calcifications in both kidneys (Fig. 1C). The reconstructed image of the CTU specifically revealed a "bouquet of flowers" appearance (Fig. 2). Moreover, there was no urinary tract obstruction at the ureteropelvic junction, and the CTU demonstrated the typical manifestations of MSK. In order to evaluate the split renal function, 99Tc-DTPA renal dynamic imaging was performed. The flow phase showed that the perfusion volume and velocity of the right kidney were lower than those of the left kidney (Fig. 3A). The renographic curve revealed that the excretory function of the right kidney was impaired (Fig. 3B). The patient was finally diagnosed with secondary hypertension related to MSK after other etiological factors were ruled out.
Benazepril hydrochloride at a dose of 10 mg daily was prescribed as antihypertensive therapy. The patient's serum potassium increased to 4.0 mmol/L without a potassium supplement, and her blood pressure was controlled to around 120/80 mmHg after a month of treatment. Plasma renin and aldosterone concentrations were restored to normal levels after three months.

Discussion and conclusions

The patient was suspected to have secondary hypertension due to spontaneous hypokalemia and the early onset of hypertension without a family history of hypertension. Other causes of hypertension with hypokalemia, such as primary aldosteronism, hypercortisolism, hyperthyroidism, and Liddle syndrome, were ruled out based on the increased renin activity, normal cortisol levels, and normal thyroid function. Secondary aldosteronism is also a common etiological cause of hypertension with hypokalemia; however, it can be masked by hypokalemia. Secondary aldosteronism is usually associated with renal ischemia caused by renal parenchymal and vascular diseases. However, the CTA of the bilateral renal arteries showed no renal artery stenosis. The patient had no history of infection, and her serum complement and serum ASO were normal, so glomerulonephritis was ruled out; diabetic nephropathy and lupus nephropathy were also excluded according to the patient's history. Laboratory examinations showed microscopic hematuria and microalbuminuria. The microscopic hematuria was considered to be associated with the renal stones, and the microalbuminuria was perhaps related to the organ damage caused by the hypertension.

MSK is usually characterized by tubular dilation of the renal collecting ducts and cystic dilation of the medullary pyramids. [7] CTU is recommended for the diagnosis of MSK, although the traditional method is intravenous pyelography (IVP). [8][9][10] Typical pictures of MSK show collections of contrast media in dilated collecting tubules with the appearance of a papillary brush or a bouquet of flowers. [8,9] These radiographic findings are generally used in the diagnosis of MSK along with medullary nephrolithiasis, medullary nephrocalcinosis, and medullary cysts. [9] Although there have been several cases reporting that MSK may be involved in hypertension, whether it is an etiological factor for secondary hypertension remains unknown. In 1977, a case of hypertension with MSK was reported in an adolescent; however, the patient was diagnosed with essential hypertension and had normal renin activity. [11] In 2008, another case of MSK combined with hypertension was presented by Michele. [12] That patient was found to have left kidney atrophy and was then diagnosed with renovascular hypertension due to stenosis of the left renal artery. A pathological biopsy after the left nephrectomy confirmed the presence of MSK and renal artery fibromuscular dysplasia (FMD). It was considered that FMD, but not MSK, was the etiological factor for hypertension. Our patient, with suspected secondary hypertension, exhibited secondary aldosteronism combined with MSK. The common causes related to secondary aldosteronism, such as renal artery stenosis, glomerulonephritis, diabetic nephropathy and lupus nephropathy, had been excluded. Therefore, the etiology of the hypertension was thought to be related to MSK.
The pathophysiologic mechanism involved may be over-activation of the renin-angiotensin-aldosterone system (RAAS), since the perfusion of the right kidney was slightly reduced, as confirmed via renal dynamic imaging. The hypoperfusion was presumed to be the consequence of compression of the renal arterioles caused by the dilation of the collecting ducts and the hydronephrosis. In addition, the possibilities of PCOS and drug-related hypertension were also considered. PCOS is a common endocrinopathy in women of reproductive age. [13] It has been suggested that the diagnosis of PCOS should be made if two of the following three criteria are met: androgen excess, ovulatory dysfunction, and polycystic ovaries. [14] The relationship between PCOS and hypertension remains controversial. Retrospective studies have shown that patients with PCOS have a higher incidence of hypertension and that their BP is significantly higher than that of control groups. [15][16][17] However, a prospective study with a 12-year follow-up period published by Tehrani showed that the BP of PCOS patients was only slightly higher than that of controls, with no statistically significant difference. [18] For our patient, PCOS was excluded because, apart from the menopause-like symptoms, there was neither clinical nor biochemical hyperandrogenism, nor were there polycystic ovaries. Besides, Dine-35, as an oral contraceptive, may result in hypertension due to the effect of estrogen, which can lead to sodium and water retention or activate angiotensinogen. A meta-analysis showed a linear relationship between the risk of hypertension and the duration of oral contraceptive treatment; the relative risk (RR) of hypertension was 1.01 in patients with 0.5 years of treatment. [19] Therefore, the risk of drug-related hypertension in our patient was considered to be low, because she took the drug for only 21 days and had stopped two months prior to admission.

In conclusion, this case indicates that medullary sponge kidney may be associated with secondary hypertension. The mechanism may be activation of the RAAS caused by low perfusion of the kidney. Moreover, it was shown that angiotensin-converting enzyme inhibitors (ACEI) could effectively control the patient's BP and improve the secondary aldosteronism, owing to vasodilation of the efferent arterioles.
Design and Implementation of a Logo-based Computer Graphics Course

Two years ago the Faculty of Mathematics and Informatics at Sofia University made the decision to design a new series of Logo-based courses that make use of modern technology. The pedagogical component of the challenge is to design a multidisciplinary course suitable for students with different skills and interests. From a development perspective the challenge is to build an entirely new course from scratch. And finally, the course must be attractive regardless of the seriousness and complexity of the topics included in it. The paper discusses the structure of the course, including the final weeks, when topics emerging from students' course projects are taught. Each lesson from the course is based on sets of sample programs representing the general lifecycle of software development, including designing, coding and debugging. Samples are created on the fly; thus different instances of the course result in different final projects. Lessons are interactive, and students may influence the direction of the demonstrated software development.

Preface

The Logo programming language has been traditionally used in the classroom to describe, to explain and to explore the fundamental principles of Computer Science. The properties of this language make it an advantageous choice for a first programming language. It is not only the simplicity of the syntax that makes it beneficial to students, to their teachers and to the overall learning process. Another factor is the immediate access to drawing functionality via Turtle Graphics. The widespread use of Turtle Graphics is a unique phenomenon which has had some negative effect on the public opinion of Logo. Namely, Logo might be considered as just a system for doing graphics with a turtle. The opposite opinion is also still widespread: some turtle graphics environments and libraries are considered to be Logo. The delayed negative effect of these opinions is that some teachers and parents think of Logo as a childish language, not appropriate for doing serious work. Apparently, what is considered serious is merely everything that has a direct positive impact on entry and exit exams. The Faculty of Mathematics and Informatics at Sofia University has been using Logo for some decades in various courses, from teacher training courses to e-Learning and Technology Enhanced Learning (TEL) courses (Nikolova and Sendova, 1995). The faculty members were not only using Logo in their classes, but were also developing new Logo versions: the first was the Plane Geometry System, later renamed Geomland, released more than 20 years ago; the latest is Elica Logo, which is still under active development (Elica, 2007).

Three lessons from the course are sketched in the paper. The first one is taught in week 4 and spans Computer Science, Calculus, Analytical Geometry, and Applied Statistics and Probability. The lesson in week 6 is focused on the composition of complex movements and their synchronization. It uses elements from Computer Science, Geometry, Physics, and Trigonometry. The third lesson is about relative transformational geometry and its application in the form of Turtle Graphics. It uses elements from Physics, Robotics, Biology and Art. Snapshots from the projects are shown in Fig. 1. A few of the students' course projects are also presented in the paper: an animated 3D model of the Solar system, a transformational geometry impression called "United Colours of Elica", and a 3D model of the Faculty building.
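The lesson samples themselves are written in Elica Logo and are not reproduced here. Purely to convey the flavour of the incremental, visually immediate turtle-style sample programs the lessons are built around, here is a rough analogue using Python's built-in turtle module (illustrative only, not actual course code):

import turtle

def polygon_spiral(t, sides=6, step=4, turns=60):
    """Draw a simple polygonal spiral by growing the step length and
    slightly over-rotating at every corner."""
    for i in range(turns):
        t.forward(step * (i + 1))
        t.left(360 / sides + 2)   # the extra 2 degrees makes it spiral

t = turtle.Turtle()
t.speed(0)
polygon_spiral(t)
turtle.done()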
The presented Logo-based Computer Graphics course 'conquers' educational territories from the dominating C and C++. It has been taught since 2005 and is well accepted by students, who find it both interesting and useful for their education. The future of the course is very promising: a textbook is planned, as well as an extension of the teacher-student interaction beyond the time frame of the course.

The Challenge

In the spring of 2005 the Department of Information Technology made a decision to provide greater support for the development of Logo, as well as to revive its use in the courses. Thus the main challenge was formulated: to design and implement a new series of Logo-based courses which make use of modern technological achievements.

Pedagogical Challenges

The creation of a new stream of courses has greater academic value if it is adaptable to the specific needs of the teacher and the course. For example, the Logo-based Computer Graphics course could be taught in Bachelor's and Master's programmes. It could be taught to students in Informatics, Mathematics, Applied Mathematics, Mechanics, etc. Many of the students attending the courses are supposed to be familiar with some of the basic concepts of computer science: algorithms, procedural and functional programming, OOP, etc. However, the courses should also be accessible to students who have no significant (or, in some cases, any) experience in these areas, either because they are freshmen or because their specialty does not require the full extent of the programming skills. Examples of such students are those who are studying to become teachers in Mathematics or Informatics (Vitukhnovskaya, 2005).

The initiative for designing new courses clearly stated that they should cover all previous courses, so that the educational plan of the Faculty is not invalidated. Additionally, the courses should provide enough complexity and robustness so that new students from other programmes can also join. For example, students from the Computer Graphics Master's Programme are also expected to attend these courses.

Development Challenges

The Logo implementations used by the Faculty in the past are becoming increasingly outdated, and new versions of these implementations are not available for various reasons. This poses the challenge of choosing a programming environment for the courses (Laucius, 2006). It was decided to use Elica Logo as a software backbone for the course and to design new courses based on the advanced features of this Logo dialect (Boytchev, 2005). Switching to a new Logo dialect with a wider but different range of features makes most of the already existing teaching materials obsolete. The design of new courseware leads to major changes in the lessons' content and affects the actual teaching procedures. This poses a unique challenge: how to design and implement new courses in a way that the overall advantage is much bigger than the overall disadvantage.
Psychological Challenges

Charm is one of the most undervalued factors in modern courses. Teaching Computer Science and Computer Graphics at university level is treated as a serious endeavor: the matter is heavy, sophisticated, and knotty, and whether it is attractive to students is considered less important than the content of the course. The new courses to be developed need this charm in order to win students' hearts. Initially the courses are offered as electives, so it is important to design them in such a way that students are willing to enroll not only to gain some credits but to learn something useful. The so-called Nintendo generation (West, 1995) poses new requirements for courses, especially those related to computer graphics. Providing an adequate level of charm is an essential factor in meeting these requirements.

Course Structure

A typical course spans 15 academic weeks and includes 30 lecture hours plus 30-60 lab hours. Traditionally, students are evaluated several times during a course and eventually take an exam, which comprises the biggest part of their final score. The final exam usually has two components: practical and theoretical.

The Logo-based courses could follow the same settled structure, but instead it was decided that the way to measure students' efforts, knowledge, and imagination is to provoke their creativity in the creation of Logo-based course projects. As a result, there is no examination synopsis for Elica, and students focus on their projects long before the examination session.

Introductory Section

The course is divided into three sections; see Fig. 2. The first section takes three weeks (i.e., 6 academic hours). It is dedicated to the introduction of Logo and Elica. During the first week, while students get accustomed to Elica, they learn the basics of the Logo language, including data types, all reserved words, and some functions for word and list processing. The second week is devoted to iteration, recursion, and custom operators. The third introductory week is dedicated to the OOP features of Elica: students learn various methods for class definition, object use, and manipulation of OOP entities.

The Core Section

The core part of the course takes roughly 8 weeks and covers topics directly related to computer graphics, 3D modeling, and animation. The lessons also implicitly utilize knowledge from conventional computer science subjects, like algorithms and optimizations. It is not possible to give a precise topic-week relationship, because the course continuously adapts itself to the level of the students. The basic topics in the core section are:

• 1D/2D objects: points, lines, segments, rays, circles, ellipses, squares, rectangles.
• Custom user-defined objects, custom methods for object rendering, support for nested graphical objects, local transformations.
• Color, RGB color space, lighting, object materials, fog effects, textures.
• Turtle graphics in 3D space, hierarchical joint models, creating complex android models.
• View points, flying through space, perspective and orthogonal projections.
• Animation, smooth movement, techniques for achieving realism (resistance, inertia), simulation of physical movements, composition of various movements.
• Building graphical user interfaces, event handlers, capturing mouse activity, responding to user interaction, building interactive environments.
Advanced Section

The last few weeks of the course are reserved for topics which are not taught during the basic course but are related to the course projects selected by the students. During the second half of the core section, students determine the topics of their projects. If the projects use specific objects or techniques, these are included in the advanced section. As a result of this layout, the advanced sections of different semesters are always different: they depend entirely on what the students choose to do. Some examples of advanced topics are:

• simulation of water waves,
• run-time modification of classes and objects,
• building Logo libraries and software packages.

The Exam

Around the middle of the course, in weeks 8 to 10, students are encouraged to propose their projects. If accepted, they can start working on them immediately. Some proposals might not be accepted, either because they are too complex or too easy. Complex proposals are simplified to a level that makes them doable in a reasonable period of time. Projects that are too simple or too easy to implement in Elica are loaded with some extra features.

Students may request additional information about issues related to their projects; in this way they determine the topics for the third section of the course (Stoyanova, 1999). When they think their projects are ready, they submit them electronically for evaluation. If submission is done in advance, students receive a reply with the current rating and suggestions for improving it. If students have time and are willing to get a higher rating, they can adjust their projects and resubmit them. This project-evaluation scheme is repeated several times.

Over the last 4 semesters this scheme has proved successful for students. Most students resubmit their projects twice; very few of them send three or four versions. The possibility of having their projects reviewed and resubmitted makes students more comfortable with the course and lets them display their creativity and imagination.

Class Structure

The structure of each individual class is a small copy of the whole course structure. It starts with easy-to-understand basic concepts, then goes into details and advanced ideas, and ends with explanations in response to students' questions.

Classes are extremely interactive. The topic is presented as a series of programs which are typed and executed in real time. The image from the teacher's computer is projected on a big screen, and students can follow the process of programming from beginning to end.

Usually the first sample program is just a few lines long. Subsequent samples add features to it, thus building more complex programs. One of the underlying ideas of the course is to demonstrate the process of real-world programming. The selected technique of erecting a fully functional program from scratch is a perfect live scenario for students (Dagienė, 1999).

One of the moments most enjoyed by the students is when the program does not behave correctly. Such situations are handled by on-the-fly debugging and modification of the code. The teacher's thinking is vocalized and projected on the screen without interruptions, so students learn various techniques for resolving bugs.
Quite often students are asked to provide ideas for solving a problem. In most cases they suggest interesting solutions, which are immediately tested. There are also cases when the suggestions do not solve the problem. Such ideas are tested too, because they are an excellent opportunity to practice important problem-solving and pitfall-avoiding skills.

All samples from each class are archived and provided to the students, so they can replay the whole lesson and do further explorations with the code. Some get the sources right after the lesson; others download them from an online repository (Elica Repository, 2007). At the beginning of the course many students use a paper notebook in which they try to copy all sample programs by hand. However, they soon find that it is impossible to cope with the dynamics of live coding. The sample programs are constantly changing while different ideas are tested and debugged, so students realize that it is much more valuable to grasp the ideas and the techniques rather than to memorize the exact programs.

The selected structure of the classes is well accepted by students. However, it imposes additional requirements on the teacher. The most obvious one is that it is impossible to write only correct programs. A single class may have 20 to 30 program samples, and each of them is a potentially dangerous place for the teacher to make a mistake. Some of the mistakes are intentional, but others are not. The latter makes teaching very demanding, because the teacher should extract educational value not only from good examples but also from bad ones. On the other hand, unintentional errors place the teacher in a stressful situation, because s/he has to provide a solution with adequate reasoning in a limited time frame.

Multidisciplinarity

Each lesson from the course has a main topic covering features and techniques of programming with Elica. Additionally, each lesson is enriched with many smaller fragments from other subjects, mainly from Mathematics and Physics. Some lessons 'rent' ideas from Biology, Psychology, Astronomy, Geography, Statistics, and even the Arts. Multidisciplinarity is an intentional feature of the course which makes classes more interesting and demonstrates the application of software in various scientific and artistic domains (Sendova, 2006). To demonstrate the multidisciplinarity of the lessons, three of them are briefly described below.

Points (week 4)

The first graphical lesson is scheduled for week 4 of the course. It starts with the introduction of the most primitive graphical object in Elica: the point. A point is defined by 3 numbers which stand for its coordinates. The initial examples in this lesson describe how the graphical system is initialized, how to visualize the coordinate system, and how to create a point at some position. During the lesson the students face a short challenge: to create 1000 random points within a virtual cube. The second challenge is more complex: to create the points in a sphere, not in a cube (Fig. 3). Some of the students do not know how to do this; others suggest using the formula of a sphere (x^2 + y^2 + z^2 = r^2). The most often suggested solution is to create random points in a cube but keep only those which are internal to the sphere, as sketched below. This is one of the milestones in the lesson, because students realize that the number of created points is less than 1000.
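The rejection approach that the class converges on is compact enough to sketch in a few lines. The following is a minimal Python rendering, not the Elica Logo code actually used in class; the names and counts are illustrative. It also keeps the acceptance-ratio bookkeeping that the π twist at the end of the lesson relies on:

```python
import random

def random_points_in_sphere(n=1000, r=1.0):
    """Rejection sampling: draw points uniformly in the bounding cube
    and keep only those that fall inside the sphere."""
    points, attempts = [], 0
    while len(points) < n:
        x, y, z = (random.uniform(-r, r) for _ in range(3))
        attempts += 1
        if x * x + y * y + z * z <= r * r:  # inside the sphere?
            points.append((x, y, z))
    return points, attempts

points, attempts = random_points_in_sphere(1000)
# The acceptance ratio approximates the sphere-to-cube volume ratio pi/6,
# which is exactly the bug-turned-feature exploited at the end of the lesson.
print("estimated pi:", 6 * len(points) / attempts)
```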
The lesson continues in the direction of how to fix this. Several solutions are suggested and tested. One of them is to keep track of the number of internal points and to terminate the loop when 1000 is reached. Another solution is to check for validity before the creation of a point. Through a series of tests and modifications, students learn to be doubtful about a program that merely looks correct.

Once they calm down, thinking they have found all possible solutions, they are asked to find a way to directly create a point inside the sphere without the need to test for inclusion. Some students try to give up; others test the teacher's sense of humor by placing all the points at (0, 0, 0). With help from the teacher, students find two more solutions: one based on Cartesian coordinates and another based on polar coordinates.

At the end of the lesson students are directed back to the wrong solution of 1000 points in a cube, some of which are also in a sphere. This wrong solution serves as a basis for solving a new problem: how to find the approximate value of π by exploiting the bug in the program. It takes some time to conclude that the ratio of the number of points in the sphere to the number in the cube depends on the ratio of the sphere's and cube's volumes, which is π/6. Table 1 shows the approximate values for π calculated for cases with 10, 100, 1000, and 10000 points. (Fig. 3 shows random points in a cube and in a sphere.)

This first graphical lesson spans several subjects studied in the faculty. Students learn basic concepts from Computer Science (various loops, conditional execution, counters), Calculus (function composition), Analytical Geometry (Cartesian and polar coordinates, equations for cubes and spheres, volumes), and, finally, Applied Statistics and Probability.

Bouncing Balls (week 6)

The bouncing balls lesson is taught in week 6. It is the first time students learn how to simulate the physical properties of objects through their movement. The final goal of the lesson is to show how to make two balls bounce on a flexible plate (Fig. 4). The balls have different bounce periods. The plate starts to vibrate whenever it is hit by any of the balls; the vibration gradually fades if no hit occurs for some time. Both balls have shadows cast on the plate. As the balls move away from the plate, the shadows become smaller and lighter.

The series of programs in this lesson starts with the simple case of a ball going linearly up and down. Students realize immediately that such movement is not visually acceptable because it is physically incorrect. The next step is to replace the linear movement with a sine function. Now the movement of the balls near their highest positions is acceptable, but they do not bounce off the plate sharply. This gives the clue to use the absolute value of sine (Fig. 5). By using |sin|, students learn that physical movements in an animation are very often not based on the physically correct formulae: because of performance issues, some calculations are replaced by faster ones. That is why |sin| can effectively replace the calculation of a parabola.

Sine is heavily used throughout the whole lesson. It is 'responsible' not only for the bounce but also for three other actions: the vibration of the plate, based on a variation of sin(x)/x; the rotation of the plate, which smoothly alternates between clockwise and counter-clockwise; and the flying of the view point around the scene.
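For readers who want to reproduce the |sin| trick outside Elica, here is a minimal Python sketch of the bounce height; the period and height parameters are illustrative, not taken from the lesson's actual code:

```python
import math

def ball_height(t, period=1.0, h_max=2.0):
    """|sin| bounce: sine-like deceleration near the top of the arc,
    but a sharp rebound (kink) each time the ball meets the plate."""
    return h_max * abs(math.sin(math.pi * t / period))

# One bounce period, sampled at 11 instants: the height rises to h_max,
# slows near the top, and returns to the plate with a sharp kink.
for step in range(11):
    t = step / 10
    print(f"t={t:.1f}  h={ball_height(t):.3f}")
```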
An interesting problem with the bouncing balls is the synchronization of the various movements. The bouncing of the balls is independent of the vibration of the plate; the program must explicitly synchronize both motions in order to trick the viewer into thinking that the vibration is caused by the hit.

The "Bouncing balls" lesson is another example of multidisciplinarity. It practices skills in several subjects: Computer Science, Geometry, Physics, and Trigonometry.

Relative Transformational Geometry and Turtle Graphics (weeks 8 and 9)

Transformational Geometry deals with affine transformations of objects. It is heavily used in many Elica lessons, because the original shapes of all objects are the canonical solids (like a 1×1×1 cube, or a sphere with radius 1); all other objects are generated by transforming the canonical ones. Relativity in geometry means that each object has its local coordinate system, and other objects bound to it are defined in terms of its local system.

The usage of relative transformational geometry is based on transformation matrices. One interesting application is how to stack transformations for chained objects (a minimal sketch follows at the end of this section). Because of the flexible structure of each lesson, the same lesson taught in two different years is different. The main topic is the same, but the set of samples, and especially the final programs, become quite different. For example, "Dandelions" from Fig. 6 is the final program of the lesson from December 2005, while "Octopus" is from the same lesson two semesters later. "Octopus" gave birth to another program, which is included in the Online Elica Museum (Elica Museum, 2007); the third image in Fig. 6 is a snapshot of the "Larnaean Pentapus" exhibit from the museum.

Using Relative Transformational Geometry prepares students to easily accept the benefits of Turtle Graphics. Relativity is the nature of the turtle's motion, and thus it has common ground with Differential Geometry. The first 5-10 programs in the lesson are used as an introduction to 2D and 3D turtle graphics. A typical example of this introduction is to make a turtle crawl on the surface of a cube and build various objects on each face, or to crawl on a sphere (see Fig. 7).

The more advanced usage of Turtle Graphics is to create the objects drawn earlier with relative transformational geometry, but this time using turtles. Students see how the reimplementation becomes shorter and easier to understand, because turtles and relative transformations have a common mathematical background. The rest of the lesson is dedicated to the step-by-step building and animation of a humanoid body (legs, hands, torso, and head) traversed by an invisible turtle. The animation is done by synchronous changes in the joints: every joint has a local coordinate system and is a place where the body parts have some degree of freedom of motion. Most often the created body is of a dancer playing with a hoop (Fig. 8); the right-most image is from the "Circus Dancer" exhibit from the Online Elica Museum. This lesson teaches students how to program more complex systems. It makes use of various elements from Physics, Robotics, Biology, and the Arts.
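The stacking of relative transformations mentioned above can be illustrated with ordinary homogeneous matrices. The sketch below uses 2D matrices and NumPy for brevity (Elica works in 3D); the joint names, lengths, and angles are invented for the example:

```python
import numpy as np

def rotation(angle_deg):
    """2D homogeneous rotation about the local origin."""
    a = np.radians(angle_deg)
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

def translation(dx, dy):
    """2D homogeneous translation."""
    return np.array([[1, 0, dx], [0, 1, dy], [0, 0, 1]])

# A two-segment "arm": each joint's transform is expressed relative to
# its parent, and the chain is resolved by stacking (multiplying) them.
shoulder = rotation(30) @ translation(2.0, 0)   # upper arm: rotate, then extend 2 units
elbow = rotation(45) @ translation(1.5, 0)      # forearm, relative to the upper arm

hand_local = np.array([0.0, 0.0, 1.0])          # origin of the last segment
hand_world = shoulder @ elbow @ hand_local
print(hand_world[:2])                           # hand position in world coordinates
```

Changing only the joint angles and re-multiplying the chain is the same mechanism that animates the hierarchical humanoid models in the lesson.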
Course Projects Show-Cases

The Online Elica Museum is a section of the Elica site containing a collection of Elica programs, mostly related to the animation of 3D objects. The programs are freely available as source code for everyone. Some of the course projects developed by students are so successful that they are included in the museum as independent exhibits. Of course, the original source code is polished to make it more representative and clearer, so that other students can learn from it. This section of the paper presents snapshots of several students' course projects; some are published in the museum, others are not.

The project "Solar System" creates the Sun and the planets as 3D objects and then animates the system by rotating the planets around the Sun (Fig. 9). Each planet is dressed in a texture taken from real astronomical images. Some details are added to make the animation more realistic: there are stars in the background, the planets spin around their axes too, Saturn and Uranus have semitransparent rings, and the Moon rotates around the Earth. The making of this project requires an understanding of how textures are used, as well as some mathematical skill to define the orbits and spinning of the planets and the composition of several non-linear movements.

The second project (see Fig. 10) is named "United Colours of Elica" by the student who wrote it. It features a series of animations of cube transformations. Each face of the cube is implemented as a spline surface whose control points are defined by an invisible 3D turtle.

Another student's project is a virtual 3D building of the Faculty of Mathematics and Informatics. A snapshot of this building is shown in Fig. 11.

Conclusion

Logo is used in primary and secondary school curricula, but it can be used in various courses at university level too. This applies not only to light-weight courses but also to traditionally heavy subjects like Computer Graphics, where C and C++ are the dominant programming languages. The course described here has been active for 4 semesters. The number of enrolled students is growing, and its popularity is rising. Anonymous questionnaires show that students definitely prefer interactive lessons. One of the best-liked features is the live demonstration of programming, which includes both coding and debugging. The project-based scenario of each lesson serves as a prototype of the lifecycle of typical software development.

Another feature of the course admired by students is the possibility of unfolding their imagination by working on a course project related to their personal interests. Multidisciplinarity builds bridges between programming and the highly varied skills of students: although they come from different specialities, they all learn something new and interesting. The way the topics in the course are designed, especially the custom-defined ones during the last few weeks, helps students build awareness of and confidence in their own skills. This is because students feel the course is personally oriented towards them.

The future of the Logo-based course is very promising, and there are several ideas for its further development. One of them is the writing of a textbook, which is a challenge of its own because of the dynamism of the course topics; the first step is to collect enough variations for each topic. Another idea is to move the Elica Repository to a more interactive online system, Moodle. This will add many new possibilities for student-teacher interaction outside the lessons.
On January 1, 2007, Bulgaria joined the European Union. This has had a significant impact on the educational system: many universities are now revising their programmes because of the expected increase in foreign students. One of the possible future plans for the Logo-based Computer Graphics course is to offer it to an English-speaking audience.

Fig. 1. Snapshots of projects developed through the course.
Fig. 7. Houses created by a turtle crawling on the surface of a cube and a sphere.
Fig. 11. Project "The Building of the Faculty of Mathematics and Informatics".
Table 1. Calculated approximation of π.
5,910.8
2007-01-01T00:00:00.000
[ "Computer Science" ]
Antipathogenic Applications of Copper Nanoparticles in Air Filtration Systems

The COVID-19 pandemic has underscored the critical need for effective air filtration systems in healthcare environments to mitigate the spread of viral and bacterial pathogens. This study explores the utilization of copper nanoparticle-coated materials for air filtration, offering both antiviral and antimicrobial properties. Highly uniform spherical copper oxide nanoparticles (~10 nm) were synthesized via a spinning disc reactor and subsequently functionalized with carboxylated ligands to ensure colloidal stability in aqueous solutions. The functionalized copper oxide nanoparticles were applied as antipathogenic coatings on extruded polyethylene and melt-blown polypropylene fibers to assess their efficacy in air filtration applications. Notably, Type IIR medical facemasks incorporating the copper nanoparticle-coated polyethylene fibers demonstrated a >90% reduction in influenza virus and SARS-CoV-2 within 2 h of exposure. Similarly, heating, ventilation, and air conditioning (HVAC) pre-filtration (polyester) and post-filtration (polypropylene) media were functionalized with the copper nanoparticles and exhibited a 99% reduction in various viral and bacterial strains, including SARS-CoV-2, Pseudomonas aeruginosa, Acinetobacter baumannii, Salmonella enterica, and Escherichia coli. In both cases, this mitigates not only the immediate threat from these pathogens but also the risk of biofouling and secondary risk factors. The assessment of leaching properties confirmed that the copper nanoparticle coatings remained intact on the polymeric fiber surfaces without releasing nanoparticles into the solution or airflow. These findings highlight the potential of nanoparticle-coated materials in developing biocompatible and environmentally friendly air filtration systems for healthcare settings, crucial in combating current and future pandemic threats.

Introduction

Airborne pathogen transmission traditionally occurs via the transmission of droplets, in the respirable size range of ≤5 µm, from the respiratory tract of one individual to another individual's mucosal surfaces or conjunctivae, i.e., airborne transmission of infectious influenza via breathing, coughing, sneezing, talking, and laughing. Bacterial pathogens present a distinct hazard to patients within healthcare environments, especially those already debilitated by pre-existing conditions, illnesses, or surgical procedures [1]. Utilizing barrier protection methods, such as face masks, can significantly mitigate the transmission of bacterial pathogens. These masks serve a dual purpose: firstly, by impeding the dissemination of pathogens from an infected patient into the surrounding environment, and secondly, by safeguarding vulnerable individuals, such as those afflicted with cystic fibrosis, from exposure to potential pathogens. The transformation of respiratory airborne particles to droplets has significant practical implications for infection control measures in hospitals and primary care [2][3][4].
Human health has been challenged by microbial threats globally, especially in epidemics and pandemics, since the beginning of human existence. As effective medicines and vaccines increasingly fail due to microbial evolution, the development of personal protective equipment (PPE) as a rapid response to reduce the transmission of infectious diseases is a cornerstone of modern medicine [5][6][7]. Airborne pathogens present a particular risk, exemplified in recent years by the COVID-19 pandemic. The widespread use of single-use polymeric filtration materials to reduce the transmission of infective particles through the air was a major factor in preventing the spread of the pathogen [8][9][10]. However, significant waste is created by single-use PPE, particularly when used on the scale of a pandemic [11,12]. Moreover, the UK government has estimated the healthcare cost of its COVID-19 pandemic measures at between GBP 310 and 410 billion.

Barrier protection with standard air filtration systems (such as face masks, respirators, and HVAC) may reduce transmission; however, any filtration that traps the pathogen without killing/inactivating it only nets the threat and therefore presents a clear risk to anyone in contact with the materials, as the virus and bacteria remain infectious, promoting biofouling [13][14][15][16]. Biofouling restricts the efficacy and performance of air filtration membranes via self-replicating bacterial growth on filter layers, resulting in biofilm formation, which eventually mechanically blocks the filtration surfaces. Therefore, several applications have been reported to mitigate biofouling and provide antipathogenic properties via the incorporation and immobilization of naturally occurring metals and their oxides, such as silver, copper, and zinc oxides, on polymeric air filter membranes [7,[17][18][19][20][21][22][23][24][25][26].

The medicinal properties of copper are well established and have been exploited since the ancient Egyptians [23][24][25][26]. Current applications of copper include touch surfaces such as bed frames and door handles, copper oxide having unique electrical properties that enable biomedical and antipathogenic applications [27][28][29][30]. Copper and both of its oxides were investigated during the Swine flu (H1N1) and Bird flu (H5N1) outbreaks and have more recently been used to combat the SARS-CoV-2 pandemic [31,32]. Generally, copper oxides hold a beneficial price/performance ratio, making them the principal antifoulant oxides against both Gram-positive and Gram-negative bacteria. Also, the insoluble and highly hydrophobic nature of copper ions allows them to easily precipitate and accumulate on filtering materials. These oxide nanoparticles also provide high surface areas, improving the antifouling efficacy while minimizing the environmental influence, owing to more exposed contact sites and lower copper ion release [33].
The functionalization of CuO NPs on the surface of polymeric fibers has demonstrated remarkable outcomes in inhibiting the growth of a wide range of microorganisms, with significant applications in areas like food packaging, medical instruments, and water treatment. Copper/polymer fibers exhibit bactericidal properties primarily due to their capacity to release metal ions in an aqueous environment. These ions facilitate electrostatic interactions with the negatively charged bacterial cell walls, leading to their disruption and eventual rupture. Consequently, intracellular material leaks out, resulting in cell death. The process of metal ion release from the composites begins with water diffusing into the composite bulk. Subsequently, the reaction between metallic particles and water molecules generates metal ions. Finally, the migration of these ions to the composite's external surface enables interaction with bacteria [34].

The characteristics of the polymeric matrix, such as crystallinity and hydrophobic behavior, can affect the composite's ability to release metal ions. Damm et al. suggested that water molecule and metal ion diffusion primarily occur in the amorphous regions of the polymer matrix. Therefore, enhancing the hydrophilicity and reducing the crystallinity of the polymer matrix could enhance ion release [35].

However, a common issue encountered in composite materials produced via melt mixing is the inadequate dispersion of nanoparticles (NPs) within the polymeric matrix. NP aggregation in the matrix is linked to the NPs' high surface energy. Typically, the formation of large NP aggregates leads to a decline in the mechanical, thermal, and antimicrobial properties of the composite [36].

Herein, we report the synthesis of highly uniform copper oxide nanoparticles (~10 nm) by a chemical precipitation method using a high-throughput continuous flow spinning disc reactor, functionalized via carboxylated ligands in the form of amino acids, resulting in colloidally stable aqueous suspensions. Subsequently, the solution was applied as a surface-bound nanoparticle coating to polyethylene and polypropylene air filtration media to evaluate their antiviral and antibacterial applications.

Aqueous solutions of copper (II) chloride (0.1 M, 2.5 L) and sodium hydroxide (0.1 M, 2.5 L) were prepared. The solutions were subsequently pumped (60 mL/min) into the center of a spinning disc reactor (1500 RPM, 60 °C), where they reacted on the rotating disc (15 cm diameter) to spontaneously form CuO nanoparticles (9.1 ± 1.9 nm diameter). The product was collected and filtered against gravity using a sintered glass funnel (porosity grade 3). The filter cake was then dried in an oven (ca. 3 h, 120 °C) after washing with deionized water (3 × 250 mL). Copper oxide nanoparticles (50 g, 1 eq.) and L-lysine monohydrochloride (50 g, 1 eq.)
were ground using a mechanochemical extruder and stored in an airtight container under nitrogen until further use [37,38]. Subsequently, lysine-coated copper oxide nanoparticles were characterized by scanning electron microscopy (SEM), transmission electron microscopy (TEM), powder X-ray diffraction (XRD), dynamic light scattering (DLS), thermogravimetric analysis (TGA), and Fourier transform infrared spectroscopy (FTIR). The 4-ply CNC-PE masks are composed of an inner hypoallergenic layer combined with a melt-blown filter, the CNC-PE anti-viral fiber layer, and the fluid-repellent outer layer. Furthermore, a lysine-coated copper oxide nanoparticle (10% w/v) solution was used to prepare copper nanoparticle-coated polypropylene fibers (CNC-PP fibers) via spray nebulization, while polyethylene fibers (CNC-PE fibers) were prepared via dip coating or extracted via a print drum roller. Subsequently, the polymeric filtration media were cured using UV (355 nm) and dried via an IR heating lamp (750 nm-1000 µm). Later, these polymeric air filtration fibers (CNC-PP and CNC-PE) were characterized by scanning electron microscopy-energy dispersive X-ray spectroscopy (SEM-EDS).

The leaching properties of copper from the polymeric fibers were determined according to the modified ISO 17294-2:2023. The leaching of copper from the filter textiles was investigated both in solution and in airflow via inductively coupled plasma mass spectrometry. Both CNC-PE and CNC-PP filter fabrics (5 cm × 5 cm) were tested under water (2 mL, 8 mL, and 10 mL separately) over 24 h to test for copper leaching. Subsequently, airflow leaching studies were performed on both sides of the filter fabrics under constant airflow (10 L min⁻¹ over 7 h) to test for copper leaching.

Subsequently, the water samples (1 mL) from both solution and air-blown fabric fibers were digested with HNO3 (70%, 10 mL) for 4 h and diluted further prior to ICP-MS elemental analysis, using standard calibration (0-1000 ppb) with Certipur® ICP single-element standards of copper and with indium (20 ppb) as an internal standard.

All virucidal activity assessments of the CNC-PP and CNC-PE textiles, including dust-treated CNC-PP textile material, were investigated against Influenza A/WSN/33 (H1N1) and SARS-CoV-2 viruses relative to non-treated reference controls under the standard ISO 18184:2019 protocol [39]. African Green Monkey Kidney Epithelial (Vero) cells were used as viral host cells when assessing SARS-CoV-2, and Madin-Darby Canine Kidney (MDCK) cells for Influenza virus assessments. Supplementary material on virus titers and test conditions is provided in Table S3. If not otherwise stated, all experiments were performed in triplicate.

Textile squares (20 × 20 mm, 0.4 g) were used for the assessment procedures. The antiviral tests were performed with 200 µL of viral inoculum to completely soak the assessed test (copper-treated) and reference (non-treated) textiles while these were placed in individual test tubes. The assessed virus was left to incubate at room temperature with the textile for a period of time (2 h or 7 h), as detailed in Tables 1 and S3; this time is referred to as the contact time. Upon completion of the incubation time, the textiles were thoroughly washed with media several times to recover the virus. TCID50 was then used to calculate the amount of recovered virus for each of the tested materials.
To determine this, the isolated virus-containing wash media were incubated and assessed using a seven-point, ten-fold serial dilution of the media on host cells, in quadruplicate for each sample, as mentioned in Tables 1 and S3. TCID50 was calculated using the Reed-Muench method to quantify the dilution (TCID50) at which 50% of the cells are infected/killed, using regression analysis. An additional non-treated reference control (virus recovery control) was obtained by viral incubation on non-treated textiles (ISO 18184:2019) with immediate recovery, to assess the starting viral concentration, and was used for the Mv calculation.

The antiviral activity (Mv) was calculated using the following formula, where an Mv value of ≥1 indicates antiviral activity:

Mv = Log(Va) − Log(Vc),

where Log(Va) is the average of the common logarithm of the number of infectious units recovered from the reference specimens immediately after inoculation, and Log(Vc) is the average of the common logarithm of the number of infectious units recovered from the treated test specimens at the end of the incubation time. For the virucidal activity assessments to be valid, the materials tested should not have any cytotoxic activity on the assessed host cells nor affect cell sensitivity to infection. For cytotoxicity controls, media with no textile contact, media with 5 min contact with treated textile, and reference control textile were incubated with host cells for a period of time (Table S3), followed by crystal violet staining to determine cell viability. For sensitivity control tests, media with no textile contact, media with 5 min contact with treated textile, and reference control textile were incubated with the virus, and following the incubation time, the amount of infectious virus infecting test cells was quantified with the TCID50 assay.

Bacterial strains were human clinical isolates from UK hospitals (see Table S4 for details). All bacteriological media and buffers were prepared as per the manufacturer's instructions. The touch-killing antibacterial properties of the CNC-PE fibers and CNC-PP fibers were investigated against Acinetobacter baumannii, Pseudomonas aeruginosa, Escherichia coli, and Salmonella enterica following ISO 20743:2021 with modification. Phosphate-buffered saline (PBS) was used instead of Polysorbate 80 due to the latter's high viscosity when spun. Stocks of the strains were streaked onto Tryptic soy agar (TSA) plates and incubated (37 °C for 24 h). Tryptic soy broth (20 mL) was added to an Erlenmeyer flask (100 mL), and one colony from the incubated agar plate was added to the broth and incubated (37 °C for 18 h at 110 RPM). Another Erlenmeyer flask was prepared with TSB (20 mL), and inoculum (0.4 mL), measured at 1 × 10⁸ CFU mL⁻¹, was added from the first flask and incubated (37 °C for 3 h at 110 RPM). The inoculum was adjusted to 1 × 10⁵ CFU mL⁻¹, preserved on ice, and used within 4 h of adjustment (as per ISO 20743:2021). Six test samples, three treated and three untreated, were prepared with a mass of 0.4 g and sent for autoclave sterilization. Inoculum (200 µL) was pipetted directly onto the fabric samples, which were then placed in an incubator (37 °C for 18 h). After incubation, PBS (20 mL) was added to each sample and vortexed (2 min at 1500 RPM) to recover any viable cells after contact with the treated and untreated samples. The recovered inoculum was then spotted onto agar plates via serial dilution to enumerate the viable cells.
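To make the TCID50 and Mv arithmetic concrete, here is a small Python sketch of the Reed-Muench endpoint interpolation and the resulting Mv as defined above. The well counts are hypothetical and do not come from this study, and the helper names are our own:

```python
def reed_muench_tcid50(infected, total_wells, first_exponent=1):
    """Reed-Muench 50% endpoint for a ten-fold dilution series.

    infected[i] is the number of wells showing infection at dilution
    10**-(first_exponent + i); total_wells is the number of wells
    inoculated per dilution (4 in a quadruplicate setup).
    Returns log10(TCID50) per inoculum volume.
    """
    n = len(infected)
    uninfected = [total_wells - x for x in infected]
    # Cumulative infected are summed toward the dilute end, cumulative
    # uninfected from the concentrated end (the Reed-Muench pooling trick).
    cum_inf = [sum(infected[i:]) for i in range(n)]
    cum_uninf = [sum(uninfected[: i + 1]) for i in range(n)]
    pct = [100 * ci / (ci + cu) for ci, cu in zip(cum_inf, cum_uninf)]
    # Find the dilutions bracketing 50% and interpolate between them.
    for i in range(n - 1):
        if pct[i] >= 50 > pct[i + 1]:
            pd = (pct[i] - 50) / (pct[i] - pct[i + 1])
            return first_exponent + i + pd
    raise ValueError("series does not bracket the 50% endpoint")

# Hypothetical infected-well counts for treated and reference textiles:
log_vc = reed_muench_tcid50([4, 4, 3, 1, 0, 0, 0], total_wells=4)  # treated
log_va = reed_muench_tcid50([4, 4, 4, 4, 3, 1, 0], total_wells=4)  # reference
mv = log_va - log_vc  # antiviral activity; >= 1 indicates efficacy
print(f"Mv = {mv:.2f}, reduction = {100 * (1 - 10**-mv):.1f}%")
```

With these illustrative counts the sketch yields Mv = 2.00, i.e., a 99% reduction, which shows how the Mv values reported below map onto percentage reductions.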
Synthesis and Characterization of Lysine-Copper Oxide-Coated Polymeric Air Filtration Media Fibers

Synthesis of various nanoparticles via spinning disc reactors (SDRs) is a well-established technique, favored over co-precipitation methods because it allows control over the reaction time to achieve monodisperse nanoparticles [13,40]. SDRs generally consist of a flat spinning reaction surface onto which the reaction materials are applied. The reaction materials travel across the disc's surface to react, and the reacted liquid colloids are ejected from the disc's surface. Several methods of nanoparticle fabrication have been reported that aim to create substantially monodisperse materials with a uniform and controlled particle size. Even though SDRs are controlled by varying the disc rotation and the temperature of the disc, their drawback is limited control over the reaction time, which hinders SDR application when bulk production of nanoparticles is required [37].

To overcome these difficulties, we reported a patented continuous flow spinning disc reactor that consists of a concave spinning disc along the rotating axis, extending the reactants' residence time over the reaction surface. This allows greater control of the reaction time to achieve monodisperse nanoparticles by choosing the optimal degree of concavity of the surface. These SDRs have scaled up the production of nanoparticles to 2 kg hr⁻¹ per disc [41]. Upon scaling up the production of copper oxide nanoparticles via SDRs, the nanoparticles were characterized via electron microscopy (SEM, TEM), X-ray diffraction, and FTIR techniques. The copper oxide nanoparticles were qualitatively spherical in shape, as observed via scanning electron microscopy (Figure 1a). Transmission electron microscopy quantified the size of the nanoparticles, with an average particle size of ~10 ± 1.9 nm (Figure 1b). Powder XRD analysis determined the phase composition and crystalline structure of the copper oxide (CuO) nanoparticles synthesized using the spinning disc reactor. As shown in Figure 2, the XRD 2θ values ranged from 30° to 75°. The large peaks for 2θ values between 35° and 40° correspond to the planes (002), (−110), (111), and (200), which are in line with JCPDS card no. 00-041-0254. The XRD patterns demonstrated that the nanoparticles were polycrystalline with a monoclinic CuO crystal structure. The other crystal planes (−202), (020), (202), (−113), (022), (−311), (113), (220), and (311) correspond to other important Bragg reflection peaks. Zeta potential characterization of the copper oxide nanoparticles produced by the spinning disc reactor found a positive surface charge. The hydrodynamic size of the copper oxide nanoparticles was measured to be 78.8 ± 6.4 nm, with a polydispersity index of 0.24 (Figure S3).
The mechanical process facilitates high levels of shear to dry the copper oxide mixture, which distributes the L-lysine hydrochloride around and among the copper oxide nanoparticles. The copper oxide nanoparticles were coated with amino acids to improve stability and retain a hydrodynamic surface to keep the ions active.

Subsequently, powder XRD analysis confirmed the phase composition of the CuO-lysine-coated nanoparticles, as shown in Figure S2, consisting of all planes in line with the 2θ values of lysine and CuO. The planes for L-lysine (020), (011), (021), (−121), (210), (−121), (230), and (151) correspond to 2θ values in the range of 9° to 75°, along with the CuO planes. Thermogravimetric analysis (TGA) confirmed that L-lysine hydrochloride (1:1 w/w) was loaded on the copper oxide nanoparticles, as shown in Figures S4 and S5. The functional group characterization of the copper oxide-coated L-lysine nanoparticles was performed via FTIR spectroscopy, as shown in Figure S6. The absorption band at 532 cm⁻¹ corresponds to the vibrations of the Cu-O bond, confirming the CuO nanoparticles, as shown in Figure S6a. In Figure S6b, NH₂ vibrational stretching frequencies are observed at 3400 cm⁻¹, while the characteristic asymmetric and symmetric frequencies of carboxylate are at 1598 cm⁻¹ (C=O) and 1418 cm⁻¹ (C-O), respectively. This spectroscopic analysis confirms the synthesis of copper oxide nanoparticles and the successful chemisorption of the amino acid via electrostatic interactions.

Subsequently, these polymeric air filtration fibers (CNC-PP and CNC-PE) were characterized by scanning electron microscopy-energy dispersive X-ray spectroscopy (SEM-EDS), as shown in Figure 3.
Evaluation of Copper Leaching from Polymeric Fibers

The leaching properties of copper from the polymeric fibers were assessed according to the modified ISO 17294-2:2023 method [42]. The leaching of copper from the filter textiles was investigated both in solution and in airflow. Both CNC-PE and CNC-PP filter fabrics were tested under water over 24 h to test for copper leaching. ICP-MS elemental analysis confirmed that there was no leaching from either of the resultant filter fabrics. Subsequently, airflow leaching studies were performed on both sides of the filter fabrics under constant airflow (10 L min⁻¹ over 7 h). The ICP-MS analysis showed evidence of copper leaching from the CNC-PP filter within the environmental limits, while there was no leaching of copper from the CNC-PE filter within the limits of detection.

Virucidal Activity Assessments of Polymeric Fibers

In contrast to standard antiviral masks, the 4-ply CNC-PE masks are composed of an inner hypoallergenic layer combined with a melt-blown filter, the CNC-PE antiviral fiber layer, and the fluid-repellent outer layer, as shown in Figure 4. The external compartment of the mask confers a hydrophobic environment to prevent airborne pathogen/bioaerosol contamination from bodily fluids. The antipathogenic fiber layer, consisting of copper oxide nanoparticles coated with amino acids, improves stability and also retains a hydrodynamic surface to ensure that the ions remain active on the surface. The "wet" metallic structure (CuO) interacts with the cells and kills the virus or bacteria via the emission of ions that travel through the aqueous media. The melt-blown filter prevents over 95% of bacteria and other airborne particulate matter from passing the layer, as shown in Figure 4(3). A soft hypoallergenic inner layer is incorporated to keep the mask breathable, removing moisture from the face so that the mask can be worn for extended periods without causing discomfort and rashes around the face.
In validation control tests, the CNC-PP and CNC-PE textile filters showed no interference with the host cells' sensitivity to either of the assessed viruses, as per the ISO 18184:2019 test requirements [39]. When excluding the undiluted recovered media, the treated and non-treated textiles showed no cytotoxicity toward the host cells, allowing the completion of the antiviral activity tests.

The CNC-PP textile, with an Mv value of 2.60, demonstrated a clear 99.8% viral reduction compared to its reference control textile following a 2 h contact time with SARS-CoV-2, and a 99.5% (Mv 2.68) viral reduction following a 7 h contact time with Influenza virus (Table 1). With an average recovered viral titer of 4.59 × 10⁴ TCID50/sample, the CNC-PP/dust textile showed 99% antiviral activity when compared to 5.59 × 10⁶ TCID50/sample for the reference/dust textile. The resulting Mv of 1.93 still verified the CNC-PP filters' antiviral action against SARS-CoV-2 irrespective of deposited dust particles, proving its long-term activity (Table S3, Supplementary). Virucidal activity results on CNC-PP dust-treated textiles indicated no antiviral activity against Influenza A/WSN/33 (H1N1) compared to their reference controls, with an Mv value of 0.18. The CNC-PE textile displayed virucidal activity following a 7 h contact time, with a 95.8% reduction of the Influenza virus (Mv 1.19). SARS-CoV-2 was reduced by 90% (Mv 1.61) on the same textile following a 2 h contact time compared to its reference controls. The average recovered titers from treated and non-treated textiles are shown in Table 1.

Antibacterial Activity Assessment of Polymeric Fibers

The use of filtration materials in personal protective equipment, such as face masks and filters that control airflow, will reduce the transmission of bacterial pathogens; however, the organisms still present an infection risk, as the pathogens are not killed, making such materials a significant infection risk [1]. We therefore tested both filtration media using the ISO 20743:2021 methodology [48] for their touch-killing properties, to determine whether the copper-lysine nanoparticles were effective antibacterials against clinical pathogens associated with these materials. The bacterial species tested included P. aeruginosa, A. baumannii, S. enterica, and E. coli, due to their impact on human health.
A significant touch-killing effect was established on the CNC-PP textile against all bacterial pathogens after 18 h contact with the treated material, with A. baumannii and E. coli exhibiting the greatest decrease in viable cells compared to the control media (Figure 5a,b). A significant touch-killing effect was also observed after contact with treated CNC-PE, with the recovery of viable bacterial cells below the threshold of detection for all pathogens after 18 h contact (Figure 5c,d).

Conclusions

A potentially scalable route to copper oxide nanoparticle synthesis and functionalization was established for the formulation of an antipathogenic functionalized ink that was successfully applied, via commercially viable processes, to polyethylene and polypropylene filtration media. The resulting fabrics did not demonstrate any significant leaching of the active coating in either solution or airflow, thereby demonstrating the stability of the copper nanoparticle coating on the polymeric filtration media and their environmental safety. Both polypropylene and polyethylene filters demonstrated significant antibacterial effects, with over a 99.9% reduction in bacterial species. The polypropylene and polyethylene filters exhibited sustained virucidal activity against SARS-CoV-2 for at least 2 h and against Influenza virus over at least 7 h, indicating the potential of the masks for efficient extended use. The antiviral properties against SARS-CoV-2 were retained even after an accelerated-aging dust treatment worth 1 year of filtration, demonstrating the potential effectiveness in commercial HVAC filtration systems.

Figure 2. Powder XRD analysis of copper oxide nanoparticles synthesized using SDRs.
Figure 3. SEM-EDX characterization of CNC-PE fibers (a,b) and CNC-PP fibers (c,d). The copper coating is represented in yellow (b) and purple (d), while the fibers are represented in red and black.

Figure 5. Bacterial cells were exposed to CNC-polypropylene (a,b) or CNC-polyethylene (c,d) at a final concentration of 10⁵ CFU mL⁻¹ and incubated for 5 min (a,c) or 18 h (b,d) at 37 °C. Bacterial cell viability was determined after recovery into PBS. N = 9.

Table 1. The average infectious units mL⁻¹ recovered from the test and reference control materials in air vent and face masks at a contact time of 2 h or 7 h with the assessed viruses.
6,732.6
2024-06-01T00:00:00.000
[ "Materials Science", "Environmental Science", "Medicine" ]
Impact of channel estimation-and-artificial noise cancellation imperfection on artificial noise-aided energy harvesting overlay networks

EHONs (Energy Harvesting Overlay Networks) satisfy stringent design requirements such as high energy-and-spectrum utilization efficiencies. However, due to the open-access nature of these networks, eavesdroppers can emulate cognitive radios to wire-tap legitimate information, making information security a great concern. In order to protect legitimate information against eavesdroppers, this paper generates artificial noise, transmitted simultaneously with the legitimate information, to interfere with eavesdroppers. Nonetheless, artificial noise cannot be perfectly suppressed at legitimate receivers, since its primary purpose is to interfere with eavesdroppers only. Moreover, the channel information used for signal detection is hardly ever estimated at receivers with absolute accuracy. As such, to quickly evaluate the impact of channel estimation-and-artificial noise cancellation imperfection on the secrecy performance of secondary/primary communication in ANaEHONs (Artificial Noise-aided EHONs), this paper first proposes precise closed-form formulas for the primary/secondary SOP (Secrecy Outage Probability). Then, computer simulations are provided to corroborate these formulas. Finally, various results are illustrated to shed insight into the secrecy performance of the ANaEHON with respect to key system parameters, from which optimum parameters are recognized. Notably, secondary/primary communication can be secured at different levels by flexibly adjusting various parameters of the proposed system model.

Introduction
Advanced wireless networks such as 5G/6G (Fifth/Sixth Generation) open the door to a large number of emerging wireless applications but impose immense pressure on telecommunications infrastructure, which requires advanced technology solutions of high spectrum-utilization and energy efficiencies to relieve it [1][2][3][4]. Indeed, a key application of 5G networks is the IoT (Internet of Things), which is deployed extensively from civilian domains (e.g., transportation, electricity, healthcare, public safety, ...) to military ones (e.g., tactical reconnaissance, smart bases, ...) [5]. However, when deploying IoT, an enormous number of concurrently connected terminals consumes a tremendous amount of energy; hence, it is essential to improve energy efficiency, both to extend the lifetime of terminals and to reduce energy demand. Moreover, IoT requires a large bandwidth to serve a huge number of terminals concurrently and thus, in today's spectrum shortage-and-scarcity situation, solutions for enhancing spectral efficiency should be devised. Similarly to IoT, 5G mobile wireless communications, which serves a growing number of mobile terminals and demands increasingly high data transmission speeds, needs efficient energy-and-spectrum utilization solutions to meet its requirements [6]. CRs (Cognitive Radios), which typically operate in overlay, underlay, and interweave modes, can access the licensed frequency band of PUs (Primary Users) without causing any performance degradation for the PUs, thus significantly improving spectral efficiency and mitigating the spectrum scarcity issue [7]. In the underlay mode, CRs utilize the licensed spectrum but must upper-bound the interference caused at the PUs. The overlay mode allows concurrent transmission of CRs and PUs, but signal reception quality at primary receivers must be maintained or enhanced through complicated signal processing techniques.
In the meantime, the interweave mode merely leaves the unoccupied licensed spectrum for CRs to utilize. While the literature has focused intensively on the underlay and interweave modes, few works have studied the overlay one. The overlay mode can trade off the performance of primary and secondary communication better than the other modes and hence receives special attention in the current paper.

Energy efficiency of wireless communication can be enhanced by several viable solutions (e.g., network planning, EH (Energy Harvesting), hardware solutions). Among these solutions, RF (Radio Frequency) energy harvesting can be integrated into (5G/6G mobile or IoT) users to supply energy, extend the lifetime of wireless devices, and improve energy efficiency, since it requires only simple energy harvesting circuits [8][9][10]. EHONs (Energy Harvesting Overlay Networks) can simultaneously exploit the advantages of two feasible technologies (energy harvesting and cognitive radio) to meet several standards of advanced wireless networks requiring high energy-and-spectral efficiencies [11]. Nevertheless, because both licensed and unlicensed users in these networks are permitted to utilize the licensed spectrum simultaneously, eavesdroppers may emulate legitimate users to steal secret information, raising serious security concerns. To supplement and improve the secrecy capability of traditional cryptographic and encryption techniques, PLS (physical layer security) has recently been suggested [12]. Among various PLS methods (e.g., opportunistic scheduling, transmit beam-forming, transmit antenna selection, on-off transmission, jamming, relaying), jamming (i.e., generating artificial noise) is of great interest due to its simple, efficient, and flexible implementation [13]. Therefore, this paper applies artificial noise in EHONs to secure primary/secondary communication.

Most references (e.g., [14][15][16][17][18][19][20][21]) assumed the artificial noise to be exactly known at legitimate receivers. Accordingly, these receivers completely eliminate its detrimental effect while eavesdroppers suffer severely from it. Nevertheless, the amount of artificial noise received at legitimate receivers is variable due to uncertainties such as noise and fading. As such, the assumption of perfect artificial noise cancellation at these receivers seems unrealistic. Moreover, channel information affects the probability of successful signal detection not only at legitimate receivers but also at eavesdroppers, ultimately impacting security capability. Nonetheless, any channel estimator has a finite accuracy [22]; hence, it is practical to investigate channel estimation imperfection in ANaEHONs (Artificial Noise-aided EHONs). Therefore, this paper evaluates the effect of artificial noise cancellation-and-channel estimation imperfection on the security performance of PUs/CRs in ANaEHONs.

Prior works and motivations
This paper considers an ANaEHON where a primary transmitter-receiver pair cannot communicate with each other directly for some reason, and a secondary transmitter-receiver pair assists primary communication in reward for access to the primary spectrum. The secondary transmitter harvests RF energy from the primary transmitter and transmits not only its private signal but also the primary transmitter's signal and artificial noise. Data transmission of the secondary transmitter is wire-tapped by an eavesdropper.
While publications on information security for energy harvesting (interweave/underlay) networks have been blooming, few works have addressed the overlay mode [14][15][16][17][18][19][20][21][23]. More specifically, [14] and [15] considered almost the same system model as ours, but their EHONs are secured by letting the primary receiver jam the eavesdropper, while the secondary transmitter helps primary communication through the AF (Amplify-and-Forward) mechanism. 1 In [16], a dedicated jammer was employed to interfere with the eavesdropper instead of the primary receiver as in [14] and [15]. In addition, [16] differs from [14] and [15] in the EH method, the EH-capable terminal, and the assistance mechanism. The former used an EH-capable jammer, which harvests energy based on the time splitting technique [25], and employed the secondary transmitter as a DF (Decode-and-Forward) relay. Meanwhile, the latter used the secondary transmitter as the AF relay and as an energy harvester based on the power splitting technique [26]. To further secure primary transmission, [17] proposed to jam the eavesdropper with the primary receiver as well as the dedicated jammer. However, the security performance of primary/secondary communication in terms of SOP (Secrecy Outage Probability) was not analyzed in [14][15][16][17]. In [23], transmit antenna selection and multi-user scheduling were proposed to secure EHONs. Moreover, the SOP of primary communication and the ergodic rate of secondary communication were derived in closed form in [23]. Recently, [18] proposed a group of dedicated jammers to guarantee communication security for the secondary transmitter in EHONs. Moreover, the SOP of secondary/primary communication was analyzed in [18]. Nonetheless, different from [14][15][16][17], the secondary user in [18] and [23] relays the primary signal and transmits its private signal separately. This significantly mitigates the complexity of the SOP analysis, making the analyses in [18] and [23] tractable. Although [18] proposed the SOP analysis for both primary and secondary communication and [23] analyzed the SOP/ergodic rate of primary/secondary communication in EHONs, the SU (Secondary User) in both works relays the primary signal and transmits its private signal separately. This requires at least three stages (Stage I: energy harvesting and primary communication, Stage II: secondary communication to the PU, Stage III: secondary communication to the SU) to complete both secondary and primary communication, considerably reducing spectral efficiency. Recently, [19][20][21] proposed a two-stage transmission scheme with artificial noise generation in EHONs to enhance secrecy performance and spectral efficiency. More specifically, [19][20][21] suggested an ANaEHON where the secondary transmitter transmits a superposition of artificial noise, the secondary signal, and the primary signal at once. Furthermore, [19][20][21] proposed the SOP analysis. In summary, imperfect channel estimation and artificial noise cancellation are unavoidable in practical systems. Nonetheless, none of the prior works [14][15][16][17][18][19][20][21][23] analyzed the SOP of both primary and secondary communication in ANaEHONs under their impact, as summarized in Table 1. This motivates the current paper to study, for the first time, their impact on the security performance of the ANaEHON of [19][20][21].
Contributions
The following are our contributions:
• Suggest a novel signal model for ANaEHONs accounting for channel estimation-and-artificial noise cancellation imperfection. In the ANaEHONs under consideration, the secondary transmitter harvests energy from primary signals, decodes and forwards primary data, and generates a signal combination of primary data, secondary data, and artificial noise. Such an operating mechanism of the secondary transmitter is flexible in compromising the security performance of primary communication with that of secondary communication and in optimizing the system design by appropriately selecting the (power splitting, time splitting, power allocation) factors.
• Propose precise closed-form SOP formulas for promptly rating the security of primary/secondary communication under channel estimation-and-artificial noise cancellation imperfection. These formulas serve as a key starting point for obtaining formulas for other pivotal secrecy performance indicators, comprising the IP (Intercept Probability), STP (Secrecy Throughput), and PSCP (Positive Secrecy Capacity Probability).
• Search for the optimum pivotal specifications that yield the optimal secrecy capability and the best performance compromise between primary and secondary communication.
• Provide insightful results on the security performance of primary/secondary communication with respect to important system parameters.

Structure
Section 2 discusses the system model. Then, Section 3 derives the SOP of primary/secondary communication in detail. Next, Section 4 provides illustrative results, and ultimately, Section 5 closes the paper.

System model
Figure 1 shows an ANaEHON in which direct communication between a primary transmitter-receiver pair, PT-PR, is not of good quality owing to uncertainties (e.g., long distance, severe fading, ...). Therefore, the secondary transmitter ST, which is in the transmission range of PT, can assist PT by relaying the PT's signal to PR. ST is assumed to be capable of harvesting RF energy from PT and spends the scavenged energy on its communication operation. Additionally, the overlay mechanism is applied to ST so that ST relays the PT's signal to PR as well as sends its private signal to the secondary receiver SR. The information transmission of ST is overheard by an eavesdropper E. In order to reduce the wiretapping capability of E, ST transmits artificial noise together with the information signals of PT and ST. The channel gain between nodes u and v has mean μ_uv = d_uv^{-ς}, in which ς is the path-loss exponent and d_uv is the u-v distance. Then, the probability density function (PDF) and the cumulative distribution function (CDF) of |h_uv|² are correspondingly expressed as f_{|h_uv|²}(x) = e^{-x/μ_uv}/μ_uv and F_{|h_uv|²}(x) = 1 - e^{-x/μ_uv}.

Signal model
In Fig. 1, the total transmission time T for both PT and ST to complete their information transmission to the corresponding receivers is divided into two stages. Stage I, of duration αT with α ∈ (0, 1) being the time splitting factor, is for PT to send its private data x_p so that ST can harvest energy with the power splitting technique and recover the PT's information. Such a technique separates the received signal at ST, y_s, into two portions, with λ ∈ (0, 1) being the power splitting factor: one portion, √(1 - λ) y_s, for decoding the PT's information 2 and another portion, √λ y_s, for harvesting energy. Depending on the decoding status, 3 ST transmits distinct signals, as shown in the flow chart of Fig. 2 and sketched numerically below.
2 Decoding with infinitesimal power consumption is assumed. This assumption is popularized in previous publications (e.g., [16,27-29]).
3 In [16], Stage II allows ST to always relay the primary data, leading to error propagation for the primary data. Nonetheless, the secondary/primary SOP analysis was not presented in [16]; consequently, error propagation was not considered in that SOP analysis.
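To make the Stage I mechanism concrete before the formal derivation, the following minimal Python sketch implements one reading of it, assuming the convention fixed above (the √λ portion is harvested, the √(1 - λ) portion is decoded) and treating the channel-estimation error as additional Gaussian noise. All function and variable names, and the exact SNR form, are illustrative assumptions rather than the paper's verbatim equations.

```python
import numpy as np

def stage_one(P_p, h_abs2, h_hat_abs2, mu_ps, rho_ps,
              alpha, lam, T, eta, sigma2_s, sigma2_tilde, C_t):
    """Stage I at ST: power splitting, energy harvesting, decoding test.

    h_abs2 is the true channel gain |h_ps|^2 (drives harvesting);
    h_hat_abs2 is the estimated gain |h_hat_ps|^2 (drives decoding);
    the (1 - rho^2) estimation-error part acts as extra noise.
    """
    # Energy harvested from the lam-portion over Stage I (duration alpha*T)
    E_s = eta * lam * alpha * T * P_p * h_abs2
    # Transmit power available for Stage II of duration (1 - alpha)*T
    P_s = E_s / ((1.0 - alpha) * T)
    # Decoding SNR of the (1 - lam)-portion with imperfect CSI:
    # error power (1 - rho^2)*mu_ps is lumped into the denominator
    num = (1.0 - lam) * P_p * rho_ps**2 * h_hat_abs2
    den = ((1.0 - lam) * P_p * (1.0 - rho_ps**2) * mu_ps
           + (1.0 - lam) * sigma2_s + sigma2_tilde)
    gamma_s = num / den
    # ST restores x_p correctly iff C_s = alpha*log2(1 + gamma_s) >= C_t
    decoded = alpha * np.log2(1.0 + gamma_s) >= C_t
    return P_s, gamma_s, decoded
```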
To be specific, if ST successfully restores the PT's information, it sends a combination of three signals, √(θκP_s) x_p + √(θ(1 - κ)P_s) x_s + √((1 - θ)P_s) x_a, where P_s, θ, and κ are the transmit power of ST, the power allocation factor between the desired signals and the artificial noise when ST decodes the PT's signal correctly, and the power allocation factor between the primary and secondary signals, respectively: the PT's decoded information x_p, the ST's private information x_s, and the artificial noise x_a. In the case that ST unsuccessfully decodes the primary data, it sends a combination of two signals, √(τP_s) x_s + √((1 - τ)P_s) x_a, where τ is the power allocation factor between the desired signal and the artificial noise when ST decodes the PT's signal incorrectly: the ST's private information x_s and the artificial noise x_a. Stage II, of duration (1 - α)T, is for ST to send its signal to SR, PR, and E.

In Stage I, ST receives the signal
y_s = √P_p h_ps x_p + n_s, (1)
where the receive antenna at ST induces the noise n_s ∼ CN(0, σ_s²) and PT transmits with power P_p. Stage I in Fig. 1 offers ST the scavenged energy
E_s = ηλαT P_p |h_ps|², (2)
where η ∈ (0, 1) is the energy conversion efficiency and the unit-power symbols satisfy E{|x_p|²} = 1, with E{·} being the expectation operator. The scavenged energy E_s provides the transmit power of ST in Stage II as
P_s = E_s/((1 - α)T). (3)
Figure 1 shows that the signal used for decoding the PT's information is ỹ_s = √(1 - λ) y_s + ñ_s, where ñ_s ∼ CN(0, σ̃_s²) is the noise owing to the passband-to-baseband signal conversion. Inserting (1) into ỹ_s yields
ỹ_s = √((1 - λ)P_p) h_ps x_p + √(1 - λ) n_s + ñ_s. (4)
Channel estimators suffer a certain error and hence channel state information is not perfectly estimated. For performance analysis, channel estimation imperfection should be modelled appropriately. This paper employs a well-known channel estimation error model as in [22],
h_uv = ρ_uv ĥ_uv + √(1 - ρ_uv²) ε_uv, (5)
where h_uv is the true channel, ĥ_uv is the estimated channel, and ε_uv is the estimation error; all random variables h_uv, ĥ_uv, ε_uv are modelled as CN(0, μ_uv); the correlation coefficient 0 ≤ ρ_uv ≤ 1 is a constant representing the exactness of the channel estimation. Inserting (5) into (4), one obtains (6). It is inferred from (6) that the SNR (Signal-to-Noise Ratio) achievable for decoding the PT's information is given by (7). The channel capacity that ST can obtain is C_s = α log₂(1 + γ_s) bps/Hz, with the pre-logarithm factor α owing to Stage I of duration αT. Communication theory dictates that ST decodes the PT's data exactly only if C_s exceeds the required transmission rate C_t, i.e., C_s ≥ C_t (or γ_s ≥ γ_t, where γ_t = 2^{C_t/α} - 1). If ST successfully decodes the PT's information, it broadcasts the combination of three signals given above; otherwise, it broadcasts the combination of only two signals, √(τP_s) x_s + √((1 - τ)P_s) x_a, in Stage II. Therefore, PR, SR, and E receive signals in Stage II, correspondingly, as (9)-(11), where n_p ∼ CN(0, σ_p²), n_r ∼ CN(0, σ_r²), and n_e ∼ CN(0, σ_e²) are the noises induced by the receive antennas at PR, SR, and E, respectively. Most works (e.g., [14][15][16][17][18][19][20][21]) assumed the artificial noise x_a to be completely known at the legitimate receivers (PR and SR), but not at E, in order for PR and SR to totally eliminate the effect of x_a on their received signals. Nonetheless, this assumption seems impractical because the regeneration of x_a is hardly achieved with absolute accuracy.
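The estimation-error model (5) admits a direct Monte Carlo realization. A minimal sketch, assuming the usual correlated-Gaussian construction; the closing comment reflects our reading of the residual artificial-noise term introduced next.

```python
import numpy as np

rng = np.random.default_rng(1)

def draw_channel_pair(mu, rho, n):
    """Draw n (true, estimated) channel pairs under the model
    h = rho * h_hat + sqrt(1 - rho^2) * eps, with h, h_hat, eps
    all CN(0, mu); rho = 1 recovers perfect estimation."""
    scale = np.sqrt(mu / 2.0)
    h_hat = scale * (rng.standard_normal(n) + 1j * rng.standard_normal(n))
    eps = scale * (rng.standard_normal(n) + 1j * rng.standard_normal(n))
    h = rho * h_hat + np.sqrt(1.0 - rho**2) * eps
    return h, h_hat

# Residual artificial noise at a legitimate receiver: cancelling x_a
# with accuracy (1 - chi) leaves a chi * x_a term, i.e. a residual
# interference power of chi**2 * (1 - theta) * P_s in our reading.
h, h_hat = draw_channel_pair(mu=1.0, rho=0.9, n=100_000)
print(np.corrcoef(np.abs(h)**2, np.abs(h_hat)**2)[0, 1])  # approx rho**2
```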
Therefore, this paper assumes x_a to be regenerated at PR and SR with an accuracy of 1 - χ, χ ∈ [0, 1], which means that χ x_a represents the residual artificial noise due to imperfect artificial noise cancellation at PR and SR. Accordingly, after partly removing x_a, PR and SR obtain signals with less artificial noise, given in (12) and (13), respectively. Inserting (5) into (12), one obtains (14); based on (14), the SINR (Signal-to-Interference plus Noise Ratio) for decoding x_p at PR is represented as (15). Similarly, inserting (5) into (13), one obtains (16); based on (16), the SINR for decoding x_s at SR is given by (17). The knowledge of the artificial noise x_a is shared only among ST, PR, and SR for securing x_s and x_p, while E is blind to it. As such, the SINRs at E for recovering x_s and x_p are inferred from (11). Inserting (5) into (11) results in (18), from which the SINRs at E for restoring x_s and x_p are respectively derived as (19) and (20). It is worth emphasizing from (19) and (20) that ST purposely generates the artificial noise power to corrupt the eavesdropper. Accordingly, increasing the artificial noise would secure the information transmission of x_s and x_p. Moreover, channel estimation imperfection, which is represented by the terms in the denominators of (15), (17), (19), and (20) weighted by 1 - ρ_uv², degrades the performance of all receivers (PR, SR, E).

Secrecy capacity
The channel capacities at PR and SR in Stage II are inferred from (15) and (17), correspondingly, as C_p = (1 - α) log₂(1 + γ_p) and C_r = (1 - α) log₂(1 + γ_r), where (1 - α) is the pre-logarithm factor due to Stage II of duration (1 - α)T. Similarly, the channel capacities at E for decoding x_s and x_p in Stage II are inferred from (19) and (20), correspondingly, as C_es = (1 - α) log₂(1 + γ_es) and C_ep = (1 - α) log₂(1 + γ_ep). The subtraction of the channel capacity at E for restoring x_s from that at SR is the secrecy capacity for x_s, Č_s = [C_r - C_es]^+, where [x]^+ denotes max(x, 0). Similarly, the subtraction of the channel capacity at E for restoring x_p from that at PR is the secrecy capacity for x_p, Č_p = [C_p - C_ep]^+.

SOP analysis
The SOP indicates the possibility that the secrecy capacity falls below the preset security threshold C_0; therefore, it quantifies the secrecy performance of the ANaEHON. This section derives precise closed-form SOP formulas for quickly rating the secrecy capability for x_s and x_p without time-consuming simulations. Moreover, these formulas serve as a good starting point for obtaining the formulas for other pivotal security measures such as the IP, PSCP, and STP.

SOP for primary information x_p
The SOP for primary information x_p is given by SOP_p(C_0) = Pr{Č_p < C_0}. SOP_p(C_0) is divided into two scenarios: i) one corresponding to the ST's unsuccessful decoding of the primary data and ii) another corresponding to the ST's successful decoding of the primary data, i.e., SOP_p(C_0) = Pr{Č_p < C_0, γ_s < γ_t} + Pr{Č_p < C_0, γ_s ≥ γ_t}. (28) Substituting Č_p in (26) into (28), one has (29). The first term in (29) is equivalently rewritten as (30). In ANaEHONs, if ST unsuccessfully decodes the PT's data, it does not forward the primary data, yielding a zero SINR at PR for restoring x_p (i.e., γ_p = 0 for γ_s < γ_t, as shown in (15)). Therefore, in this scenario, the secrecy capacity for x_p is also zero (i.e., Č_p = 0 conditioned on γ_s < γ_t), resulting in ψ = 1. The term ϒ in (29) is rewritten, after using (15) and (20) for the case of γ_s ≥ γ_t, as (31), with the auxiliary quantities A, B, and C defined therein. By imitating the derivations in [21, Appendix C], one achieves a precise form of ϒ as (39), where Ei(·) in (45) and (47) is the exponential-integral function [30].
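The paper validates these closed-form SOP expressions against simulation; the simulation side reduces to a generic Monte Carlo estimator such as the following sketch, where the exponential SINR draws are illustrative stand-ins for a full simulator of the signal model.

```python
import numpy as np

rng = np.random.default_rng(7)

def sop_empirical(C_legit, C_eve, C0):
    """Empirical SOP: fraction of draws whose secrecy capacity
    [C_legit - C_eve]^+ falls below the threshold C0 (bps/Hz)."""
    C_sec = np.maximum(C_legit - C_eve, 0.0)
    return np.mean(C_sec < C0)

# Illustrative draws only: exponential SINRs and the (1 - alpha)
# Stage II pre-log factor; a real run would generate the SINRs from
# the full signal model with its rho and chi imperfections.
alpha = 0.3
g_legit = rng.exponential(20.0, 200_000)
g_eve = rng.exponential(2.0, 200_000)
C_legit = (1.0 - alpha) * np.log2(1.0 + g_legit)
C_eve = (1.0 - alpha) * np.log2(1.0 + g_eve)
print(sop_empirical(C_legit, C_eve, C0=0.2))
```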
SOP for secondary information x_s
The SOP for secondary information x_s is given by SOP_s(C_0) = Pr{Č_s < C_0}. SOP_s(C_0) is divided into two scenarios: i) one corresponding to the ST's unsuccessful decoding of the primary data and ii) another corresponding to the ST's successful decoding of the primary data. Substituting Č_s in (25) into (49), one has (50). The first term in (50) was already computed in (30), while the term Z_1 in (50) is rewritten, after using (17) and (19) for the case of γ_s ≥ γ_t, as (51), with auxiliary quantities A_1, B_1, and C_1. The quantity Z_1 has the same form as ϒ in (31). Therefore, by substituting variables appropriately into ϒ in (31), one can obtain the precise closed-form formula of Z_1. More specifically, Z_1 is computed by using ϒ in (39) with A_1 → A, B_1 → B, C_1 → C, μ_sr → μ_sp, and σ̃_r² → σ̃_p². As a result, the derivation of Z_1 is skipped here for compactness. The term Z_2 in (50) is rewritten, after using (17) and (19) for the case of γ_s < γ_t, as (52), with auxiliary quantities A_2, B_2, and C_2. The quantity Z_2 also has the same form as ϒ in (31). Therefore, by substituting variables appropriately into ϒ in (31), one can obtain the precise closed-form formula of Z_2: it is computed by using ϒ in (39) with A_2 → A, B_2 → B, C_2 → C, μ_sr → μ_sp, and σ̃_r² → σ̃_p². As a result, the derivation of Z_2 is likewise skipped for compactness. Inserting the above precise closed-form formulas of the first term, Z_1, and Z_2 into (50) yields the precise closed-form expression of SOP_s(C_0).

Remarks
The precise closed-form formulas of SOP_p and SOP_s are useful for quickly assessing the security of secondary/primary communication in the ANaEHON without exhaustive simulations. To the best of our understanding, these formulas have not been reported before. Moreover, they can be exploited to obtain the formulas for other pivotal security measures. To be more specific, the IP addresses the probability of negative secrecy capacity; accordingly, the IP of secondary/primary communication is computed as IP_u = SOP_u(0), where u = s, p. The PSCP indicates the probability of positive secrecy capacity; consequently, the PSCP of secondary/primary communication is expressed as PSCP_u = 1 - SOP_u(0). Finally, the STP is the product of the secrecy communication probability at a certain secrecy capacity with that secrecy capacity; consequently, the STP of secondary/primary communication is expressed as STP_u = C_0 [1 - SOP_u(C_0)].

Illustrative results
The SOP of secondary/primary communication in the ANaEHON is assessed against its pivotal specifications. Unless otherwise stated, a common set of representative parameters is used for all of the following results. In Figures 3-12, "Sim." and "Ana." denote the simulation and the analysis, respectively; the match between them ratifies the exactness of the analysis in (29) and (50). Figure 3 illustrates the SOPs versus the channel estimation imperfection reflected by ρ. This figure demonstrates that channel estimation error drastically affects the SOP of primary/secondary communication. More specifically, the large SOPs are almost unchanged over a wide range of poor channel estimation (0 ≤ ρ ≤ 0.9), while SOP_p (or SOP_s) drops (or increases) significantly with a slight channel estimation improvement (0.9 ≤ ρ ≤ 1). Additionally, imperfect artificial noise cancellation at the legitimate receivers degrades the security performance of primary/secondary communication (i.e., the SOPs increase with increasing χ), as expected. Moreover, the SOP of primary communication is smaller than that of secondary communication at the same levels of channel estimation error and artificial noise cancellation. This is because, of the power θP_s reserved for transmitting legitimate data, ST allocates 70% (κ = 0.7) to relaying the primary data and 30% (1 - κ = 0.3) to sending the secondary data.
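As noted in the Remarks above, the closed-form SOP maps directly onto the other secrecy measures. A minimal sketch of that mapping, under our reading of the elided formulas (u ∈ {s, p}):

```python
def ip(sop_fn):
    """Intercept probability: chance of non-positive secrecy capacity,
    i.e. the SOP evaluated at a zero threshold (treating the capacity
    difference without the [.]^+ clamp)."""
    return sop_fn(0.0)

def pscp(sop_fn):
    """Positive-secrecy-capacity probability: complement of ip()."""
    return 1.0 - sop_fn(0.0)

def stp(sop_fn, C0):
    """Secrecy throughput: target secrecy rate C0 weighted by the
    probability that the link actually sustains it."""
    return C0 * (1.0 - sop_fn(C0))
```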
Figure 4 illustrates the SOPs versus the imperfect artificial noise cancellation reflected by χ. This figure shows the increase of the SOPs with increasing χ, which is expected because of the increasing artificial noise residue at the legitimate receivers. Additionally, the security performance of primary communication is considerably improved (or deteriorated) by reducing channel estimation imperfection (i.e., increasing ρ) in the range of low (or high) artificial noise residue (e.g., SOP_p at ρ = 1.0 is smaller than SOP_p at ρ = 0.9 for χ < 0.675, but the reverse happens for χ > 0.675). Nonetheless, the security performance of secondary communication is always degraded by reducing channel estimation imperfection, irrespective of χ (e.g., SOP_s at ρ = 1.0 is larger than SOP_s at ρ = 0.9 for any χ). Furthermore, due to κ = 0.7 as in Fig. 3, primary communication is more secure than secondary communication at the same levels of channel estimation error and artificial noise cancellation, as expected.

Figure 5 plots the SOPs versus P_p/N_0. Owing to κ = 0.7 as in Fig. 3, primary communication is more secure than secondary communication at the same levels of channel estimation error and artificial noise cancellation, as expected. In addition, SOP_p is drastically reduced with better channel estimation and artificial noise cancellation, especially when the transmit power of PT increases, i.e., SOP_p at (ρ = 1.0, χ = 0.0) is considerably smaller than SOP_p at (ρ = 0.9, χ = 0.5). Nonetheless, the reversed trend is observed for secondary transmission, i.e., SOP_s at (ρ = 1.0, χ = 0.0) is larger than SOP_s at (ρ = 0.9, χ = 0.5). Furthermore, the security performance of primary communication trades off against that of secondary communication with P_p/N_0 (i.e., SOP_p decreases while SOP_s increases with P_p/N_0).

Figure 6 plots the SOPs versus θ. This figure demonstrates optimum values of θ, which minimize the SOP of primary/secondary communication. These optimum values balance the powers for sending the artificial noise and the legitimate (secondary and primary) data. Moreover, SOP_p is lower than SOP_s at the same levels of channel estimation error and artificial noise cancellation, which can be interpreted from the fact that κ = 0.7 allocates more power for ST to transmit the PT's information than the ST's own information. Furthermore, better artificial noise cancellation and channel estimation improve secondary/primary security capability in a certain region of θ (e.g., SOP_s (or SOP_p) at (ρ = 1.0, χ = 0.0) is smaller than SOP_s (or SOP_p) at (ρ = 0.9, χ = 0.5) when θ < 0.575 (or θ < 0.925)) but degrade that performance in another region (e.g., SOP_s (or SOP_p) at (ρ = 1.0, χ = 0.0) is larger than SOP_s (or SOP_p) at (ρ = 0.9, χ = 0.5) when θ > 0.575 (or θ > 0.925)).

Figure 7 plots the SOPs versus κ. The results illustrate that increasing κ improves the secrecy performance of primary communication (decreasing SOP_p) but degrades that of secondary communication (increasing SOP_s), showing the security compromise between primary and secondary communication. This is obvious because κ represents the percentage of the ST's transmit power allotted to the primary data, while 1 - κ represents the percentage allotted to the secondary data.
Therefore, increasing κ decreases SOP_p but increases SOP_s. Due to these conflicting trends, there exists a crossing point of SOP_p and SOP_s (Fig. 7), which marks the best security balance between primary and secondary communication. Moreover, the primary (or secondary) communication is in outage over a certain region of κ; for example, SOP_s = 1 for κ ≥ 0.7 and SOP_p = 1 for κ ≤ 0.2 when ρ = 1.0 and χ = 0.0, as shown in Fig. 7. Furthermore, better artificial noise cancellation and channel estimation improve secondary/primary security capability in a certain region of κ (e.g., SOP_s (or SOP_p) at (ρ = 1.0, χ = 0.0) is smaller than SOP_s (or SOP_p) at (ρ = 0.9, χ = 0.5) when κ < 0.52 (or κ > 0.48)) but degrade that performance in another region (e.g., SOP_s (or SOP_p) at (ρ = 1.0, χ = 0.0) is larger than SOP_s (or SOP_p) at (ρ = 0.9, χ = 0.5) when 0.52 < κ < 0.7 (or 0.2 < κ < 0.48)).

Figure 8 plots the SOPs versus C_t, exposing that increasing C_t enhances the secrecy capability of secondary communication but deteriorates that of primary communication. This is because the increase in C_t (equivalently, the increase in the transmission rate demanded by PT) reduces the probability of successfully decoding the PT's information at ST, ultimately limiting the PT's information relayed to PR and increasing SOP_p. While the PT's information is rarely relayed to PR by ST, the information of ST has more chances to be transmitted with higher transmit power, ultimately reducing SOP_s. These two contradictory trends of primary and secondary security capabilities with respect to C_t facilitate balancing these security capabilities by setting a reasonable required primary transmission rate; for instance, SOP_s = SOP_p at C_t = 0.79 bps/Hz for (ρ = 0.9, χ = 0.5) and at C_t = 1.85 bps/Hz for (ρ = 1.0, χ = 0.0). Moreover, better artificial noise cancellation and channel estimation improve primary data security but degrade secondary data security, i.e., SOP_p (or SOP_s) at (ρ = 1.0, χ = 0.0) is smaller (or larger) than SOP_p (or SOP_s) at (ρ = 0.9, χ = 0.5).

Figure 9 plots the SOPs versus C_0. This figure exposes that increasing C_0 degrades the security capability of primary/secondary communication until a complete outage, as expected. Interestingly, the secrecy performance of secondary communication may be superior or inferior to that of primary communication over a certain region of C_0 (e.g., SOP_s < SOP_p for C_0 ≤ 0.016 bps/Hz but SOP_s > SOP_p for C_0 > 0.016 bps/Hz when ρ = 1.0 and χ = 0.0). Moreover, better artificial noise cancellation and channel estimation also improve secondary/primary security capability in a certain region of C_0; for instance, SOP_p (or SOP_s) at (ρ = 1.0, χ = 0.0) is smaller than SOP_p (or SOP_s) at (ρ = 0.9, χ = 0.5) when C_0 is smaller than 0.232 (or 0.049) bps/Hz.

Figure 10 demonstrates the SOPs versus α, exposing that the security of secondary/primary communication is optimized by a relevant selection of α. The reason behind the optimum value of α that minimizes the SOPs is as follows. An increase in α allows ST to harvest more energy from PT and to recover the PT's information successfully with a higher probability in Stage I, thus probably enhancing security performance. Nonetheless, this increment degrades security capability in Stage II due to the decrease in secrecy capacity, which is proportional to 1 - α. Accordingly, α can be controlled to balance the gains in the two stages.
Moreover, the security performance of primary/secondary communication at the optimal value of α is improved with better channel estimation and artificial noise cancellation, i.e., SOP_p (or SOP_s) at the optimal α for (ρ = 1.0, χ = 0.0) is smaller than SOP_p (or SOP_s) at the optimal α for (ρ = 0.9, χ = 0.5). Nonetheless, the optimum secondary data security is inferior to the optimum primary data security under perfect channel estimation and artificial noise cancellation (i.e., SOP_p < SOP_s at the optimal α for (ρ = 1.0, χ = 0.0)), but artificial noise cancellation-and-channel estimation imperfection reverses this tendency, where the best security of primary communication becomes inferior to that of secondary communication (i.e., SOP_p > SOP_s at the optimal α for (ρ = 0.9, χ = 0.5)).

Figure 11 plots the SOPs versus λ, exposing that the security capability of secondary communication is almost constant and only improves for high λ (e.g., λ ≥ 0.95). The reason is that a large λ allows ST to scavenge more energy from PT and reduces the signal power at the ST's decoder (i.e., reduces the possibility of correctly restoring the PT's information); thus, the power for transmitting the ST's information is higher in Stage II, eventually lowering SOP_s. Nevertheless, λ can be selected appropriately to optimize the secrecy performance of primary communication: the optimal value of λ for the smallest SOP_p balances the possibility of restoring the primary data at ST against the scavenged energy. Moreover, primary communication is more secure than secondary communication due to κ = 0.7, similarly to the previous figures. Furthermore, better artificial noise cancellation and channel estimation enhance the security capability of primary communication but deteriorate that of secondary communication.

The power allocation factor for the artificial noise and the secondary signal when ST decodes the PT's signal incorrectly is specified by τ. Therefore, to recognize the effect of τ plainly, we should investigate the case where ST unsuccessfully decodes the PT's signal. This case can be set up by selecting a large value of C_t. Figure 12 demonstrates the SOPs versus τ for C_t = 5 bps/Hz. This figure exposes that primary communication is in outage because the large C_t causes ST to fail in decoding the PT's information; hence, PR does not receive it for decoding. Moreover, there exists an optimum value of τ that maximizes the secrecy performance of secondary communication; this optimum τ balances the power allocation between the ST's information and the artificial noise. Furthermore, better artificial noise cancellation and channel estimation enhance the secondary data security capability, i.e., SOP_s at (ρ = 1.0, χ = 0.0) is smaller than SOP_s at (ρ = 0.9, χ = 0.5).

Conclusion
This paper implemented the overlay mechanism in cognitive radio networks, where the secondary transmitter assists the data transmission of the primary transmitter as well as transmitting its own private data. The secondary transmitter is capable of harvesting radio frequency energy and generating artificial noise to self-power its operation and secure primary/secondary communication against eavesdroppers.
The secrecy capability of primary/secondary communication is measured in terms of the primary/secondary secrecy outage probability under the uncertainties of artificial noise cancellation-and-channel estimation imperfection, and is numerically rated by the proposed precise closed-form formulas. Various results are generated to validate these formulas as well as to shed insight into the security of artificial noise-aided energy harvesting overlay networks with respect to the main system parameters. Moreover, optimum system parameters can be found through exhaustive searches based on the recommended formulas, which usefully guides system design. Furthermore, the secrecy performance compromise between secondary and primary communication can be managed by controlling the system parameters appropriately.
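The conclusion's remark about exhaustive searches can be made concrete: with the closed-form SOP as a cheap objective, a full parameter sweep is only a few lines. A sketch, assuming SOP callables over (θ, κ, α) and a max-of-the-two balance criterion (the paper does not commit to a particular scoring rule):

```python
import numpy as np
from itertools import product

def exhaustive_search(sop_p, sop_s, theta_grid, kappa_grid, alpha_grid):
    """Sweep (theta, kappa, alpha) and keep the setting whose worse
    (larger) SOP is smallest, one way to read 'the best performance
    compromise between primary and secondary communication'."""
    best, best_score = None, np.inf
    for theta, kappa, alpha in product(theta_grid, kappa_grid, alpha_grid):
        score = max(sop_p(theta, kappa, alpha),
                    sop_s(theta, kappa, alpha))
        if score < best_score:
            best, best_score = (theta, kappa, alpha), score
    return best, best_score

# e.g. grids of np.linspace(0.05, 0.95, 19) for each factor; the SOP
# callables would wrap the closed-form expressions of Section 3.
```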
8,240.4
2021-02-25T00:00:00.000
[ "Computer Science" ]
Suppression of WDM four-wave mixing crosstalk in fibre optic parametric amplifier using Raman-assisted pumping

We perform an extensive numerical analysis of Raman-Assisted Fibre Optical Parametric Amplifiers (RA-FOPA) in the context of WDM QPSK signal amplification. A detailed comparison of the conventional FOPA and the RA-FOPA is reported and the important advantages offered by Raman pumping are clarified. We assess the impact of pump power ratios, channel count, and highly nonlinear fibre (HNLF) length on crosstalk levels at different amplifier gains. We show that for a fixed 200 m HNLF length, maximum crosstalk can be reduced by up to 7 dB when amplifying 10x58 Gb/s QPSK signals at 20 dB net-gain using a Raman pump of 37 dBm and a parametric pump of 28.5 dBm, in comparison with a standard single-pump FOPA using 33.4 dBm pump power. It is shown that a significant reduction in four-wave mixing crosstalk is also obtained by reducing the highly nonlinear fibre interaction length. The trend is shown to be generally valid for different net-gain conditions and channel grid sizes. Crosstalk levels are additionally shown to depend strongly on the Raman/parametric pump power ratio, with a reduction in crosstalk seen for increased Raman pump power contribution. ©2015 Optical Society of America

OCIS codes: (060.2320) Fiber optics amplifiers and oscillators; (190.4970) Parametric oscillators and amplifiers; (190.4380) Nonlinear optics, four-wave mixing.

References and links
1. D. J. Richardson, “Applied physics. Filling the light pipe,” Science 330(6002), 327–328 (2010).
2. R.-J. Essiambre, G. Kramer, P. J. Winzer, G. J. Foschini, and B. Goebel, “Capacity limits of optical fiber networks,” J. Lightwave Technol. 28(4), 662–701 (2010).
3. M. F. C. Stephens, I. D. Phillips, P. Rosa, P. Harper, and N. J. Doran, “Improved WDM performance of a fibre optical parametric amplifier using Raman-assisted pumping,” Opt. Express 23(2), 902–911 (2015).
4. X. Guo, X. Fu, and C. Shu, “Gain-saturated spectral characteristics in a Raman-assisted fiber optical parametric amplifier,” Opt. Lett. 39(12), 3658–3661 (2014).
5. C. Headley and G. P. Agrawal, Raman Amplification in Fiber Optical Communication Systems (Academic, 2005).
6. M. E. Marhic, Fiber Optical Parametric Amplifiers, Oscillators and Related Devices (Cambridge University, 2008).
7. T. Torounidis, P. A. Andrekson, and B. E. Olsson, “Fibre-optical parametric amplifier with 70-dB gain,” IEEE Photonics Technol. Lett. 18(10), 1194–1196 (2006).
8. M. E. Marhic, K. Y. K. Wong, and L. G. Kazovsky, “Wideband tuning of the gain spectra of one-pump fiber optical parametric amplifiers,” IEEE J. Sel. Top. Quantum Electron. 10(5), 1133–1141 (2004).
9. J. M. C. Boggio, A. Guimarães, F. A. Callegari, J. D. Marconi, and H. L. Fragnito, “Q penalties due to pump phase modulation and pump RIN in fiber optic parametric amplifiers with non-uniform dispersion,” Opt. Commun. 249(4-6), 451–472 (2005).
10. A. Szabo, B. J. Puttnam, D. Mazroa, A. Albuquerque, S. Shinada, and N. Wada, “Numerical comparison of WDM interchannel crosstalk in FOPA and PPLN-based PSAs,” IEEE Photonics Technol. Lett. 26(15), 1503–1506 (2014).
11. X. Guo, X. Fu, and C. Shu, “Gain saturation in a Raman-assisted fiber optical parametric amplifier,” Opt. Lett. 38(21), 4405–4408 (2013).
12. X. Guo and C. Shu, “Cross-gain modulation suppression in a Raman-assisted fiber optical parametric amplifier,” IEEE Photonics Technol. Lett. 26(13), 1360–1363 (2014).
13. M.-C. Ho, K. Uesaka, M. Marhic, Y. Akasaka, and L. G. Kazovsky, “200-nm-bandwidth fiber optical amplifier combining parametric and Raman gain,” J. Lightwave Technol. 19(7), 977–981 (2001).
14. S. Peiris, N. Madamopoulos, N. Antoniades, M. A. Ummy, M. Ali, and R. Dorsinville, “Optimization of gain bandwidth and gain ripple of a hybrid Raman/parametric amplifier for access network applications,” Appl. Opt. 51(32), 7834–7841 (2012).
15. S. H. Wang, L. Xu, P. K. A. Wai, and H. Y. Tam, “Optimization of Raman-assisted fiber optical parametric amplifier gain,” J. Lightwave Technol. 29(8), 1172–1181 (2011).
16. N. Antoniades, G. Ellinas, and I. Roudas, WDM Systems and Networks: Modelling, Simulation, Design and Engineering (Springer, 2012).
17. M. A. Ummy, M. F. Arend, L. Leng, N. Madamopoulos, and R. Dorsinville, “Extending the gain bandwidth of combined Raman-parametric fiber amplifiers using highly nonlinear fiber,” J. Lightwave Technol. 27(5), 583–589 (2009).
18. G. P. Agrawal, Nonlinear Fiber Optics (Academic, 2001).
19. M. Morshed, L. B. Du, and A. J. Lowery, “Mid-span spectral inversion for coherent optical OFDM systems: fundamental limits to performance,” J. Lightwave Technol. 31(1), 58–66 (2013).
20. A. J. Lowery, S. Wang, and M. Premaratne, “Calculation of power limit due to fiber nonlinearity in optical OFDM systems,” Opt. Express 15(20), 13282–13287 (2007).

Introduction
The need for higher-capacity optical communication systems is evident as worldwide demand for data continues to surge, with ever more data-hungry multimedia applications and E-services appearing [1]. A logical way to increase optical system capacity is via the development of new optical amplifiers which can provide gain at wavelengths beyond the current C/L bands catered for by the Erbium Doped Fibre Amplifier (EDFA) [2]. The Raman-Assisted Fibre Optic Parametric Amplifier (RA-FOPA) has recently been shown to be a promising approach towards achieving this [3,4]. The RA-FOPA combines useful properties of discrete Raman amplifiers (low crosstalk, gain bandwidth tuneability) with those of conventional FOPAs (high gain coefficient, gain bandwidth tuneability) to offer a tuneable gain region and potentially high discrete gain. However, the impact of four-wave mixing (FWM) crosstalk on the performance of the RA-FOPA has not been characterised extensively. Here we show that the RA-FOPA offers significantly reduced FWM crosstalk compared with the conventional FOPA over all conditions, whilst providing gain levels which would not be easily achievable using purely discrete Raman gain without encountering extreme problems of signal corruption and noise from double Rayleigh backscattering [5].
The conventional FOPA has been actively investigated in recent years and operates via a phase-matched degenerate four-wave mixing process between (typically) a single high-power forward-travelling pump and signal(s) in highly nonlinear fibre (HNLF) [6]. Peak gain as high as 70 dB has been demonstrated [7] and a gain bandwidth of 200 nm shown [8] after optimization of HNLF and pump parameters. However, FOPA performance has also been shown to depend strongly on the quality of the parametric pump [9] and to suffer from the generation of unwanted FWM crosstalk components when amplifying WDM signals [10]. This remains a major limitation to the prospect of using FOPAs in telecoms applications. In order to improve this aspect of FOPA performance, the hybrid RA-FOPA has been proposed, based on simultaneous Raman and parametric gain within a single length of HNLF. The RA-FOPA consists of a parametric pump, co-propagated with the signal(s), and a typically (although not exclusively) backwards-travelling Raman pump. The Raman pump provides direct signal amplification through Raman scattering and indirect signal gain through amplification of the parametric pump. This approach can potentially widen the amplification bandwidth, increase overall gain, and improve performance in comparison with the conventional FOPA.

The RA-FOPA has been studied theoretically, experimentally, and numerically in recent years. In [11], the gain saturation characteristics of the RA-FOPA were investigated: different saturation characteristics were experimentally observed and analysed using a single continuous wave as a signal probe to measure the gain. In [12], a reduction of cross-gain modulation in a RA-FOPA was demonstrated using two 10 Gb/s RZ-OOK signals. By optimising the HNLF properties together with the pump powers and frequency tuning, gain in excess of 10 dB over a 208-nm bandwidth has been demonstrated in a fiber optical amplifier combining parametric and Raman gain [13]. In [14], the authors described a mathematical model and presented simulation results for the optimization of a RA-FOPA exhibiting a bandwidth of 170 nm and low ripple. The relationship between the overall gain and different combinations of Raman and parametric pump powers has been investigated both theoretically and experimentally using a single-channel signal [15].

In this paper, we extend our previous work [3] by numerically characterising a RA-FOPA using the key WDM metrics of signal gain and FWM crosstalk power at both the maximum and minimum wavelengths of the amplified spectrum (encompassing both the maximum and minimum crosstalk products). We vary the RA-FOPA net-gain level, Raman/parametric pump powers, pump ratios, number of WDM channels, and length of HNLF in order to observe the impact on the crosstalk magnitude. We demonstrate that the RA-FOPA crosstalk is minimised by employing a combination of a short HNLF length with high Raman pump power; the required net-gain is then achieved via adjustment of the parametric pump power. In practice, the Raman pump power would most likely just be increased to a suitable level before double Rayleigh effects start to dominate for that fibre.
Mathematical model and methodology
The RA-FOPA was simulated using the arrangement shown schematically in Fig. 1. For verification and comparison with our previous work [3], the input signals consisted of ten 100 GHz-spaced NRZ-QPSK modulated channels ranging from 193.5 to 194.4 THz, multiplexed together using a 70 GHz-wide arrayed waveguide grating (WDM1). To examine the impact of channel spacing, subsequent simulations used a doubled channel count of twenty 50 GHz-spaced signals occupying the same overall bandwidth. The QPSK modulation data was derived from two decorrelated 2^12 pseudo-random binary bit sequences at a symbol rate of 29 Gbaud. A 100 kHz-linewidth parametric pump laser was phase modulated using a 3 Gb/s electrical PRBS pattern to provide mitigation against stimulated Brillouin scattering (SBS) and was optically amplified; the SBS process itself was not numerically simulated in this work. The power and wavelength of the pump were variable parameters used to achieve the required net-gain for all signals in either a) a conventional FOPA (C-FOPA) arrangement (no Raman) or b) a RA-FOPA arrangement. The amplified pump was bandpass filtered (BPF1) to remove amplified spontaneous emission before combination with the signals using WDM2. The combined pump/signals were transmitted through highly nonlinear fibre (HNLF) of length 0.2 km or 1 km to assess length dependence. The per-signal input power to the HNLF was fixed at a relatively high level of -10 dBm in order to generate significant FWM crosstalk under conventional FOPA operation. For RA-FOPA operation, the HNLF was additionally backward-pumped using a continuous-wave (CW) Raman pump at 1455 nm, with its power P_R an additional variable. All pumps and signals were simulated as single polarisation and perfectly aligned.

The mathematical modelling of the RA-FOPA consisted of two stages: a bidirectional power analysis and a field analysis [16]. In the first stage, the interaction between the signals, the co-propagating parametric pump, and the counter-propagating Raman pump was determined using the coupled balance equations
dP_S/dz = g_S P_R P_S - α_S P_S,  dP_R/dz = g_R P_R P_S + α_R P_R, (1)
where P_R is the time-averaged power of the Raman pump, P_S is the total average power of the WDM signals and parametric pump, g_S and g_R are the Raman coefficients, and α_S and α_R are the attenuation coefficients of the parametric pump and Raman pump, respectively. We assume here that the Raman gain is constant for the parametric pump and the WDM signals. This is an acceptable approximation because the bandwidth of the pump, WDM signals, and idlers is much less than the Raman gain bandwidth. The approximate solution of Eq. (1) was obtained by an iteration process using the fourth-order Runge-Kutta method [17].

In the second stage, the signal field analysis was performed by substituting the resultant power distributions along the fibre length into the nonlinear Schrödinger equation (NLSE) (2), where A_S is the sum of the complex field envelopes, f_R is the fractional contribution of the delayed Raman response, and β_k and γ_S are the dispersion and Kerr coefficients, respectively. The power distributions P_R obtained from Eq. (1) were substituted along the fibre length into Eq. (2) to take the Raman gain into account. The NLSE was solved using the split-step Fourier method [18]. The HNLF parameters were as follows [3]: fibre loss 0.8 dB/km, zero-dispersion wavelength 1564.3 nm, dispersion slope 0.084 ps·nm⁻²·km⁻¹, and nonlinear coefficient 8.2 (W·km)⁻¹; the values of the other coefficients were likewise taken from [3].
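The first modelling stage can be sketched numerically as follows. The snippet integrates the coupled power equations in the form reconstructed above, meeting the counter-propagating boundary condition P_R(L) by a simple shooting iteration; the update rule and exact signs are our assumptions rather than the paper's implementation.

```python
import numpy as np

def rk4_step(f, z, y, h):
    """One classical fourth-order Runge-Kutta step."""
    k1 = f(z, y)
    k2 = f(z + h / 2, y + h * k1 / 2)
    k3 = f(z + h / 2, y + h * k2 / 2)
    k4 = f(z + h, y + h * k3)
    return y + h * (k1 + 2 * k2 + 2 * k3 + k4) / 6

def power_profiles(P_S0, P_RL, L, nz, gS, gR, aS, aR):
    """Average-power stage: forward pump+signal power P_S and backward
    Raman pump power P_R along z in [0, L] (km); gS, gR in 1/(W km),
    aS, aR in 1/km.  The Raman pump is launched at z = L, so in the +z
    frame its power grows with z; the boundary value P_R(L) is matched
    by a shooting iteration on the guess for P_R(0)."""
    z = np.linspace(0.0, L, nz)
    h = z[1] - z[0]

    def rhs(_, y):
        PS, PR = y
        return np.array([gS * PR * PS - aS * PS,   # Raman gain minus loss
                         gR * PR * PS + aR * PR])  # backward pump in +z frame

    PR0 = P_RL * np.exp(-aR * L)                   # loss-only initial guess
    for _ in range(60):
        y, prof = np.array([P_S0, PR0]), []
        for zi in z:
            prof.append(y.copy())
            if zi < z[-1]:
                y = rk4_step(rhs, zi, y, h)
        if abs(prof[-1][1] - P_RL) < 1e-9 * P_RL:  # boundary met at z = L
            break
        PR0 *= P_RL / prof[-1][1]                  # rescale and re-shoot
    return z, np.array(prof)                       # prof[:,0]=P_S, prof[:,1]=P_R
```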
Experimental data taken from [3] and the simulated output spectra of the C-FOPA and RA-FOPA are shown in Fig. 2 for the representative conditions of 20 dB net-gain and -12 dBm input power per signal. The signal at 194.4 THz was removed to illustrate the crosstalk level present at this frequency. It can be seen that there is close agreement of signal power, spectral flatness, and crosstalk distribution for both schemes, providing confidence in the simulation predictions. In the RA-FOPA case, the rate of change of signal gain increases along the length of the HNLF; however, in the equivalent C-FOPA, the rate of change of signal gain can be seen to drop. The RA-FOPA behaviour is a direct result of the counter-propagated RP and the consequent monotonic amplification of the PP. This leads to greater signal gain occurring at the output end of the fibre, suppressing unwanted nonlinear interactions between the waves involved in the parametric process along the fibre. It should be noted that the C-FOPA signal gain saturates under these conditions (in reality a shorter length of HNLF would be used), but substantial margin remains for the RA-FOPA because of the parametric pump power growth along the HNLF due to the Raman amplification. In other words, Raman pumping can prevent parametric pump depletion, providing higher small-signal gain compared with the C-FOPA.

Assuming uniform spacing between the channels in the WDM multiplex, there are a number of FWM products generated at any particular channel frequency from various combinations of channels interacting along the HNLF. The total power of the FWM waves generated at frequency f_m can be presented as a sum over all contributing products (3), where the frequencies involved in the FWM processes satisfy the condition f_i + f_j - f_k = f_m. The strength of each component is weighted according to the mixing efficiency. Neglecting phase mismatch due to the low dispersion of the HNLF, we assume an equal contribution of each component to the total power. There is no loss of generality in supposing that the HNLF is a lossless medium. In this case, the propagation of the signal-signal FWM waves is governed by a simple equation for the field growth along the HNLF, derived by Morshed et al. [19]; the output power of a single signal-signal FWM component can be found by integrating (4) over the HNLF length and squaring, which gives
P_FWM = γ_S² (∫₀ᴸ √(P_i(z) P_j(z) P_k(z)) dz)². (5)
Equation (5) clarifies that the output power of a single FWM component depends on both the HNLF length and the signal power profile along the fibre. Hence, there are two key parameters which have a significant impact on the FOPA crosstalk performance: by decreasing the HNLF length and maintaining low signal power along the HNLF for as long as possible before the required gain is achieved, the FWM crosstalk level can be significantly reduced.

Table 1 shows the simulated and theoretical (based on Eq. (5) and including only signal-signal FWM components) estimations of the FWM crosstalk reduction for different configurations of the RA-FOPA in comparison with the conventional C-FOPA. The integral in Eq. (5) was solved numerically using signal power profiles obtained from the simulations. The net-gain here is 20 dB, the HNLF length is 0.2 km, and the frequency of the channel under consideration is 194.4 THz. It can be seen that there is good agreement between the two estimations for this set of parameters. However, the analytical expression is based on a large number of assumptions and, for simplicity, we neglect the pump-signal FWM products, which can make a considerable contribution to the overall crosstalk. Hence, an estimation based on Eq. (5) should always be compared with simulation results.
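Equation (5) reduces the crosstalk estimate to a one-dimensional integral of the signal power profile, which pairs naturally with the power-profile sketch above. A minimal numerical reading, assuming perfect phase matching, equal channel powers, and the γ² prefactor form:

```python
import numpy as np

def fwm_component_power(z, P_sig, gamma):
    """Output power of one signal-signal FWM product in the
    phase-matched, lossless approximation of Eq. (5):
    P_FWM = gamma^2 * ( int_0^L P_sig(z)**1.5 dz )**2
    for equal-power channels; z in km, P_sig in W, gamma in 1/(W km)."""
    f = np.asarray(P_sig, dtype=float) ** 1.5
    z = np.asarray(z, dtype=float)
    integral = np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(z))  # trapezoid rule
    return gamma**2 * integral**2

# Usage with the profile sketch above (per-channel power taken here as
# the total forward power split over 10 channels, an assumption):
# z, prof = power_profiles(...)
# print(fwm_component_power(z, prof[:, 0] / 10, gamma=8.2))
```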
Crosstalk vs distance in RA-FOPA
A key conclusion of Section 3 is that, under a fixed gain condition, the RA-FOPA can be operated with a lower average (versus length) parametric pump power than the equivalent C-FOPA, and this has been experimentally shown to result in reduced FWM crosstalk [3]. To characterize the behaviour, the signal-to-crosstalk (S-to-X) power ratio was measured at different signal wavelengths across the band. This was calculated by running two simulations per measurement, both with and without the channel under test being present. When not present, the input power of the remaining nine channels was increased proportionally (0.45 dB) to keep the total signal power into the HNLF constant. By doing this, the small crosstalk reduction obtained by removing the signal under test could be partially recovered (although not fully, due to the different frequency distribution between the nine- and ten-signal cases). The signal power and an estimated FWM crosstalk power at the exact signal frequency under test could then be measured and a signal-to-crosstalk ratio calculated consistently over all simulated conditions.

Figure 4 shows the dynamics of the signal gain and the signal-to-crosstalk ratio at 194.4 THz along the HNLF for 20 dB net-gain and for two different lengths of HNLF. The signal at 194.4 THz is chosen here for illustration as it has previously been shown to possess the highest crosstalk of the ten amplified signals following C-FOPA amplification [3]. This is because the high-frequency region of the amplified signal spectrum generates FWM crosstalk not only from signal-signal interactions, but also from second-order interactions between the original signals and newly generated signal-pump-signal waves surrounding the pump [19]. In addition, the particular dispersion, and hence phase-matching conditions, of the HNLF influence the crosstalk distribution. For the pump/signal frequency combination described in this work, the crosstalk within the signal band is maximum at 194.4 THz, although this may not be a general rule for WDM amplification in C-FOPAs using different sets of signal bands and/or pump frequencies and/or HNLF properties. The pump powers were optimised as follows: 0.2 km C-FOPA: 33.4 dBm PP; 0.2 km RA-FOPA: 28.5 dBm PP & 37 dBm RP; 1 km C-FOPA: 29.5 dBm PP; 1 km RA-FOPA: 24.5 dBm PP & 32 dBm RP. It can be seen that there is an inflection point in the S-to-X ratio profile. This occurs where the crosstalk power begins to dominate the ASE floor. Note that in the case of the C-FOPA, this point always occurs after a shorter distance than in the equivalent RA-FOPA. This results in a 7 dB and 10 dB difference in the S-to-X ratio between the C-FOPA and RA-FOPA for the 0.2 km and 1 km lengths of HNLF, respectively. The absolute crosstalk power is also seen to be significantly lower (higher S-to-X ratio) in the 0.2 km RA-FOPA than in the 1 km RA-FOPA, by approximately 10 dB.
Figure 5 shows the evolution dynamics of the S-to-X ratio at 194.4 THz along the HNLF for 20 dB net-gain and different Raman/parametric pump power ratios. It can be seen that the crosstalk level decreases with increased Raman pump power for both the 0.2 km and 1 km RA-FOPA, due to the lower required power of the parametric pump. This results in a 7 dB and 15 dB difference in the S-to-X ratio between the C-FOPA and RA-FOPA when using the maximum simulated Raman pump power for the 0.2 km and 1 km lengths of HNLF, respectively. It should again be noted that significant suppression of the absolute FWM crosstalk power is obtained by reducing the highly nonlinear fibre interaction length for the same net-gain if no other parameters are changed.

Figure 6 shows the resulting signal gain and S-to-X ratio profiles for 194.4 THz along the HNLF. As might be expected from standard theory, the crosstalk level increases with signal gain, the FWM product power being proportional to the powers of the interacting signals. In all cases, for minimised crosstalk, the RP power has been maximised to an experimentally achievable 37 dBm, with any extra gain required being provided by adjusting the level of PP power. Employing even shorter HNLF lengths may offer scope for further reduced crosstalk, but the reduced interaction length would require compensation with greater PP power. The overall result is therefore not easily predictable without further simulations and is, moreover, likely to be experimentally challenging due to SBS considerations and the availability of high-power-tolerant filters. It can be seen that the level of FWM products at the high-frequency side of the spectrum is consistently higher than at the low-frequency side. This is a result of unequal satisfaction of the phase-matching conditions between the different WDM signals involved in the FWM processes, combined with the impact of pump-signal interactions and second-order mixing. In the C-FOPA case, this results in a 5 dB, 9 dB, and 14 dB spread of the S-to-X ratio between the 193.5 THz and 194.4 THz WDM signals for the 15 dB, 20 dB, and 25 dB signal gains, respectively. For the RA-FOPA simulated with the stated pump power levels, the crosstalk spread can be seen to be reduced at each gain level as the contribution from the Raman pump is increased, resulting in complete suppression/equalisation in the 15 dB case. For the higher gains, the reduction in spread is lessened, even at maximum Raman contribution. At 20 dB gain, the spread is reduced to ~3 dB (from 9 dB for the C-FOPA), and at 25 dB it is reduced to ~7.5 dB (from 14 dB for the C-FOPA).
Conclusion We have characterised WDM QPSK signal amplification and FWM crosstalk generation for the first time in both conventional C-FOPAs and RA-FOPAs, achieving close agreement between simulation and experimental data. The RA-FOPA showed reduced crosstalk over the C-FOPA for fixed gain, HNLF length and channel count conditions. A maximum reduction of 10 dB was seen within the explored parameter space, and this is likely to be more significant at higher channel counts. Furthermore, the crosstalk dependence on HNLF length has been explored in the RA-FOPA. A significant 10 dB reduction in crosstalk was seen when reducing the HNLF length from 1 km to 0.2 km for 20 dB net-gain amplification of 10×58 Gb/s, 100 GHz-spaced signals. Similar RA-FOPA improvements were seen over the C-FOPA for three different net-gain conditions (15, 20 and 25 dB) and for two different channel grid spacings (100 and 50 GHz). In terms of the optimum pump power ratio for the RA-FOPA, the lowest crosstalk across the spectrum was achieved using the highest Raman power available; in practice, however, it is expected that for high gains issues of double Rayleigh scattering would cause signal corruption before these FWM-crosstalk-optimum Raman powers are reached. This has not been the focus of the research presented here and will be addressed in a future paper. In summary, therefore, potential has been demonstrated for the RA-FOPA to be a viable future WDM optical amplifier in new regions of the transmission spectrum.

Fig. 2. Output spectra of the RA-FOPA and C-FOPA averaged over 10 runs and plotted with 12.5 GHz resolution bandwidth for -12 dBm per-signal input power and 20 dB average net-gain.

3. WDM RA-FOPA signal evolution characteristics By employing both parametric and Raman gain in the same HNLF, the RA-FOPA offers useful advantages over an equivalent hybrid FOPA/Raman amplifier employing the same individual pump powers in separate isolated fibres of the same total length. This is because the peak of the Raman gain in the RA-FOPA can be tuned to coincide with, and thus provide gain to, both the WDM signals and the parametric pump (PP). The latter is important because it provides additional indirect Raman amplification through the parametric process. To understand and illustrate this phenomenon, three 20 dB net-gain scenarios were compared for the same 1 km HNLF: a) C-FOPA with 29.5 dBm PP; b) RA-FOPA1 with 27.5 dBm PP and 28 dBm RP; and c) RA-FOPA2 with 24.4 dBm PP and 32 dBm RP. Fig. 3 shows the evolution of the 194.4 THz signal and PP power along the length of HNLF whilst all ten WDM signals are amplified. The important difference in signal profile between the C-FOPA and RA-FOPA can clearly be seen and shows similar characteristics to single-channel amplification [4].

Fig. 4. Signal gain and S-to-X ratio along the HNLF for a C-FOPA and RA-FOPA with different lengths of HNLF. Per-signal input power is -10 dBm and average net-gain is 20 dB.

Fig. 5. S-to-X ratio along the HNLF for a C-FOPA and different configurations of RA-FOPA with different lengths of HNLF. Per-signal input power is -10 dBm and average net-gain is 20 dB.

Fig. 6. Signal gain and S-to-X ratio along the HNLF for the RA-FOPA with different average net-gain.
Figure 7 shows how the S-to-X ratio depends on signal gain level for the 193.5 THz and 194.4 THz WDM signals. Solid symbols correspond to the C-FOPA (15 dB gain: 32.4 dBm PP; 20 dB gain: 33.4 dBm PP; 25 dB gain: 34.4 dBm PP), whilst open symbols correspond to the RA-FOPA with the Raman/parametric pump powers (RP/PP, in dBm) adjusted as follows, in order of reduced spread, i.e., higher Raman contribution: a) 15 dB gain: 28/31.8, 29/31.6, 30/31.4, 31/31.1, 32/30.7, 33/30.2, 34/29.6, 35/28.6, 36/27.2, 37/24.8; b) 20 dB gain: 26/33.1, 28/32.9, 30/32.6, 32/32.1, 34/31.2, 36/29.7, 37/28.5; c) 25 dB gain: 26/34.1, 28/33.9, 30/33.7, 32/33.2, 34/32.6, 36/31.5, 37/30.6. It can be seen that the level of FWM product at the high-frequency side of the spectrum is consistently higher than at the low-frequency side. This is a result of an unequal satisfaction of the phase-matching conditions between the different WDM signals involved in the FWM processes, combined with the impact of pump-signal interactions and second-order mixing. In the C-FOPA case, this results in a 5 dB, 9 dB and 14 dB spread of the S-to-X ratio between the 193.5 THz and 194.4 THz WDM signals for the 15 dB, 20 dB and 25 dB signal gains respectively. For the RA-FOPA simulated with the stated pump power levels, the crosstalk spread can be seen to be reduced at each gain level as the contribution from the Raman pump is increased, resulting in complete suppression/equalisation in the 15 dB case. For the higher gains, the reduction in spread is lessened, even at maximum Raman contribution. At 20 dB gain, the spread is reduced to ~3 dB (from 9 dB in the C-FOPA), and at 25 dB it is reduced to ~7.5 dB (from 14 dB in the C-FOPA).

Fig. 7. S-to-X ratio for a C-FOPA and RA-FOPA at different average net-gains. Per-signal input power is -10 dBm and HNLF length is 0.2 km.
5,903.8
2015-10-19T00:00:00.000
[ "Engineering", "Physics" ]
Exact Test of Independence Using Mutual Information Using a recently discovered method for producing random symbol sequences with prescribed transition counts, we present an exact null hypothesis significance test (NHST) for mutual information between two random variables, the null hypothesis being that the mutual information is zero (i.e., independence). The exact tests reported in the literature assume that data samples for each variable are sequentially independent and identically distributed (iid). In general, time series data have dependencies (Markov structure) that violate this condition. The algorithm given in this paper is the first exact significance test of mutual information that takes into account the Markov structure. When the Markov order is not known or indefinite, an exact test is used to determine an effective Markov order.

Introduction Mutual information is an information theoretic measure of dependency between two random variables [1]. Unlike correlation, which characterizes linear dependence, mutual information is completely general. The mutual information (in bits) of two discrete random variables X and Y is defined as

I(X; Y) = Σ_{x,y} p(x, y) log₂ [ p(x, y) / (p(x) p(y)) ].   (1)

Zero dependence occurs if and only if p(x, y) = p(x)p(y); otherwise I(X; Y) is a positive quantity.

In this article we are interested in the case that the marginal and joint probabilities are not known beforehand, but are approximated from data, so that estimates of I(X; Y) will not be exactly zero when X and Y are independent. Thus, in order to make a decision as to whether I(X; Y) > 0, a significance test is necessary. A significance test allows an investigator to specify the stringency for rejection of the null hypothesis I(X; Y) = 0.

The problem of determining significance of dependency can be formulated as a chi-squared test [2] or as an exact test (such as Fisher's test [3] or permutation tests [4]). The great advantage of exact tests is that they are valid for small datasets; chi-squared tests are only valid in the asymptotic limit of infinite data. Unfortunately, the exact tests reported in the literature assume that consecutive data samples are drawn independently from identical distributions (iid). In general, time series data have dependencies (Markov structure) that violate this condition. In this paper we give the first exact significance test of mutual information that takes into account Markov structure.

Testing the Significance of the Null Hypothesis I(X; Y) = 0 To introduce the need for a significance test, suppose the random variables X and Y are the values obtained from the rolls of a pair of six-sided dice, each die independent from the other and equally likely to land on any of its six sides. In the limit of infinite data, the mutual information between X and Y computed using Equation (1) is zero. However, what should we expect for a small number of rolls, say, 75? In Figure 1 we plot the result of a numerical simulation of 10,000 trials of 75 rolls each; the horizontal axis is I(X; Y) and the vertical axis is the probability distribution. The marginals p(x) and p(y) in Equation (1) are estimated by counting the number of occurrences of each of the six symbols {1, 2, 3, 4, 5, 6} for each die and dividing by the total size of the dataset (75). Similarly, the joint probability p(x, y) is obtained by counting the number of occurrences of each of the possible die value pairs, symbols {(1, 1), (1, 2), . .
., (6, 6)}, divided by the total dataset size. Bias correction is typically employed in practice [7]; however, the issue of estimation accuracy is separate from significance testing. The procedure we give here for significance testing is applicable for any choice of bias correction. The most probable value of mutual information is 0.3 bits/roll, which (if we did not know better) might seem significant considering that the total uncertainty in one die roll is log₂ 6 ≈ 2.585 bits.

The true significance of I, however, can only be determined by knowing the distribution of I(X; Y) for independent dice (solid line, Figure 1). Knowing this distribution, we would not regard a measurement of I = 0.3 as significant, since values of I around 0.3 are, in fact, the most probable to occur when X and Y are independent.

The logic we are describing is that of a null hypothesis significance test (NHST) for mutual information, the null hypothesis being that the mutual information is zero. The probability of obtaining the measured I(X; Y) value, or one larger, is the p-value, and the p-value at which we reject the null hypothesis is the significance level, typically denoted by α. A significance level of α = 0.05 means that we reject the null hypothesis if the p-value is less than or equal to 0.05. For the dice example, rejection would occur at I ≥ 0.42 if α = 0.05.

To be clear, the p-value is the probability, assuming the null hypothesis, of the mutual information attaining its observed value or larger. (It is not the probability of the null hypothesis being correct.) While a very small p-value leads one to reject the null hypothesis of independence, a large p-value only implies that the data is consistent with the null hypothesis, not that the null hypothesis should be accepted. In addition, the significance threshold for rejection is entirely up to the investigator to decide.

Generating the Mutual Information Distribution from Surrogates To perform an NHST we need to know the distribution of the test statistic given the null hypothesis. In general, this distribution is not known a priori, but in some cases it can be estimated from the data. Fortunately, the mutual information NHST lends itself to resampling methods [8,9]. Resampling is a procedure that creates multiple datasets (referred to hereafter as surrogates) from the original data. The null hypothesis distribution is extracted from the surrogate data. For an exact NHST, surrogates need to meet two conditions: (1) the null hypothesis must be true for the surrogates; and (2) in every other way they should be like the original data.

In the case of dice, these conditions can be met exactly by randomly permuting the elements of X and Y. Permutation destroys any dependence that may have existed between the datasets but preserves symbol frequencies. Referring to Figure 1, the solid line is the actual distribution of I estimated from 10,000 trials of 75 data points each. The open circles are the null hypothesis distribution estimated from 10,000 permutation surrogates of a single time series of 75 data points. We have chosen a data length for which the permutation surrogates recreate the actual distribution well; in contrast, if the original dataset is very small or atypical, the null hypothesis distribution obtained using surrogates will depart from the true distribution.
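As a concrete illustration of the plug-in estimate of Equation (1) and the permutation NHST just described, the following Python sketch computes I(X; Y) from symbol counts and approximates the p-value from permutation surrogates. It is an illustrative reimplementation under the stated definitions, not the authors' code; the small-sample p-value convention in the last line is one common choice, not taken from the paper.

```python
import numpy as np
from collections import Counter

def mutual_information_bits(x, y):
    """Plug-in estimate of Equation (1) from two equal-length symbol sequences."""
    n = len(x)
    pxy = Counter(zip(x, y))
    px, py = Counter(x), Counter(y)
    return sum(c / n * np.log2((c / n) / ((px[a] / n) * (py[b] / n)))
               for (a, b), c in pxy.items())

def permutation_nhst(x, y, n_surrogates=10_000, seed=0):
    """Approximate exact p-value for H0: I(X;Y) = 0, valid for iid (order-0) data."""
    rng = np.random.default_rng(seed)
    observed = mutual_information_bits(x, y)
    x = np.asarray(x)
    count = sum(mutual_information_bits(rng.permutation(x), y) >= observed
                for _ in range(n_surrogates))
    return observed, (count + 1) / (n_surrogates + 1)  # small-sample-safe p-value

# Example: 75 rolls of two independent fair dice, as in Figure 1.
rng = np.random.default_rng(1)
x, y = rng.integers(1, 7, 75), rng.integers(1, 7, 75)
I_obs, p = permutation_nhst(x, y, n_surrogates=2000)
```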
Also shown in Figure 1 is the significance level, α = 0.05 (dashed line), occurring at approximately I = 0.42. Measured I values that are equal to or greater than 0.42 (shaded region) require rejection of the null hypothesis that the dice are independent. Notice that α = 0.05 implies a five per cent chance of incorrectly rejecting the null hypothesis, known as a Type I error. Lowering the significance level reduces the Type I error rate, but also reduces the sensitivity of the test. In any case, an ideal NHST will have a Type I error rate equal to the significance level. For the independent die scenario, we repeated the experiment 10,000 times and found 503 rejections of the null hypothesis, compared with the expected number of rejections 10,000 × 0.05 = 500.

An equivalent way to compute exact p-values is to create a set of contingency tables and use Fisher's exact test [3,6]. The elements c_ij of the contingency table are the number of times (x_n, y_n) = (i, j) are observed in the data. Here the subscript n is used to indicate x, y pairs that occur at the same time. The table elements, along with the row and column sums, define the joint and marginal probabilities, respectively, and therefore the mutual information I. The probability of obtaining the observed contingency table is equal to the number of possible sequences having the observed contingency table divided by the number of possible sequences having the observed row and column sums. For iid data, the probability of obtaining a particular table is given by the hypergeometric distribution. Finally, the p-value is obtained by summing up the probabilities of all tables with I values equal to or greater than the observed I value.

In this context, counting tables is equivalent to counting sequences with fixed marginals, neither of which is remotely practical except for very small data sets. For the case of 75 rolls of a fair 6-sided die, the number of permutation surrogates is of the order of 10^53. In contrast to Fisher's exact test, the permutation test requires only a uniform sampling from the set of sequences with fixed marginals, rather than a full enumeration. The exact p-value is approximated as the fraction of samples that have mutual information equal to or greater than the observed I. In the limit of infinite surrogate samples the approximated p-value equals the exact p-value. In practice, 10,000 surrogates are sufficient to perform the NHST when α = 0.05.

Accounting for Markov Structure Permutation surrogates preserve single symbol frequencies but not multiple symbol (or word) frequencies. For the dice roll distributions, which are iid, this approach is perfectly adequate, but in general we must take into account that future states may depend on present and past states. For example, let us endow a pair of dice with a Markov property, i.e., the result of the next roll for each die depends probabilistically on its present roll. Suppose each die uses the 6 × 6 transition probability matrix with T_ii = 0.5 and T_{i,i+1} = T_{i,i−1} = 0.25, the indices wrapping around between faces 6 and 1:

T = ( 0.50 0.25 0    0    0    0.25 )
    ( 0.25 0.50 0.25 0    0    0    )
    ( 0    0.25 0.50 0.25 0    0    )
    ( 0    0    0.25 0.50 0.25 0    )
    ( 0    0    0    0.25 0.50 0.25 )
    ( 0.25 0    0    0    0.25 0.50 ),

where T_ij is the transition probability of going from state i to state j. Inspecting T we see that each die has probability 0.5 of repeating the result of the last roll, probability 0.25 of turning up one higher than the last roll, and probability 0.25 of being one lower. The entropy rate for each Markov die is 1.5 bits/roll. We use simulation to discover the true distribution of I(X; Y) assuming the null hypothesis, this time using 10,000 trials of 150 rolls each. The results are plotted in Figure 2.
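A minimal sketch of the Markov dice just described follows; the circulant (wraparound) boundary is the same assumption used to reconstruct T above, and the code is illustrative rather than the authors' implementation.

```python
import numpy as np

# Circulant transition matrix for one "Markov die": stay with prob 0.5,
# move one face up or down (mod 6) with prob 0.25 each.
T = np.zeros((6, 6))
for i in range(6):
    T[i, i] = 0.5
    T[i, (i + 1) % 6] = 0.25
    T[i, (i - 1) % 6] = 0.25

# The stationary distribution is uniform, so the entropy rate equals the
# row entropy H(0.5, 0.25, 0.25) = 1.5 bits/roll, matching the text.
row_entropy = -sum(p * np.log2(p) for p in (0.5, 0.25, 0.25))

def roll_markov_die(n, rng):
    """Simulate n rolls (faces 1..6) of the Markov die defined by T."""
    state = rng.integers(0, 6)
    out = np.empty(n, dtype=int)
    for k in range(n):
        out[k] = state + 1
        state = rng.choice(6, p=T[state])
    return out

rng = np.random.default_rng(2)
x, y = roll_markov_die(150, rng), roll_markov_die(150, rng)  # independent dice
```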
As before, the solid line is the true null distribution and the open circles represent the null distribution obtained from permutation surrogates of a single time series. In this case the permutation distribution, being biased towards smaller values, does not fit the true distribution. Using permutation surrogates, the most probable observed mutual information value (I ≈ 0.2) would lead to an incorrect rejection of the null hypothesis at significance level α = 0.05. This error is due to the fact that the permutation surrogates do not preserve the Markov structure of the original data and thus do not meet the second condition for exactness.

To create an exact test, the surrogates need to be constrained such that not only single symbol counts but also the counts of consecutive symbol pairs are preserved. By preserving the counts of both single symbols and consecutive symbol pairs, the transition probability of the surrogate sequences is made identical to that of the observed sequence.

To be more general, let x^k = x_n, x_{n−1}, . . ., x_{n−k} ∈ X^{k+1} denote a (k + 1)-length word and let N(x^k) be the number of such words appearing in the data. A surrogate of Markov order k is one that has exactly the same N(x^k) as the original data. A surrogate of Markov order zero is obtained by simple permutation. In the Appendix, we provide an efficient algorithm for producing surrogate sequences with prescribed word counts N(x^k) for any k ≥ 0. For an exact test of the I(X; Y) = 0 null hypothesis, the Markov order of the surrogates must match the order of the data.

Knowing that our Markov dice are order one, we generate the correct null hypothesis distribution from surrogates of order one (Figure 2, open triangles). Performing 1000 trials using order-preserving surrogates we found 44 Type I errors, which is in line with the expected number of 1000 × 0.05 = 50. Importantly, the permutation test, which does not preserve Markov order, resulted in 489 Type I errors! Using permutation NHSTs in the presence of Markov structure yields invalid inferences.

The algorithm described in the Appendix can be simply modified to enumerate every sequence of a given Markov order and given marginals. The exact p-value is the fraction of such sequences that have mutual information greater than or equal to the observed I. More usefully, the algorithm can also provide uniform sampling of the set of such sequences so that the first few digits of the exact p-value can be obtained quickly. To the best of our knowledge, this is the only practical method for performing an exact significance test of the null hypothesis that I(X; Y) = 0 when the processes are not iid.
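The defining constraint on an order-k surrogate (identical word counts N(x^k)) is easy to verify numerically. The helper below is an illustrative sketch of that check, not part of the paper's algorithm.

```python
from collections import Counter

def word_counts(seq, k):
    """Counts N(x^k) of the (k+1)-length words in a symbol sequence."""
    return Counter(tuple(seq[i:i + k + 1]) for i in range(len(seq) - k))

def is_order_k_surrogate(original, surrogate, k):
    """True iff `surrogate` preserves all (k+1)-length word counts of `original`."""
    return word_counts(original, k) == word_counts(surrogate, k)

# A plain permutation is (at best) an order-0 surrogate:
assert is_order_k_surrogate([1, 2, 2, 3], [2, 1, 3, 2], 0)       # same symbols
assert not is_order_k_surrogate([1, 2, 2, 3], [2, 1, 3, 2], 1)   # pairs differ
```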
Finding the Markov Order Our algorithm enables the investigator to produce surrogates of a given order but introduces another issue: finding the Markov order of the data. To illustrate, let us take the X and Y processes to be independent instantiations of the logistic map, z_{n+1} = r z_n (1 − z_n), where r = 3.827 is in the intermittent chaos regime (Figure 3). For the purpose of computing the mutual information using Equation (1), we partition the interval [0, 1] into 10 equally sized bins and collect statistics from time series of 250 samples. Unlike the previous example, the partitioned logistic map data does not correspond to a Markov process of definite order. In Figure 4 we plot the distribution of I(X; Y) computed from 10,000 trials of two independent logistic maps, r = 3.827, 250 iterations per trial (solid line). Subplots (a)-(e) show distribution estimates from surrogates of Markov orders k = 0, 1, 2, 3, 4, respectively (dashed lines). For the 250-sample logistic map data, the null distribution estimate improves up to order two and then degrades gradually thereafter, based on the root mean square error between the estimated and actual distributions.

What is needed is a method for selecting the optimal order. Fortunately, this is the context in which the order-preserving surrogates were originally developed [5]. In short, to compute the p-value of the null hypothesis that a process is order k, the distribution of a (k + 1)-order test statistic is obtained from an ensemble of order-k surrogates. The p-value is the probability of obtaining a test statistic equal to or more extreme than the one observed. A convenient test statistic is the block entropy of the next highest order,

H_{k+1}(X) = − Σ_{x^{k+1}} p(x^{k+1}) log₂ p(x^{k+1}),

where the sum runs over all (k + 2)-length words x^{k+1} and the probabilities are estimated from the word counts. Note that because entropy is reduced by the presence of higher order structure, the p-value is the probability of obtaining a block entropy less than or equal to the observed value. For further explanation, see [5].

The results of the significance tests for orders k = 0, 1, 2, 3, 4 are shown in Figure 5. The horizontal axes are the block entropies for length k + 2 words. The heavy vertical line indicates the observed block entropy and the bars represent the distribution of the entropies obtained from the surrogates of order k. The p-value, shown next to the vertical line, is the fraction of the surrogate block entropy distribution that lies below the observed block entropy. Using the standard significance level (α = 0.05), the zeroth and first order hypotheses are rejected, whereas the significance test fails to reject the second through fourth order hypotheses. To select an adequate order but prevent over-fitting, we propose choosing the lowest order for which the p-value equals or exceeds the significance level. Note that this test should be performed for each process because different orders may be required for X and Y. In this trial, second order was selected for both processes, but only the X data order tests are shown.
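The order test described above reduces to a few lines given a surrogate generator. The sketch below reuses the word_counts helper from the earlier fragment; make_order_k_surrogate and make_surrogate_for_order are hypothetical hooks standing in for the Appendix sampler, and the code is an illustration of the test logic rather than the authors' implementation.

```python
import numpy as np

def block_entropy(seq, m):
    """Block entropy (bits) of the m-length words in a symbol sequence."""
    counts = np.array(list(word_counts(seq, m - 1).values()), dtype=float)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

def order_test_pvalue(seq, k, make_order_k_surrogate, n_surrogates=10_000):
    """p-value for H0: the process is Markov of order k.

    Higher-order structure lowers the block entropy, so the test is
    one-sided: p = Pr[H_{k+1}(surrogate) <= H_{k+1}(observed)].
    """
    observed = block_entropy(seq, k + 2)
    hits = sum(block_entropy(make_order_k_surrogate(seq), k + 2) <= observed
               for _ in range(n_surrogates))
    return hits / n_surrogates

def select_order(seq, make_surrogate_for_order, alpha=0.05, k_max=6):
    """Lowest order whose p-value reaches the significance level (see text)."""
    for k in range(k_max + 1):
        if order_test_pvalue(seq, k, make_surrogate_for_order(k)) >= alpha:
            return k
    return k_max
```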
Using this methodology to select the Markov orders, we repeated the exact NHST of I(X; Y) = 0, where X and Y are generated from independent logistic maps, 1000 times and found 54 Type I errors, compared with the expected number of 1000 × 0.05 = 50. For the X data, the order test selected second order 576 times, third order 369 times, fourth order 45 times, fifth order 8 times, and first and sixth order once each. Because the sampled logistic map data does not have a definite Markov order, the effective order will vary sensitively depending on the sample. In spite of the variation, the above methodology achieves a near-ideal Type I error rate.

Conclusions In summary, we have described an exact significance test for I(X; Y) = 0 that can be performed for data of any Markov order. There are two parts: (1) an exact test for selecting the appropriate orders of the X and Y data, and (2) an efficient method for generating order-preserving X and Y surrogates. While a complete enumeration of all order-preserving surrogates is possible (thus giving the exact p-value to all digits), we show how to implement uniform sampling for efficiently determining the first few digits of the p-value. The new method should be used in place of a permutation test any time non-iid data is suspected. We avoided any discussion of entropy bias corrections [7] or bin sizing strategies because these choices do not affect the implementation of the significance test. In the Appendix we give the details of the algorithms needed to generate the order-preserving surrogates.

As a final comment, we wish to point out that this exact test is not sufficient for conditional mutual information quantities, such as transfer entropy [12], although permutation tests are presently being used for this purpose. Permutation tests assume zero mutual information, whereas conditional mutual information quantities can be zero even when mutual information is not. An exact test for conditional mutual information remains an outstanding problem.

Appendix. By Whittle's formula, the number of sequences that begin in state u, end in state v and have transition count matrix F is

N_uv(F) = ( Π_i F_i•! / Π_{i,j} F_ij! ) C_vu,

where F_i• is the sum of row i and C_vu is the (v, u)-th cofactor of the matrix W with entries W_ij = δ_ij − F_ij / F_i• (rows with F_i• = 0 taken as the corresponding rows of the identity). As an example, consider a sequence of twelve binary observations x with u = 0, v = 1 and a given transition count matrix F. From Equation (5) we compute the number of such sequences; the cardinality of the set Γ(x) is therefore 80.

From Whittle's formula we can construct a sequence with a prescribed transition count. Let the sequence y = {y_1 . . . y_N} be a member of Γ starting with y_1 = u, ending with y_N = v, and having the transition count matrix F. The candidates for the second element y_2 are the set {w : F_{y_1 w} > 0}. For each candidate w we compute N_wv(F′), the number of sequences left. Here F′ is the original transition count matrix less the candidate transition, i.e., F′_ij = F_ij − 1 for (i, j) = (y_1, w) and F′_ij = F_ij otherwise. We choose a candidate randomly in proportion to the number of sequences left; a path that leads to a small number of possible sequences is chosen less frequently than one that leads to a large number. Thus

Pr[y_2 = w] = N_wv(F′) / Σ_{w′} N_{w′v}(F′).

Once y_2 is chosen, F is reset to the appropriate F′ and the process is repeated for y_3 and so on until y_{N−1} is reached. Returning to our example, we have y_1 = 0 and y_12 = 1, with candidates w ∈ {0, 1}; the two choices for y_2 lead to different numbers of remaining sequences, and y_2 is drawn in proportion to them.
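The Appendix algorithm is concrete enough to sketch in code. The Python fragment below implements Whittle's count and the proportional sampling rule for order-one (pair-count-preserving) surrogates. It is an illustrative sketch under the cofactor form of Whittle's formula stated above, not the authors' implementation; factorials are computed in floating point, so long sequences would need a log-space variant.

```python
import numpy as np
from math import factorial

def whittle_count(F, u, v):
    """Number of sequences with transition counts F, starting in u, ending in v."""
    F = np.asarray(F, dtype=float)
    rows = F.sum(axis=1)
    n = F.shape[0]
    # Matrix whose (v,u) cofactor appears in Whittle's formula:
    # W_ij = delta_ij - F_ij / F_i.  (rows with F_i. = 0 left as identity rows)
    W = np.eye(n)
    for i in range(n):
        if rows[i] > 0:
            W[i, :] -= F[i, :] / rows[i]
    minor = np.delete(np.delete(W, v, axis=0), u, axis=1)
    cof = (-1) ** (u + v) * np.linalg.det(minor)
    coeff = 1.0
    for i in range(n):
        coeff *= factorial(int(rows[i]))
        for j in range(n):
            coeff /= factorial(int(F[i, j]))
    return cof * coeff

def order_one_surrogate(x, rng):
    """Surrogate with the same first/last symbol and pair counts as x."""
    states = sorted(set(x))
    idx = {s: k for k, s in enumerate(states)}
    F = np.zeros((len(states), len(states)))
    for a, b in zip(x[:-1], x[1:]):
        F[idx[a], idx[b]] += 1
    v = idx[x[-1]]
    out = [idx[x[0]]]
    for _ in range(len(x) - 1):
        cands = np.where(F[out[-1]] > 0)[0]
        weights = []
        for w in cands:
            Fp = F.copy()
            Fp[out[-1], w] -= 1                    # remove candidate transition
            weights.append(max(whittle_count(Fp, w, v), 0.0))
        weights = np.array(weights)
        w = rng.choice(cands, p=weights / weights.sum())
        F[out[-1], w] -= 1
        out.append(int(w))
    return [states[k] for k in out]
```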
Figure 1. Mutual information between a pair of independent dice rolled 75 times. Distribution computed from Equation (1) over 10,000 trials (solid line). The dashed line indicates significance level α = 0.05. Open circles are estimates of the distribution from 10,000 permutation surrogates of a single trial.

Figure 2. Mutual information between a pair of independent Markov dice rolled 150 times. Distribution computed from Equation (1) over 10,000 trials (solid line). Open circles are the distribution estimated from permutation surrogates. Open triangles are the distribution estimated from surrogates of Markov order one.

Figure 3. A typical trajectory of the logistic map, r = 3.827.

Figure 4. Distribution of I(X; Y) computed from 10,000 trials of two independent logistic maps, r = 3.827, 250 iterations per trial (solid line). Subplots (a)-(e) show distribution estimates from surrogates of Markov orders k = 0, 1, 2, 3, 4, respectively (dashed lines). The root mean square error between the actual and estimated distribution is shown in each plot.

Figure 5. Markov order tests for a logistic map, r = 3.827, 250 iterations. Subplots (a)-(e) show histograms of block entropies H_{k+1}(X), k = 0, 1, 2, 3, 4, respectively, computed from 10,000 surrogates of order k. The histograms represent the distribution of H_{k+1}(X) given the null hypothesis that the data is order k. The observed value of H_{k+1}(X) is indicated by the heavy vertical line in each case. The p-values, shown next to the vertical lines, are the fraction of the distribution that is equal to or less than the observed block entropy. Orders k = 0, 1 have zero probability and can therefore be rejected as candidate orders for this data.
5,052.6
2014-05-23T00:00:00.000
[ "Computer Science", "Mathematics" ]
Fredholm Weighted Composition Operators on Dirichlet Space Let H be a Hilbert space of analytic functions on the unit disk D. For an analytic function ψ on D, we can define the multiplication operator M_ψ : f → ψf, f ∈ H. For an analytic self-mapping φ of D, the composition operator C_φ is defined on H as C_φ f = f ∘ φ, f ∈ H. These operators are two classes of important operators in the study of operator theory in function spaces [1-3]. Furthermore, for ψ and φ, we define the weighted composition operator C_{ψ,φ} on H as C_{ψ,φ} f = ψ (f ∘ φ), f ∈ H.

Introduction Let H be a Hilbert space of analytic functions on the unit disk D. For an analytic function ψ on D, we can define the multiplication operator M_ψ : f → ψf, f ∈ H. For an analytic self-mapping φ of D, the composition operator C_φ is defined on H as C_φ f = f ∘ φ, f ∈ H. These operators are two classes of important operators in the study of operator theory in function spaces [1-3]. Furthermore, for ψ and φ, we define the weighted composition operator C_{ψ,φ} on H as

C_{ψ,φ} f = ψ (f ∘ φ), f ∈ H.   (1.1)

Recall that the Dirichlet space D consists of the analytic functions f on D with finite Dirichlet integral

D(f) = ∫_D |f′(z)|² dA(z),

where dA is the normalized Lebesgue area measure on D. It is well known that D is the only Möbius invariant Hilbert space up to an isomorphism [10]. Endow D with the norm

‖f‖² = |f(0)|² + D(f).

Furthermore, D is a reproducing function space with reproducing kernel

K_w(z) = 1 + log(1/(1 − w̄z)), z, w ∈ D.

Denote M = {ψ : ψ is analytic on D and ψf ∈ D for all f ∈ D}; M is called the multiplier space of D. By the closed graph theorem, the multiplication operator M_ψ defined by ψ ∈ M is bounded on D. For the characterization of the elements of M, see [11].

For an analytic function ψ on D and an analytic self-mapping φ of D, the weighted composition operator C_{ψ,φ} on D is not necessarily bounded. Even the composition operator C_φ is not necessarily bounded on D, which is different from the cases of the Hardy space and the Bergman space. See [12] for more information about the properties of composition operators acting on the Dirichlet space.

The main result of the paper reads as follows.

Theorem 1.1. Let ψ and φ be analytic functions on D with φ(D) ⊂ D, and let C_{ψ,φ} be bounded on D. Then C_{ψ,φ} is a Fredholm operator on D if and only if ψ ∈ M, ψ is bounded away from zero near the unit circle, and φ is an automorphism of D.

If ψ(z) ≡ 1, then the result above gives the characterization of bounded Fredholm composition operators C_φ on D, which was obtained in [12].

As corollaries, at the end of this paper we give characterizations of bounded invertible and unitary weighted composition operators on D, respectively. Some ideas of this paper are derived from [4, 13], which characterize normal and bounded invertible weighted composition operators on the Hardy space, respectively.

Proof of the Main Result In the following, ψ and φ denote analytic functions on D with φ(D) ⊂ D. It is easy to verify that ψ ∈ D whenever C_{ψ,φ} is defined on D.
Proposition 2.1. Let C_{ψ,φ} be a bounded Fredholm operator on D. Then ψ has at most finitely many zeros in D and φ is an inner function.

Proof. If C_{ψ,φ} is a bounded Fredholm operator, then there exist a bounded operator T and a compact operator S on D such that T C_{ψ,φ} = I + S, where I is the identity operator. Let k_w = K_w/‖K_w‖ be the normalization of the reproducing kernel function K_w. Since S is compact and k_w converges weakly to 0 as |w| → 1, ‖S k_w‖ → 0 as |w| → 1. It follows that there exists a constant r, 0 < r < 1, such that ‖S k_w‖ < 1/2 for all w with r < |w| < 1. Inequality (2.3) shows that ψ has no zeros in {w ∈ D : r < |w| < 1}, and hence ψ has at most finitely many zeros in {w ∈ D : |w| ≤ r}. Since k_w converges weakly to 0 as |w| → 1, ⟨ψ, k_w⟩ → 0 as |w| → 1, that is, ψ(w)/‖K_w‖ → 0 as |w| → 1.

For the proof of the following lemma, we cite Carleson's formula for the Dirichlet integral [14]. Let f ∈ D and let f = BSF be the canonical factorization of f as a function in the Hardy space, where B = Π_{j≥1} (ā_j/|a_j|)(a_j − z)/(1 − ā_j z) is a Blaschke product, S is the singular part of f and F is the outer part of f. Carleson's formula then expresses D(f) in terms of the zeros a_j, the boundary values of f, the Poisson kernel P_α(ξ), and the singular measure μ corresponding to S.

The following lemmas are well known; they are easy to verify, using also the fact that M*_ψ K_w = \overline{ψ(w)} K_w.

Lemma 2.5. Let ψ ∈ M. Then M_ψ is an invertible operator on D if and only if ψ is invertible in M.

Lemma 2.6. Let ψ ∈ M. Then M_ψ is a Fredholm operator on D if and only if ψ is bounded away from zero near the unit circle.

Now we give the proof of Theorem 1.1.

Proof of Theorem 1.1. If C_{ψ,φ} is a bounded Fredholm operator on D, then by Corollary 2.4, ψ ∈ M and φ is an automorphism of D. Since C_φ is invertible, M_ψ is a Fredholm operator, so ψ is bounded away from zero near the unit circle by Lemma 2.6. On the other hand, if ψ ∈ M is bounded away from zero near the unit circle, then M_ψ is a bounded Fredholm operator on D, and if φ is an automorphism of D, then C_φ is invertible. Hence C_{ψ,φ} = M_ψ C_φ is a bounded Fredholm operator on D.

As corollaries, in the following we characterize bounded invertible and unitary weighted composition operators on D.

Corollary 2.7. Let ψ and φ be analytic functions on D with φ(D) ⊂ D. Then C_{ψ,φ} is a bounded invertible operator on D if and only if ψ ∈ M, ψ is invertible in M, and φ is an automorphism of D.

Proof. Since a bounded invertible operator is a bounded Fredholm operator, the proof is similar to the proof of Theorem 1.1.

Corollary 2.8. Let ψ and φ be analytic functions on D with φ(D) ⊂ D, and let C_{ψ,φ} be a bounded operator on D. Then C_{ψ,φ} is a unitary operator if and only if ψ is a constant with |ψ| = 1 and φ is a rotation of D.

Proof. If C_{ψ,φ} is a unitary operator, then it must be an invertible operator. By Corollary 2.7, φ is an automorphism of D and ψ is invertible in M. Let n be a nonnegative integer and e_n(z) = zⁿ, z ∈ D. A unitary is also an isometry, so we have

‖ψ‖ = ‖C_{ψ,φ} e_0‖ = ‖e_0‖ = 1,   (2.9)
‖ψ φⁿ‖ = ‖C_{ψ,φ} e_n‖ = ‖e_n‖ = √n, n ≥ 1.   (2.10)

Let α ∈ D be such that φ(α) = 0. Since φ is an automorphism of D, φⁿ is a finite Blaschke product with zero α of order n. By Carleson's formula for the Dirichlet integral we obtain expressions (2.11) and (2.12) for ‖ψφⁿ‖²; letting n → ∞ then gives

1 = ∫_T P_α(ξ) |ψ(ξ)|² |dξ|/2π.

By (2.12), we have D(ψ) = 0 and |ψ(0)φ(0)| = 0. By (2.9), we obtain that ψ is a constant with |ψ| = 1, which implies that φ(0) = 0, that is, φ is a rotation of D. The sufficiency is easy to verify.
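The kernel identities used repeatedly in these proofs follow from one line of reproducing-kernel algebra. As a hedged aside (a standard computation, not quoted from the paper), for any f ∈ D:

```latex
% Adjoint action of a bounded weighted composition operator on kernels:
\langle f, C_{\psi,\varphi}^{*}K_{w}\rangle
  = \langle C_{\psi,\varphi}f, K_{w}\rangle
  = \psi(w)\, f\bigl(\varphi(w)\bigr)
  = \psi(w)\,\langle f, K_{\varphi(w)}\rangle ,
\qquad\text{hence}\qquad
C_{\psi,\varphi}^{*}K_{w} = \overline{\psi(w)}\,K_{\varphi(w)} .
% Taking \varphi(z) = z recovers M_{\psi}^{*}K_{w} = \overline{\psi(w)}\,K_{w}.
```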
Remark 2.9. The key step in the proof of the main result is to analyze the zeros of the symbol ψ and the univalency of φ. The following result, pointed out by the referee, gives a simple characterization of the symbols ψ and φ for a bounded Fredholm operator C_{ψ,φ} on D.

Proposition 2.10. Let ψ and φ be analytic functions on D with φ(D) ⊂ D, and let C_{ψ,φ} be a bounded Fredholm operator on D. Then ψ has only finitely many zeros in D and φ is univalent.

Proof. If ψ(a) = 0 for some a ∈ D, then C*_{ψ,φ} K_a = \overline{ψ(a)} K_{φ(a)} = 0, which implies that K_a is in the kernel of C*_{ψ,φ}. Thus if ψ had infinitely many zeros, the kernel of C*_{ψ,φ} would be infinite dimensional and hence this operator would not be Fredholm. If φ(a) = φ(b) for a, b ∈ D with a ≠ b, then by a reasoning similar to [1, Lemma 3.26], there exist disjoint infinite sets {a_n} and {b_n} in D such that φ(a_n) = φ(b_n). Since ψ has only finitely many zeros in D, we can choose infinitely many a_n and b_n such that ψ(a_n) ≠ 0 and ψ(b_n) ≠ 0. Hence the kernel of C*_{ψ,φ} is infinite dimensional, contradicting Fredholmness, so φ is univalent.

Lemma 2.2. Let C_{ψ,φ} be a bounded operator on D with ψ = BF, where B is a finite Blaschke product. Then C_{F,φ} is bounded.

Proof. Let M_B be the multiplication operator on D; then C_{ψ,φ} = M_B C_{F,φ}. Since B is a finite Blaschke product, Carleson's formula yields an inequality between the Dirichlet integrals of ψ(f ∘ φ) and F(f ∘ φ), and by this inequality it is easy to verify that C_{F,φ} is bounded on D if C_{ψ,φ} is bounded.

Lemma 2.3. Let F be a zero-free analytic function on D. If C_{F,φ} is a bounded Fredholm operator on D, then φ is univalent.

Proof. If φ(a) = φ(b) for a, b ∈ D with a ≠ b, then by a reasoning similar to [1, Lemma 3.26], there exist disjoint infinite sets {a_n} and {b_n} in D such that φ(a_n) = φ(b_n). Hence the kernel of C*_{F,φ} is infinite dimensional, which contradicts the Fredholmness of C_{F,φ}.

Corollary 2.4. If C_{ψ,φ} is a bounded Fredholm operator on D, then φ is an automorphism of D and ψ ∈ M.

Proof. By Proposition 2.1, ψ has the factorization BF with B a finite Blaschke product and F zero-free in D. By Lemma 2.2, C_{F,φ} is a bounded operator on D. Since C_{ψ,φ} = M_B C_{F,φ} and M_B is a Fredholm operator, C_{F,φ} is a Fredholm operator also. By Proposition 2.1 and Lemma 2.3, φ is univalent.
2,429
2012-08-26T00:00:00.000
[ "Mathematics" ]
PTHrP Overexpression Increases Sensitivity of Breast Cancer Cells to Apo2L/TRAIL Parathyroid hormone-related protein (PTHrP) is a key component in breast development and breast tumour biology. PTHrP was discovered as a causative agent of hypercalcaemia of malignancy and is also one of the main factors implicated in breast cancer mediated osteolysis. Clinical studies have determined that PTHrP expression by primary breast cancers is an independent predictor of improved prognosis. Furthermore, PTHrP has been demonstrated to cause tumour cell death both in vitro and in vivo. Apo2L/TRAIL is a promising new anti-cancer agent, due to its ability to selectively induce apoptosis in cancer cells whilst sparing most normal cells. However, some cancer cells are resistant to Apo2L/TRAIL-induced apoptosis, thus limiting its therapeutic efficacy. The effects of PTHrP on cell death signalling pathways initiated by Apo2L/TRAIL were investigated in breast cancer cells. Expression of PTHrP in the Apo2L/TRAIL resistant cell line MCF-7 sensitised these cells to Apo2L/TRAIL-induced apoptosis. The actions of PTHrP resulted from intracellular effects, since exogenous treatment with PTHrP had no effect on Apo2L/TRAIL-induced apoptosis. Apo2L/TRAIL-induced apoptosis in PTHrP expressing cells occurred through the activation of caspase-10, resulting in caspase-9 activation and induction of apoptosis through the effector caspases, caspase-6 and -7. PTHrP increased cell surface expression of the Apo2L/TRAIL death receptors, TRAIL-R1 and TRAIL-R2. Antagonistic antibodies against the death receptors demonstrated that Apo2L/TRAIL mediated its apoptotic signals through activation of TRAIL-R2 in PTHrP expressing breast cancer cells. These studies reveal a novel role for PTHrP with Apo2L/TRAIL that may be important for the future diagnosis and treatment of breast cancer.

Introduction Breast cancer is one of the highest causes of cancer-related deaths amongst women. Despite advances in the detection of localised disease and a decline in the mortality rates of primary breast cancer patients, current therapies are only palliative for advanced metastatic breast cancer patients. Approximately 70% of women with advanced breast cancer will have bone metastases [1]. Once tumour cells metastasise to bone, mortality increases to 70% [2]. Thus a greater understanding of tumour progression and the key factors involved is vital not only for understanding cancer biology but also for improving cancer treatment. Parathyroid hormone-related protein (PTHrP) was discovered as the causative agent of hypercalcaemia in cancer patients [3]. Since its discovery, the involvement of PTHrP in the hypercalcaemia of breast cancer has been extensively studied. PTHrP has also been implicated in breast cancer progression and the bone metastasis process [4,5]. In the bone microenvironment, PTHrP is involved in the osteotropism of breast cancer cells, through its ability to activate osteoclastic bone resorption, and thus participates in driving the 'vicious cycle' [4]. Studies showed that PTHrP levels were much higher in primary tumours of breast cancer patients who later developed bone metastasis [6][7][8], thus leading to the hypothesis that PTHrP expression in primary breast tumours increases the probability of bone metastasis and decreases patient survival.
Contrary to this, a larger clinical study that examined the relationship between PTHrP production and bone metastasis in patients with operable breast cancer revealed that patients with PTHrP positive tumours had a significantly improved survival rate with fewer metastases to bone than patients with PTHrP-negative tumours [5,9]. Together, these studies support the idea of a dual role for PTHrP in breast cancer: a protective function early on in the disease, leading to improved survival and reduced metastasis, and a destructive role once the tumour progresses and metastasises to bone. Apo2 ligand (Apo2L/TRAIL) is a member of the tumour necrosis factor (TNF)-cytokine family that can induce apoptosis in a variety of transformed cells, including breast cancer, whilst sparing most non-transformed cells [10][11][12]. Apo2L/TRAIL is a type II transmembrane protein that induces apoptosis through interactions with its death receptors, TRAIL-R1/DR4 and TRAIL-R2/DR5 [13,14]. Recombinant Apo2L/TRAIL and agonistic antibodies targeting Apo2L/TRAIL receptors are currently in clinical trials for cancer. Mapatumumab, an agonistic antibody against TRAIL-R1, is in Phase II clinical trials in patients with colorectal cancer and non-small cell lung cancer [15,16]. However, one of the main hurdles of Apo2L/TRAIL therapy is that many cancer cells remain resistant to Apo2L/TRAIL-induced apoptosis. Although many methods have been identified to overcome Apo2L/TRAIL resistance, such as combination therapy with chemotherapeutics and other biological reagents, the mechanisms of Apo2L/TRAIL sensitivity and/or resistance and strategies to overcome drug resistance still remain to be explored. In this study, we demonstrate that PTHrP expression in breast cancer cells sensitised them to Apo2L/TRAIL, and indeed converted MCF-7 cells from Apo2L/TRAIL-resistant cells into cells that respond to Apo2L/TRAIL-induced apoptosis. Apo2L/TRAIL induced apoptosis in PTHrP overexpressing cells through the activation of caspase-10, resulting in caspase-9 activation and induction of apoptosis through the effector caspases, caspase-6 and -7. There was an increase in cell surface expression of TRAIL-R1 and TRAIL-R2 in PTHrP overexpressing cells. Antagonistic antibodies against the death receptors demonstrated that Apo2L/TRAIL preferentially bound to TRAIL-R2 to promote apoptosis in the PTHrP overexpressing cells.

Measurement of Cell Viability For determination of Apo2L/TRAIL mediated cytotoxicity, 1×10^4 cells per well were seeded into 96-well microtiter plates and allowed to adhere to the plate for 24 h. Cells were treated as indicated and/or then treated with 100 ng/ml Apo2L/TRAIL for 24 h. Cell viability was determined by staining the cells with crystal violet and measuring the OD570 of cell lysates. DAPI staining of nuclei: cells were seeded on plastic chamber slides and stimulated as indicated. After 2 washes with PBS, cells were fixed in methanol for 5 min, washed again with PBS and incubated with 0.8 µg/ml of 4′,6-diamidino-2-phenylindole dihydrochloride (DAPI, Roche Diagnostics, Castle Hill, NSW, Australia) in PBS for 15 min at 37 °C. After several washes in PBS, the coverslips were mounted in PBS/glycerine. DAPI was visualised by fluorescence microscopy.

Flow Cytometry Cells were cultured accordingly, then washed twice with PBS and detached using 2 mM EDTA in PBS at 37 °C for 5 min. For flow cytometric analysis, all subsequent incubation steps were performed on ice and centrifugation steps performed at 4 °C.
For analysis of Apo2L/TRAIL receptor expression, cells were resuspended in ice cold PBS and centrifuged for 6 min at 12,000 rpm. Cells were resuspended in ice cold PBS and PBS/azide solution (0.1% azide) and centrifuged at 12,000 rpm for 5 min. Cells were resuspended at 2×10^6 cells/ml in blocking buffer (10% BSA/PBS + 0.1% azide). Monoclonal antibodies or the isotype-matched non-binding control antibodies were diluted in blocking buffer to 10 µg/ml, added to 50 µl aliquots of cell suspension and incubated for 45 min. Cells were then washed twice with 2 ml of wash buffer and collected by centrifugation. PE-conjugated antibody, diluted 1/50 in wash buffer, was added to the resuspended cell pellets. The cells were incubated for a further 45 min in the dark, washed three times as above, then resuspended and fixed in 0.3 ml ice-cold 1% w/v paraformaldehyde for analysis. For analysis of cell cycle, cells were cultured for 24 h, then serum starved for a further 24 h, and the media was replaced with growth media. At the appropriate time point, cells were detached as above and collected by centrifugation. Cells were washed in ice cold PBS and centrifuged. Cells were then resuspended in 200 µl PBS + 0.1% FBS, fixed in ice cold 70% ethanol and incubated for 1 h at 4 °C. Cells were washed as above and resuspended in 1 ml of solution containing 0.1% Triton X-100, 200 µg/ml DNase-free RNase and 40 µg/ml propidium iodide, incubated for 20 min at 37 °C and then analysed.

Apoptotic DNA Laddering Assay Cells were cultured and treated with 100 ng/ml recombinant Apo2L/TRAIL or left untreated for 24 h. DNA was isolated and treated using the Apoptotic DNA Ladder Kit (Roche, Germany) according to the manufacturer's protocol.

Cell Growth Assays Cells were seeded at a density of 6×10^4 cells per well in a 6 well plate and allowed to adhere for 24 h. Cells were serum starved for 24 h by replacing media with serum free media. After 24 h, cells were released with replacement of full serum media. Cells were collected at various time points by the addition of trypsin and resuspended in PBS prior to counting.

Statistical Analysis All data are presented as the mean ± standard deviation (SD) unless stated otherwise. Statistically significant differences were determined by unpaired t-test or one-way ANOVA followed by Tukey's post hoc test to identify pairwise differences. In all cases, p < 0.05 was considered significant. Statistical analyses were carried out using MS Excel 2003 (CA, USA) or GraphPad Prism 5 (CA, USA).

PTHrP Overexpression Increases Cell Growth of Breast Cancer Cells To assess the effects of PTHrP in breast cancer cells, PTHrP (1-139 aa) was overexpressed in MCF-7 cells and clonal cells were generated from positively transfected cells; these cells have been previously described [17]. Both mRNA and protein expression were validated for PTHrP expression in these transfected cells: the PTHrP overexpressing cells produce PTHrP at 1609 ± 171 pmol/liter/10^5 cells, compared with parental and vector control cells, which express PTHrP at 1.8 ± 0.4 pmol/liter and 4.3 ± 0.3 pmol/liter/10^5 cells, respectively [17]. The effect of PTHrP overexpression upon cell proliferation was assessed by counting cell growth over a period of 7 days (Fig. 1). Cells were seeded into 6 well plates and allowed to adhere for 24 h. The cells were then synchronised with serum free media for 24 h, then released with the replacement of full serum media.
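The viability normalisation and the statistics described in the Methods above are simple enough to sketch numerically. The Python fragment below is an illustrative reconstruction, not the study's analysis code; the OD570 triplicates are hypothetical example numbers, and the group labels are assumptions for illustration.

```python
import numpy as np
from scipy.stats import f_oneway, ttest_ind

def percent_viability(od570_treated, od570_untreated):
    """Crystal violet readout: viability of treated wells relative to the
    mean of the untreated control wells, expressed in per cent."""
    return 100.0 * np.asarray(od570_treated) / np.mean(od570_untreated)

# Hypothetical OD570 triplicates for one experiment (not the study's data):
untreated = [0.82, 0.79, 0.85]
trail_mcf7 = [0.80, 0.83, 0.78]      # resistant parental line: little change
trail_pthrp = [0.42, 0.39, 0.44]     # PTHrP overexpressor: ~50% cell death

groups = [percent_viability(g, untreated)
          for g in (untreated, trail_mcf7, trail_pthrp)]
F, p_anova = f_oneway(*groups)                 # one-way ANOVA across groups
t, p_pair = ttest_ind(groups[1], groups[2])    # unpaired t-test, as in Methods
significant = p_anova < 0.05                   # threshold used in the text
```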
MCF-7 cells overexpressing PTHrP attained a higher number of cells from 24 h and throughout the rest of the culture period, when compared to MCF-7 parental or MCF-7 vector control cells (Fig. 1). Since greater cell numbers were observed for cells overexpressing PTHrP, FACS analysis was performed to determine the rate of cell cycle progression in response to PTHrP overexpression. Cell cycle was analysed by staining cellular DNA content with propidium iodide and reading fluorescent cells by flow cytometry. Cells were synchronised and released with replacement of full serum media. Cells were then fixed, stained and analysed at various time points over 24 h. The FACS histograms revealed that PTHrP overexpressing cells entered into the S phase earlier (32% of PTHrP overexpressing cells compared with 10% of parental cells) (Fig. 2). The PTHrP overexpressing cells entered into a second cycle of the S phase with 23% of cells at 36 h (Fig. 2). The increase in cell cycle progression may account for the increase in cell numbers. These results confirm data that have already been described, where PTHrP overexpression in breast cancer cells enhances cell cycle progression [18].

Expression of PTHrP Sensitises MCF-7 Cells to Apo2L/TRAIL-induced Apoptosis Since PTHrP overexpression has been shown to alter cellular growth and affect apoptosis [19,20], the role of PTHrP in cell death was assessed in breast cancer cells treated with Apo2L/TRAIL. Breast cancer cells were cultured for 24 h, then treated with increasing doses of Apo2L/TRAIL for 24 h. Cell death was not noted in MCF-7 cells even with a maximal treatment dose of 100 ng/ml of Apo2L/TRAIL for 24 h. In contrast, MCF-7 PTHrP overexpressing cells displayed a decrease in cell viability with an increasing dose of Apo2L/TRAIL treatment. Maximal cell death of 50% was observed with a treatment dose of 100 ng/ml of Apo2L/TRAIL when compared to untreated cells, and a dose of 10 ng/ml Apo2L/TRAIL provided a significant difference in cell death relative to untreated cells (Fig. 3A). In view of the results above, the effects of various regions of PTHrP were assessed to determine whether the increase in sensitivity to Apo2L/TRAIL treatment was the result of intracellular or extracellular actions of PTHrP. MDA-MB-231 cells were utilised as they are sensitive to Apo2L/TRAIL treatment and as they (1) have higher levels of endogenous PTHrP and (2) have higher PTH1R expression compared with MCF-7 cells. MDA-MB-231 cells were treated with PTHrP 1-34 aa peptide, which activates PTH1R [21], for 12 h, then treated with 100 ng/ml Apo2L/TRAIL for a further 12 h. Treatment with Apo2L/TRAIL confirmed MDA-MB-231 sensitivity to Apo2L/TRAIL-induced cell death, with 56% viable cells (Fig. 3B). Treatment with PTHrP 1-34 aa peptide did not show a significant decrease when compared to Apo2L/TRAIL treatment alone (Fig. 3B). Similarly, treatment of MDA-MB-231 cells with PTHrP fragments 38-94 aa, 67-86 aa, 106-139 aa, 120-139 aa, 107-111 aa and 107-139 aa did not result in a significant alteration in the percentage of viable cells when treated with Apo2L/TRAIL, compared to Apo2L/TRAIL treatment alone (Fig. 3B). The results suggest that the increased Apo2L/TRAIL sensitivity is not the result of extracellular receptor-mediated actions of PTHrP, and is more likely to result from intracellular actions of PTHrP. We also investigated the actions of TRAIL on an MDA-MB-231 cell line in which PTHrP expression had been knocked down by an antisense approach.
In this cell line only a 50% reduction of PTHrP expression (mRNA and protein) was achieved, and no significant difference in TRAIL effects was noted between the knock-down cells when compared with their parental cells; the percentages of viable cells were 52.5% and 59.3%, respectively (data not shown). This suggests that the magnitude of suppression of PTHrP expression was insufficient to lessen the sensitivity of these cells to the effects of TRAIL.

PTHrP Overexpression Sensitised MCF-7 Cells to Apo2L/TRAIL-induced Apoptosis via Activation of the Intracellular Apoptotic Pathway To confirm that the reduction of viable cell number in PTHrP overexpressing breast cancer cells treated with Apo2L/TRAIL was due to an induction of apoptosis, cells were cultured with Apo2L/TRAIL, stained with DAPI and assessed under fluorescent microscopy. Morphological evidence of apoptosis was observed in Apo2L/TRAIL-treated cells, including chromatin condensation and nuclear DNA fragmentation (Fig. 4A). Apoptotic cells were not observed in MCF-7 cells treated with Apo2L/TRAIL (Fig. 4A). A lower number of viable cells in Apo2L/TRAIL treated PTHrP overexpressing cells was observed compared to MCF-7 parental cells (Fig. 4A). Another hallmark of apoptosis is the formation of DNA fragments of oligonucleosomal size, ranging from 180 to 200 bp. DNA laddering assays were performed on MCF-7 PTHrP overexpressing cells treated with Apo2L/TRAIL. PTHrP overexpressing cells treated with Apo2L/TRAIL displayed DNA laddering indicative of apoptosis compared to untreated cells (Fig. 4B). No discernible DNA laddering was observed with the MCF-7 cells that were untreated or treated with Apo2L/TRAIL (Fig. 4B). To examine the molecular mechanism of Apo2L/TRAIL-induced apoptosis in PTHrP overexpressing breast cancer cells, the expression and processing of intracellular proteins involved in the intrinsic apoptotic signalling pathway were assessed by immunoblotting. MCF-7 cells contain a mutation in caspase-3 rendering it inactive; thus these cells cannot activate apoptosis via the extrinsic pathway. In both the MCF-7 and PTHrP overexpressing cells, Apo2L/TRAIL treatment did not alter the levels of caspase-8 precursor protein compared to untreated controls (Fig. 4C). Treatment of PTHrP overexpressing cells with Apo2L/TRAIL decreased levels of precursor proteins for caspase-9, -10, -6 and -7 (Fig. 4C). There was no change in precursor caspase-10, -9, -6 and -7 levels in Apo2L/TRAIL treated MCF-7 cells compared to untreated controls (Fig. 4C). These results indicate that these caspases are activated in PTHrP overexpressing cells treated with Apo2L/TRAIL. Activated caspases can cleave PARP, which then facilitates the cell death process and can thus serve as an apoptosis marker. Consistent with this notion, elevated levels of the PARP cleavage product were observed in PTHrP overexpressing cells treated with Apo2L/TRAIL compared to untreated control cells and treated MCF-7 parental cells (Fig. 4C).

Cell Surface Expression of TRAIL-R1 and TRAIL-R2 was Elevated in PTHrP-overexpressing Cells To determine whether there was any correlation between the sensitisation of cells to Apo2L/TRAIL and the levels of expression of Apo2L/TRAIL receptors, immunofluorescent staining of each of the Apo2L/TRAIL receptors was performed and analysed using flow cytometry (Fig. 5A, B). The cell surface expression levels of TRAIL-R1 and TRAIL-R2 were elevated in the Apo2L/TRAIL sensitive cell line, MCF-7 PTHrP overexpressing cells, compared to MCF-7 parental cells (Fig. 5B).
The expression levels of both the decoy receptors, TRAIL-R3 and TRAIL-R4, were low and equivalent in both the MCF-7 parental and PTHrP overexpressing cells (Fig. 5A, B).

Inhibition of TRAIL-R2 Protects against Apo2L/TRAIL-induced Apoptosis in PTHrP-overexpressing Cells To establish whether Apo2L/TRAIL treatment was associated with a functional activation of TRAIL-R1 or TRAIL-R2, MCF-7 PTHrP overexpressing cells were treated with antagonistic anti-TRAIL-R1 and anti-TRAIL-R2 antibodies (10 µg/ml) followed by 100 ng/ml Apo2L/TRAIL, and cell survival was assessed. Treatment with Apo2L/TRAIL alone induced 50% cell death when compared to untreated cells (Fig. 6A). When cells were pretreated with both antagonistic antibodies against TRAIL-R1 and TRAIL-R2, no cell death was observed, similar to that of untreated cells (Fig. 6A). To elucidate which of the two Apo2L/TRAIL death receptors was used for Apo2L/TRAIL signalling, each antagonistic antibody was used individually. When the cells were pre-incubated with anti-TRAIL-R1 followed by Apo2L/TRAIL treatment, 50% cellular apoptosis was observed, which was similar to the level detected with Apo2L/TRAIL treatment alone (Fig. 6B). Pre-incubation with anti-TRAIL-R2 followed by Apo2L/TRAIL treatment inhibited Apo2L/TRAIL-induced apoptosis, with the percentage of viable cells similar to the untreated controls or treatment with anti-TRAIL-R2 alone (Fig. 6C). Combined, these results indicate that Apo2L/TRAIL signals via binding to and activation of TRAIL-R2, and not TRAIL-R1, to induce the apoptosis signal in MCF-7 PTHrP overexpressing cells.

Discussion PTHrP was originally identified as a tumour factor responsible for humoral hypercalcaemia of malignancy (HHM); since then it has been well established that PTHrP is expressed in a wide variety of tissues in the foetus and adult (reviewed in [22,23]). The predominant roles of PTHrP relate to development, growth, smooth muscle tone and calcium transport (reviewed in [24]). PTHrP also has a significant role in mammary gland development and function. Both overexpression and underexpression of PTHrP have been shown to disrupt branching morphogenesis during mammary gland development. Firstly, PTHrP rescued-knockout mice failed to develop mammary glands [22]. In PTHrP null and PTH1R knockout mice, the mammary epithelial buds form, but there is a complete failure of formation of a duct system; instead, the mammary epithelial ducts degenerate and disappear by birth [25][26][27]. In addition, mice overexpressing PTHrP displayed impaired ductal proliferation, elongation and branching morphogenesis during mammary development at puberty and in early pregnancy [28]. Many studies have demonstrated a role for PTHrP in cell growth and apoptosis [18][19][20][29][30][31], and notably the following observations have been made: 1. PTHrP, when overexpressed in MCF-7 cells, protected cells from serum starvation-induced apoptosis, due to an increase in the anti-apoptotic proteins Bcl-2 and Bcl-xL [19]; 2. in HEK293 cells, PTHrP inhibited TNFα-induced apoptosis by blocking the extrinsic and intrinsic pathways through regulation of the Bcl-2 family of proteins [30]; 3. mutation of the nuclear localisation signal region of PTHrP ablated both apoptotic and proliferative responses in chondrocytes, whilst extracellular PTHrP had no effect [29]; 4.
the mid-region of PTHrP (50-86 aa) was able to restrain growth and invasion as well as cause striking toxicity and accelerated death in a panel of breast cancer cell lines, the most responsive being MDA-MB-231 [20]; and 5. extracellular PTHrP treatment induced the up-regulation of the pro-apoptotic genes Bcl-xS, Bad and Rip1 and switched on the expression of caspases -2, -5, -6, -7 and -8 in MDA-MB-231 cells [31]. These studies suggest intracellular actions of PTHrP on apoptosis, and possible regulation of apoptosis through regulation of Bcl-2 family members. We have previously described that PTHrP has paracrine, autocrine and intracrine actions, and demonstrated that PTHrP can target to the nucleus as a result of intracellular trafficking via importin beta, that extracellular PTHrP can be internalised as a result of binding to the PTH1R, and that PTHrP can attain different subcellular locations [21,[32][33][34][35].

Figure 4. Apo2L/TRAIL induces apoptosis in PTHrP overexpressing breast cancer cells through activation of the intracellular apoptotic pathway. A) Cells were seeded onto chamber slides and treated with Apo2L/TRAIL (TX) or untreated (UT) for 24 h. Cells were fixed with methanol and incubated with DAPI, before washing in PBS and mounting with PBS/glycerine. DAPI staining was visualised by fluorescence microscopy. B) Apoptosis was assessed by DNA laddering assay. Cells were cultured and treated with Apo2L/TRAIL (TX) or untreated (UT) for 24 h. DNA was isolated and treated using the Apoptotic DNA Laddering Kit (Roche). C) Caspase expression. Cells were cultured as indicated above and lysed after 24 h. Cell extracts were analysed by Bis-Tris gel electrophoresis and transferred to PVDF membranes. Caspase precursor -6, -7, -8, -9, -10, PARP and β-actin protein levels were assessed by immunoblotting. doi:10.1371/journal.pone.0066343.g004

With our extensive knowledge of the divergent actions and receptor domains of PTHrP, we used a series of peptides that encapsulate the biologically active domains of PTHrP, which we have previously demonstrated to be biologically active, to show that the actions were not the result of extracellular signalling or internalisation of PTHrP peptides, leading to the proposition that these PTHrP actions are the result of intracellular actions. Results from this study provide further evidence of a role for PTHrP in apoptosis, whereby PTHrP overexpression sensitised MCF-7 breast cancer cells to Apo2L/TRAIL-induced apoptosis. The actions of PTHrP to confer this phenotype were the result of intracellular signalling of PTHrP and not of its extracellular actions. Apo2L/TRAIL-induced apoptosis of PTHrP overexpressing cells occurred through binding of Apo2L/TRAIL to TRAIL-R2, resulting in the activation of caspase-10 and -9, which suggests that the intrinsic apoptotic pathway is activated, along with the subsequent activation of the effector caspases, caspase-6 and -7, leading to apoptosis (Fig. 7). Apo2L/TRAIL-targeted therapies, such as the Apo2L/TRAIL peptide and receptor agonists, are currently being evaluated in clinical trials in a variety of tumours. Apo2L/TRAIL has proven to be a promising molecule because of its potent ability to induce death, its selectivity for tumour cells and its lack of significant toxicity in normal organs [12]. This was demonstrated in vivo, whereby SCID mice with subcutaneously implanted human tumours displayed a decrease in tumour size when treated with Apo2L/TRAIL, and no toxic effects were observed in normal tissues [36].
In a mouse model in which MDA-MB-231 cells were transplanted into the tibiae of athymic nude mice, treatment with Apo2L/TRAIL reduced osteolysis and tumour burden, with no detectable soft-tissue invasion [37]. The same group also demonstrated a similar outcome with Apo2L/TRAIL treatment in a mouse model of multiple myeloma [38]. However, one of the limitations of Apo2L/TRAIL therapy is the resistance of tumours, in particular breast cancer, to its treatment [39,40]. Studies have demonstrated that combination treatment using various agents with Apo2L/TRAIL may overcome Apo2L/TRAIL resistance. Treatment with the histone deacetylase inhibitor suberoylanilide hydroxamic acid (SAHA) can sensitise Apo2L/TRAIL-resistant MDA-MB-231 cells to Apo2L/TRAIL-induced apoptosis [41]. Consistently, Apo2L/TRAIL-resistant explanted breast tumour cells were re-sensitised when Apo2L/TRAIL was used in combination with chemotherapeutic drugs including taxol, etoposide, doxorubicin, cisplatin or SAHA [37]; many of these agents are known to enhance PTHrP expression [23]. One of the proposed mechanisms of Apo2L/TRAIL sensitivity in cancer cells depends on the level of expression of the Apo2L/TRAIL receptors. Results from these PTHrP overexpression studies showed an increase in cell surface expression of the death receptors TRAIL-R1 and TRAIL-R2, which may explain in part the cells' susceptibility to exogenous Apo2L/TRAIL-induced apoptosis. Consistent with this, previous studies have demonstrated that increased cell surface expression of Apo2L/TRAIL death receptors sensitises tumour cells to Apo2L/TRAIL. Higher cell surface expression levels of TRAIL-R2 were shown to be associated with the preferential response of ovarian, colon and renal cancer cell lines to TRAIL-R2 agonistic antibodies [42][43][44][45]. However, Apo2L/TRAIL apoptotic signalling via its death receptors may not be equivalent or interchangeable in different tumours and settings. Several studies have demonstrated that membrane expression of Apo2L/TRAIL death receptors does not correlate with a cell's susceptibility to either TRAIL-R1 or -R2 stimulation [46][47][48]. For example, cancer cell lines, including lung, colon and breast, all expressing similar membrane levels of TRAIL-R1 and -R2, displayed a higher sensitivity to TRAIL-R2 signalling [46]. In addition, knock-down of TRAIL-R2 in MDA-MB-231 cells inhibited the toxic effects of Apo2L/TRAIL, while loss of TRAIL-R1 had no effect [40]. Preferential binding and signalling through TRAIL-R2 was also demonstrated in this study, as the Apo2L/TRAIL-induced apoptotic response was inhibited when PTHrP-overexpressing MCF-7 cells were treated with TRAIL-R2 antagonistic antibodies. Alternatively, PTHrP may alter a cell's susceptibility to apoptosis by altering the ratio of anti- and pro-apoptotic factors, including members of the Bcl-2 family. PTHrP has been demonstrated to induce the up-regulation of Bcl-xS, Bad and Rip1 and to switch on the expression of caspases in MDA-MB-231 cells [31]. The ratios of the apoptosis-regulating proteins Bcl-2 to Bax and Bcl-XL to Bax were higher in breast cancer cells overexpressing PTHrP but not in cells overexpressing NLS-mutated PTHrP [19]. Additionally, mitogenic effects of PTHrP have been attributed to intracrine actions, as mutation of the NLS region of PTHrP ablated the apoptotic and proliferative responses, and exogenous PTHrP had no effect [29].
In contrast, studies have demonstrated that intracrine PTHrP protects against serum starvation-induced apoptosis in MCF-7 cells [19] and that nuclear localisation of PTHrP in chondrocytes delays apoptosis induced by serum starvation [49]. Results from this study further support an intracellular role of PTHrP, as treatment with various fragments of PTHrP did not affect Apo2L/TRAIL-induced apoptosis. The results described herein demonstrate that PTHrP overexpression sensitises breast cancer cells to Apo2L/TRAIL-induced apoptosis. Thus, the clinical implication of this study is that PTHrP may be an informative diagnostic factor in determining therapeutic strategies with Apo2L/TRAIL for treating cancer patients. Currently, serum PTHrP levels in patients with HHM are assessed using RIA; this assessment could be a potential factor in determining the Apo2L/TRAIL sensitivity of patients with PTHrP-positive tumours. However, new treatment strategies are important not only for primary breast tumours but also for patients with metastatic disease. PTHrP is expressed in the majority of breast cancers, especially late-stage metastatic breast cancers, and as such, the expression of PTHrP by cancers may be influential in determining the effectiveness of Apo2L/TRAIL-based therapies in a clinical setting.
Complex modulation of androgen responsive gene expression by methoxyacetic acid Background Optimal androgen signaling is critical for testicular development and spermatogenesis. Methoxyacetic acid (MAA), the primary active metabolite of the industrial chemical ethylene glycol monomethyl ether, disrupts spermatogenesis and causes testicular atrophy. Transcriptional trans-activation studies have indicated that MAA can enhance androgen receptor activity, however, whether MAA actually impacts the expression of androgen-responsive genes in vivo, and which genes might be affected is not known. Methods A mouse TM3 Leydig cell line that stably expresses androgen receptor (TM3-AR) was prepared and analyzed by transcriptional profiling to identify target gene interactions between MAA and testosterone on a global scale. Results MAA is shown to have widespread effects on androgen-responsive genes, affecting processes ranging from apoptosis to ion transport, cell adhesion, phosphorylation and transcription, with MAA able to enhance, as well as antagonize, androgenic responses. Moreover, testosterone is shown to exert both positive and negative effects on MAA gene responses. Motif analysis indicated that binding sites for FOX, HOX, LEF/TCF, STAT5 and MEF2 family transcription factors are among the most highly enriched in genes regulated by testosterone and MAA. Notably, 65 FOXO targets were repressed by testosterone or showed repression enhanced by MAA with testosterone; these include 16 genes associated with developmental processes, six of which are Hox genes. Conclusions These findings highlight the complex interactions between testosterone and MAA, and provide insight into the effects of MAA exposure on androgen-dependent processes in a Leydig cell model. Background Androgen signaling is critical for development of the male sexual phenotype, maturation of secondary sex characteristics and maintenance of muscle mass and bone density [1]. Disruption of androgen signaling can lead to a spectrum of developmental problems in male sexual characteristics and reproductive behavior [2]. Androgen action is mediated by androgen binding to androgen receptor (AR), a ligand-activated transcription factor that binds genomic regulatory elements associated with androgen responsive genes [3]. AR binding sites are often far (>10 kb) from transcription start sites of androgen-regulated genes, and many AR binding sites contain non-canonical androgen response elements [4][5][6]. Many transcription factors interact with AR, including GATA factors [7], STAT5 [8], NF1 and SP1 [9], which can increase AR transcriptional activity, as well as Forkhead proteins [10][11][12], P53 [13] and LEF/ TCF factors [14], which are reported to exert both repression and enhancement of AR transcriptional activity. Some of these effects may involve local interactions, as binding sites for GATA and Forkhead, as well as OCT family factors are often enriched nearby AR binding sites [4][5][6]. These findings suggest that physiological or pathophysiological conditions that affect the expression or activity of AR-interacting transcription factors such as these may impact AR activity. Many foreign chemicals can modulate AR activity; these include drugs and environmental chemicals that bind directly to AR and antagonize its transcriptional activity [15,16]. AR activity can also be modulated by foreign chemicals that exert effects on AR indirectly, via intracellular signaling [17,18]. 
One example is methoxyacetic acid (MAA), a testicular toxicant and the primary active metabolite of the industrial chemical ethylene glycol monomethyl ether [19,20]. MAA enhances the transcriptional activity of several nuclear receptors [21,22], including AR [19,[22][23][24], by a mechanism that involves tyrosine kinase activity and requires PI3-kinase signaling [22]. The inappropriate enhancement of AR transcriptional activity by MAA could contribute to the testicular toxicity associated with MAA exposure, given the importance of AR in somatic cells of the testis for spermatocyte survival [25]. Earlier studies of the potentiation of AR transcriptional activity by MAA used AR reporter gene assays to demonstrate enhancement of the androgen response [22]. However, while reporter gene assays are an important tool for studying gene regulation, transfected reporter gene constructs do not always reflect the regulation of endogenous genes in untransfected cells in vivo. Moreover, in the case of MAA, artefactual effects have been reported on the CMV promoter that was used in one study to express the estrogen receptor required for reporter gene activity [26]. It is therefore important to determine the effects of MAA on the expression of endogenous androgen-responsive genes, to establish whether MAA can, indeed, potentiate androgen responses, to identify the specific genes whose expression is affected, and to elucidate the nature and extent of the interactions, both positive and negative, between MAA and androgen. In this study, we develop an androgen-responsive mouse testicular Leydig cell line, TM3-AR, and use it to investigate the impact of MAA on androgen-responsive gene expression by global transcriptional profiling. Our findings reveal that MAA alters the expression of large numbers of testosterone-responsive genes. We also find that the androgenic environment can influence the effects of MAA on gene expression, with many examples of both stimulatory and inhibitory interactions between MAA and testosterone. Motif analysis identified binding sites for transcription factors whose putative targets are enriched in genes showing either positive or negative interactions between MAA and testosterone, providing further insight into the mechanisms that govern these gene interactions. Enriched microRNA binding sites in the 3'-untranslated regions (3'-UTRs) of target genes were also identified. These findings demonstrate that the impact of MAA on androgen gene responses is complex, and they suggest target genes and pathways through which MAA may exert toxicity to somatic cells of the testis. Chemicals and reagents MAA, horse serum and testosterone were purchased from Sigma Chemical Co., St. Louis, MO. DMEM-F12 culture medium, fetal bovine serum (FBS), HEPES buffer and TRIzol reagent were purchased from Invitrogen Corp. (Carlsbad, CA). Mouse TM3 Leydig cells and LNCaP cells were obtained from the American Type Culture Collection, Manassas, VA. TM3 and TM3-AR cells (see below) were grown in DMEM-F12 medium containing 5% horse serum and 2.5% FBS. LNCaP cells were maintained in RPMI 1640 containing 10% FBS. RNA was isolated with TRIzol reagent using the manufacturer's protocol. Mouse TM3 cells stably expressing human AR cDNA were prepared by retroviral infection of TM3 cells, as follows. The coding sequence of AR was excised from plasmid pSV-ARO (Dr. A.O. Brinkmann, University Medical Center Rotterdam, The Netherlands) and subcloned by blunt-end ligation into the retroviral plasmid vector pWZL-Blast (Dr. D.
White, Millennium Pharmaceuticals, Cambridge MA) to yield pWZL-Blast-AR. pWZL-Blast is based on the pBabe plasmid [27] and encodes a blasticidin-resistance gene transcribed from the retroviral long terminal repeat. Retroviral particles were generated as described [28] by transfecting the packaging cell line HEK293 with pWZL-Blast-AR. Culture medium containing retroviral particles was collected 48 h later and applied to TM3 cells. Pools of blasticidin-resistant cells were selected for 4 days using blasticidin S-hydrochloride and then verified as expressing AR by qPCR. Cell culture and TM3-AR cell preparation To obtain samples for microarray analysis, TM3-AR cells were treated for 24 h with either testosterone (10 nM) or MAA (5 mM), or with testosterone in combination with MAA. The concentration of testosterone was chosen to saturate AR, and the concentration of MAA was chosen based on considerations described in our earlier studies [22,29] and on its correspondence to the plasma concentration associated with ethylene glycol monomethyl ether-induced germ cell toxicity in mice [30]. The concentration of MAA used did not alter the cell growth rate or cause any loss of cell viability over the course of at least 48 h. RNA was isolated and validated by an RNA integrity number >8.5, as determined using an Agilent Bioanalyzer 2100 instrument (Agilent Technologies, Santa Clara, CA). qPCR Total RNA isolated from treated or untreated cultured cells was incubated with RQ1 RNase-free DNase for 1 h at 37°C, followed by heating at 75°C for 5 min. cDNA synthesis and qPCR analysis using SYBR Green I-based chemistry were performed as described [31]. Dissociation curves were examined after each qPCR run to ensure amplification of a single, specific product. qPCR primers were designed using Primer Express software (Applied Biosystems) and are shown in Additional file 1, Table S1. Relative RNA levels were calculated after normalization to the 18S rRNA content of each sample using the comparative Ct method, under conditions where the Ct number is in its log2-linear range.
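The comparative Ct quantification described above reduces to a two-step normalization; a minimal sketch follows. The Ct values and the example gene are hypothetical placeholders, with 18S rRNA serving as the reference, as in the text.

```python
# Minimal sketch of the comparative Ct (2^-ddCt) method described above.
# Ct values are hypothetical; 18S rRNA is the normalization reference.

def relative_expression(ct_target_treated: float, ct_ref_treated: float,
                        ct_target_control: float, ct_ref_control: float) -> float:
    """Fold-change of a target gene, treated vs. control, normalized
    to the reference RNA (here 18S rRNA) by the 2^-ddCt rule."""
    d_ct_treated = ct_target_treated - ct_ref_treated    # normalize treated sample
    d_ct_control = ct_target_control - ct_ref_control    # normalize control sample
    dd_ct = d_ct_treated - d_ct_control
    return 2.0 ** (-dd_ct)

# Hypothetical example: a testosterone-induced gene.
print(relative_expression(24.1, 10.2, 26.5, 10.3))  # ~4.9-fold induction
```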
Microarray analysis Each RNA sample used for microarray analysis was a pool prepared from three independent TM3-AR cell cultures (three different passages), each treated as described above. Two such pools of TM3-AR cell RNA (representing a total of 5 independent treated cell cultures) were prepared and used in two independent sets of microarrays, each of which represented 3 of the 5 independent cultures. Each set of microarrays comprised four separate competitive hybridization arrays (i.e., four microarray experiments): testosterone vs. control, MAA vs. control, testosterone + MAA vs. testosterone, and testosterone + MAA vs. MAA. This approach, employing pools of biological replicates, minimizes the impact of culture-to-culture variations that are unrelated to the treatments per se. cDNAs transcribed from each pooled RNA sample were labeled with Alexa 647 or Alexa 555 dyes in a fluorescent reverse-pair design (dye swap) for competitive hybridization to Agilent Whole Genome Mouse Microarrays (Agilent Technology, array platform G4122F). Sample labeling, hybridization to microarrays, scanning, analysis of TIFF images using Agilent's feature extraction software, calculation of linear and LOWESS-normalized expression ratios, and p-value calculation using Rosetta Resolver (version 5.1, Rosetta Biosoftware) were carried out as described [32,33]. For dye swapping experiments, the Alexa 555-labeled RNA from one of the treatment conditions (testosterone and/or MAA treated) was mixed with Alexa 647-labeled RNA for the appropriate reference control (as specified above), and vice versa. Features flagged as saturated in both fluorescence channels or flagged as non-uniformity outliers in either channel were excluded from analysis. The full set of normalized expression ratios and p-values is available at the Gene Expression Omnibus web site of NCBI [34] as GEO series GSE27410. Microarray annotation and statistical analysis Agilent mouse microarray G4122F contains 41,174 mouse probes (features), each 60 nt in length. Accession numbers were obtained for 39,355 of the 41,174 probes, of which 33,011 were assigned gene names. An additional 3,570 probes were assigned gene names using the microarray probe annotation tool AILUN [35], which maps microarray probes to Entrez genes. Each probe corresponding to a distinct mouse transcript is referred to as representing a separate gene/gene product. For each microarray probe, a mean fold-change and p-value were calculated based on the set of microarray expression ratios using the Rosetta Resolver-based error model [32]. The error model uses technology-specific data parameters to stabilize intensity variation estimates, along with error-weighted averaging of replicates. This approach has been demonstrated to provide an effective increase in statistical power [32]. The statistical significance of differential expression of each gene was determined by application of a filter (p < 0.005) to the Rosetta-generated p-values. Next, a |fold-change| filter of >2-fold was combined with the above p-value filter to determine the number of probes that were differentially regulated in any of the four microarray experiments. In total, 6,416 probes met the combined thresholds for differential expression (|fold-change| >2) and statistical significance (p < 0.005) in at least one of the four experiments. In those cases where two or more differentially expressed probes mapped to the same gene and gave the same pattern of expression across all four microarrays (reflecting probe redundancy in the array platform), a single representative probe was retained in the final data set. A total of 884 redundant probes were thus eliminated, giving a total of 5,532 non-redundant probes that met the threshold criteria for both differential expression (|fold-change| >2) and statistical significance (p < 0.005) in at least one of the four experiments (Additional file 2, Table S2A). The number of probes expected to meet the combined threshold by chance is 0.005 × 6,416, or 32 probes. The actual number of probes meeting the combined threshold was 7,811, corresponding to an apparent false discovery rate of 32/7,811, or 0.41%. Commonly used multiple-testing correction methods such as Bonferroni or Holm step-down were not applied, as these eliminate a large number of true positives and introduce an inappropriate overcorrection. A system of binary and decimal flags, termed the total flag sum (TFS), was used to cluster the differentially regulated genes into subgroups based on their patterns of expression across the four microarray experiments [36]. Briefly, each gene that met the above fold-change and p-value threshold criteria in one or more of the four microarray experiments was assigned a binary flag value of 1, 2, 4 or 8 for the first through fourth experiments, respectively.
The sum of these binary flag values defines the whole-number portion of the flag assigned to each gene and indicates which of the four microarrays met the specified threshold criteria in our analysis. In addition, decimal values of 0.1, 0.01, 0.001 and 0.0001, or 0.2, 0.02, 0.002 and 0.0002, were respectively assigned to each of the four microarrays to indicate the direction of regulation of the genes in that array (decimal digits with a value of 1 indicate up-regulation, whereas those with a value of 2 indicate down-regulation). Thus, for each gene, the TFS group designation, comprising the binary sum plus the decimal values, indicates which of the four arrays met the threshold criteria for inclusion and the direction of regulation, as outlined in Additional file 2, Table S2B. As an example, the 472 genes in TFS group 9.1001 (see Table 1, below) all meet the combined threshold for up-regulated expression in array experiments 1 and 4 (testosterone vs. control, and testosterone + MAA vs. MAA, respectively), but not in array experiments 2 and 3 (MAA vs. control, and testosterone + MAA vs. testosterone). The whole-number portion of the TFS group number, 9, equals the sum of the binary flag values 1 + 8, i.e., significant regulation in the 1st and 4th array experiments. Similarly, TFS group 6.0220 indicates down-regulation in the 2nd and 3rd array experiments, etc.

Table 1 (legend). Regulated genes were clustered based on whether there is an interaction between testosterone and MAA, as follows: Class I represents genes that respond to testosterone or to MAA with no interactive effect. Class II represents genes whose response to testosterone was enhanced by MAA (IIa), genes whose response to MAA was enhanced by testosterone (IIb), and genes whose responses to testosterone and MAA showed mutual enhancement (IIc). Class III corresponds to genes whose response to testosterone could be blocked by MAA (IIIa, IIIc) or whose response to MAA could be blocked by testosterone (IIIb, IIId). See Additional file 2, Table S2C for further details.

Gene Ontology (GO) and motif enrichment analysis GO term enrichment analysis for each TFS group was carried out using DAVID data sets [37]. Briefly, genes in each TFS group were iteratively compared with genes in each gene set that share a common GO term, and the number of overlapping genes was used to calculate an enrichment score and a Fisher's exact test p-value for each TFS group and each gene set. GO terms enriched at p < 0.001 and containing >5 genes with the specific GO term for at least one TFS group were selected, and TFS groups with at least one enriched GO term were selected. A total of 156 unique GO terms enriched in 17 TFS groups were obtained (Additional file 3, Table S3A). Hierarchical clustering was implemented using Cluster [38], and a corresponding heat map was drawn using Java Treeview [39]. Cis-regulatory elements associated with the gene expression changes induced by testosterone and MAA were identified by gene set enrichment analysis (GSEA), searching each group of genes against the 836 motif gene sets and the 221 predicted microRNA (miRNA) target gene sets that comprise the C3 module of the Molecular Signatures database [40]. The motif gene sets contain genes sharing a cis-regulatory motif conserved across the human, mouse, rat, and dog genomes, and the motifs represent known or likely transcription factor binding sites in a 4 kb genomic region centered on the transcription start site of each gene.
The miRNA target gene sets are comprised of genes with the corresponding miRNA binding sites present in their 3'-UTR sequences. Results Generation of TM3-AR cells and AR expression TM3 mouse Leydig cells are reported to be MAA responsive [19]; however, we found AR expression to be very low, and correspondingly, the androgen responsiveness of these cells was very weak, as judged by qPCR analysis (Figure 1). To increase the androgen response, AR cDNA was stably transfected into TM3 cells using a retroviral vector. The resulting pool of TM3-AR cells showed a marked increase in AR expression, comparable to that of the widely studied androgen-responsive cell line LNCaP (Figure 1A). The androgen responsiveness of TM3-AR cells was confirmed by the ~5-fold increase in expression of Rhox5 (Pem) and by the ~10-fold decrease in expression of Igfbp3 following testosterone treatment; neither gene showed a significant response to testosterone in TM3 cells, but the repression of Igfbp3 by MAA [22] was evident in both cell lines (Figure 1B).

Figure 1 (legend). Rhox5 is a testosterone-inducible Sertoli cell marker gene that is also expressed at a low level in Leydig cells in vivo [56]. Igfbp3 is repressed by testosterone, and by MAA [22], with the latter response also seen in the TM3 cells deficient in AR. Data are mean ± SD values based on n = 3 replicates, with the untreated TM3 cell value set to 1.0. Significantly different from untreated control at p < 0.05 (*) or at p < 0.001 (***); significantly different from T at p < 0.05 (#) or at p < 0.001 (###); and significantly different from MAA at p < 0.001 (+++).

Impact of MAA on TM3-AR cell gene expression The global impact of MAA on androgen-responsive gene expression was evaluated by microarray analysis. TM3-AR cells were treated for 24 h with testosterone, MAA, a combination of testosterone and MAA, or vehicle control. Total RNA from each group was then analyzed on whole-mouse-genome two-color expression microarrays for the following four comparisons: Array 1, testosterone vs. control; Array 2, MAA vs. control; Array 3, testosterone + MAA vs. testosterone; Array 4, testosterone + MAA vs. MAA. Normalized expression ratios and p-values were determined, and genes meeting our combined threshold for significance (|fold-change| > 2 and p < 0.005; see Methods) for at least one of the four microarray comparisons were identified. A total of 5,532 genes of interest were thus obtained after elimination of redundant probes. Hierarchical clustering of these 5,532 genes revealed the closest correlation between arrays 1 and 4 (effects of testosterone in the absence and presence of MAA, respectively) and, to a lesser extent, between arrays 2 and 3 (effects of MAA in the absence and presence of testosterone, respectively) (Figure 2). A complete listing of these genes, along with their expression ratios, measured signal intensities and gene annotations, is provided in Additional file 2, Table S2A. Testosterone induced 1,233 genes and repressed 1,205 genes (array 1), while MAA induced 1,206 genes and repressed 525 genes (array 2). The combination of testosterone + MAA induced 1,553 genes and repressed 748 genes when compared with testosterone treatment alone (array 3), while 1,587 genes were induced and 1,396 genes were repressed by testosterone + MAA when compared with MAA treatment alone (array 4). Among the genes induced by testosterone were Rhox5 (Pem) [41] and Amotl1 [42], two well-characterized androgen-inducible genes.
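As a rough illustration of the array-level comparison just described, the sketch below builds a hypothetical genes-by-arrays matrix of log2 ratios in which arrays 1/4 share a testosterone-driven component and arrays 2/3 an MAA-driven one, then computes the array-array correlations and an average-linkage clustering. The data are synthetic, not the study's measurements.

```python
# Sketch: correlate the four microarray comparisons from their log2 ratios
# and cluster them, mirroring the analysis that paired arrays 1/4 and 2/3.
import numpy as np
from scipy.cluster.hierarchy import linkage, dendrogram

rng = np.random.default_rng(0)
base_t = rng.normal(0, 1, 500)      # hypothetical testosterone-driven component
base_m = rng.normal(0, 1, 500)      # hypothetical MAA-driven component
ratios = np.column_stack([
    base_t + rng.normal(0, 0.3, 500),   # array 1: T vs. control
    base_m + rng.normal(0, 0.3, 500),   # array 2: MAA vs. control
    base_m + rng.normal(0, 0.3, 500),   # array 3: T+MAA vs. T
    base_t + rng.normal(0, 0.3, 500),   # array 4: T+MAA vs. MAA
])

corr = np.corrcoef(ratios.T)             # 4 x 4 array-array correlation
dist = 1 - corr[np.triu_indices(4, 1)]   # condensed distance vector
tree = linkage(dist, method="average")   # average-linkage clustering
print(np.round(corr, 2))
print(dendrogram(tree, no_plot=True)["ivl"])  # leaf order: arrays 1/4 and 2/3 pair up
```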
87% of the MAA-responsive genes in TM3-AR cells identified on array 2 overlap with the set of MAA-responsive genes that we previously identified in TM3 cells that do not express AR [29], validating the robustness of the MAA response. Ingenuity Pathway Analysis revealed testosterone-related gene networks that respond to MAA; these include cell death and cellular development, reproductive system disease, and small molecule biochemistry (Figure 3 and Additional file 4, Figure S1). MAA affects androgen response in multiple ways: clustering by significance and differential expression The impact of MAA on testosterone gene responses was investigated by classification of the regulated genes using a binary flagging system [36], whereby each gene was assigned to a specific category, termed a TFS (total flag sum) group, based upon its expression ratio and p-value in each of the four microarray experiments (Additional file 2, Table S2B). This system provides a simple way to identify gene groups that responded to testosterone or MAA and to determine whether there is any interaction between them (a minimal sketch of the computation is given at the end of this subsection). Of the 5,532 genes of interest, 5,230 (95%) could be grouped into three major classes based on the interactions of testosterone and MAA (Table 1 and Additional file 2, Table S2C). Class I is comprised of 1,655 genes (30% of the total), distributed into 7 TFS gene groups. These genes responded to testosterone and/or MAA but showed no interaction between testosterone and MAA. Class II is comprised of 2,235 genes (40%), distributed into 12 TFS groups. These genes displayed positive interactions between testosterone and MAA; i.e., testosterone enhanced responses to MAA and/or vice versa, or the combination of both agents induced gene responses not observed with the individual treatments. Class III is comprised of 1,240 genes (24%), distributed into 8 TFS groups. These genes showed negative interactions between testosterone and MAA; i.e., the response to testosterone could be blocked or reversed by MAA, or vice versa. The remaining 302 genes were distributed into 25 small TFS groups and were not considered further (Additional file 2, Table S2C). It should be noted that the induction or repression observed on treating with testosterone + MAA is compared with that obtained with testosterone alone (array 3) or with MAA alone (array 4), and not with the vehicle-treated control. In the case of class III genes, this is of particular importance, as in some cases testosterone alone may cause gene induction, while treatment with testosterone + MAA might cause repression relative to the level of expression with testosterone alone but not when compared to the vehicle control. For example, in the case of Cep70 in TFS group 5.2010, the microarray signal intensities (corresponding to expression levels) in the control, testosterone, and testosterone + MAA samples were 6,815, 3,303 and 7,092, respectively (Additional file 2, Table S2A). These values indicate repression by testosterone and induction by testosterone + MAA as compared with testosterone, but not when compared with the vehicle control. The net result, however, is that MAA blocks the repressive action of testosterone. Patterns such as these, where testosterone or MAA block or reverse the response to the other agent, characterize the genes in class III.
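A minimal sketch of the TFS bookkeeping referenced above follows, assuming signed fold-changes (negative for repression) and the thresholds |fold-change| > 2 and p < 0.005; the example gene is hypothetical.

```python
# Sketch of the total flag sum (TFS) scheme described in Methods.
# Input: per-array (fold_change, p_value) for the four comparisons.
# Binary flags 1, 2, 4, 8 mark which arrays pass |FC| > 2 and p < 0.005;
# decimal digits record the direction (1 = up, 2 = down) per array.

def tfs(results):  # results: list of four (fold_change, p_value) tuples
    binary, decimal = 0, 0.0
    for i, (fc, p) in enumerate(results):
        if abs(fc) > 2 and p < 0.005:
            binary += 2 ** i                       # 1, 2, 4, 8
            direction = 1 if fc > 0 else 2         # up- vs. down-regulated
            decimal += direction * 10 ** -(i + 1)  # 0.1, 0.01, 0.001, 0.0001
    return round(binary + decimal, 4)

# Hypothetical gene: induced >2-fold on arrays 1 and 4 only -> TFS 9.1001
print(tfs([(3.1, 1e-4), (1.2, 0.3), (-1.1, 0.6), (2.8, 2e-3)]))
```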
Real time qPCR validation To confirm the results of the microarrays, qPCR analysis was carried out for 15 genes representing five different TFS groups (Figure 4 and Additional file 5, Table S4). Results were in close agreement, although in several cases fold-change values determined by qPCR were greater than those obtained by microarray (e.g., 38.6-fold induction of Tulp2 by testosterone + MAA vs. testosterone alone by qPCR, versus 7.6-fold induction by microarray; Additional file 5, Table S4). This finding is consistent with the compression of expression ratios commonly seen with microarrays. Functional impact of MAA on androgen-responsive gene expression Gene Ontology (GO) term analysis was carried out to identify the functional gene categories (i.e., the GO terms) enriched in the sets of genes that comprised each major TFS group. These analyses were useful for elucidating the functional consequences of testosterone and MAA treatment and their interactions. A summary of the major results is presented in Figure 5, with full details provided in Additional file 3, Table S3A. Among the gene groups showing no interaction between MAA and testosterone (class I genes), 964 genes responded to testosterone but not to MAA. The 472 class I genes up-regulated by testosterone (TFS group 9.1001) were most highly enriched in GO terms associated with negative regulation of apoptosis, ion binding and lipid metabolism (Figure 5). In contrast, the 492 class I genes down-regulated by testosterone (TFS 9.2002) were enriched for immune response, cytokine activity, chemotaxis, extracellular matrix and developmental processes (Figure 5). Class II genes, whose responses are enhanced by testosterone and/or MAA, were distributed into three subclasses (Table 1), based on whether MAA enhanced responses to testosterone (class IIa, 819 genes), testosterone enhanced responses to MAA (class IIb, 619 genes), or the enhancement was mutual (class IIc, 734 genes). Class IIa genes showed the highest enrichment for lipid biosynthesis (TFS 8.0001) and for apoptosis, cell differentiation, and regulation of biological processes (TFS 8.0002). Class IIb genes showed the highest enrichment for extracellular matrix, cell adhesion and chemotaxis (TFS 13.022), while class IIc genes showed the highest enrichment for plasma membrane (TFS 12.0011) and for extracellular matrix, cell adhesion, and organ development (TFS 12.0022). Class III genes were distributed into subclasses based on whether MAA blocked the response to testosterone (IIIa, IIIc) or testosterone blocked the response to MAA (IIIb, IIId), and whether testosterone and MAA are active alone (IIIa, IIIb) or not (IIIc, IIId). The largest TFS group in class IIIa (TFS 1.1000; 329 genes) showed the greatest enrichment for cellular and biopolymer metabolic processes, nucleic acid binding, kinase activity and metal ion binding, and included 55 genes that encode nuclear factors, indicating a wide range of impact of MAA on testosterone responses. Finally, the genes in class IIId, TFS group 10.0102, whose induction by MAA was blocked by testosterone, and whose suppression by testosterone was only manifested when MAA was present, showed the greatest enrichment for extracellular region and defense response (Figure 5 and Additional file 3, Table S3A). Motif enrichment analysis Species-conserved transcription factor binding site motifs and 3'-UTR miRNA binding sites enriched in the genes belonging to each TFS group were identified by GSEA [40], as described under Methods. A total of 64 motifs enriched in 13 TFS groups were identified after filtering out motifs not showing enrichment at p < 0.001 and TFS groups with no motifs enriched at p < 0.001.
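Both the GO term analysis and this motif filtering rest on the same 2×2 enrichment test described under Methods; the sketch below illustrates the Fisher's exact test with hypothetical counts, and is an illustration only, not the DAVID or GSEA implementation itself.

```python
# Sketch of gene-set enrichment via Fisher's exact test, as described
# in Methods for GO terms; counts below are hypothetical placeholders.
from scipy.stats import fisher_exact

def enrichment(hits_in_group, group_size, hits_in_genome, genome_size):
    """2x2 table: membership in a TFS group vs. membership in a gene set."""
    table = [
        [hits_in_group, group_size - hits_in_group],
        [hits_in_genome - hits_in_group,
         genome_size - group_size - (hits_in_genome - hits_in_group)],
    ]
    odds, p = fisher_exact(table, alternative="greater")  # test over-representation
    return odds, p

# Hypothetical: 53 targets of a motif among 492 genes in one TFS group,
# out of 1,500 targets genome-wide in a 20,000-gene background.
print(enrichment(53, 492, 1500, 20000))
```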
Table 2 summarizes the results, and the discovered motifs and miRNA binding sites are clustered in Additional file 6, Figure S2. Detailed information for each motif and miRNA binding site, including enrichment scores, is provided in Additional file 3, Table S3B. Motifs enriched in the promoters of class I genes in TFS group 9.2002 include binding sites for FOXO and other forkhead family transcription factors, with FOXO motifs showing the most significant enrichment (p = 3.2 × 10^-6). Six Hox genes that were repressed by testosterone (TFS 9.2002) or whose repression was mutually enhanced by MAA and testosterone (TFS 12.0022), i.e., Hoxb5, Hoxb9, Hoxc6, Hoxc8, Hoxd3 and Hoxd13, are putative targets of FOXO (Additional file 2, Table S2D), suggesting an important role for Fox and Hox genes in the modulation of AR activity by MAA. Motifs for MEF2A, TCF/LEF and STAT5 factors were also enriched among the class I TFS groups, as were 3'-UTR binding sites for several miRNAs. The enriched motifs for class II genes, whose responses to testosterone and MAA showed mutual enhancement, include the binding sites summarized in Table 2.

Figure 4 (legend). Treatment conditions were as in Figure 1B. qPCR analysis was carried out for the five indicated genes; their TFS group assignments (Additional file 2, Table S2) are shown in parentheses. Two pools of TM3-AR cell RNA, each comprising RNA isolated from 3 independent cell cultures, were assayed and are represented by the pair of bars in each treatment group. Data are mean ± SD values based on n = 3 replicates, with the first vehicle control pooled sample set to 1.0. Significantly different from control at p < 0.05 (*) or at p < 0.01 (**); significantly different from T at p < 0.05 (#); and significantly different from MAA at p < 0.01 (++). qPCR primers used for this analysis are shown in Additional file 1, Table S1.

Figure 5 (legend). GO terms enriched in TFS groups. Shown is a hierarchical clustering of GO terms that were significantly enriched (p < 0.0001, number of included genes >10) in at least one TFS group. GO terms are displayed at the right and TFS group numbers are shown across the top. The color bar at the top left represents -log10 p-values, with higher numbers (darker colors) indicating more significant enrichment. A complete list of enriched GO terms, p-values, enrichment scores and regulated genes in each GO term is provided in Additional file 2, Table S2A. GO terms associated with apoptosis are in the boxed region at the top. The TFS groups represent the following groups of genes: 9.1001, genes induced by testosterone, both without and with MAA present; 9.2002, genes suppressed by testosterone, both without and with MAA present; 8.0001, genes induced by testosterone, but only in the presence of MAA; 8.0002, genes suppressed by testosterone, but only in the presence of MAA; 12.0011, genes induced by testosterone + MAA, relative to testosterone alone and relative to MAA alone; 12.0022, genes suppressed by testosterone + MAA, relative to testosterone alone and relative to MAA alone; 13.2022, genes suppressed by testosterone either alone or in combination with MAA, where MAA enhances the suppression by testosterone; 1.1000, genes whose induction by testosterone is blocked by MAA; and 10.0102, genes induced by MAA, but only in the absence of testosterone, and suppressed by testosterone, but only in the presence of MAA.
Binding sites for STAT5, TCF/LEF, FOXF2, PGR, MYCN and several miRNAs were enriched in class III (TFS groups 1.1000, 2.0100 and 2.0200). These analyses also revealed a large number of miRNA binding sites common to the 3'-UTR sequences of testosterone-regulated genes. Discussion MAA, the active metabolite of the industrial chemical ethylene glycol monomethyl ether, is an established testicular toxicant. Earlier studies suggested that MAA could potentiate AR transcriptional activity without significantly altering the dose-response curve for androgen activity, as determined in reporter gene studies [22]. In the present study, the impact of MAA on the expression of androgen-regulated genes was characterized globally in a mouse Leydig cell model. Mouse TM3 cells stably expressing AR were treated with testosterone, MAA, or both chemicals in combination, and 5,532 genes responding to one or more treatments were identified and then classified and sub-classified based on their patterns of response to each treatment. GO term and motif enrichment analyses were carried out for genes in each subgroup to help identify the biological functions and pathways affected by MAA as it impacts cellular responses to testosterone. MAA has a wide range of impact on androgen responses AR activated by testosterone can have direct effects on target gene transcription [43], as well as indirect effects mediated by intracellular signaling pathways. These range from stimulation of protein kinases to direct modulation of voltage- and ligand-gated ion channels and transporters [44], some of which may lead to changes in gene expression [44]. Here, our microarray analysis identified large numbers of genes that responded to testosterone, a subset of whose responses were modulated by MAA. These genes contribute to a wide range of biological processes, including cell death, development, ion binding, kinase activities and transcription. These findings may help explain some earlier observations regarding the toxicities of MAA. For example, MAA stimulates apoptosis of male germ cells [19,23,[45][46][47][48] by mechanisms proposed to involve various kinases and ion transporters [45,47]. Ion transport is important for the maintenance of intracellular pH, perturbation of which can affect germ cell fertility [49]. Proteins involved in transport comprise a large group of MAA-regulated genes, including genes whose expression is affected by MAA alone and genes that are additively or synergistically regulated by MAA and testosterone. For instance, GO term enrichment analysis identified 52 ion-binding protein genes and 19 kinase genes that were significantly enriched in the set of genes induced by testosterone whose induction is blocked by MAA (TFS group 1.1000). This same gene set showed enrichment for genes that negatively regulate apoptosis. In contrast, both positive and negative regulators of apoptosis were enriched in the set of genes repressed by testosterone in the presence of MAA (TFS group 8.0002). Further investigation will be required to determine whether these gene responses contribute to the testicular toxicities of MAA seen in mouse models, as well as their relevance to humans exposed to MAA. FOXO proteins can associate with AR and other nuclear/steroid hormone receptors, leading to either inhibition or enhancement of receptor transcriptional activity [12]. These interactions have the potential to impact the development of hormone-dependent cancers, including prostate, breast and ovarian cancer [12].
Here, we found that FOXO motifs were enriched in 53 genes repressed by testosterone irrespective of whether MAA was present (11% of the genes in TFS group 9.2002), and in 12 genes down-regulated by testosterone only when MAA was present, and vice versa (15% of genes in TFS group 12.0022) (Additional file 2, Table S2D). These findings suggest that FOXO factors play an important role in cellular responses to testosterone and their modulation by MAA. Eighteen of the potential FOXO targets are involved in transcriptional regulation, including 6 Hox genes (Hoxb5, Hoxb9, Hoxc6, Hoxc8, Hoxd3 and Hoxd13) (Additional file 2, Table S2D). Of note, loss of Hoxc6 has been reported to induce apoptosis [50]. Moreover, 16 of the 65 FOXO target genes down-regulated by testosterone are associated with developmental processes, as indicated by their GO terms. Based on our microarray signal intensity data, at least three FOXO genes are either highly expressed (Foxo1) or moderately expressed (Foxo6, Foxo3a) in untreated TM3-AR cells, suggesting that these factors may mediate the effects on FOXO target genes. FOX family proteins are primarily regulated through the phosphoinositide 3-kinase (PI3K)-Akt pathway via phosphorylation and nuclear exclusion [11], which is consistent with our earlier finding that the PI3K/Akt pathway is required for the effects of MAA on AR transcriptional activity [22]. Two other transcription factors that are expressed in TM3-AR cells and may be involved in the interactions between testosterone and MAA are LEF/TCF and STAT5. Binding sites for LEF/TCF are significantly enriched in several sets of genes that are regulated by testosterone and MAA, while binding sites for STAT5 are enriched in genes repressed by testosterone (TFS group 9.2002) and in genes whose induction by testosterone was blocked by MAA (TFS group 1.1000) (Table 2; Additional file 3, Table S3B). These findings are consistent with reports that STAT5 and LEF/TCF can modulate AR-regulated gene responses, with STAT5 showing positive interactions with AR [8], and LEF/TCF either repressing or enhancing AR activity [14]. Similarly, our finding that binding sites for MEF2 are enriched in TFS groups responsive to testosterone or MAA (Table 2) is consistent with the finding that binding sequences for MEF2 family transcription factors are commonly found near binding sites for AR, at least in muscle cells [51]. Possible roles for miRNAs in MAA and testosterone responses miRNAs are short, ~22-nucleotide RNAs that generally bind to 3'-UTR sequences of target mRNAs, resulting in post-transcriptional mRNA down-regulation and translational repression [52]. Here, we identified several miRNAs whose putative target sites are over-represented in genes responsive to MAA or testosterone, suggesting a possible role for these miRNAs in mediating responses to MAA and testosterone. Genes in TFS group 1.1000, whose induction by testosterone was blocked by MAA, were enriched in 3'-UTR binding sites for the largest number of miRNAs (Table 2). These include mir-9 and miR-519e, which have been reported to down-regulate AR protein [53]. Conceivably, MAA could induce these two miRNAs, which, in turn, would down-regulate AR protein and functional activity. Two other miRNAs whose binding sites were enriched in the genes of TFS group 1.1000, namely mir-20A and mir-202, are induced in testicular tubules following suppression of FSH and androgen [54], which leads to a block in spermiation.
The enrichment of these miRNAs in TFS 1.1000 genes suggests that testosterone may down-regulate these miRNAs, which would, in turn, lead to the observed up-regulation (de-repression) of the TFS 1.1000 genes with mir-20A and mir-202 sites. Moreover, the inhibition of this gene induction by MAA suggests that MAA may block, or perhaps reverse, the down-regulation of these miRNAs by testosterone. Further study is required to determine the effects of testosterone and MAA on these and other testis-expressed miRNAs, and their impact on spermatogenesis and the toxicities associated with MAA exposure. Impact of testosterone and MAA on expression of CYP and GST genes CYP (cytochrome P450) and GST (glutathione S-transferase) enzymes metabolize a broad range of endogenous and exogenous compounds. Here, we found that the expression of 20 CYP and 12 GST genes was affected by either MAA or testosterone (Additional file 2, Table S2E). Nine of these genes were induced by MAA alone (Cyp2d22, Cyp26a1, Cyp26b1, Gstk1, Gstm6, Gstm7, Gstt2, Mgst2 and Mgst3), while four genes were induced by MAA but down-regulated by testosterone (Cyp1a1, Cyp2s1, Cyp2f2 and Mgst3). Three CYPs that show female-predominant expression in mouse liver [55] were further induced by testosterone in the presence of MAA compared with testosterone treatment alone (Cyp2b9, Cyp2b10 and Cyp2b13). Further studies are needed to determine whether these enzymes play a metabolic role in MAA modulation of testosterone signaling and/or the detoxification of MAA.
An Investigation of the Photonic Application of TeO2-K2TeO3-Nb2O5-BaF2 Glass Co-Doped with Er2O3/Ho2O3 and Er2O3/Yb2O3 at 1.54 μm Based on Its Thermal and Luminescence Properties A glass composition using TeO2-K2TeO3-Nb2O5-BaF2 co-doped with Er2O3/Ho2O3 and Er2O3/Yb2O3 was successfully fabricated. Its thermal stability and physical parameters were studied, and luminescence spectroscopy of the fabricated glasses was conducted. The optical band gap, Eopt, decreased from 2.689 to 2.663 eV following the substitution of Ho2O3 with Yb2O3. The values of the refractive index, third-order nonlinear optical susceptibility (χ(3)), and nonlinear refractive index (n2) of the fabricated glasses were estimated. Furthermore, the Judd–Ofelt intensity parameters Ωt (t = 2, 4, 6) and radiative properties such as electric dipole-type transition probabilities (Aed), magnetic dipole-type transition probabilities (Amd), branching ratios (β), and radiative lifetimes (τ) of the fabricated glasses were evaluated. The emission cross-section and FWHM of the 4I13/2→4I15/2 transition around 1.54 μm of the glass were reported, and the emission intensity of the visible signal was studied under 980 nm laser excitation. The material might be a useful candidate for solid-state lasers and nonlinear amplifier devices, especially in the communications bands. Introduction Due to their unique optical characteristics, glasses are frequently utilized as optical materials. In recent years, there has been an obvious increase in interest in the possible applications of tellurite glasses due to their intriguing and significant optical features. Tellurite glasses have been used to fabricate a wide range of devices, such as planar waveguides, nanowires, and optical amplifiers [1][2][3][4]. Numerous studies have been conducted to examine the physical characteristics of several tellurite (TeO 2 ) glass combinations. TeO 2 glasses have attracted considerable technical and scientific interest because of their many applications [5]. Technological optical fiber devices, lasers, optical fibers, solar cells, sensors, memory-switching devices, gas sensors, optoelectronics, and optical waveguide applications have all demonstrated significant potential for using these glasses [5][6][7][8]. Furthermore, due to TeO 2 's excellent nonlinear optical characteristics, low melting point, superior chemical stability, and elevated index of refraction, it has drawn greater interest as a glass former than others such as silicates and phosphates [9][10][11]. Given their large transparency window, high refractive index, and outstanding stability, TeO 2 glasses containing a heavy metal oxide such as Nb 2 O 5 , forming a heavy metal tellurite glass, are appealing for the further development of infrared lasers and amplifiers [12]. Furthermore, niobic TeO 2 glasses have an elevated third-order nonlinear optical susceptibility, which makes them a viable option for nonlinear fiber devices such as all-optical switches [12]. The contents of non-bridging oxygen (NBO) and TeO 3 units both rise when modifiers such as BaO 2 disrupt the random arrangement of the glass [13]. Moreover, the alkaline earth metal barium (Ba) has a high basicity, a large ionic radius, and a high atomic number. Consequently, the presence of Ba in the TeO 2 glass framework modifies the glass's structure and enhances its chemical stability, density, gloss, and refractive index [14,15]. When fluorine ions are added to TeO 2 glass, the glass-formation range is increased, its viscosity is decreased, its degree of transparency is
enhanced, and its moisture resistance is increased [16]. Also, adding fluorine to TeO 2 glasses breaks up the network of the glassy system because fluorine's electronegativity is greater than that of oxygen [16]. This implies that the configuration of the glass network in these kinds of glasses is affected by the replacement of oxygen with fluoride. When the framework is altered, many fundamental characteristics also change. Unlike purely oxide or purely fluoride matrix structures, an oxide-fluoride glass matrix can offer rare earth ions a unique environment. As a result, oxyfluoride glasses with a high proportion of rare earth elements are novel, beneficial substances [17]. Fluoride glasses have a low cost and a wide spectral range, which makes them ideal for optical fiber applications involving sensors. It is commonly known that increasing a glass matrix's metal fluoride content improves its transparency and refractive index [9]. The prospective utilization of glasses co-doped with rare earth ions in solid-state lasers, three-dimensional displays, and optical amplifiers has garnered significant interest in the last decade [18]. The rare earth ions erbium (Er 3+ ), ytterbium (Yb 3+ ), and holmium (Ho 3+ ) have received the most attention. Insertion of Er 3+ , Ho 3+ , and Yb 3+ ions into the glass matrix allows for the production of up-conversion luminescence when it is exposed to mid-infrared (MIR) radiation. Er 3+ ions can be actively excited by a high-power 980 nm laser diode. Depending on the mutual concentration of these ions, an energy transfer process among them can then alter the up-conversion emission intensity [19]. Moreover, erbium-doped glasses are extensively utilized in a diversity of optical implementations, mostly in the areas of eye-safe lasers and optical amplifiers for fiber networking [18,20,21]. Furthermore, the observable up-conversion emission can be amplified by co-doping the host substance with Ho 2 O 3 /Er 2 O 3 or Yb 2 O 3 /Er 2 O 3 couplings. This is because of a greater absorption cross-section and greater energy transfer from Ho 3+ to Er 3+ ions or from Yb 3+ to Er 3+ ions, respectively [18,22,23]. The present work aimed to prepare a TeO 2 -based glass (70TeO 2 -15K 2 TeO 3 -10Nb 2 O 5 -5BaF 2 ) co-doped with Er 2 O 3 /Ho 2 O 3 and Er 2 O 3 /Yb 2 O 3 and to investigate the impact of these rare earth oxides on the thermal and optical properties of the glasses. Differential scanning calorimetry and a double-beam spectrophotometer were utilized to study these characteristics. Furthermore, optical spectrum measurements were conducted to estimate the optical factors. The energy levels of the glass co-doped with Er 2 O 3 /Yb 2 O 3 were determined, and the branching ratios (β), radiative lifetimes (τ rad ), electric dipole-type transition probabilities (A ed ), magnetic dipole-type transition probabilities (A md ), and Judd-Ofelt intensity factors Ω t (t = 2, 4, 6) were calculated.
The primary goal of this work was to investigate how the amounts of Yb 3+ and Ho 3+ affect the spectroscopic characteristics of Er 3+ -co-doped TeO 2 glasses. This will help maximize the gain and emission cross-section of the 4 I 13/2 → 4 I 15/2 transition and will also determine whether or not these glasses are suitable as optical glasses for lasers and fiber amplifiers. Utilizing measurements of the absorption spectra and McCumber theory, the absorption, emission, and gain cross-sections of the 4 I 13/2 → 4 I 15/2 transition were derived at approximately 1.54 µm. Finally, the emission intensity of the visible signal was studied under 980 nm laser excitation, and the FWHM of the 4 I 13/2 → 4 I 15/2 transition of the glass was reported. Furthermore, we estimated the nonlinear refractive index and third-order susceptibility of the fabricated glass, which shows good transmission with multiple absorption peaks in the near-infrared wavelength range. The fabricated glass therefore has unique properties that can be used in nonlinear devices. Experimental Section The TeO 2 glasses with the composition 70TeO 2 -15K 2 TeO 3 -10Nb 2 O 5 -5BaF 2 (in mol%) co-doped with Er 2 O 3 /Ho 2 O 3 and Er 2 O 3 /Yb 2 O 3 were produced using a melt-quenching process. All the starting chemicals were procured from Aldrich and were 99.99% pure. The particulars of their compositions are displayed in Table 1. After mixing, the mixture was heated for 25 min at 950 °C in a platinum crucible within a furnace. After that, a graphite mold was filled with the extremely viscous melt. Following two hours of annealing at 270 °C, the quenched glass was gradually cooled to room temperature (RT). Figure 1 shows pictures of the sample glasses as they were produced. The samples were cut and polished to a thickness of 2.1 mm. To investigate the thermal properties of these glasses, a DSC Shimadzu 50 with a resolution of ±1.0 °C, operated at a heating rate of 10 °C/min over a temperature range of up to 550 °C, was employed. Toluene, an immersion liquid of known density (0.8669 g/cm 3 ), was employed with the Archimedes method, at a resolution of ±0.001 g/cm 3 , to determine the glass samples' density at RT. The optical absorption and transmission spectra were obtained in the 190-2500 nm wavelength range, with a resolution of 1 nm, using a JASCO V-570 spectrophotometer.
Experimental Section The TeO2 glasses with the composition 70TeO2-15K2TeO3-10Nb2O5-5BaF2 (in mol%) co-doped with Er2O3/Ho2O3 and Er2O3/Yb2O3 were produced using a melt-quenching process.All the starting chemicals were procured from Aldrich and were 99.99% pure.The particulars of their compositions are displayed in Table 1.After mixing, the mixture was heated for 25 min at 950 °C in a platinum crucible within a furnace.After that, a graphite mold was filled with the extremely viscous melt.Following two hours of annealing at 270 °C, the quenched glass was gradually cooled to room temperature (RT).Figure 1 shows pictures of the sample glasses as they were produced.The samples were cut and polished to 2.1 mm thick.To investigate the thermal properties of these glasses, a DSC Shimadzu 50 with a resolution ± 1.0 °C at a heating rate of 10 °C/min over a temperature range of 550 °C was employed.Toluene, a known-density immersing solution (0.8669 g/cm 3 ), was employed using the Archimedes method with a resolution ± 0.001 g/cm 3 to determine the glass sample's density at RT.In the 190-2500 nm wavelength range with a resolution of 1nm, the optical absorbing and transmission spectra were attained utilizing a JASCO V-570 spectrophotometer. Thermal Characteristics The DSC thermograms for the 70TeO 2 -15K 2 TeO 3 -10Nb 2 O 5 -5BaF 2 glass co-doped with Er 2 O 3 /Ho 2 O 3 and Er 2 O 3 /Yb 2 O 3 at a rate of 10 • C/min are shown in Figure 2.These graphs demonstrate that at the glass transition, distinct endothermic peaks are seen, followed by an exothermic crystallization peak.Table 2 lists the glass transition temperature (T g ), crystallization start temperature (T c ), and peak crystallization temperature (T p ) for the investigated glasses. Thermal Characteristics The DSC thermograms for the 70TeO2-15K2TeO3-10Nb2O5-5BaF2 glass co-doped with Er2O3/Ho2O3 and Er2O3/Yb2O3 at a rate of 10 °C/min are shown in Figure 2.These graphs demonstrate that at the glass transition, distinct endothermic peaks are seen, followed by an exothermic crystallization peak.Table 2 lists the glass transition temperature (Tg), crystallization start temperature (Tc), and peak crystallization temperature (Tp) for the investigated glasses.The glassy nature of the current samples was confirmed based on the curve forms.Additionally, Tg offers details regarding the glass network's interconnectivity and binding strength.It is understood that Tg increases as the glass's interconnectivity and binding strength grow [24].These values of (Tg) are close to the tellurite-based glasses [25].The distinction between Tc and Tg is employed to determine the thermal stability (ΔT) [26,27] of the produced glasses (ΔT = Tc − Tg).The glass's excellent thermal stability is indicated by the significant difference between Tc and Tg.Another method for calculating the glass's stability would be to use Sestak's estimation of the Hruby index, H = ΔT/Tg.[28,29].The ∆T and H are shown in Table 2, and these are important for determining the glass devitrification process [27].The next relation could be employed to evaluate the value of the factor KSP, which is associated with glass's stability versus crystallization [28][29][30][31]: The glassy nature of the current samples was confirmed based on the curve forms.Additionally, T g offers details regarding the glass network's interconnectivity and binding strength.It is understood that T g increases as the glass's interconnectivity and binding strength grow [24].These values of (T g ) are close to the 
tellurite-based glasses [25]. The difference between T c and T g is employed to determine the thermal stability (∆T) of the produced glasses (∆T = T c − T g ) [26,27]. A large difference between T c and T g indicates excellent thermal stability of the glass. Another method for assessing the glass's stability is the Hruby index, H = ∆T/T g [28,29]. The ∆T and H values are shown in Table 2 and are important for characterizing the glass devitrification process [27]. The following relation can be employed to evaluate the factor K SP , which is associated with the glass's stability against crystallization [28][29][30][31]: The K SP magnitudes of the produced glasses are documented in Table 2. All values of the thermal stability parameters (∆T, H, K SP ) of the TKNB glass co-doped with Er 2 O 3 /Yb 2 O 3 (TKNB2) are slightly lower than those of the TKNB glass co-doped with Er 2 O 3 /Ho 2 O 3 (TKNB1). This may suggest a decrease in rigidity of the glassy matrix due to the replacement of Ho 2 O 3 with Yb 2 O 3 . This decrease in thermal stability upon substituting Ho 2 O 3 with Yb 2 O 3 might be attributed to the following causes: (i) the lower Yb-O bond strength (387.7 kJ mol −1 ) compared with Ho-O (606 kJ mol −1 ) [32]; or (ii) the cation radius of Yb 3+ (2.28 Å) being slightly smaller than that of Ho 3+ (2.33 Å). In addition to reducing bond lengths and producing a strong coulombic force of attraction between opposing ions, the cation radius is directly related to polarizability [33].

Density and Molar Volume The densities of the examined glasses were determined utilizing Archimedes' principle, with toluene (ρ 0 = 0.8669 g/cm 3 ) as the immersion fluid. The density (ρ) was computed employing the equation presented below after the weights of the glasses were measured first in air (W a ) and then in the aforementioned liquid (W l ). The molar volume (V m ) was computed using the formula V m = M w /ρ, where M w is the glass sample's molecular weight. The values of ρ and V m are listed in Table 1. It is worth mentioning that the substitution of Ho 2 O 3 with Yb 2 O 3 leads to a slight increase in glass density, while V m slightly decreases. This can be ascribed to both the higher ρ and the higher M w of Yb 2 O 3 (9.17 g/cm 3 , 394.08 g/mol) compared with Ho 2 O 3 (8.41 g/cm 3 , 377.86 g/mol). Given that ρ is inversely related to V m and proportional to the average M w , the two quantities are generally expected to behave in opposition to one another.

The absorption coefficient (α) of the examined glasses is computed using the following formula [32][33][34]: where L is the sample thickness, I 0 and I t are the light intensities before and after traversing the sample, and OD is the optical density.
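As an illustration of these relations, here is a minimal Python sketch (not the authors' code) computing ρ, V m , α, and the thermal stability indices. The Archimedes and absorption-coefficient expressions are the standard forms, assumed here because the original equation images did not survive extraction; the K SP expression is assumed to be the Saad-Poulain criterion, and all numeric inputs are placeholders, not the paper's data.

import math

RHO_TOLUENE = 0.8669  # g/cm^3, immersion liquid density used in the paper

def density_archimedes(w_air, w_liquid, rho_liquid=RHO_TOLUENE):
    # Standard Archimedes relation (assumed): rho = W_a * rho_0 / (W_a - W_l)
    return w_air * rho_liquid / (w_air - w_liquid)

def molar_volume(mol_weight, rho):
    # V_m = M_w / rho (cm^3/mol)
    return mol_weight / rho

def absorption_coefficient(optical_density, thickness_cm):
    # alpha = 2.303 * OD / L, equivalent to (1/L) * ln(I0/It)
    return 2.303 * optical_density / thickness_cm

def stability_indices(tg, tc, tp):
    # Delta T, Hruby index H, and the (assumed Saad-Poulain) K_SP factor
    d_t = tc - tg
    return d_t, d_t / tg, (tp - tc) * d_t / tg

# Placeholder inputs only:
rho = density_archimedes(w_air=1.2500, w_liquid=1.0400)
print(rho, molar_volume(150.0, rho))
print(absorption_coefficient(optical_density=0.85, thickness_cm=0.21))  # 2.1 mm sample
print(stability_indices(tg=340.0, tc=420.0, tp=455.0))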
The absorption spectra of glasses with varying compositions are used to study changes in the optical band gap (E opt ) and refractive index (n). The E opt , which is the difference between the highest energy in the valence band and the lowest energy in the conduction band, together with temperature, determines how many electrons are excited into the conduction band. When a material's E opt falls within the range of 0 to 4 eV, it is classified as a semiconductor; a substance is classified as an insulator if its E opt value is larger, falling between 4 and 12 eV. These values are crucial for designing semiconductor devices, since the energy gap determines the electrical and optical characteristics of the device. Studies of the optical absorption edge in the UV region have proven extremely useful for clarifying the electronic transitions and band structure of substances. Using the following connection, the optical band gap E opt of the specimens is computed from their absorption patterns [35]: where b is a constant, hν represents the energy of the incoming photon, and α is the absorption coefficient. The index s is equal to 1/2, 2, 3/2, or 3 for directly allowed, indirectly allowed, directly forbidden, and indirectly forbidden transitions, respectively, and describes the optical absorption process. In this instance, the formula with s = 2, which covers indirect transitions, is employed to characterize the experimental findings for amorphous substances [33][34][35]. The variation of (αhν) 1/2 vs. hν for indirect transitions is shown in Figure 5. The E opt of the fabricated glasses was determined by extrapolating the linear fit of the curve in the UV-VIS range to (αhν) 1/2 = 0 on the hν axis. Table 3 lists each value of E opt .

It is discovered that the E opt values of the two glass samples are lower than the value of 3.79 eV for pristine TeO 2 glass determined in a previous study [36]. The E opt values for the TKNB glass co-doped with Er 2 O 3 /Yb 2 O 3 (TKNB2) are slightly lower than those of the TKNB glass doped with Er 2 O 3 /Ho 2 O 3 (TKNB1). The bridging oxygen (BO) in the glass-forming system is altered by the addition of rare earth ions, and any alteration in BO, including the creation of non-bridging oxygen (NBO), alters the absorption properties, which in turn reduces the optical band gap. The coordination number of the TeO 2 matrix varies when rare earth is added. Both the structural organization and the chemical makeup of the sample matrix affect the optical band gap [37]. When Ho 2 O 3 is replaced with Yb 2 O 3 , the slight decrease in E opt is due to the substitution of the Ho-O bond with the Yb-O bond. The decrease in E opt with the addition of Yb 2 O 3 can be elucidated by the change in electron density [38]. When Yb 2 O 3 is added to a glass sample instead of Ho 2 O 3 , the amount of NBO may increase, which consequently decreases E opt [37]. The increase in NBO arises from the molar mass of the materials [39]: the molar mass of Yb 2 O 3 is higher than that of Ho 2 O 3 . Thus, there is an increase in NBO atoms, which lowers the value of E opt for the Yb 2 O 3 -containing glass sample. Additionally, the E opt values, ranging from 2.689 eV to 2.663 eV, fall within the semiconductor range [40].
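A minimal sketch of the Tauc-type extrapolation described above: fit the linear part of (αhν) 1/2 vs. hν and take the hν-axis intercept as E opt . This is the assumed procedure, not the authors' script, and the data points are placeholders.

import numpy as np

# Placeholder (h*nu, alpha) pairs from the linear region of the absorption edge
h_nu = np.array([2.9, 3.0, 3.1, 3.2, 3.3])             # photon energy, eV
alpha = np.array([120.0, 180.0, 250.0, 330.0, 420.0])  # cm^-1

# Indirect-allowed transitions (s = 2): y = (alpha * h*nu)^(1/2) is linear in h*nu
y = np.sqrt(alpha * h_nu)

slope, intercept = np.polyfit(h_nu, y, 1)
e_opt = -intercept / slope  # h*nu-axis intercept where (alpha*h*nu)^(1/2) = 0
print(f"E_opt ~ {e_opt:.3f} eV")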
The following equations characterize the extinction coefficient (k) and the refractive index n of the examined glasses [41]: k = αλ/4π (5) and n = (1 + R)/(1 − R) + [4R/(1 − R) 2 − k 2 ] 1/2 (6), where R is the reflectance. Figures 6 and 7 show the computed values of k and n of the examined glasses, respectively. Both variations in structure and the incident wavelength (λ) can affect n and k. According to Figure 7, n drops as λ increases. The various optical factors determined from the n and k spectral distributions vs. λ for the tested glasses are described below.

The data for the refractive indexes in Figure 7 are fitted to a three-term Sellmeier equation: where A, B, C, D, and E are the dispersion factors (Sellmeier coefficients) of the glass substance, and λ is the wavelength in µm. The n is influenced by both the smaller and greater energy gaps from electronic absorption, as indicated by the first and second terms. The final term reflects the refractive index-lowering effect of network absorption [42]. The square of twice the IR transmission edge can be utilized to calculate E [42,43]. Table 3 displays the Sellmeier factors, obtained by fitting the experimental data [44] with the use of Equation (7).

The Wemple and DiDomenico (WDD) single-oscillator model was utilized to study the dispersion of n [45]. The relation between n and hν can be represented as follows utilizing this model: where E d is the dispersion energy, an estimate of the oscillator strength or average strength of the interband optical transitions, and E o is the oscillator energy. Equation (8) may be used to derive E o and E d by graphing (n 2 − 1) −1 versus (hν) 2 , as Figure 8 illustrates; the intercepts and slopes of the linear fits yield the E o and E d values. When hν → 0, the Wemple-DiDomenico dispersion relationship, Equation (8), is extrapolated to obtain the static refractive index (n 0 ) of the as-prepared glasses, which yields the following formula: The determined values of E o , E d , and n 0 are recorded in Table 4. The value of n 0 for the TKNB glass doped with Er 2 O 3 /Yb 2 O 3 (TKNB2) is higher than that of the TKNB glass doped with Er 2 O 3 /Ho 2 O 3 (TKNB1) due to the higher density of the TKNB2 sample.
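For reference, the following is a hedged LaTeX reconstruction of the standard WDD single-oscillator relations that Equations (8) and (9) appear to denote; the equation images did not survive extraction, so the exact notation is assumed:

n^2(h\nu) - 1 = \frac{E_o E_d}{E_o^2 - (h\nu)^2} \quad (8)

\left(n^2 - 1\right)^{-1} = \frac{E_o}{E_d} - \frac{(h\nu)^2}{E_o E_d}, \qquad n_0 = \sqrt{1 + \frac{E_d}{E_o}} \quad (9)

In this form the plot of (n 2 − 1) −1 against (hν) 2 is linear, with the intercept E o /E d and slope −1/(E o E d ) yielding the two parameters, consistent with the fitting procedure the text describes.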
Important factors for the production of optical equipment, such as fiber optic and laser materials, are n, the molar polarizability (α m ), and the molar refraction (R m ). Consequently, the following formulas [34] are applied to calculate these properties of the examined glasses: where N A is Avogadro's number. The R m and α m values are recorded in Table 3. These values for the TKNB glass doped with Er 2 O 3 /Yb 2 O 3 (TKNB2) are higher than those of the glass doped with Er 2 O 3 /Ho 2 O 3 (TKNB1). This provides a qualitative explanation for the rise in refractive indices observed when Yb 2 O 3 replaces Ho 2 O 3 : a rise in molar refraction, accompanied by a rise in the oxide ion polarizability, elevates n. The following relationship [31] was utilized to estimate the metallization criterion (M) for the as-prepared glasses: The M value indicates the metallic or non-metallic character of a substance: M < 0 suggests a metallic nature, whilst M > 0 indicates an insulating nature. The M values are listed in Table 3 and lie between 0.503 and 0.507; therefore, the produced glasses demonstrate an insulating nature [30,31]. The exchange of Ho 2 O 3 with Yb 2 O 3 causes an increase in the width of the valence band and a decrease in M and, thus, a decrease in E opt . As shown in Table 3, the glass doped with Er 2 O 3 /Ho 2 O 3 had higher values of M and E opt , while the glass co-doped with Er 2 O 3 /Yb 2 O 3 had smaller values of M and E opt .

The nonlinear optical parameters can be determined based on Miller's rule. The dispersion of the linear optical susceptibility χ (1) and the third-order nonlinear optical susceptibility χ (3) are deduced from their respective empirical relations [46]. The nonlinear refractive index n 2 of the examined glasses is related to the third-order nonlinear susceptibility χ (3) and the static refractive index n 0 , and can be determined using the following equation [47,48]: The calculated values of χ (1) , χ (3) , and n 2 are listed in Table 4. These values are higher for the TKNB glass doped with Yb 2 O 3 /Er 2 O 3 (TKNB2) than for the TKNB glass doped with Ho 2 O 3 /Er 2 O 3 (TKNB1). The reported results demonstrate an increase in the nonlinear susceptibility, and the examined glasses show high values of the linear optical susceptibility χ (1) when an atom is replaced with one of smaller atomic radius. This applies when Ho 3+ (atomic radius = 2.33 Å) is replaced with Yb 3+ (atomic radius = 2.28 Å) in the TKNB glass doped with Yb 2 O 3 /Er 2 O 3 (TKNB2), which causes an increase in the χ (3) value. The high values of the third-order nonlinear optical susceptibility χ (3) and nonlinear refractive index n 2 suggest that these glasses can be used in nonlinear optical devices, especially in the communication bands.
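A hedged LaTeX sketch of the standard relations these paragraphs appear to rely on (Lorentz-Lorenz molar refraction and polarizability, the metallization criterion, and Miller's generalized rule); the empirical constant A below is the commonly quoted value and may differ from the one the authors used:

R_m = V_m\,\frac{n_0^2 - 1}{n_0^2 + 2}, \qquad \alpha_m = \frac{3}{4\pi N_A}\,R_m, \qquad M = 1 - \frac{R_m}{V_m}

\chi^{(1)} = \frac{n_0^2 - 1}{4\pi}, \qquad \chi^{(3)} = A\left(\chi^{(1)}\right)^4,\ A \approx 1.7\times10^{-10}\ \mathrm{esu}, \qquad n_2 = \frac{12\pi\,\chi^{(3)}}{n_0}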
Absorption Spectra and Judd and Ofelt Analysis Figure 9 displays the absorption spectra of the Er 3+ /Yb 3+ -co-doped glasses in the UV-VIS-NIR wavelength range between 400 and 1800 nm. The 4f 11 -4f 11 transitions of Er 3+ and Yb 3+ are responsible for all of the bands. Three bands at 800, 975, and 1530 nm are detectable in the infrared section of the spectrum (Figure 9a). The first and third bands result from Er 3+ transitions from the ground state 4 I 15/2 to 4 I 9/2 and 4 I 13/2 , respectively. The second band, with an increased optical density, represents an overlap of the absorptions of the two transitions 2 F 7/2 → 2 F 5/2 (originating from the Yb 3+ ion) and 4 I 15/2 → 4 I 11/2 (originating from the Er 3+ ion). As seen in Figure 9b, the UV-visible region of the absorption spectrum exhibits well-resolved lines, ascribed to transitions of Er 3+ ions from the ground state 4 I 15/2 to the 4 F 3/2 , 4 F 5/2 , 4 F 7/2 , 2 H 11/2 , 4 S 3/2 , and 4 F 9/2 levels. These absorption bands were assigned and located in accordance with Carnall et al. [49] and related studies. The absorption spectra of this glass medium (TKNB2) were subjected to a Judd and Ofelt (JO) analysis in order to ascertain its spectroscopic characteristics; a quick synopsis of the JO analysis follows.

The well-known theory developed by JO in 1962 makes it possible to compute the probability of an electric dipole transition between rare earth ion energy levels in diverse contexts [50]. Theoretically, the electric (S ed ) and magnetic (S md ) dipole line strengths can be utilized to describe the radiative transitions from an initial J level to a succeeding J ′ level in the 4f n configuration of rare earth ions [51,52].
The calculated line strength S calc ed (SLJ, S ′ L ′ J ′ ) between the initial state J, described by (S, L, J), and the final state J ′ , given by (S ′ , L ′ , J ′ ), can be expressed using the following relation [53,54]: The influence of the host glass on the luminescence intensity is represented by the JO intensity factors Ω t (t = 2, 4, 6), and the doubly reduced matrix elements of rank t between the states with the quantum numbers (S, L, J) and (S ′ , L ′ , J ′ ) are SLJ U (t) S ′ L ′ J ′ 2 [51]. The reduced matrix elements, which may be obtained from the literature [52][53][54][55][56][57][58], depend only on the angular momentum of the Er 3+ states. The reduced matrices are defined as the total of the relevant matrix elements for two or more manifolds. Table 5 provides the matrix element values for each Er 3+ absorption band. Meanwhile, the measured electric dipole line strength S meas ed (SLJ, S ′ L ′ J ′ ) can also be calculated from the absorption spectra (Figure 9) [56][57][58] by using the following equation: where J is the total angular momentum quantum number of the ground state (J = 15/2), e is the charge of the electron, and N is the ion content (ions/cm 3 ). The mean wavelength of the absorption band is denoted by λ (nm), the refractive index of the host at λ is represented by n, L is the thickness of the sample under study (L = 1 mm), and ∫ J→J ′ OD(λ) dλ symbolizes the experimentally determined integrated optical density over the respective wavelength range.

The magnetic dipole transitions S md (SLJ, SLJ ′ ) contribute as follows [59]: The symbols m and c stand for the electron mass and the velocity of light, respectively. The magnetic dipole matrix elements between the LS-coupled states are represented by ⟨SLJ∥L + 2S∥SLJ ′ ⟩ [60]. Table 6 demonstrates that the only Er 3+ transition with a magnetic dipole contribution, S md = 0.7148 × 10 −20 cm 2 , is 4 I 15/2 → 4 I 13/2 (selection rule: ∆J = 1).

The measured electric dipole line strengths S meas ed were utilized to calculate the values of the JO intensity factors Ω 2 , Ω 4 , and Ω 6 . If the JO factors form a column vector Ω, and the doubly reduced matrix elements form an n × 3 matrix A, where n is the number of transitions to fit and 3 matches the three JO factors, then the measured electric dipole line strengths can be written as an n × 1 column vector. The equality between Equations (13) and (14) can be articulated as S meas ed = A·Ω. The set of JO factors was obtained from Ω = (A T A) −1 A T S meas ed , where A T is the transpose of matrix A. This matrix-based process is well suited to computations carried out on a computer.

The optimal fit was performed, accounting for the first six transitions and yielding the following values: Ω 2 = 2.387 × 10 −20 cm 2 , Ω 4 = 1.881 × 10 −20 cm 2 , and Ω 6 = 0.657 × 10 −20 cm 2 . These computed Ω t values were utilized in Equation (12) to obtain the values of S calc ed . Table 6 presents the values of some of the parameters employed in the computation, as well as the S meas ed and S calc ed absorption line strengths for the Er 3+ /Yb 3+ -co-doped TKNB2 sample.
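Since the text notes that this matrix procedure is well suited to computer calculation, here is a minimal illustrative sketch (not the authors' code) of the least-squares solution Ω = (A T A) −1 A T S meas ed and the accompanying δ rms fit measure; the matrix and line-strength values below are placeholders, not the paper's data.

import numpy as np

# Placeholder n x 3 matrix built from the doubly reduced matrix elements
# |<U(2)>|^2, |<U(4)>|^2, |<U(6)>|^2 and the Eq. (12) prefactors (see Table 5).
A = np.array([
    [0.0188, 0.1176, 1.4617],   # illustrative row, e.g., 4I15/2 -> 4I13/2
    [0.0259, 0.0001, 0.3994],
    [0.0000, 0.1732, 0.0099],
    [0.0000, 0.5354, 0.4619],
    [0.0000, 0.0000, 0.2211],
    [0.7125, 0.4123, 0.0925],
])
# Placeholder measured electric dipole line strengths (10^-20 cm^2), one per transition
S_meas = np.array([1.20, 0.45, 0.35, 1.35, 0.15, 2.10])

# Least-squares JO intensity parameters: Omega = (A^T A)^{-1} A^T S_meas
Omega, *_ = np.linalg.lstsq(A, S_meas, rcond=None)

# Goodness of fit: root-mean-square deviation with (N_trans - 3) degrees of freedom
S_calc = A @ Omega
delta_rms = np.sqrt(np.sum((S_meas - S_calc) ** 2) / (len(S_meas) - 3))
print("Omega_2,4,6 =", Omega, " delta_rms =", delta_rms)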
The root mean square difference δ rms between the predicted and observed line strengths of the transitions, given by the formula below, serves as a gauge of the fit's accuracy: where N trans is the number of strong absorption transitions.

The calculated Ω t parameters accord well with those reported for different glasses in existing works. The Ω t measurements of Er 3+ ions in several other common glasses [67][68][69][70][71][72][73] are shown in Table 7. According to the JO theory [74], the intensity parameters comprise two terms. The first is the crystal field parameter, which describes the symmetry and distortions associated with the structural alteration in the presence of rare earth ions. The other represents the covalency between the ligand anions and the doped rare earth ions, which is connected to the excited opposite-parity electronic states and the 4f radial integrals of the wave functions. Further, the intensity parameter Ω 2 correlates with the covalent bonding of ligand anions and rare earth ions (less ionic in nature) in the host and with the symmetry of the immediate environment surrounding them. The degree of asymmetry surrounding rare earth sites increases with an increasing value of Ω 2 , indicating a greater covalency of the metal-ligand bond. The bulk characteristics of the glass framework, such as stiffness, viscosity, and basicity, are primarily described by the intensity factors Ω 4 and Ω 6 , which are also influenced by the acidity and alkalinity of the host substance [75]. The host material's hardness increases and its basicity decreases with higher values of Ω 4 and Ω 6 . Table 7 demonstrates that the three intensity parameters follow the order Ω 2 > Ω 4 > Ω 6 , which aligns with the majority of results from previous studies. The higher Ω 2 value compared with Ω 4 and Ω 6 indicates a greater asymmetry and superior covalence [76]. Meanwhile, all three Er 3+ intensity parameters measured in the current study fall within the range of values of other glass hosts, suggesting a local asymmetry associated with the Er 3+ /Yb 3+ ions and a moderate covalency of the Er-O/Yb-O bonds in the current TeO 2 glass. The manufactured glass's spectroscopic quality factor, Q = Ω 4 /Ω 6 , has a value of 2.8630. The produced glass had a higher value of Q than the glass hosts ZBLAN [77], Boro-tellurite [78], and TLNT [79], and higher than those found for TeO 2 glasses in earlier investigations, indicating that it is a better fit for optical devices. For optical amplifier and laser usage, the glass used in the present research (TKNB2) is a great option [80].

The emission line strengths for the transitions from the higher energy levels 4 I 13/2 , 4 I 11/2 , 4 I 9/2 , 4 F 9/2 , 4 S 3/2 , 2 H 11/2 , and 4 F 7/2 to their lower-lying energy levels can be computed utilizing the JO intensity parameters. The spontaneous emission probabilities A rad (J → J ′ ) can be calculated from these line strengths as follows: A rad (J → J ′ ) = A ed + A md = [64π 4 / 3h(2J + 1)λ 3 ] [ n(n 2 + 2) 2 /9 · S ed + n 3 · S md ] (20) where A ed and A md denote the respective radiative transition probabilities for electric and magnetic dipoles. Equations (13) and (15) are employed to calculate S ed and S md , respectively. The relative transition probability, or branching ratio β rad (J → J ′ ), can be obtained using the following equation if there are many transitions from
the level: where all terminal states J ′ are covered by the sum. The radiative lifetime τ rad of an energy level, which is based on the rates of spontaneous emission across all transitions from this level, can be calculated utilizing the following equation: Table 8 presents the electric dipole transition probability A ed , magnetic dipole transition probability A md , spontaneous emission probability A rad , fluorescence branching ratio β rad , and radiative lifetime τ rad of the Er 3+ ions in TKNB2 glass for the transitions from the higher energy levels 4 I 13/2 , 4 I 11/2 , 4 I 9/2 , 4 F 9/2 , 4 S 3/2 , 2 H 11/2 , and 4 F 7/2 to their respective lower-lying energy levels. In general, green and red upconversion emissions were observed under NIR excitation for the Er 3+ /Yb 3+ -co-doped glass (see Figure 10). According to the estimates, the branching ratios of the transitions 2 H 11/2 → 4 I 15/2 (green), 4 S 3/2 → 4 I 15/2 (green), and 4 F 9/2 → 4 I 15/2 (red) were 94.8585%, 69.0467%, and 92.5887%, respectively. These findings indicate that the green and red emission transition channels predominate over all related higher-level radiative transitions.

The capacity of the ions to absorb or emit light is measured using the absorption and emission cross-sections. A significant emission cross-section indicates a large gain coefficient and a lower pump laser threshold energy. Utilizing the Beer-Lambert equation, the absorption cross-section (σ a ) of the 4 I 13/2 → 4 I 15/2 transition (1540 nm) of Er 3+ was found from the absorption patterns of the Er 3+ /Yb 3+ -co-doped TeO 2 glass [41]: where N is the content of Er 3+ ions (ions/cm 3 ), L is the thickness of the sample, and OD(λ) is the optical density of the measured absorption patterns of the manufactured glass. The McCumber theory allows for the extraction of the stimulated emission cross-section (σ e ) utilizing the following relationship [42][43][44]: The absorption cross-section and the partition functions of the lower and upper states engaged in the optical transition under consideration are represented by σ a (λ), Z l , and Z u .
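For reference, a hedged LaTeX reconstruction of the Beer-Lambert and McCumber expressions invoked here, written in their standard forms (the original equation images did not survive extraction):

\sigma_a(\lambda) = \frac{2.303\,\mathrm{OD}(\lambda)}{N\,L}, \qquad \sigma_e(\lambda) = \sigma_a(\lambda)\,\frac{Z_l}{Z_u}\,\exp\!\left(\frac{hc/\lambda_{ZL} - hc/\lambda}{k_B T}\right)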
The Planck constant (6.63 × 10 −34 J·s), the Boltzmann constant (1.38 × 10 −23 J/K), and the temperature (RT in this case) are denoted by the parameters h, k B , and T. Additionally, λ ZL denotes the wavelength of the transition between the lowest Stark sublevels of the emitting and terminal multiplets. Figure 11 displays the computed values of the absorption and emission cross-sections of the Er 3+ /Yb 3+ -co-doped TeO 2 glass (between 1430 and 1630 nm). The peak stimulated emission cross-section, σ peak e , is located at 6.86 × 10 −21 cm 2 . This value is in good agreement with those of other TeO 2 glasses reported in the literature [77][78][79][80]. Moreover, the emission cross-section σ peak e and the full width at half-maximum (FWHM) are crucial factors in obtaining high gain and wideband amplification in optical amplifiers. The resulting FWHM × σ peak e product can be employed to determine an optical amplifier's bandwidth characteristics. A broader gain bandwidth and lower pump threshold power are implied by higher values of the FWHM × σ peak e product and a higher radiative lifetime τ rad . Table 9 lists the FWHM × σ peak e values of Er 3+ /Yb 3+ -co-doped glasses. The FWHM × σ peak e of TKNB2 has a bandwidth characteristic comparable to other glass hosts such as phosphate tellurite glass (337.5 × 10 −21 nm·cm 2 ) and PBGG (338.5 × 10 −21 nm·cm 2 ) [35]. Finally, we found that the effect of Er 3+ /Ho 3+ was only slightly different from that of Er 3+ /Yb 3+ in the same host glass; therefore, we only report the emission cross-section and gain spectroscopic values for the 4 I 13/2 → 4 I 15/2 transition of TKNB2. When assessing the effectiveness of laser media, the optical gain coefficient is an essential factor to consider. Once the absorption and emission cross-sections for the transitions between the two working laser levels are known, the following equation can be applied to determine the optical gain coefficient G(λ) [41]: where P is the population inversion rate. The electron population ratio across the two energy levels is represented by the P value, which progressively increases from 0 to 1, as seen in Figure 12. This figure demonstrates that a positive gain can be achieved when P ≥ 0.6, suggesting that TKNB2 glass may find use as a matrix substance for 1.54 µm fiber lasers.
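The gain relation referred to above, reconstructed in its standard two-level form (assumed, since the equation image is missing):

G(\lambda) = N\left[P\,\sigma_e(\lambda) - (1 - P)\,\sigma_a(\lambda)\right]

Here N is the Er 3+ ion density, so the gain changes sign from absorption-dominated to emission-dominated as P increases, which is consistent with the positive gain reported for P ≥ 0.6.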
Conclusions The thermal stability parameters for the Er 2 O 3 /Yb 2 O 3 -co-doped glass were slightly lower than those for the Er 2 O 3 /Ho 2 O 3 -co-doped glass due to the substitution of Ho 2 O 3 with Yb 2 O 3 in the glassy matrix. The value of E opt decreased from 2.689 to 2.663 eV following the substitution of Ho 2 O 3 with Yb 2 O 3 , because this substitution increased the amount of NBO in the glass network. The physical parameters (α m , R m , E d , E o , and n 0 ) were correlated with the presence of the rare earth oxides (Er 2 O 3 /Ho 2 O 3 and Er 2 O 3 /Yb 2 O 3 ) in the glassy matrix. The spectroscopic characteristics of the Er 3+ /Yb 3+ -co-doped glasses were evaluated based on their intensity factors, radiative rates, branching ratios, and radiative lifetimes. The maximum emission cross-section reported was 6.8 × 10 −21 cm 2 , and the gain coefficient of Er 3+ /Yb 3+ for the 4 I 13/2 → 4 I 15/2 transition was 6.0 cm −1 , with a high quality factor (FWHM × σ peak e = 377.3 × 10 −21 nm·cm 2 ). Replacing the Ho 3+ ions with Yb 3+ ions in the same host glass leads to a slight change in the spectroscopic properties, viz., the emission cross-section, gain, and intensity parameters of the 4 I 13/2 → 4 I 15/2 transition.
Table 1 . The composition, density ρ, and molar volume V m of the TKNB glassy system co-doped with Er 2 O 3 /Ho 2 O 3 and Er 2 O 3 /Yb 2 O 3 .
Table 2 . Thermal parameters of the TKNB glass system co-doped with Er 2 O 3 /Ho 2 O 3 and Er 2 O 3 /Yb 2 O 3 .
Table 4 . Dispersion and nonlinear optical parameters of the TKNB glass system co-doped with Ho 2 O 3 /Er 2 O 3 and Yb 2 O 3 /Er 2 O 3 .
Table 5 . Reduced matrix element values for Er 3+ absorption transitions.
Pulsed Electric Current Sintering of Transparent Alumina Ceramics Aluminum oxide (Al2O3), commonly referred to as alumina, is one of the most widely used engineering oxide ceramics. In terms of crystalline structure, Al2O3 takes many forms (α, χ, η, δ, θ, γ and ρ), with α-Al2O3 being thermodynamically the most stable. Examples of the α phase of Al2O3 are corundum and sapphire [1]. In the present chapter, α-Al2O3 is discussed and described as Al2O3. With its high melting temperature and chemical stability, Al2O3 lends itself to applications as high-temperature components, catalyst substrates and biomedical implants. Al2O3 has excellent optical transparency and, with additives such as chromium and titanium, it is important as a sodium lamp (sapphire), a gem stone (sapphire and ruby) and a laser host (ruby). Fundamentals of PECS are also discussed in the present chapter in order to understand PECS for transparent Al 2 O 3 . Inoue [11,12] developed the first concept of the PECS technology in 1966. He introduced different electric current waveforms, i.e. low-frequency alternating current (AC), high-frequency unidirectional AC or pulsed DC. These sintering techniques were combined in one sintering process of electric-discharge sintering (EDS) [11], also known as spark sintering (SS). In the SS process, a unidirectional pulsed DC or a unidirectional AC is applied, onto which a DC is eventually superimposed. This process led to the development of current PECS technologies, e.g. plasma activated sintering (PAS), spark plasma sintering (SPS), field assisted sintering and plasma pressure compaction® (P 2 C) [13]. In the late 1980s various companies started to manufacture PECS machines based on Inoue's patents. Since then, the number of PECS applications has been extended further. In the early 1990s, Sumitomo Coal Mining Co. commercialized new PECS apparatuses (2-20 kA DC pulse generators, 98-980 kN load cells) [14,15]. The PECS process is schematically shown in Figure 1. It simultaneously applies an electric current along with a uniaxial pressure in order to accelerate densification of powders into the desired configuration [16]. The electric current delivered during PECS processes can in general assume different intensities and waveforms, which depend upon the power supply characteristics [2,3,14,16]. The PECS process is characterized by the application of a pulsed electric current during sintering. The heating rate in the PECS process depends on the materials and shapes of the die/sample ensemble and on the electric power supply. Heating rates from 100 to 600 K/min can be obtained in current PECS equipment. As a consequence, the PECS process can be completed in times ranging from a few tens of seconds to minutes, depending on the material to be sintered and its size, the configuration and the equipment capacity. The temperature is measured either with a pyrometer focused on the surface of the graphite die or with a thermocouple inserted into the die. Usually, the measured temperature at the surface of the die (die temperature) is lower than that of the sample (sample temperature). The magnitude of this temperature difference depends on a number of factors, such as the thermal conductivity of the die and the sample, the heating rate used, the pressure used, how well the die is thermally insulated, etc. [17]. The current and consequent temperature distributions within the sample are very important to the homogeneity of the density and grain size distribution of the product.
Locally dense parts, particularly at the beginning of current flow, may result in local overheating or even melting [16]. Experimental evidence of temperature distributions in materials of different conductivity has been reported in [18][19][20][21][22][23][24]. It has been verified that the electrical properties of the sample significantly influence the temperature distributions inside the die as well as inside the sample. Thus, in a nonconductive sample (i.e. Si 3 N 4 and Al 2 O 3 ), larger thermal gradients have sometimes been observed than in the case of a conductive one (i.e. Ti and Ni), indicating that the temperature distribution within a nonconductive sample is not as homogeneous as within a conductive sample. Current understanding of the effects of the pulse current waveform on compact density in a PECS process is still incomplete. The pulse current did not significantly affect the PECS of cast-iron powder [20] or Ni-20Cr powder [21]. In the PECS process of Al powder, densification behavior is independent of pulse frequency in the range from 300 Hz to 20 kHz [25]. The applied current, however, can significantly affect the growth of the product layer in the chemical reaction between Mo and Si plates [26,27]. In the PECS process, the pulsed DC current affected growth in the Nb-C system and of the carbide layers (e.g. Mo 2 C) formed in Mo/C, Ti/C and Zr/C diffusion couples [28][29][30]. Inoue claimed that there was a frequency-dependent effect in his patent [31]. The densification rate of Fe and Ni based alloys processed by pulsed current was about 5% faster than by direct current [32]. However, the sintering mechanism of insulating oxides such as Al 2 O 3 in the PECS method is still an on-going research area. Many papers on PECS of Al 2 O 3 powder have focused on densification and grain growth behavior by investigating the effects of various parameters such as particle size, heating rate, sintering time, pressure and sintering temperature during the PECS process. The influences of the sintering parameters on densification and grain growth are not yet clear. There are no reports about pulse current waveform effects on the sintering behavior of Al 2 O 3 . The waveform of the applied current is probably an important factor in the sintering process of Al 2 O 3 . The effect of two types of pulse current waveforms, inverter and pulsed DC, on the sample temperature and densification of Al 2 O 3 powder in the PECS process has been clarified in [33][34][35]. The magnitude of the voltage peaks increased with an increase of the "OFF" time relative to the "ON" time for all pulse power generators. The maximum voltage value of the inverter generator was higher than that of the pulsed DC generator. PECS with the inverter generator yielded a higher sample temperature than that with the pulsed DC generator. In PECS of Al 2 O 3 powder, the electric current is mostly applied to the punches and graphite die for heating up to the sintering temperature. The average peak height of the 12/2 pulsed DC pattern is lower than that of the 2/6 pulsed DC pattern, as well as lower than those of the 40/10 and 10/20 inverter patterns at the same die temperature. The inverter-type PECS had a higher voltage applied to the graphite die than the pulsed DC-type at the same die temperature. When the number of "OFF" pulses increased, as in the 10/20 inverter or the 2/6 pulsed DC pattern, the peak height of the voltages of the "ON" pulses must have increased to keep the output power constant.
A temperature difference is generated in the Al 2 O 3 sample during PECS [33][34][35]. When PECS of an Al 2 O 3 sample ϕ15 mm in diameter and 3 mm in thickness was conducted, the temperature at the outside of the sample was 20-30 K higher than that inside the sample. The inside/outside temperature difference using pulsed DC was approximately 10 K lower compared to that using the inverter. PECS with an inverter produced a higher sample temperature than that with a pulsed DC power generator, and the sample temperature was also higher than the die temperature. When the die temperature is increased, the temperature difference between the die surface and the sample also increases. The sample temperature would be strongly affected by the applied current profile during the PECS process. The current flow should be strongly dependent on the characteristics of the different elements which compose the system (powder, punches, die) and, particularly, on their electrical and thermal characteristics. For an insulating material, the applied current does not flow through the sample when pulsed current power is applied to the die-sample assembly; it can only flow from one punch to the other via the die. The current forms a magnetic field near the surfaces of the punches and the inner surface of the die close to the sample, and this magnetic field affects the current density [19,[33][34][35][36][37]. The highest current density should be located close to the sample surface, as illustrated in Figure 2. The temperature distribution is closely related to the current distribution because the heat is generated by the flow of current in the graphite die and the punches. Thus, during the PECS process, the Al 2 O 3 powder must be sintered by the heat transferred by conduction from the punches and from the inner part of the die close to the sample surface. Given that heat generation and transfer lead to a temperature distribution, the temperature outside the sample is higher than that inside [33][34][35]. In the punch-compression direction, the temperature is lower than that of the sample because the punches are in contact with the water-cooled jacket, and the die is cooled by radiation from its outer surface. The ON/OFF pulse patterns and power generator frequency can also affect the sample temperature. In the case of the inverter power waveform, a high voltage at high frequency with a long "OFF" time flowing into the die and punches gives a higher heat transfer to the sample than the other pulse patterns. The difference in sample temperature between the inverter and pulsed DC current waveforms can be explained by the skin effect, as shown in Figure 2. In the punch/die/sample system, the dark areas show the current concentration during the PECS process. In both cases, the heat is generated only in the conductive die, and the distribution of the heat generation does not change drastically with the electrical conductivity of the sample. The electric current distribution is the main cause of the temperature gradient between the sample and the external surface of the die, together with radiation heat loss from the die surface. When a high frequency current (inverter generator) is applied to the punches and graphite die, the current density near the inner surface of the punches/die should be higher than that at their center. In contrast, at low frequency (pulsed DC generator), the current density would be uniformly distributed across the graphite die and punches.
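To make this frequency argument concrete, here is an illustrative back-of-the-envelope sketch (not from the chapter) estimating the classical skin depth δ = [ρ/(π f μ0 μr)] 1/2 in a graphite die; the resistivity is a typical assumed figure for graphite, not a measured value from these experiments.

import math

MU0 = 4 * math.pi * 1e-7   # vacuum permeability, H/m
RHO_GRAPHITE = 1.0e-5      # assumed typical graphite resistivity, ohm*m
MU_R = 1.0                 # graphite is essentially non-magnetic

def skin_depth(freq_hz, resistivity=RHO_GRAPHITE, mu_r=MU_R):
    # Classical skin depth: current density falls to 1/e within this depth
    return math.sqrt(resistivity / (math.pi * freq_hz * MU0 * mu_r))

# Compare a low-frequency pulsed-DC-like case with a high-frequency inverter-like case,
# spanning the 300 Hz - 20 kHz range cited for the pulse generators.
for f in (300.0, 20e3):
    print(f"{f:>8.0f} Hz -> skin depth {skin_depth(f) * 1000:.1f} mm")
# ~92 mm at 300 Hz (current nearly uniform across a small die) versus
# ~11 mm at 20 kHz (current concentrated near the punch/die surfaces).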
This difference suggests that a higher temperature could be achieved due to the higher applied voltages, and that the energy required for heating a sample with an inverter is higher than that with a pulsed DC generator [19,[33][34][35][36][37]. The relative density as a function of the outside/inside sample temperature was discussed in [33][34][35][36][37]. These results show a consistent trend of increasing relative density with increasing sample temperature, independent of the applied pulse current waveforms and ON/OFF patterns. It was also revealed that the average grain size increases with sample temperature, even for different pulse current waveforms and ON/OFF patterns. Densification and grain growth were dominated by the sample temperature. The pulsed electric current waveform affected the sample temperature but did not directly influence the densification, grain growth or homogeneity of the sample sintered by the PECS process.

Sintering of transparent polycrystalline alumina Transparent polycrystalline Al 2 O 3 has increasingly become the focus of recent investigations, primarily because of its unique combination of properties. Single crystals of Al 2 O 3 are highly transparent in the visible and IR regions. However, polycrystalline Al 2 O 3 ceramics are usually opaque because of light scattering at pores and grain boundaries, as well as impurities. High density is the most important factor for producing transparent polycrystalline ceramics, along with grain size. Because of the high efficiency of pores in scattering light, transparency in polycrystalline materials requires an extremely low level of porosity, less than 0.01 vol.%. Samples with such low porosity can only be produced under proper sintering conditions involving high temperatures and long sintering times. Residual porosity is much more important than grain boundaries for obtaining transparency, even in materials that are crystallographically anisotropic in their optical properties. The scattering efficiency of spherical pores, however, decreases dramatically when pore sizes in the nanometric range can be achieved [38][39][40]. It is believed that nanostructured polycrystalline materials would possess higher transparency than those in the micrometric grain size range. Sintering at high temperature causes extensive grain growth and thereby seriously degrades the mechanical properties of the material. More importantly, grain sizes larger than 410 μm lead to significant light scattering arising from the birefringence of coarse Al 2 O 3 grains [41]. Recently, fine-grained transparent polycrystalline Al 2 O 3 has attracted much attention due to its superior mechanical and optical properties. This material is prepared by sintering using HP and HIP at low temperatures ranging from 1150 to 1400 °C. The formation of a nanostructure (< 1 μm) results in a significant improvement in both the mechanical strength and the optical transparency. It is reported that the mechanical strength of fine-grained transparent Al 2 O 3 reaches 400-600 MPa, together with a high in-line transmission of up to 60% for visible light [41,45]. Thus far, the addition of a small amount of MgO is known to suppress normal and abnormal grain growth.

Transparent Polycrystalline Al 2 O 3 Produced by using PECS Many reports on PECS for sintering transparent Al 2 O 3 have been published, as well as on other transparent polycrystalline oxides.
As processing technology for ceramic powders has progressed, oxide ceramic powders with fine grain sizes and less agglomeration have been developed. Transparent polycrystalline Al 2 O 3 with fine grains has been prepared from such advanced oxide powders by using PECS. Recently, transparent oxide ceramics with grains as fine as 300 nm have been reported with different PECS techniques, for Al 2 O 3 as well as other oxides. Munir and his colleagues promote PECS with ultra-high pressures such as 500 MPa [82]. High-pressure PECS is effective for preparing highly transparent polycrystalline Al 2 O 3 [83], and also Y 2 O 3 -doped ZrO 2 [82] and Y 2 O 3 [84]. High-pressure PECS is very useful for eliminating closed pores. However, the sample size is likely limited in high-pressure PECS. Kim et al. proposed slow-heating PECS for densifying Al 2 O 3 with less grain growth [57]. PECS with a slow heating rate is applicable not only to Al 2 O 3 but also to MgAl 2 O 4 [85]. Kim studied the kinetics of densification and grain growth with respect to stress rate from the point of view of "dynamic grain growth" [86]. He noted that a slow stress rate in PECS is preferred for densifying Al 2 O 3 with less grain growth. On the other hand, Makino and his colleagues reported that transparent polycrystalline Al 2 O 3 can be successfully obtained by PECS with a fast heating rate such as 200 K/min [73]. However, the transparency of the samples obtained with a fast heating rate was not homogeneous. The influence of the heating rate on densifying Al 2 O 3 without significant grain growth is thus still under discussion. Goto and his colleagues reported PECS of transparent Lu 2 O 3 with a two-step pressure profile [87]. Lu 2 O 3 is one of the candidate laser host materials for high-power and ultra-short pulse lasers; however, it is difficult to densify by conventional sintering. Taking into account the advanced studies on transparent oxides by Kim and Goto, the sintering profile is very important even in a PECS process. Thus, PECS can provide transparent polycrystalline oxides. Besides the oxides described here, there are many examples of transparent oxides sintered by using PECS. Table 1 shows a variety of transparent polycrystalline oxides prepared by using PECS.

Two-step PECS for transparent polycrystalline alumina The authors have studied PECS with a two-step temperature profile, that is, two-step PECS (referred to as TS-PECS), in order to fabricate transparent oxide ceramics with fine grains [74,75]. Figure 3 shows the sintering profile of TS-PECS together with other PECS techniques. TS-PECS can provide highly transparent oxides with a shorter sintering period in comparison with slow-heating PECS. Figure 4 shows the appearance, fracture surface and density of polycrystalline Al 2 O 3 prepared by TS-PECS with different 1st-step temperatures held for 60 min, followed by 1200 °C for 20 min, under 100 MPa in vacuum. A sample prepared by slow-heating PECS at 1200 °C is shown for comparison. The importance of the 1st-step temperature can be understood from Figure 4. The sample sintered at 1000 °C in the 1st step has high transparency and less grain growth. The purpose of the 1st step is densification without significant grain growth. Sintering at 1000 °C can provide densification without grain growth; however, full densification cannot be achieved. To reach full densification of the sample, a 2nd step with a higher sintering temperature is necessary. This is because of the existence of macroscopic defects as large as a few tens of micrometers.
Figure 6 shows an optical microscopic image of the inside of the transparent Al 2 O 3 prepared by TS-PECS. Many black dots are observed in the sample. In particular, the agglomeration of the initial particles is very important, even in PECS, for both structural ceramics and transparent ceramics.
Engineering osmolysis susceptibility in Cupriavidus necator and Escherichia coli for recovery of intracellular products Background Intracellular biomacromolecules, such as industrial enzymes and biopolymers, represent an important class of bio-derived products obtained from bacterial hosts. A common key step in the downstream separation of these biomolecules is lysis of the bacterial cell wall to effect release of cytoplasmic contents. Cell lysis is typically achieved either through mechanical disruption or reagent-based methods, which introduce issues of energy demand, material needs, high costs, and scaling problems. Osmolysis, a cell lysis method that relies on hypoosmotic downshock upon resuspension of cells in distilled water, has been applied for bioseparation of intracellular products from extreme halophiles and mammalian cells. However, most industrial bacterial strains are non-halotolerant and relatively resistant to hypoosmotic cell lysis. Results To overcome this limitation, we developed two strategies to increase the susceptibility of non-halotolerant hosts to osmolysis using Cupriavidus necator, a strain often used in electromicrobial production, as a prototypical strain. In one strategy, C. necator was evolved to increase its halotolerance from 1.5% to 3.25% (w/v) NaCl through adaptive laboratory evolution, and genes potentially responsible for this phenotypic change were identified by whole genome sequencing. The evolved halotolerant strain experienced an osmolytic efficiency of 47% in distilled water following growth in 3% (w/v) NaCl. In a second strategy, the cells were made susceptible to osmolysis by knocking out the large-conductance mechanosensitive channel (mscL) gene in C. necator. When these strategies were combined by knocking out the mscL gene from the evolved halotolerant strain, greater than 90% osmolytic efficiency was observed upon osmotic downshock. A modified version of this strategy was applied to E. coli BL21 by deleting the mscL and mscS (small-conductance mechanosensitive channel) genes. When grown in medium with 4% NaCl and subsequently resuspended in distilled water, this engineered strain experienced 75% cell lysis, although decreases in cell growth rate due to higher salt concentrations were observed. Conclusions Our strategy is shown to be a simple and effective way to lyse cells for the purification of intracellular biomacromolecules and may be applicable in many bacteria used for bioproduction. Supplementary Information The online version contains supplementary material available at 10.1186/s12934-023-02064-8.

Supplementary Tables Whole genome sequences of both the parent strain H16 and the evolved strain ht030b were obtained to identify the mutations that arose throughout the ALE. Both genomes were first mapped to the reference C. necator H16 genome obtained by Little et al. [1], and differences between each of the two genomes we sequenced and the reference genome were identified. Several of these variations were found in both the parent strain and the evolved strain, indicating these mutations were not acquired throughout the ALE. Five variations (meeting the quality control criteria described in the Methods) were unique to either the parent or the evolved strain. Four variations (relative to the reference genome) were found in ht030b, while one was found in the unevolved H16 strain. This is denoted by the column labeled "SNP Present In". Figure 2B in the main text compares the growth of wild-type C. necator H16 with the adapted halotolerant strain ht030b.
In that experiment, 4 replicate cultures each of H16 and ht030b were grown overnight at 30 °C in a 24-well plate, with starting optical densities (A600nm) of 0.01. H16 demonstrated no visible growth, whereas ht030b exhibited exponential growth with a specific growth rate of 0.16 h -1 . However, we have found that growth of H16 in high salt conditions appears dependent on the starting optical density of the culture and other culturing conditions. A similar experiment was therefore performed in 50-mL volumes in 250-mL baffled shake flasks. Both H16 and ht030b were seeded to starting optical densities of ~0.05.

Fig. S1 Growth curve of H16 (blue circles) and ht030b (red diamonds) in LB containing 3.25% NaCl in 50-mL cultures in shake flasks, seeded at an optical density of A600nm = 0.05.

Although wild-type H16 did grow slightly in LB containing 32.5 g/L NaCl (final concentration) in this experiment, the evolved strain still grew significantly faster. Calculated specific growth rates were 0.18 h -1 for ht030b and 0.08 h -1 for H16. In addition to the higher starting cell concentration, it is likely that greater oxygen mass transfer was achieved in flasks compared to 24-well plates. The growth of H16 in LB at elevated salt concentrations is somewhat dependent on the culturing conditions. However, in all cases, the evolved strain ht030b grew significantly better than the wild-type strain at high salt concentrations.

Fig. S2 Growth curve of C. necator H16 (blue circles) and H16 ΔmscL (red diamonds). Growth curves were measured for both wild-type Cupriavidus necator H16 and C. necator ΔmscL in LB medium. Overnight cultures of both strains were inoculated to an initial cell density of A600 = 0.015 in 50-mL cultures in shake flasks. Cultures were grown at 30 °C with shaking at 200 rpm for 9 hours, with absorbance measurements (600 nm) taken every 90 minutes. The growth curves of the two strains in LB were not significantly different. The measured growth rate of the wild-type strain (0.45 ± 0.01 h -1 ) was just slightly higher than that of the ΔmscL strain (0.43 ± 0.01 h -1 ). Although there was an observable difference between the growth of the two strains, the result was not statistically significant (p > 0.07). Therefore, we conclude that the absence of the mscL gene does not significantly affect the growth rate of C. necator, and that the mscL gene is not required for normal functioning of the cell.

Supplementary Note 3: Effect of salt concentration on growth of C. necator H16 and C. necator ht030b in LB and M9 Formate

The maximum salt concentration tolerated by both wild-type C. necator H16 and evolved strain ht030b was determined for both heterotrophic growth (LB) and organoautotrophic growth (M9 formate). To test salt tolerance for heterotrophic growth, both strains were inoculated in 50-mL tubes containing 10 mL LB with variable salt concentrations to a starting OD of 0.001. As the measured average growth rate of C. necator H16 was 0.45 h -1 , we defined the salt tolerance as the maximum salt concentration for which the average growth rate over a 24-hour period exceeded 0.225 h -1 (half of the normal growth rate). This corresponded to an optical density of over 0.22 after a 24-hour period. The NaCl concentrations tolerated by H16 and ht030b were 16.3 and 29.4 g/L, respectively. For convenience, NaCl concentrations of 15 g/L and 30 g/L were used for H16 and ht030b, respectively, for the osmolysis experiments on those two strains.
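As an aside, a minimal sketch (not from the supplement) of the exponential-growth arithmetic behind that 0.22 cutoff: starting from A600 = 0.001, a culture growing at the half-of-normal rate of 0.225 h -1 for 24 h reaches 0.001 × exp(0.225 × 24) ≈ 0.22, matching the stated threshold.

import math

def od_after(od0, mu_per_h, hours):
    # Optical density after exponential growth at specific rate mu (h^-1)
    return od0 * math.exp(mu_per_h * hours)

def specific_growth_rate(od0, od1, hours):
    # Specific growth rate from two OD readings taken in the exponential phase
    return math.log(od1 / od0) / hours

# Threshold check: half of the normal 0.45 h^-1 rate, sustained for 24 h
print(od_after(0.001, 0.225, 24))            # ~0.221, i.e., the 0.22 OD cutoff
print(specific_growth_rate(0.05, 0.40, 12))  # placeholder readings: ~0.17 h^-1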
To test NaCl tolerance under formatotrophic growth, various salt concentrations were added to M9 formate (note: these represent the amount of salt added to M9 medium, which already contains various amounts of certain salts, rather than the final salt concentration; final osmolarities are taken into account in the data shown in Figures 3B and 4B of the main text). Both strains (H16 and ht030b) were then inoculated in 50-mL tubes containing 10 mL of formate medium to a starting OD of 0.02. Formatotrophic growth in defined medium was significantly slower than heterotrophic growth in rich medium, so optical densities were measured after 48 hours. The optical density threshold for the maximum tolerated salt concentration was 0.077, which is half of the measured OD of ht030b after 48 hours in M9 formate with no added salt. The NaCl concentrations tolerated by H16 and ht030b in M9 formate were 6 g/L and 15 g/L, respectively. Therefore, M9 formate with 6 g/L NaCl added was used as the growth medium for the experiments described in Figure 3B. For the experiments described in Figure 4B, M9 formate with 16 g/L NaCl added was used. As shown in Fig. S3D, the drop in cell growth when the added salt concentration is raised from 15 g/L to 18 g/L is fairly small. M9 formate with 16 g/L NaCl added has an osmolarity of 0.834 OsM, which is roughly equivalent to that of a 2.5% NaCl solution. Because osmolysis experiments were performed with salt solutions in 0.5% (w/v) increments, this was a more convenient starting solution from a practical standpoint.

Supplementary Note 4: RFP-based cell lysis assay diagram and measurement notes

Fig. S4 Overview of the RFP-based cell lysis assay. (A) Schematic overview of the RFP assay as described in the Methods. Red fluorescence measurements (585 nm excitation/620 nm emission) were performed on the well-mixed sample, representing the total RFP content, and on the supernatant following centrifugation, representing the released RFP content. The cell lysis fraction was taken to be the ratio of released RFP to total RFP. (B) Representative linear-range validation, replicated in each experiment to verify that RFP concentration was proportional to fluorescence intensity. (C) Fluorescence intensity measurements of identical RFP-expressing cell samples in various solutions, demonstrating that the fluorescence intensity is not sensitive to the various environments encountered in the assay.

In each osmolysis experiment relying on the RFP-based cell lysis assay described in the main text, samples were verified to ensure they fell within the linear range. Cells expressing RFP following the wash step in the osmolysis protocol were concentrated or diluted such that they were 30%, 60%, 90%, 120%, or 150% of the original cell density. Volumes equivalent to the volume measured in the experiment (usually 150 μL for experiments using C. necator and 50 μL for experiments using E. coli) were aliquoted into a 96-well plate and red fluorescence was measured (same excitation/emission values as described in the main text Methods). If the standard curve was linear, and all samples measured fell within the linear range, then the osmolysis measurements were considered valid. A representative standard curve is shown in Figure S4B. If needed, samples were further diluted in water so that they fell within this linear range.
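A minimal sketch of the lysis-fraction calculation and the linear-range check just described; all numbers are illustrative placeholders, not measured values.

import numpy as np

# Lysis fraction = released RFP (supernatant) / total RFP (well-mixed sample),
# after subtracting a blank reading.
blank = 120.0
f_total = 15400.0        # well-mixed sample, 585/620 nm
f_supernatant = 7260.0   # supernatant after centrifugation

lysis_fraction = (f_supernatant - blank) / (f_total - blank)
print(f"cell lysis = {100 * lysis_fraction:.0f}%")

# Linear-range validation: fluorescence of samples at 30-150% of the working
# cell density should be proportional to RFP concentration.
density = np.array([0.3, 0.6, 0.9, 1.2, 1.5])         # fraction of original density
signal = np.array([4610, 9180, 13950, 18400, 23100])  # measured fluorescence
r2 = np.corrcoef(density, signal)[0, 1] ** 2
assert r2 > 0.99, "standard curve not linear; dilute samples and re-measure"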
Our assay relies on the assumption that the fluorescent signal is a function only of the concentration of RFP in the sample (i.e., that neither the solvent nor the presence/absence of cells significantly affects the fluorescence measurement). To verify this was always the case, fluorescence measurements were taken on three types of samples encountered throughout the experiments. All samples were prepared from equal volumes of the same culture, and therefore began with the same amount of RFP. One sample was resuspended in an aqueous salt solution, so that nearly all of the RFP remained within the cells. One sample was resuspended in B-PER™ (a commercial bacterial lysis reagent), so that cell membranes were lysed and nearly all of the RFP was in solution. In the final sample, cells were resuspended in B-PER™ but were then centrifuged, such that RFP was present in a supernatant free of cell debris. As seen in Fig. S4C, all three samples have nearly identical fluorescence values, within 3% of each other. Therefore, we are confident in assuming that neither the solvent nor the location of RFP with respect to cell biomass significantly impacts the fluorescence measurement, and hence that our assay is valid for comparing RFP concentrations in the various fractions.

As described in the main text, the growth rate of E. coli BL21 ΔmscL ΔmscS was measured to demonstrate a tradeoff between microbial growth rate and osmolysis efficiency. Growth curves were determined for this strain in LB supplemented with NaCl (if necessary) to final concentrations of 0.5%, 1%, 2%, 3%, and 4% (w/v). Cultures were grown in 50-mL volumes in 250-mL baffled shake flasks at 37 °C, starting at an optical density of 0.01. Absorbance measurements were taken every half hour for cultures grown in 0.5%, 1%, and 2% salt and every hour for cultures grown in 3% and 4% salt. Specific growth rates were calculated from the slope of the line of a semilog plot over the range in which the log of absorbance was linear with respect to time.

Fig. S6 Effect of adding a freeze-thaw step (yellow) to osmolysis of BL21 and BL21 ΔmscL ΔmscS, compared with cells subjected to osmotic downshock alone (blue).

The effect of adding a freeze-thaw step to osmolysis was determined for BL21 cells grown in LB with 2% NaCl (w/v). The procedure was the same as for other osmolysis experiments, with minor modifications. Cells were grown, RFP was expressed, and cells were washed as in other BL21 osmolysis experiments. For trials labelled "No Freeze-Thaw", samples were resuspended in distilled water and incubated for 30 min at 30 °C as was normally done. For samples treated with a freeze-thaw step, however, cells were resuspended in distilled water, placed in a freezer set at −20 °C for twenty minutes, and then thawed in a heat block set at 37 °C for ten minutes. Samples from the well-mixed culture and supernatant were taken and measured as in previous experiments. Adding a freeze-thaw step significantly enhances the cell lysis efficiency of BL21 ΔmscL ΔmscS cells. The highest cell lysis (22%) is observed for BL21 ΔmscL ΔmscS cells subjected to freeze-thaw, which is roughly 5-fold higher than lysis of BL21 ΔmscL ΔmscS without a freeze-thaw step and 15-fold higher than lysis of BL21 with a freeze-thaw step. This improvement indicates that even higher cell lysis efficiencies may be obtained by combining osmolysis with other methods of cell lysis. Experiments described in Fig.
5A of the main text were repeated exactly, except with cells grown in LB containing 3% NaCl. Note the considerable difference between osmolytic efficiencies of cells grown in 3% and 4% NaCl. This also allows direct comparison of osmolysis between C. necator ht030b and BL21 (as well as their ΔmscL variants), as they were both grown in 3% NaCl. The percent cell lysis in distilled water following growth in 3% NaCl LB was >90% for ht030b ΔmscL and 14% for BL21 ΔmscL.
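As a back-of-envelope check of the fold-change arithmetic quoted above for Fig. S6; the two baseline lysis fractions are assumed values chosen to be consistent with the stated ~5-fold and ~15-fold ratios, not measured numbers.

# Fold improvements for the freeze-thaw experiment (Fig. S6).
ko_freeze_thaw = 0.22   # BL21 dmscL dmscS, osmolysis + freeze-thaw (stated)
ko_downshock = 0.045    # BL21 dmscL dmscS, downshock only (assumed baseline)
wt_freeze_thaw = 0.015  # BL21, osmolysis + freeze-thaw (assumed baseline)

print(f"vs KO without freeze-thaw: {ko_freeze_thaw / ko_downshock:.1f}-fold")
print(f"vs WT with freeze-thaw:    {ko_freeze_thaw / wt_freeze_thaw:.1f}-fold")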
Higgs inflation and quantum gravity: An exact renormalisation group approach

We use the Wilsonian functional Renormalisation Group (RG) to study quantum corrections for the Higgs inflationary action including the effect of gravitons, and analyse the leading-order quantum gravitational corrections to the Higgs' quartic coupling, as well as its non-minimal coupling to gravity and Newton's constant, in the inflationary regime and beyond. We explain how within this framework the effect of Higgs and graviton loops can be sufficiently suppressed during inflation, and we also place a bound on the corresponding value of the infrared RG cut-off scale during inflation. Finally, we briefly discuss the potential embedding of the model within the scenario of Asymptotic Safety, while all main equations are explicitly presented.

Introduction

The discovery of the Higgs boson at the Large Hadron Collider at CERN [1] marked a new era for particle physics, fitting the last missing piece of the Standard Model (SM). The Higgs particle fits into the theoretical framework of Electroweak (EW) interactions, the theory describing the unification of the electromagnetic and weak forces, and is the first fundamental scalar particle ever observed in Nature. It is the latter fact which makes its discovery of particular significance for cosmology too. In fact, scalar particles have often been hypothesised in cosmology to explain observations associated with the physics of the early or the late-time universe, and particularly in the physics of inflation, the conjectured rapid expansion of the universe shortly after the Big Bang. Higgs inflation [2,3] is the theory which assumes that the SM Higgs particle is responsible for the dynamics of the primordial inflationary period. The idea is attractive for more than one reason. First of all, it does not invoke any new, hypothetical particle, but builds on the known field content of the SM. What is more, it provides the opportunity of placing constraints on the parameters of the SM at high energies, much higher than the energies current particle accelerators can reach, through observations of the Cosmic Microwave Background (CMB) radiation. In particular, the parameters of the Higgs potential, such as the quartic Higgs coupling λ measured at the EW scale, have to be extrapolated up to inflationary scales using the appropriate Renormalisation Group (RG) equations. Together with the Starobinsky model, Higgs inflation is one of the most successful models according to the recent Planck-satellite data [4]. Both models achieve inflation through a modification of the standard curvature sector of General Relativity (GR), and in fact, they are related through a conformal redefinition of the metric field, with the respective Einstein-frame potentials exhibiting striking similarities [5,6]. However, this correspondence concerns the classical dynamics of the theories; the quantum equivalence is more delicate and involved. For examples of an off-shell quantum inequivalence between the two frames we refer to refs. [7,8]. The quantum, scalar and tensor fluctuations of the Higgs coupled to gravity during inflation provide tight constraints on the model's parameters at inflationary scales. In particular, the amplitude of the as yet unobserved tensor fluctuations is of the order of the scalar potential, ∼ U(φ)/M_p0^4, which for large field values is controlled by the quartic coupling, i.e. U(φ) ∼ λφ^4.
Extrapolating the SM RG equations up to inflationary scales, the value of λ (∼ 10^−1–10^−2) cannot provide the necessary suppression, predicting a tensor spectrum too large to be compatible with CMB observations. This problem is circumvented with the addition of a non-minimal coupling between the Higgs and curvature in the action, through a term of the form ξφ^2 R, with ξ a dimensionless coupling controlling the strength of the interaction. This modification changes the amplitude of the inflationary effective potential to U(φ)/M_p0^4 ∼ λ/ξ^2, and assuming that ξ is sufficiently large, agreement with observations can be established. In particular, it turns out that in principle ξ ∼ 10^3–10^4, but lower values might be possible in very special cases, like the possibility of inflation happening at the critical point [9]. The non-minimal coupling ξ is the only free coupling in the theory to be fixed by cosmological observations, since the value of the quartic coupling λ is predicted by the SM equations, modulo uncertainties in the value of the top-quark mass. At the energy scale where Higgs inflation occurs, the effect of quantum-gravitational dynamics cannot in principle be neglected; during inflation, however, the expectation is that it is sufficiently small. The simple argument behind this assumption is that the large value of ξ is expected to provide a sufficiently high suppression of the quantum corrections due to Higgs and graviton loops during inflation, since the respective propagators receive a suppression by factors of 1/ξ, remembering that the two fields are kinetically mixed in the Jordan-frame action. The aim of the current work is to explicitly study what the Wilsonian functional RG predicts for the quantum corrections of the Higgs non-minimally coupled to gravity in the inflationary regime and beyond, including the effect of gravitons. The formalism employs the Wilsonian idea of calculating quantum corrections, based on an infrared RG scale k. As we will see, provided the RG scale is consistently chosen, quantum corrections during inflation can be sufficiently suppressed. Since the framework extends in principle to the non-perturbative regime, we will finally briefly discuss the potential embedding of the model within the Asymptotic Safety (AS) scenario for quantum gravity. We structure the paper as follows: in section 2 we very briefly review previous results in the literature and motivate our analysis, while section 3 lays down the equations governing the classical inflationary dynamics for the theory, appropriately adapted to our setup. In section 4 we calculate the RG flow equation and beta functions describing the renormalisation of Newton's constant, as well as the Higgs' quartic and non-minimal coupling, including quantum gravitational corrections up to leading order using the framework of the Wilsonian functional RG. In section 5 we use the previously derived equations to analyse the quantum dynamics during inflation, while sections 6 and 7 investigate the regime beyond inflation in this context, and the possible connection with the scenario of Asymptotic Safety, respectively. Some issues related to the dependence on the choice of gauge and regulator are discussed in section 8. We conclude in section 9, while explicit intermediate calculations are kept for the appendix.

A very brief review of quantum effects during Higgs inflation

The value of the essential couplings of a theory is dictated by experiment at a particular physical scale.
As discussed in the Introduction, for a successful Higgs inflation the non-minimal coupling has to be set to a quite large value, ξ ∼ 10^3–10^4. The important question which arises is how stable the couplings' values are under quantum corrections; in particular, within inflation the latter could in principle spoil the flatness of the effective potential. Within an effective field theory approach the term ξφ^2 R makes perfect sense as part of a leading-order operator expansion, while operators of mass dimension higher than four are usually related to the violation of tree-level unitarity. The scale at which this is expected to occur has been calculated in ref. [3], where, after expanding the action around a flat spacetime and identifying the potentially dangerous operators, it was found to be Λ ∼ √ξ φ. In ref. [3] it is argued that its particular value poses no danger for the model. Quantum corrections for the system of a scalar (non-minimally) coupled to gravity have been studied in various settings in [10][11][12][13][14][15][16], while the particular case of Higgs inflation has been studied in refs. [17][18][19] employing semi-classical, effective-action methods at 1-loop, as well as in refs. [20][21][22][23][24][25] in a standard perturbative context. Ref. [26] studied the Higgs-inflationary action within the approach of the Vilkovisky effective action, including the effect of gravitons; however, the running and dynamics of the couplings was not considered there.¹

Gravity is well known to be perturbatively non-renormalisable; however, it is a well-working quantum effective theory for energies below the Planck scale. Although strong quantum-gravity effects are usually assumed to manifest themselves at the Planck scale, their effect can potentially be important at energies as low as the GUT scale. For Higgs inflation it is expected that for ξ ≫ 1, the large effective Planck mass will provide a 1/ξ suppression to graviton and Higgs loops, as argued in [10,[17][18][19][20]]. Within the Wilsonian implementation of the functional RG we will employ here, the effect of gravitons will be explicitly accounted for, while the regularisation scheme used, based on an infrared sliding RG scale k, is able to capture all types of divergences (power-law and logarithmic ones) in the effective action. In this context, for energies below the Planck scale, the usual concepts of effective field theories apply; however, the assumption of Asymptotic Safety allows the extension to the deep UV, where the relevance/irrelevance of different operators becomes a prediction of the theory.

The Higgs as the inflaton

In this section we will review the classical dynamics of the model adapted to our context, and introduce the relevant characteristic energy scales involved. Considering an excitation φ of the SM Higgs field around its classical vacuum expectation value (v.e.v.) and rotating to the unitary gauge, the corresponding Lagrangian can be written as the sum of the following three pieces. The first two terms describe the Higgs sector, while the third one is the gauge part describing the field strengths for the U(1)_Y and SU(2) sectors, associated with the photon (A_µ) and the three vector bosons W± and Z respectively. The fermionic part describes the kinetic terms for the fermionic degrees of freedom, while the Yukawa term describes the interactions between the Higgs and the other Standard Model (SM) fields through the usual Yukawa couplings.
The Higgs potential is defined as and for m 2 > 0 (m 2 < 0) we are in the symmetric (broken) phase, while v represents the vacuum energy. The index k implies that the corresponding quantity is scale dependent, running under the Renormalisation Group (RG) scale k. We will make this notion more precise in section 4. The coupling of the sclar φ to gravity will be described by the following action with V (φ) given in (3.2), while for Higgs inflation the function f is defined through Notice that the Planck mass is allowed to run with energy, and is related to Newton's coupling through M 2 p k ≡ m 2 p k /(8π) = 1/(8πG k ). For a usual slow-roll inflationary phase to take place, the Higgs potential is required to be sufficiently flat, in which case the field starts from an unstable vacuum phase, and after a period of slow roll, it evolves towards its true minimum. It is instructive to transform to the Einstein frame, where the fields' kinetic terms diagonalise, defining the conformal transformation to a new metricg αβ asg with M 2 p 0 ≡ m 2 p 0 /(8π) = 1/(8πG 0 ), the Planck mass as measured at solar-system scales. The following field redefinition will yield a canonically normalised scalar χ, where in the last step we used (3.4) to substitute for f , and defined the following useful quantitiesφ The quantity x ≡ x(k) modifies most of the standard inflationary relations and has to be evaluated for the value of the coupling G(k) during inflation, i.e G(k = k inflation ). For x(k) = 1, one recovers the standard results. It turns out from the analysis of section 5 that during the inflationary regime it will be x(k) 1 (G k G 0 = constant) to very good accuracy. The Einstein-frame action reads as with U defined as (3.9) The potential U depends implicitly on the Einstein-frame scalar χ, a choice which is convenient for the calculation of inflationary observables. Inflation will occur at sufficiently high energies, where 8πG k ξ k φ 2 ≡ xξ kφ 2 1, and V (φ) (λ k /4)φ 4 . In this regime, (3.6) can be integrated to give χ(φ) 3 2 M p 0 · log(1 + ξ kφ 2 x), leading to the following explicit (3.10) For χ/M p 0 1 the potential approaches a constant value U (χ) M 2 p 0 · λ k 4ξ 2 k corresponding to the slow-roll regime. Varying the action with respect to the metric, and evaluating on a flat, Friedmann-Lemaitre-Robertson-Walker (FLRW) spacetime, in the slow-roll regime the Friedman equation becomes with the Hubble parameter H defined as H(t) ≡ȧ(t)/a(t), a(t) being the scale factor, t the cosmic time in the Einstein frame and x(k) defined in (3.7). The slow-parameter is defined in the standard way as with , ≡ ∂/∂φ, while the number of e-foldings N between φ i → φ f is given by For slow-roll inflation it is 1, which implies that inflation starts for field values around φ M p 0 / √ ξ, where for simplicity we set x(k) = 1. To find the starting value of the field φ i , we evaluate expression (3.13) at the required number of e-foldings N = N 0 , before the end -5 - JCAP02(2016)048 of inflation, while the condition 1 in (3.12) will yield the value of φ = φ f at the end of inflation respectively. We find that, The vacuum fluctuations of the inflaton produce a spectrum of scalar and tensor perturbations, which' amplitudes evaluated at horizon crossing at the pivot scale k pivot = 0.002 Mpc −1 read as [28] with the field value in the last relation assumed to be φ = φ i . With the aid of (3.14), and assuming the observed value for the amplitude of the scalar fluctuations evaluated at horison crossing, P S = P S(obs.) 
as required from CMB observations, we can work out the relation between the couplings λ and ξ during inflation as 3 The coupling ξ in (3.17) depends on the cosmological parameters such as the number of e-foldings and the amplitude of scalar fluctuations, but also on SM parameters such as the top quark/Higgs mass at the EW scale which enter implicitly through the coupling λ, so that we can write ξ inflation ≡ ξ[N 0 , P S(obs.) ; λ (EW) , M t(EW) , · · · ], (3.18) with the index (EW) standing for the value at the EW scale. A typical value for the coupling λ at inflationary scales is ∼ 10 −2 yielding ξ ∼ 10 3 (see appendix C for a realistic evaluation). In overall, the classical dynamics of the model define two characteristic scales, the typical value of the (Jordan-frame) scalar field at the end of inflation, φ f ∼ M p 0 / √ ξ, and the Hubble scale during inflation, H ∼ M p 0 /ξ. These in turn define three characteristic energy regimes. In the particular setup of this work, there is yet one more scale, the sliding RG cut-off k, representing the typical energy (coarse-graining) scale of the system. Its value and connection with the standard scales during inflation will be discussed in section 5. The setup Our final aim is to understand what the Wilsonian functional RG predicts for the quantum (gravitational) corrections for the model, under certain assumption which we describe below. We will first introduce the basic concepts and tools needed for the subsequent analysis, and also remind that explicit calculations and formulae are presented in the appendix. 2 Notice that these are the field values in the Jordan frame. The corresponding ones in the Einstein frame have to be translated through χ = χ(φ) given a little before (3.10). 3 Unless otherwise stated, we will be assuming N0 = 55. JCAP02(2016)048 Let us start with some theory and its bare action S[ϕ A ] which depends on a set of fields {ϕ A }, with A, B generalised field/spacetime indices. Formally, the construction of the associated effective action starts with the generating functional of the connected Green's functions, It is well known that the 1-loop corrections of the theory are intimately connected to the (Eucledian) effective action Γ through the following relation The quantity S (2) stands for the inverse bare propagator defined as S (2) ≡ δ 2 S /δΦ A δΦ B , and possible index structure is understood, while "Tr" stands for summation over spacetime, internal indices and momenta. The trace over momenta of the kinetic operators leads to an in principle divergent result which requires some sort of regularisation. There are different types of regularisation schemes, each with its own advantages and disadvantages, two of the most popular being dimensional regularisation and a physical cut-off respectively. The Wilsonian approach suggests a continuous integrating out of momenta, shell-by-shell in momentum space. The functional RG we will employ here, implements this idea by invoking an IR regulator, denoted as R k , in turn built out of an infrared, dimensionfull cut-off k. Its generic form is constrained by certain conditions, see [31][32][33][34] for a discussion. Above ideas lead to the concept of the Wilsonian, or average effective action Γ k [Φ A ] defined as By construction, the regulator R k employs an infrared regularisation, suppressing fluctuations with momenta p 2 < k 2 , while integrating out those with p 2 > k 2 . We will get back to the particular choice of the regulator R k later. 
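Before moving to the quantum computation, the classical relations of section 3 admit a quick numerical check. The sketch below assumes the standard large-field Higgs-inflation form P_S ≈ λN²/(72π²ξ²) for the scalar amplitude; since the display equations above did not survive extraction, this form is an assumption, not a quotation of (3.15)–(3.17).

import math

# Rough normalisation of the non-minimal coupling from the CMB amplitude,
# assuming P_S ~ lam * N^2 / (72 pi^2 xi^2) (standard large-field result).
lam = 1e-2      # typical quartic coupling at inflationary scales (see text)
N0 = 55         # number of e-foldings assumed in the paper
P_S = 2.1e-9    # observed scalar amplitude (representative Planck-era value)

xi = N0 * math.sqrt(lam / (72 * math.pi**2 * P_S))
print(f"xi ~ {xi:.1e}")   # ~4e3, the same order as the quoted xi ~ 10^3-10^4

# Characteristic scales of section 3, in units of M_p0 = 1:
phi_f = 1 / math.sqrt(xi)           # field value at the end of inflation
H_inf = math.sqrt(lam / 12) / xi    # from U ~ lam/(4 xi^2) and H^2 = U/3
print(f"phi_f ~ {phi_f:.2e} M_p0, H ~ {H_inf:.1e} M_p0")

Both outputs reproduce the scalings φ_f ∼ M_p0/√ξ and H ∼ M_p0/ξ quoted at the end of section 3, the latter up to the O(√λ) prefactor.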
In view of (4.4), the suppression term ∆S k amounts to the modification of the full inverse propagators Γ (2) according to Γ (2) → Γ (2) + R k , and it is understood that R k should carry the same tensor structure with Γ (2) . The cut-off scale k is interpreted as the typical energy scale, or equivalently, 1/k defines the typical physical lengthscale of the system one is interested in. It can be then shown that the average effective action (4.4) satisfies an Exact Renormalisation Group Equation (ERGE) [35,36] with ∂ t ≡ k∂ k ≡ k∂/∂k. This last equation will play an important role for our quantum analysis. For Γ (2) → S (2) equation (4.5) connects with the standard 1-loop result (4.3), JCAP02(2016)048 its applicability though extends beyond the perturbative regime. Exact solutions within a gravitational context are almost impossible, and some sort of approximation has to be invoked. Notice also that, equation (4.5) is an in principle off-shell equation, which makes any results derived from it dependent on the gauge, while the use of approximations like truncated actions leads to a dependence on the regularisation scheme. In the context of scalar-tensor theories another subtlety arises regarding the choice of the conformal frame, with off-shell corrections not in principle expected to match in different frames, as explicitly shown in refs. [7,8]. Let us summarise the basic assumptions for the quantum analysis as follows: 1. We will assume that the effective action takes the form suggested by (3.3), ignoring higher-order operators, while the calculation will be performed in the Jordan frame. Notice that below, we might sometimes drop the index "k" for the running couplings for simplicity. 2. The usual background field method for the decomposition of the fields into a background and fluctuating part in a Euclidean signature will be employed. For the backgroundvalue of φ we will assume that ∂ µφ = 0, while the background spacetime will be a Euclidean de Sitter. The trace over momenta in (4.5) will be performed with the use of a heat kernel expansion. 3. We will consider the quantisation of the gravity-scalar sector only, hence Yukawa and gauge interactions will not be accounted for in the calculation. We will also assume that the quartic coupling λ retains a positive value, since the possible instability of the Higgs potential poses an important problem which deserves its own study. We briefly discuss this issue in appendix C numerically solving the 1-loop SM RG equations. 4. We will perform the calculation in the popular choice of the de Donder gauge which significantly simplifies the technical analysis. We comment on the gauge and regulator choice in section 8. The calculation We can now start with the calculation within the ERGE. Our goal is to evaluate (4.5) for the action ansatz of (3.3) and under the assumptions described earlier. 5 The gravity sector has the usual diffeomorphism gauge symmetry, which we will fix through the introduction of a gauge-fixing term. The Wick-rotated and gauge-fixed effective action ansatz then reads as The terms S GF and S ghost stand for the gauge-fixing and ghost sector respectively, while C µ ,C ν denote appropriate ghost and anti-ghost fields. We define them explicitly below. The indices k remind us that the quantities stand for the renormalised ones, running under the RG scale k. The gauge-fixing term is defined as follows JCAP02 (2016)048 which depends on the two real parameters α and β. 
Two of the most popular choices in the literature correspond to α = β = 1 (de Donder gauge), and α → 0 (Landau-type gauge) respectively. For our analysis, we will choose the first with α = β = 1, which simplifies the calculation. Now, the introduction of the gauge-fixing term requires the introduction of appropriate ghost and anti-ghost fields which can be calculated by replacing the gauge vectors u µ in the gauge transformation of the combined metric L u (g µν ) = L u (ḡ µν + h µν ) = u ρ ∂ ρ g µν +∂ (µ u ρ g ν)ρ , by the ghost C µ . Then, following the standard Fadeev-Poppov procedure the ghost term can then be shown to take the form with C µ ,C µ denoting the ghost and anti-ghost fields respectively, and ≡ḡ µν∇ µ∇ν . Expanding the effective action (4.6) up to second order in the field's fluctuations under (4.9) we calculate the Hessians Γ (2) k , which' inversion yields the different propagator entries appearing in (4.5) (or (4.3)). The explicit expressions are given in appendix A. To this end, we employ the background field method by considering the following split between a background piece (denoted by an overbar) and a fluctuating part as 6 withḡ µν describing the background spacetime metric and the trace-free (denoted with a hat) and trace components of the metric fluctuation satisfyingḡ µνĥ µν = 0, h ≡ḡ µν h µν . Derivatives will be constructed with the background metric field, and we shall drop the overbar from them for notational convenience. We notice that the fluctuating fields h µν and δφ are assumed to be the corresponding average fields, i.e h µν (x) ≡ h µν (x) , δφ(x) ≡ δφ(x) . Ideally, one would like to keep the background field variables unspecified, however this can be technically unpractical and lead to results of very high complexity; we refer to [38][39][40] for recent discussions within a functional RG context. In this work, we will assume the family of constant backgrounds of a four-dimensional Eucledian de Sitter (S 4 ) with R,φ = constant. (4.10) In the quadratic part of the expanded action, the different interaction vertices appearing are the effective graviton and Higgs self interactions, as well as the momentum-dependent cross-vertex between the scalar and the metric, due to the non-minimal coupling (see appendix A). On S 4 , the kinetic part of it consists of a minimal operator which is regularised with the introduction of the one-parameter regulator R k ≡ R k (− ; r), through the modification [41] Γ This way, the eigenvalues of − less than k 2 are suppressed, while integrated out otherwise. As the cut-off is continuously moved, the integrating out of modes is performed shell-by-shell in momenta [31][32][33][34]. As regards the particular choice of regulator function, we choose an 1-parameter version of the optimised regulator [42] R k (− ) ≡ r · k 2 − (− ) · Θ r · k 2 − (− ) , which will allow for an explicit computation of the the momentum integrals. The real and positive parameter r defines a family of regulator functions, with the JCAP02(2016)048 standard, "optimal" case corresponding to r = 1. It will serve as a book-keeping parameter which we will use as a test of the regulator-dependence of the main results. The sum over the eigenvalues of the operators appearing in the 1-loop-type trace on the right-hand side of the ERGE is traced by means of an asymptotic heat kernel expansion. 
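A small sketch of the one-parameter optimised regulator just defined, read here as R_k(z) = (rk² − z)Θ(rk² − z) acting on the spectrum z of −□ (one common one-parameter generalisation; with r = 1 it reduces to the standard optimised cutoff). The threshold-integral convention below is a standard Q-functional normalisation and is assumed rather than copied from appendix A.

import numpy as np
from math import gamma

# Optimised (Litim-type) regulator: R_k(z) = (r*k^2 - z) * Theta(r*k^2 - z).
def R_k(z, k, r=1.0):
    return np.where(z < r * k**2, r * k**2 - z, 0.0)

# Inside the regulated region P_k = z + R_k = r*k^2 and dt R_k = 2*r*k^2,
# so the threshold integrals close up analytically.
def Q_n_numeric(n, k, w, r=1.0):
    z = np.linspace(0.0, r * k**2, 200001)[1:]
    integrand = z**(n - 1) * (2 * r * k**2) / (r * k**2 + w)
    return np.trapz(integrand, z) / gamma(n)

def Q_n_closed(n, k, w, r=1.0):
    return 2 * (r * k**2)**(n + 1) / (gamma(n + 1) * (r * k**2 + w))

print(Q_n_numeric(2, 1.0, 0.5), Q_n_closed(2, 1.0, 0.5))  # both ~0.6667

This is the practical payoff of the optimised choice: the momentum integrals reduce to rational functions of k² and the mass-type entries w, which is what makes the explicit evaluation of the trace below tractable.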
Assume an operator ∆ = − δ A B + U A B , with ≡ḡ µν∇ µ∇ν , δ A B the identity matrix in field space, and U A B a potential-type term depending on the background value of the fields and their derivatives. Then, in four dimensions the heat-kernel expansion of ∆ reads Tre −s∆ = 1 4πs 2ˆd 4 x ḡ tra 0 + tra 2 s + tra 4 s 2 + . . . , (4.12) where the parameter s is assumed to be sufficiently small, and tr sums over internal indices. The coefficients a i depend on the background geometry, with each term in the expansion capturing different types of divergences, in particular quartic (a 0 ), quadratic (a 2 ) and logarithmic (a 4 ) divergences respectively [41,[43][44][45]. Formally the expansion (4.12) is valid as long asR/k 2 < 1, i.e capturing fluctuations with wavelengths smaller than the radius of curvature. Evaluating the trace in the ERGE (4.5), the flow equation for Γ k turns out to organise in the following form with primes here denoting derivatives with respect toφ ≡ φ/k, ∂ t ≡ k∂ k and V is the volume of S 4 . The dimensionless quantities F depend non-trivially on the fields, couplings, and regulator parameter F = F[R, φ; f, V ; r], with their form explicitly given in (A.15) of the appendix. As can be seen from their explicit expressions, the functionals F depend up to second order derivatives of f and V with respect to φ, as expected. The flow described by (4.13) is particularly involved, however, its 1-loop expression simplifies considerably, and is also explicitly presented in appendix A.2. From (4.13), expanding aroundR/k 2 = 0, φ/k =φ * , and projecting out on the different operators in the effective action one gets the flow of the two scalar potentials as, with F f and F V corresponding to the projection of F on the curvature and scalar potential operators respectively. In turn, projecting out on the individual operators in f and V one can extract the running of the individual coupling constants. Notice that the evaluation of the ERGE generates higher-order terms in curvature/scalar field which we neglect in view of our original action ansatz. The structure of the beta functions During slow-roll inflation, the scalar field acquires a large vacuum energy, φ = φ * M p 0 / √ ξ, and we therefore consider an expansion of V around this v.e.v as with v k representing a cosmological constant-type term. The function f will have the form of (3.3). At this stage it is convenient to introduce dimensionless fields and couplings, measured in units of the cut-off k,φ * k ≡ φ * (k)/k,g i ≡ g i (k)/k n , (4.16) -10 - JCAP02(2016)048 where n is the coupling's canonical dimension. Under the ansatz (4.15), from the flow equation (4.14) one can extract an autonomous system of (non-perturbative) beta functions where the coefficients B (n) and exponents m i can be read off from (B.1)-(B.6), together with the definitions The way we split the contributions in the beta functions (4.18) is such that the terms β (0) i reduce to the standard perturbative results in the limitφ * → 0, while β (grav.) i conventionally denote the gravitational corrections to them. This is only conventional, since during inflation the v.e.vφ * is in fact related to the non-minimal coupling to gravity ξ. The quantity Ω appears as a result of the kinetic mixing between the graviton and scalar in the action, and becomes Ω = 1 forφ * → 0, but for sufficiently largeφ * and ξ it provides a sufficiently high suppression to the different terms in (4.18). 
The origin of the non-standard terms ∼Gφ * ξ is also similar; these terms appear after expanding the non-trivial propagator entries in the ERGE around the v.e.v of the scalar under the particular ansatz for f and V ((3.3) and (4.15) respectively), and obviously, they vanish forφ * → 0. These terms are an immediate result of the scalar's non-zero v.e.v., introducing appropriate threshold effects; it is for v.e.v values much larger than the cut-off scale k, and opposite otherwise. The first case is expected to occur during inflation. The actual estimate of the value ofφ * ≡ φ * /k depends on the estimate of the cut-off k for the energy regime of interest. This will be discussed explicitly in sections 5 and 6. In general, for G,φ * → 0 one recovers the standard, perturbative expressions for the beta functions. JCAP02(2016)048 The beta functions for a non-zero v.e.vφ * = 0, according to (4.18) (see also (B.1)-(B.6) of the appendix), read as with Ω and µ defined in (4.19). From β ξ and β λ we can also derive an expression for the fractional running of the amplitude of scalar fluctuations ∼ λ/ξ 2 . Keeping only the β The RG equations for φ * and v in (4.15) can be found in (B.5) and (B.6) of the appendix. Forφ * , G → 0, the terms ∝ λ, ξ·λ on the r.h.s. of the beta function for ξ, equation (4.22), are in qualitative agreement with those found in the context of the conformal anomaly [46], and they tend to increase ξ with the cut-off scale, with ξ admitting the usual logarithmic running. In a similar way, the beta function for λ, equation (4.23), consists of the standard λ 2 -term leading to logarithmic running and an irrelevant Landau pole at very high energies. Let us briefly comment on the renormalisation of Newton's coupling. For the purpose of this discussion we re-write (4.21) as . A negative anomalous dimension η G will tend to reduce G and eventually lead it to a UV fixed point as k → ∞, where η G = −2. This lies in the heart of Asymptotic Safety which we discuss in section 7. On the other hand, η G > 0 will have the opposite effect leading the coupling to increasingly large values with increasing k. This is an unwanted behaviour if the theory is to posses a well-behaved high-energy regime. Quantum dynamics during inflation In the RG equations (4.21)-(4.24) the threshold effects due to the v.e.v of φ appear through φ * ≡ φ * /k. Depending on its value, we distinguish the large-and small-field regime wherẽ φ * 1 andφ * 1 respectively. The first case is expected to occur during inflation, remembering that the scalar acquires a large v.e.v with To estimate the value ofφ * , one needs an estimate of the infrared cut-off k during inflation. An important point to make is that the prescription for the interpretation of the infrared cut-off k in this context depends on the particular physical setup. In general, k represents the coarse-graining scale, or the typical energy scale of the physical system (see [47][48][49][50][51][52][53][54][55][56][57][58][59][60][61][62][63][64][65][66]). Quantum fluctuations during inflation are of the order of the cosmic horizon JCAP02(2016)048 H −1 , suggesting the coarse-graining scale to be of the same order, i.e k ∼ H. This is the choice employed in [47,48,61]. The covariant form of this identification, k 2 ∼ R, has been also a popular choice employed in studying the RG-improvement of gravitational actions in a cosmological context in [57,62,66,67]. 
Let us remind ourselves that the asymptotic expansion (4.12) applies for sufficiently small curvature scales withR/k 2 < 1. This fact, together with the slow-roll estimateR/M 2 p 0 ∼ λ/ξ 2 suggests the bound This in turn places a bound on the value ofφ * assuming φ Given the above estimates, for the dimensionless product Gφ 2 * which appears in the beta functions at non-zero v.e.v, it follows that where we assumed that G M 2 p 0 at energies well below the Planck mass. We can now get an estimate of the different terms in the equations (4.21)-(4.23). We remind that the explicit expressions are given in (B.1)-(B.3) of the appendix. As an overall remark, notice the appearance of powers of ξφ * in the respective numerators, which can in principle acquire large values. Let us start with the quantity Ω which appears in the denominators and is defined in (4.19). In the regimeφ * , ξ 1 it can be approximated as Ω 72πξ 2 Gφ 2 * ∼ 72πξ, (5.5) where we used (5.4). We now look at the beta function for G. Evaluating its denominator using (5.5), it turns out it is of the order ∼ 10 4 ξ 2 . Its numerator consists of three different terms apart from the canonical one, a quadratic, cubic and quartic term in G respectively. In view of (5.5) one finds for the overall coefficient of each of them in orders of magnitude that, ∼ 10 −7 · G 2 , ∼ 10 −6 · G 3 and ∼ 10 −3 · G 4 respectively. Since G 1, the beta function will be dominated by the canonical term = −2 G leading to where we eliminated the arbitrary renormalisation scale k 0 by using the measured value G = 1/M 2 p 0 , and also used (5.2). Therefore, G becomes constant and equal to its classical value. In section 3, most of the standard inflationary relations in the Einstein frame where modified by the quantity x(k) ≡ G(k)/G 0 . Above result implies that x(k) 1, recovering the standard classical inflationary equations. The RG equations for ξ and λ (B.2)-(B.3) also assume a non-trivial form. From above considerations, the denominator of β ξ is of the order ∼ 10 10 ξ 3 , while the linear term in G in its numerator picks up a very large coefficient of the order ∼ 10 4 · ξ 4 G, however, when the latter is combined with the denominator it yields the overall estimate of ∼ 10 −6 · ξ G ∼ 10 −6 λ/ξ, JCAP02(2016)048 using (5.6). In a similar way of thinking, for the second-order term in G one can estimate ∼ 10 −4 · ξ 2 G 2φ2 * , which in view of (5.4) and (5.6) makes it of the order ∼ 10 −4 λ/ξ, while for the cubic term in G it turns out that it is of similar order, ∼ 10 −3 λ/ξ. In β λ , the second and higher-order terms in G in its numerator appear coupled to large powers of ξ, e.g ∼ ξ 5 G 2 (5.6). When the suppression coming from the denominator is taken into account, the quadratic term yields ∼ 10 −5 ξ 2 G 2 ∼ 10 −5 λ 2 /ξ 2 , and similar estimates result for the rest of the corresponding G-terms in β λ . As regards the running of the amplitude λ/ξ 2 , using (4.24), one can see that it will also receive a suppression which will be at least of the order ∼ λ 2 /ξ 4 . To summarise, the RG equations for a non-trivial v.e.v acquire a very involved, nontrivial form. The threshold effects from a sufficiently large v.e.v of the scalar in combination with the sufficiently low value of the cut-off k, act so as the terms from the gravitational sector in the RG equations receive a suppression in the sense discussed above. Above analysis also indicated a lower bound for the infrared, sliding RG scale k, presented in (5.2). 
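The order-of-magnitude bookkeeping above is easy to reproduce. The sketch below uses the representative values λ = 10⁻² and ξ = 10³ (assumed, but consistent with the ranges quoted in the text) together with the estimates (5.2), (5.4) and (5.5).

import math

# Units: M_p0 = 1, so G = 1 well below the Planck scale.
lam, xi = 1e-2, 1e3

G_phi2 = 1.0 / xi                        # G * phi_*^2 ~ 1/xi, cf. eq. (5.4)
Omega = 72 * math.pi * xi**2 * G_phi2    # Omega ~ 72*pi*xi, cf. eq. (5.5)
print(f"Omega ~ {Omega:.1e}")            # ~2e5 for xi = 1e3

# Lower bound on the coarse-graining scale during inflation, cf. eq. (5.2):
# R/k^2 < 1 with R/M_p0^2 ~ lambda/xi^2 gives k > sqrt(lambda) * M_p0 / xi.
k_min = math.sqrt(lam) / xi
print(f"k_inflation > {k_min:.0e} M_p0")  # a few orders below the Planck scale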
A more precise estimate would require a detailed study of the field's dynamics and structure of the effective potential, which we will not pursue here. In the next section we will discuss the RG dynamics for the other limiting case, whereφ * 1. The post-inflationary regime We are now interested in the regime whereφ * is sufficiently small, This occurs whenever the scalar has rolled down to a lower v.e.v φ * with respect to some fixed energy scale (e.g after inflation), or as the cut-off scale k increases while φ * remains sufficiently small. In the limit φ * , v → 0 the exact beta function for G acquires a simple form. From the flow equation (A.12) it follows, Notice that for G 1, ξ 1, the anomalous dimension η G acquires a large and positive value, signalling a potentially singular behaviour in the running of G, however this is harmless for sufficiently low cut-off scales. If we expand η G for G 1 to linear order, we arrive at the previously found 1-loop equation but withφ * → 0, It exhibits two fixed points for G, the trivial (Gaussian) one with G = 0, and a non-trivial fixed point at JCAP02(2016)048 Since we are below the Planck mass, it is enough for our purpose to present the respective equations under the 1-loop approximation Note that the gravitational corrections enter with a positive sign. In view of (5.6), the terms ∼ ξ 3 G and ∼ λξ 2 G in above equations are of order λξ and λ 2 at the scale given by (5.2). As the cut-off is decreased though, they tend to decrease sufficiently fast as G ≪ 1. We can derive approximate, analytic solutions for equations (6.3)-(6.6) in the regime where ξ 1, and assuming that initially G is sufficiently small, so that the equations are dominated by the standard terms. This allows us to set G = 0 in (6.5) and (6.6). Under these assumptions, and for ξ 1, (6.5) and (6.6) yield the familiar solutions 8 with λ 0 = λ(k = k 0 ) and ξ 0 = ξ(k = k 0 ), and k 0 an arbitrary energy scale. Now, looking at equation (6.3) one can see that for ξ 1, the coefficient of the leadingorder correction in G is c 7ξ/(12π) > 0. If we neglect the logarithmic running of ξ and assume that ξ ξ 0 ≡ constant in (6.3), then for c 7ξ 0 /(12π) we can solve (6.3) for G(k), We have traded the arbitrary constants in the solution for the renormalised values G R = G(k = k R ) with k R /k 0 1. For k 2 /M p 0 1 the solution (6.8) decreases quadratically, entering deep into the classical regime with G = 1/M 2 p 0 . Solution (6.8) suggests that as we raise the cut-off from smaller to higher values, its denominator becomes zero at Of course, by the moment G 1, the approximate solutions (6.7), (6.8) are not valid anymore. The above diverging behaviour is unphysical and cannot exist in reality, as it would suggest that gravity becomes strongly coupled at a scale below the Planck mass. Most importantly, the scale defined through (6.9) is beyond the lower bound on the inflationary cut-off scale, given in (5.2), which implies that by that moment the scalar should have acquired a sufficiently large v.e.v, preventing the coupling to hit the pole. Notice that the scale (6.9) coincides with the lower value of φ * during inflation, implying that at the scale (6.9) φ * /k ∼ O(1); beyond this scale,φ could become sufficiently small, turning off the suppression of gravitational effects as described earlier. In this sense, (5.2) also provides an extreme upper bound for the infrared RG scale at inflation k ∼ k inflation . 
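For intuition about the pole (6.9), here is a purely illustrative integration of a vacuum running of Newton's coupling of the assumed form k dG/dk = c k²G² with c = 7ξ/(12π); this is a toy reconstruction of the leading-order behaviour described around (6.3)–(6.9), with guessed prefactors, not the paper's full equation.

import numpy as np

# Toy: Euler integration in t = ln k of an assumed running k dG/dk = c k^2 G^2.
xi = 1e3
c = 7 * xi / (12 * np.pi)
G = 1.0                      # renormalised value G_R = 1/M_p0^2 (units M_p0 = 1)
k = 1e-4                     # start deep in the classical regime

while k < 1.0:
    dlnk = 1e-4
    G += c * k**2 * G**2 * dlnk
    k *= np.exp(dlnk)
    if G > 1e3:              # runaway growth: the unphysical pole
        print(f"G blows up near k ~ {k:.2f} M_p0")
        break
else:
    print(f"G({k:.2f} M_p0) = {G:.2f}")

The qualitative behaviour matches the discussion: G stays essentially constant far below the Planck scale and hits a spurious strong-coupling pole at a sub-Planckian scale, which in the full treatment is avoided because the scalar's v.e.v. has become large by then.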
Within the context of Higgs inflation, AS could provide a fundamental framework towards a UV completion, and more solid ground for the behaviour of quantum gravitational corrections at very high energies. To calculate the fixed-point structure of the action (3.3) one needs the full, non-perturbative set of beta functions, which correspond to the solutions of the flow equation (4.14) with the potentials f and V given by (3.4) and (7.1) below. In this section we will set r = 1 for the regulator parameter, and will work in the deep UV regime where k → ∞, φ*/k → 0. In this regime, we expand around φ* = 0 as in (7.1); the resulting fixed point (7.2) is the well-known Gaussian-matter fixed point (GMFP), due to the vanishing of the matter interactions, and it has been previously studied in a similar setup [13,101]. The attractivity properties in the vicinity of the UV fixed point reveal the relevance/irrelevance of the couplings in the UV. To find out, we calculate the linearised RG flow around (7.2), from which we can straightforwardly extract the associated eigenvalues. In particular, it turns out that

λ_{ũ,G̃} = −0.243 ± 4.024i, λ_{ξ,m²} = −2.43 ± 4.024i, λ_λ = 4.462. (7.3)

The eigenvalues associated with the vacuum energy and Newton's coupling form a complex conjugate pair with a negative (attractive) real part, and a similar situation occurs for ξ and m². Notice that the non-minimal coupling ξ is relevant, while the quartic one, λ (marginal in power counting), becomes irrelevant. The connection with AS would require that the initial conditions along the RG flow set at the end of inflation reach the UV fixed point when evolved under the RG flow. Sufficiently close to the fixed point, for the stability of Newton's coupling, according to the discussion around (4.25) and from equation (6.2) respectively, we see that a necessary condition is¹⁰ ξ < 55/14 ≈ 3.93.

[Figure 1: relative variation of the fixed-point values with the regulator parameter r (see also section 4) with respect to their optimal values, i.e. ũ_fp(r)/ũ_fp(r = 1) (blue, dashed) and G̃_fp(r)/G̃_fp(r = 1) (red, continuous).]

Of course, the pole (6.9) should also be avoided in evolving from sufficiently low scales, but this is what one would expect to happen taking into account the running of the v.e.v. φ*, remembering that (6.9) corresponds to the vacuum case. A study of this issue would require a detailed numerical study of the complete set of beta functions, which we leave for a future work.

A comment on the gauge and regulator dependence

The use of a truncated theory space in combination with working off-shell introduces a dependence on the regulator and gauge choice respectively.¹¹ The explicit dependence on the regulator parameter r significantly increases the complexity of the equations, so we only explicitly discuss its influence on the renormalisation of G at leading order and on the UV fixed-point values respectively, for the vacuum case. The same is true for the gauge parameter, and below we will briefly discuss the case of the Landau gauge. The leading-order correction in the equation for the renormalisation of G, equation (6.3), has been crucial for the earlier analysis. With an unspecified regulator parameter r, the ξ-independent terms in c give a negative contribution for all r > 0, while for ξ ≫ 1, r would also have to be very large to make the contribution of ξ unnoticeable. One is here reminded that r = 1 corresponds to the "optimal" value of the regulator function [42] (see section 4), and one should not expect large deviations from it.
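Returning briefly to the fixed-point analysis, the relevance bookkeeping encoded in (7.3) can be made mechanical. In the convention used above, a negative real part of an eigenvalue means the corresponding direction is attracted to the fixed point towards the UV (relevant), and the non-zero imaginary parts signal a spiralling approach.

# Classify coupling directions at the GMFP from the exponents quoted in (7.3).
eigenvalues = {
    "u~, G~": complex(-0.243, 4.024),
    "xi, m^2": complex(-2.43, 4.024),
    "lambda": complex(4.462, 0.0),
}
for name, ev in eigenvalues.items():
    kind = "relevant (UV-attractive)" if ev.real < 0 else "irrelevant"
    spiral = " (spiralling)" if ev.imag != 0 else ""
    print(f"{name:8s}: Re = {ev.real:+.3f} -> {kind}{spiral}")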
It is also interesting to notice that the UV fixed point exists as long as r ∼ O(1), in particular 0.33 r 3.6 (see also figure 1). The beta functions presented here have been also calculated in [101] using a different field decomposition and in the Landau gauge (α = 0). Let us write here the results found JCAP02(2016)048 there for G at 1-loop, and we have performed a similar check for the beta functions for ξ and λ. Notice that the order of magnitude and signs of the coefficients in (8.2) are in agreement with the ones presented here. Summary We employed the functional RG to study quantum corrections for the Higgs inflationary action during inflation and beyond, including the effect of gravitons. The formalism employs the Wilsonian approach to the RG, which provides an effective description of the physical system from small to large scales, as the infrared RG scale is moved in a continuous way. What is more, its extension to the non-perturbative realm allows for a connection with the Asymptotic Safety scenario for quantum gravity. Within this framework, we evaluated the exact RG flow for the Higgs-gravity effective action, and explicitly studied the resulting RG equations including the leading-order gravitational corrections at 1-loop, under the particular assumptions described in section 4.1 (see appendices A.1 and B for explicit expressions). In particular, the calculation was performed in the Jordan frame and for the background of a Euclidean de Sitter. During inflation, the corrections coming from the gravitational sector acquired a nontrivial form, with the new terms generated under the RG due to the scalar's non-zero v.e.v φ * ∼ M p 0 / √ ξ, introducing appropriate threshold effects which allowed for a suppression to the running of the couplings such as the quartic interaction λ, non-minimal coupling ξ and Newton's coupling, in the sense explained in section 5. In particular, in this regime, Newton's G presented a negligible running, reducing to its constant, classical value. The sliding RG scale k within this framework is interpreted as the typical energy or coarse-graining scale of the system. The consistency of the approach placed a lower bound on its value during inflation, suggesting it to be of the order ∼ √ λM p 0 /ξ (see the discussion in section 5), which lies a few orders of magnitude below the Planck scale. As long as the v.e.v of the scalar dropped to sufficiently low values after inflation, with gravity entering the deep classical regime at lower cut-off scales, the RG equations acquired their standard perturbative form, allowing for a connection with the low-energy regime. What is more, as discussed in section 7, at arbitrarily high energies, the theory possesses the well known Gaussian-matter UV fixed point, which could provide a connection of the model with the scenario of Asymptotic Safety. In particular, the RG dynamics would be expected to drive the large initial value for ξ to smaller values at higher energies, eventually reaching its fixed-point at ξ = 0. To conclude, Higgs inflation could provide with a promising framework for the early universe and a natural extension of the standard model of particle physics. The investigation of its connection with the physics of higher energies and a potential UV completion, including gravity, are natural questions to ask. 
The issue of the possible instability of the Higgs potential due to the influence of gauge/Yukawa couplings has not been considered in this work, and its study poses a challenging issue within this context. What is more, in view of our original action ansatz, the higher-order operators generated under the RG procedure were neglected, and their study could provide further insights about the model, such as the issue of unitarity violation. An analysis of the structure of the RG dynamics beyond 1-loop and the connection with Asymptotic Safety is yet another challenging task. From the discussion JCAP02(2016)048 of sections 6 and 7 it turns out that in this direction, a consistent study of the full system of RG equations taking into account the running of the scalar's v.e.v is required. We hope that this work will motivate further studies of the model within the functional RG and/or Asymptotic Safety. A Evaluation of the ERGE Here we present more explicit steps for the calculation of section 4. Our starting point is the action Under the field expansion of the metric and scalar field around a constant background (ḡ µν ,φ), as g µν =ḡ µν + h µν , φ =φ + δφ we expand up to second-order as Notice that derivatives and curvature tensors in (A.2) are built out ofḡ µν . From (A.2) it is a straightforward excercise to extract the individual entries of Γ A B , corresponding to the different vertices. They read as Similarly as before, for the gauge-fixing and ghost operators respectively we have JCAP02(2016)048 Under the trace expansion (4.9) and the background choice of a Euclidean sphere where R αβγδ = (R/12) · (g αγ g βδ − g βγ g αδ ), the different inverse propagator entries take a simpler form which schematically reads as The regulators which will serve as to cut-off the eigenvalues of the Laplacian which' value is less than the infrared cut-off k are appropriately defined as Under the modification of the Hessians, , the regulators R k will combine with the associated laplacian operators, which corresponds to the choice of a Type 1a cut-off. With above relations and definitions, the calculation of the trace integral in the ERGE (4.5) reduces to the evaluation of the trace over momenta where it is understood that the first term stands for a matrix inverse, and the dot corresponds to a matrix multiplication respectively. Defining P k (− ) ≡ − + R k (− ), the trace can be evaluated as [41,80,114] Tr with the definition of the functionals Q 2−i ands z ≡ − . The function g denotes either g ≡ R k or g ≡ k∂ k R k , whileg stands for the anti-Laplace transform of g. a 2i (− ) correspond to the heat kernel coefficients of the Laplacian. For more explicit details we refer to refs. [32,41,44,76,80,114]. A.1 The flow of the effective action The flow of the effective action according to the ERGE organises itself in the following way, with primes here denoting derivatives with respect toφ ≡ φ/k and ∂ t ≡ k∂ k . It is convenient to work with the dimensionless quantities measured in units of the cut-off k, Introducing the convenient quantities σ ≡ V /f and ω ≡ (3 +R)f + V , the individual terms appearing in the flow equation (A.12) are defined as follows, Equation (A.12) describes the change of Γ k under an infinitesimal change of the RG scale k. As expected, the flow of the effective action depends only up to second derivatives with respect to the scalar φ, and up to first-order derivatives with respect to the RG scale k. 
Notice the RG derivatives on the right-hand side, which reflect the RG-improvement beyond the 1-loop level. A similar flow equation has been previously derived in [101] using a different field decomposition and evaluated in the Landau gauge.

A.2 Flow of the scalar potentials V and f at 1-loop

It is instructive to evaluate the 1-loop approximation of the flow equation, which corresponds to switching off the RG derivatives on the right-hand side of (A.12); see also (4.3). In what follows, primes will denote derivatives with respect to φ̃. Then, the flow of each potential is described by the following equations, with the anomalous dimensions of the potentials f and V respectively taking the following form. The above matrices are defined as

C Initial conditions at the electroweak scale

For completeness, here we report the 1-loop SM equations of ref. [20] used to calculate the initial conditions for the SM couplings at the inflationary scale. The couplings of interest are {λ, y_t, g_s, g_EW, g_EM}, corresponding to the quartic Higgs, the top-Yukawa, the strong, SU(2)_L and U(1)_Y gauge couplings respectively. The initial conditions we use at the top-quark mass scale are given by the following relations [115]
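The relations of ref. [115] did not survive extraction here. As a stand-in, the sketch below integrates the textbook 1-loop SM beta functions upward from the top-mass scale; the beta-function forms are the standard ones (no GUT normalisation for the hypercharge coupling) and the initial values are representative assumptions, so both should be checked against refs. [20, 115] before use.

import numpy as np

# 1-loop SM running of {lambda, y_t, g_s, g, g'} from k ~ M_t upward.
def betas(c):
    lam, yt, gs, g, gp = c
    pre = 1 / (16 * np.pi**2)
    b_lam = pre * (24*lam**2 - 6*yt**4
                   + (3/8)*(2*g**4 + (g**2 + gp**2)**2)
                   + lam*(12*yt**2 - 9*g**2 - 3*gp**2))
    b_yt = pre * yt * (4.5*yt**2 - 8*gs**2 - (9/4)*g**2 - (17/12)*gp**2)
    b_gs = pre * (-7) * gs**3
    b_g  = pre * (-19/6) * g**3
    b_gp = pre * (41/6) * gp**3
    return np.array([b_lam, b_yt, b_gs, b_g, b_gp])

c = np.array([0.126, 0.94, 1.17, 0.65, 0.36])  # rough values at M_t (assumed)
t, dt = 0.0, 1e-3                               # t = ln(k / M_t)
while t < np.log(1e15 / 173):                   # run toward inflationary scales
    c += dt * betas(c)                          # simple Euler step
    t += dt
print("couplings near the inflationary scale:", np.round(c, 4))

With these inputs the quartic coupling runs small and can turn negative at intermediate scales, which is the well-known metastability trend alluded to in the discussion of the possible instability of the Higgs potential.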
12,101.6
2015-12-18T00:00:00.000
[ "Physics" ]
IoT-Cloud Empowered Aerial Scene Classification for Unmanned Aerial Vehicles: Recent trends in communication technologies and unmanned aerial vehicles (UAVs) find applications in several areas such as healthcare, surveillance, transportation, etc. Besides, the integration of the Internet of Things (IoT) with the cloud computing environment offers several benefits for UAV communication. At the same time, aerial scene classification is one of the major research areas in UAV-enabled MEC systems. In UAV aerial imagery, efficient image representation is crucial for the purpose of scene classification. The existing scene classification techniques generate mid-level image features with limited representation capabilities that often end up producing average results. Therefore, the current research work introduces a new DL-enabled aerial scene classification model for UAV-enabled MEC systems. The presented model enables the UAVs to capture aerial images, which are then transmitted to MEC for further processing. Next, a Capsule Network (CapsNet)-based feature extraction technique is applied to derive a set of useful feature vectors from the aerial image. It is important to have an appropriate hyperparameter tuning strategy, since manual parameter tuning of DL models tends to produce several configuration errors. In order to achieve this and to determine the hyperparameters of the CapsNet model, the Shuffled Shepherd Optimization (SSO) algorithm is implemented. Finally, a Backpropagation Neural Network (BPNN) classification model is applied to determine the appropriate class labels of aerial images. The performance of the SSO-CapsNet model was validated against two openly accessible datasets, namely the UC Merced (UCM) Land Use dataset and the WHU-RS dataset. The proposed SSO-CapsNet model outperformed the existing state-of-the-art methods and achieved a maximum accuracy of 0.983, precision of 0.985, recall of 0.982, and F-score of 0.983.

Introduction

In recent days, the Internet of Things (IoT) has become a hot research topic and has received huge attention among researchers, as it offers numerous services and applications. At the same time, cloud computing (CC) technologies support IoT applications with several benefits such as low latency, location awareness, scalability, etc. [1]. Meanwhile, Unmanned Aerial Vehicle (UAV) technology has been significantly developed and used for many applications. UAVs can provide fast, cost-effective, and safe deployments for many civil and military applications [2]. Fig. 1 shows the architecture of Unmanned Aerial Vehicles (UAV). The popularity of autonomous UAVs and their applications, involving search and rescue operations, surveillance, and infrastructure monitoring, has grown tremendously in recent years. Though land cover classification is an essential UAV application, it is complex to construct wholly independent methods. Object identification processes are highly integrated, which makes it difficult to reduce their cost demands. The movement of UAVs degrades the generated images in terms of clarity (i.e., blurred images) and noise, since the onboard cameras often generate low-resolution images. In most UAV applications, it is difficult to perform the identification process because of the need for real-time efficiency. Various studies have been conducted on UAVs and their associated challenges, such as tracking and detecting specific objects, types of vehicles, landmarks, land sites, and persons (involving pedestrian motion).
But only a few studies considered multiple object identification [3] due to the fact that multiple targeted object identification is essential for most of the UAV applications. The occurrence of a break in application requirements and practical capability might be a result of two critical limitations: 1) it is difficult to build and store numerous methods to target the objects; and 2) high computation strength is required for technical object identification in case of individual objects. When aerial image scenes are acquired, it undergoes aerial image classification. The images are categorized into sub-regions by covering several grounded objects and a variety of lands covering different semantic classes. Thus, aerial image classification is an important process for several real-world applications like computer cartography, urban planning, remote sensor, and resource management [4]. Generally, some of the identical object classes or land cover varieties are allocated in a pool of scenes. For example, commercial and residential are the two main classes of scenes which may include roads, buildings, and trees. However, these two classes have variances in spatial sharing and density of three class labels. Thus, in aerial scenes, classification is performed depending on structural and spatial pattern complications which is a challenging issue to overcome [5]. The common method is to construct a holistic scene demonstration for scene classification. Among the remote sensing studies, Bag of Visual Words (BoVW) is a familiar technique for scene classification. This technique was developed to investigate the text that implements a document via frequency of words. In order to identify the image via occurrences of 'visual words', local feature quantization is generated whereas BOW technique is utilized by clustering method. BoVW method is a form of BoW technique used for image analysis whereas all the images are determined as visual words from visual dictionary through the histogram of the former [6]. Deep Learning (DL) method [7] is highly beneficial in resolving conventional challenges such as object recognition and detection, Natural Language Processing (NLP), speech identification, and a number of such real-world applications. It is highly efficient than the usual processes and it also gained much attention in academia and industries. This technique attempts to acquire general hierarchical feature learning in terms of various abstract stages. UAV images are processed in real-time environment through two distinct ways namely, onboard processing of images with a GPU board and computation offloading through the transfer of DL algorithm processing from UAV to MEC. But there are several issues observed in the design of UAV-enabled MEC system. The current research work presents an efficient DL-enabled aerial scene classification model for UAV-enabled MEC systems. The presented model allows the UAVs to capture aerial images and then forward the images to MEC for further processing. In addition, Capsule Network (CapsNet)-based feature extractor is applied to derive a set of useful feature vectors from the aerial image. Moreover, for hyperparameter optimization of CapsNet model, Shuffled Shepherd Optimization (SSO) algorithm is executed. Finally, Backpropagation Neural Network (BPNN) classification model is applied in the determination of appropriate class labels of aerial images. The presented SSO-CapsNet model was validated for its effectiveness against two openly accessible datasets. 
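The overall pipeline just described (capture on the UAV, offload to MEC, CapsNet features, SSO-tuned hyperparameters, BPNN labels) can be summarized in pseudocode. Every component below is a stub standing in for the real module; none of this is the authors' code, and it only fixes the order of operations.

```python
# Hypothetical orchestration of the SSO-CapsNet pipeline; all names are stubs.
def offload_to_mec(image):          # UAV -> MEC transfer
    return image

def build_capsnet(hyperparams):     # CapsNet feature extractor (stub)
    return lambda image: [hyperparams["lr"], sum(image)]

def sso_search(objective, iterations=10):   # SSO hyperparameter tuning (stub)
    candidates = [{"lr": 10**-i} for i in range(1, iterations)]
    return min(candidates, key=objective)

def bpnn_predict(features):         # BPNN classifier (stub)
    return "residential" if features[-1] > 0 else "forest"

def classify_aerial_scene(uav_image):
    image = offload_to_mec(uav_image)
    hp = sso_search(lambda h: abs(h["lr"] - 1e-3))   # toy objective
    features = build_capsnet(hp)(image)
    return bpnn_predict(features)

print(classify_aerial_scene([0.2, 0.7, 0.1]))
```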
Literature Review

The Deep Convolutional Neural Network (CNN) [8] is a deep learning technique that is well known and gaining popularity in various identification and detection processes, since it produces optimal outcomes on standard datasets. In image classification, CNN achieves the highest accuracy and is the most preferred technique nowadays. For industrial usage, it is difficult to adapt the traditional Deep CNN (DCNN) due to the complications involved in manually fine-tuning the hyperparameters and the trade-off between computation cost and classification accuracy. Several studies have attempted to reduce the computation cost incurred in its execution [9]. When used for UAV aerial scene classification, the complexity involved in traditional CNN gets reduced [10]: a particular type of CNN structure is chosen to decrease the search space, and this smaller search space is built with expert knowledge. Zhang et al. [11] utilized a standard sparse autoencoder (AE) neural network trained on a group of chosen image patches, and the model was tested by saliency degree to extract the local features. Coates et al. [12] improved the conventional Unsupervised Feature Learning (UFL) pipeline by feature learning. The adoption of CNNs has proven beneficial in various applications. In the study conducted by Lecun et al. [13], the CNN model was trained by the backpropagation (BP) method and the study obtained adequate efficiency in character identification. In recent times, CNN is often utilized in computer vision research. However, it is complicated to train a deep CNN, since such models possess numerous parameters to be fitted for a particular process while only a low number of training instances is available. One line of work was therefore designed to extract intermediate features from a DCNN: the model undergoes training on sufficiently large-scale datasets such as ImageNet and is then utilized for a wider range of visual identification processes such as scene classification, object recognition, and image retrieval. Cimpoi et al. [14] achieved optimal outcomes when investigating texture by pooling CNN features acquired from the convolutional layers with a Fisher coding procedure. Research studies are still being conducted using CNN in UAV scene classification. In the literature [15], a pretrained CNN was employed and tuned completely on a scene dataset, demonstrating excellent classification outcomes; the pretrained CNN was transferred to the scene dataset due to the lack of training data. In the study conducted earlier [16], the generalization capability of CNN features acquired from the fully connected layer underwent testing. In this study, the aerial images were categorized and optimal outcomes were achieved over comparative techniques on open-source scene datasets. Although various techniques have been proposed for UAV image classification in the literature, a need exists to improve classification efficiency. Simultaneously, a few techniques have provided optimal outcomes on specific datasets but were never employed on large datasets. Thus, the current research work develops a new advanced DL-based UAV image classifier.

The Proposed SSO-CapsNet Model

The working principle of the presented SSO-CapsNet model is illustrated in Fig. 2. As shown in the figure, the UAV captures the aerial images, which are then processed in MEC. The captured aerial images are then fed into the CapsNet-based feature extractor to derive an effective set of feature vectors.
Followed by, hyperparameter tuning of the CapsNet model is performed using the SSO algorithm. Finally, BPNN model is applied to allocate the class labels of the applied aerial test images. The detailed operations of these sub-processes are explained in the succeeding sections. Capsule Network (Capsnet) Based Feature Extraction CapsNets [17] is developed as an alternate model for CNNs. Being equivariant, the capsules are composed of a network of neurons that fetch in and yield out the vectors in line with scalar value of the CNNs. In CapsNet model, all the capsules are composed of a set of neurons with its output demonstrating various properties of similar features. It gives the benefit of identifying the entire set of entities through initial identification of their parts. Capsule outcome is made up of probability in which the feature encoder exists by capsules and the group of vector values is generally named after 'instantiation parameters.' It can be defined as the probability of existence of capsule's features to ensure network invariability. These instantiation parameters are utilized in the representation of network equivariance based on its capability for recognizing pose, texture, and deformation. Invariance is an asset of methods which makes the latter remain unchanged though the input value changes. This is called 'translational invariance' which is a peculiar characteristic of CNNs. For sample, when CNN detects the face, regardless of the position of eye, it stands still until it identifies the face. But, equivariance makes sure that the spatial position of features, proceeding to the face, is taken into account. Thus, in terms of outcome, equivariance does not consider the occurrence of an eye in image, but considers its position only in the image. Equivalences are the required properties for CapsNets. The three commonly available operations for capsule execution are discussed here. They are transformation of AE, vector capsule depending on dynamic routing, and matrix capsule depending on Expectation-Maximization (EM) routing. Fig. 3 shows the structure of CapsNet model. Transformation of Auto-Encoders An initial CapsNet is published with the transformation of AEs. It is constructed to emphasize the capability of network in recognizing the pose. The aim is not to identify an object from the images, but to take the image and their pose as input and output respectively, to form a similar image from original pose. An output vector of capsules, from this initial execution, is composed of output values. Further, one of the signified outcomes lies in these probabilities in which the feature exists through the rest of representative instantiating parameters. The capsules are ordered in various levels: the lower level l is named after initial capsule whereas the upper level l + 1 is named after secondary capsule. Lower level capsule removes the 'pose' parameter in pixel intensities, since it has the ability to initiate a part-whole hierarchy [18]. This part-whole hierarchy is an advantage in CapsNets model since it identifies the parts and is developed to identify the whole set of entities too. In order to realize this, this feature is signified by lower level capsule which needs to have correct spatial connection. Previously, it activated higher-level capsules at level, l + 1. For instance, assume that eyes and mouth are signified by lower level capsule. Then, each one can forecast to pose the higher-level capsules which signify a face in case of predictions being accepted. 
In order to describe the basis of initial-level capsule, ANN is learned to change the pixel intensity for pose parameter. In a simple method, 2D images, capsule by x and y with its positions, and its only pose output are utilized. Once the learning process is over, the network takes an image and there is a need arise to shift x and y. Then the output of an image remains the identified shift in pose. In order to prevent the influence of inactive capsule from affecting the output of 'generation unit,' the capsule output is multiplied by probabilities, p. Dynamic Routing Between Capsules The next level of changes in CapsNets is determined by the capsules which are nothing but a set of neurons with instantiation parameter. These changes are even signified by activity vector, whereas the length of vector signifies the probability in which the feature exists. The enhancement with a detailed prior execution exhibits that there is no need of information in the input [19]. The networks are composed of three layers namely, Convolutional (Conv) layer, Primary Capsule (PC) layer, and Class capsule layer. PC layer is the initial capsule layer which is only next to undetermined number of capsules' layer. The final capsules' layer is named after Class capsules layer. Feature extraction process from an image is completed by Conv. layer and the output is fed to PC layer. In all the capsules, i (where 1 ≤ i ≤ N) in layer l takes the activity vector u i ∈ R into account for encoding spatial data in the procedure of instantiation parameter. The output vector u i of i th lower-level capsules are then fed to every capsule from next layer, l + 1. The j th capsule at layer l + 1 is obtained i.e., u i and their product is defined with equivalent weight matrix i.e., W ij . The resultant vectorû j|i is the capsule i at level l's change of entities which is signified by capsule j at level l + 1. In the prediction vector of PC,û j|i refers to PC whereas i corresponds to the class capsule, j. The product of prediction vectors and coupling coefficient, which together signifies the agreement between the capsules, is performed to obtain a single PC i's forecast for class capsule, j. When the agreement is higher, both the capsules are related together. Thus, in the outcome, the coupling coefficient is first increased which is then decreased. The weighted sum (s j ) of every individual PC forecasts to the class capsule, j is computed to achieve the candidates' squashed function, (v j ). The squashed function makes sure that the length of output in capsules lies between 0 and 1 as probability. v j in one capsule layer is sent to next layer capsules and processed in a similar manner. The coupling coefficient c ij makes sure that the forecast of i in level l is connected to j in layer l + 1. In all the iterations, c ij is upgraded by determining the dot product ofû j|i and v j . To be specific, the vector values connected to all capsules are observed as mere segments of two numbers; the probabilities signify the presence of feature which the capsules tend to encapsulate and a group of instantiation parameters that assist in the clarification of consistency among the layers. Thus, a related path by agreement stems in detail that if lower-level capsule decides the higher level layer capsules, it is 'construct a part whole' connection referring to the relevance of path. 
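The squash nonlinearity and routing-by-agreement loop described above are compact enough to transcribe directly. The NumPy sketch below is a minimal illustration of the mechanism, with made-up shapes (32 primary capsules, 10 class capsules, 16-dimensional outputs); it is not the authors' implementation.

```python
import numpy as np

def squash(s, axis=-1, eps=1e-9):
    # ||v|| = ||s||^2 / (1 + ||s||^2): output length acts as a probability in [0, 1)
    n2 = np.sum(s**2, axis=axis, keepdims=True)
    return (n2 / (1.0 + n2)) * s / np.sqrt(n2 + eps)

def dynamic_routing(u_hat, n_iter=3):
    # u_hat[i, j, :] = prediction of primary capsule i for class capsule j
    N, M, D = u_hat.shape
    b = np.zeros((N, M))                                      # routing logits
    for _ in range(n_iter):
        c = np.exp(b) / np.exp(b).sum(axis=1, keepdims=True)  # coupling coeffs c_ij
        s = (c[..., None] * u_hat).sum(axis=0)                # weighted sum s_j
        v = squash(s)                                         # squashed output v_j
        b = b + np.einsum('ijd,jd->ij', u_hat, v)             # agreement update
    return v

v = dynamic_routing(np.random.randn(32, 10, 16) * 0.1)
print(np.linalg.norm(v, axis=-1))   # per-class existence probabilities
```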
Matrix Capsules with EM Routing On the contrary to utilization of vector outputs, the literature [20] presented the illustration of input and output of capsule as matrices. It is essential to decrease the size of transformation matrices between capsule and matrix. Further, it is developed by n elements rather than n 2 when utilizing vectors. Dynamic routing by agreement is exchanged with EM technique. This dynamic routing is cosine between two pose vectors. Also, the probability of existence of entity, even illustrated by capsule, is exchanged with a parameter a, rather than the length of vectors. In the capsule i at level L and capsule j at level L + 1, these values refer to the trainable transformation weight matrix i.e., W ij . EM mechanism ensures that the shift matrix of capsule i is changed by transformation weight matrix W ij to cast the vote to shift the matrix of capsule, j at level L + 1. Vote is an artefact of output matrix M i and transformation matrix W ij [20]. The poses and activations of every L + 1 level layer are established by entering V ij and a i as non-linear EM routing techniques. During an iteration, EM upgrades the means, variances, and activation probabilities of layer L + 1 capsules with the assignment probability between lower and higher level capsules. Hyperparameter Optimization In order to tune the hyperparameters involved in CapsNet model effectively, SSO algorithm is applied and thereby the classification performance is enhanced. SSO algorithm offers several benefits such as maximum accuracy, convergence rate, and reduced parameter dependency. It is based on the herd performance of shepherds. Humans have to learn this phenomenon through long-term observation so as to utilize animal capabilities and attain the objectives [21]. Shepherds try to steer their herd in a right way. To resolve this, they generally set animals such as horse or herding dog for the herd. These animals are utilized to manage the herd through their herding behaviour. They further guard the herd animals from wild animals and theft. This performance is the fundamental information to follow the SSO technique. Step 1: Initialization SSO begins with an arbitrarily-created primary Member Of Community (MOC) for search space as given herewith. where rand refers to arbitrary vector by all components created between 0 and 1; Here, MOC min and MOC max denote the lower and upper bounds of design variables; m implies the amount of communities, and n defines the count of members going to all the communities. In this regard, it is supposed that the entire number of communities is attained as [21] follows. Step 2: Shuffling process In this method, initial m refers to the members of communities which are chosen depending on their objective function values. These are arbitrarily located values in the first column of Multi-Community (MC) matrix (Eq. (7)) which are otherwise, the initial member of all the communities. Then, to create the 2 nd column of MC, next m members are selected alike the preceding step which are arbitrarily located in the column. These procedures are carried out for n times independently, until the MC matrix gets molded as given herewith. It is worth mentioning that all the rows of MC refer to the members of all the communities. This phenomenon ensures that the members of initial column of MC are optimal members, compared to all other communities. Moreover, the member's place in final column is the bad agent in all communities. 
Step 3: Movement of Community Member The unique step size of the movement in all the communities is calculated depending on two vectors. Initial vector (i.e., stepsize Worse i,j ) showcases the capability to visit new regions of search space (diversification approach). In contrast, the 2 nd vector (i.e., stepsize Better i,j ) refers to the ability of exploring those search space areas (intensification method) which are nearby and already visited. The mathematical equation for step size is given herewith [21]: 1, 2, . . . , m and j = 1, 2 stepsize Better where rand 1 and rand 2 represent the arbitrary vectors with all the even components created between 0 and 1; MOC i,b (chosen horses) and MOC i,w (chosen sheep's) are optimal and worst members with respect to objective function value and is related to MOC i,j (shepherd). It is worth to mention that the initial member of ith community (MOC i,1 ) mostly prefer itself rather than is equivalent to zero. Furthermore, α and β imply the factors which control exploration as well as exploitation correspondingly. These aspects are determined as follows. It is obvious that the iteration number t and β increase whereas α value decreases correspondingly. Thus the outcome and exploration rate decreases whereas the exploitation rate increases [22]. Step 4: Update the position of each community member Based on the prior step, the new location of MOC i,j is computed utilizing Eq. (13). Next, the location of MOC i,j is upgraded, when it could not find the worst old objective function value [22]: Step 5: Checking termination conditions Later, the count of iterations is set as the end condition (Max-iteration), then the optimization procedure is finished. Afterwards, it goes to step 2 for a new round of iteration. BPNN Based Image Classification At the final stage, the extracted feature vectors from hyperparameter-tuned CapsNet are fed into BPNN model to perform the classification. BPNN is a multi-layer network which has a set of input, hidden, and output layers. All the layers contain a number of neurons. To adjust the weight and bias in neurons, BPNN uses error BP function. It is beneficial in a gradient-descent feature and this technique is developed as an efficient function estimate technique [23]. Classical BPNN has a number of m inputs and n outputs. In feedforward network, all the neurons from the next layer act as input in every neuron for the outputs from final layer. Afterwards, the output is fed as input for the next neuron layer. In one neuron j, assume n refers to the number of neurons in final layer; o i refers to the output of ith neuron; w i represents the equivalent weight for o i and θ j implies the bias of neurons j. Then, the neurons j compute the input for sigmoid function, I j utilizing the equation [23]. Assume o j indicates the output of neuron, j which is expressed as follows When the neuron j implies the output layer, BPNN begins the BP level. Assume t j refers to encoder target output. This technique calculates the output error Err j to neuron j in the output layer with the help of following equation. Assume k signifies the amount of neurons in next layer; w p refers to weight; and Err p defines the neuron error, p in next layer. The error Err j of jth neurons is expressed as follows Assume η indicates the rate of learning. Neuron j tunes its weight w j and bias θ j with the help of [23]. 
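The BPNN update rules quoted above translate directly into a few lines of NumPy. The toy transcription below uses made-up dimensions, learning rate and data; it only mirrors the stated equations (I_j = Σ w_i o_i + θ_j, o_j = sigmoid(I_j), output error Err_j = o_j(1 − o_j)(t_j − o_j), hidden error Err_j = o_j(1 − o_j) Σ_p Err_p w_p, and w += η Err_j o_i, θ += η Err_j).

```python
import numpy as np

rng = np.random.default_rng(1)
sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))

m, h, n_out, eta = 4, 6, 3, 0.5          # inputs, hidden units, classes, rate
W1, b1 = rng.normal(0, 0.5, (m, h)), np.zeros(h)
W2, b2 = rng.normal(0, 0.5, (h, n_out)), np.zeros(n_out)

def train_step(x, t):
    global W1, b1, W2, b2
    o1 = sigmoid(x @ W1 + b1)             # hidden layer output
    o2 = sigmoid(o1 @ W2 + b2)            # output layer output
    err2 = o2 * (1 - o2) * (t - o2)       # output-layer error
    err1 = o1 * (1 - o1) * (err2 @ W2.T)  # back-propagated hidden error
    W2 += eta * np.outer(o1, err2); b2 += eta * err2
    W1 += eta * np.outer(x, err1);  b1 += eta * err1

x, t = rng.random(m), np.array([1.0, 0.0, 0.0])   # toy feature vector and label
for _ in range(200):
    train_step(x, t)
print(sigmoid(sigmoid(x @ W1 + b1) @ W2 + b2))    # approaches the target t
```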
Once BPNN finishes tuning the network with one training sample, it proceeds with the second training sample, continuing from the weights obtained with the first. For executing the classifier, BPNN requires only the execution of the feedforward network; the outputs at the output layer are the final classifier outcome.

Experimental Validation

The proposed SSO-CapsNet model was simulated using the Python 3.6.5 tool. It was validated using two datasets, namely the UCM and WHU-RS datasets. The UCM dataset is composed of large-sized aerial images spanning 21 classes. Every class holds a total of 100 images with an identical size of 256 × 256 pixels. The WHU-RS dataset includes a total of 950 images with an identical size of 600 × 600 pixels. These images are uniformly distributed across a set of 19 classes. A few sample test images are shown in Fig. 4.

Conclusion

The current study developed a new DL-enabled aerial scene classification model for UAV-enabled MEC systems, i.e., the SSO-CapsNet model. The presented model allows the UAVs to capture aerial images and send them to MEC for further processing. At MEC, the captured aerial images are fed into the CapsNet-based feature extractor to derive an effective set of feature vectors. Next, the SSO algorithm is used to fine-tune the hyperparameters of the CapsNet model; this effective tuning enhances the accuracy of the overall aerial image scene classification. Finally, the BPNN model is applied to allocate the class labels of the applied aerial test images. The simulation results of the proposed SSO-CapsNet model were validated against the benchmark UCM and WHU-RS datasets. The obtained experimental values showed that the SSO-CapsNet model outperformed other classifiers and accomplished a maximum accuracy of 0.983, precision of 0.985, recall of 0.982, and F-score of 0.983. In future work, the SSO-CapsNet model can be extended to handle various input sizes with multiple scaling. Further, the model can be assessed for its performance on big datasets such as NWPU-RESISC45. Funding Statement: The authors received no specific funding for this study. Conflicts of Interest: The authors declare that they have no conflicts of interest to report regarding the present study.
5,917.8
2022-01-01T00:00:00.000
[ "Engineering", "Computer Science", "Environmental Science" ]
Fracture Assessment of Notched Severely Plastic Deformed 7075 Al Alloy under Mode I Loading

Equal channel angular extrusion is an extrusion process used to refine the microstructure of metals and alloys. This paper presents, for the first time, experimental results on the effects of equal channel angular pressing (ECAP) on the tensile mechanical properties and ductile fracture strength of 7075 Al alloy. The 7075 Al was subjected to equal channel angular pressing after solution treatment, and significant grain refinement and enhancement in tensile strength and hardness were obtained after pressing. Then, the ductile static strength of the ECAP 7075 Al alloy weakened by U-notches is investigated under mode I loading. A criterion based on the averaged value of the strain energy density over a control volume at the notch edge, in combination with the equivalent material concept, is applied to assess the static strength of the specimens. Sound agreement between the experimental data and the results obtained from the strain energy density criterion is found.

INTRODUCTION

Aluminum alloys are widely used in engineering applications. There is, nevertheless, considerable value in improving the mechanical properties of these alloys using engineering processes. Although the mechanical properties of all crystalline materials are determined by several factors, the average grain size plays a very significant, and often dominant, role. In order to decrease the grain size of a material using a metal-working procedure, it is necessary to impose an exceptionally high strain, introducing a high density of dislocations that subsequently rearrange to form an array of grain boundaries. In practice, because of the limitations of conventional metal-working procedures on the overall imposed strains, attention has been devoted to techniques based on the application of severe plastic deformation [1]. Equal channel angular pressing (ECAP), as a severe plastic deformation process, introduces significant plastic strain into materials without reducing the cross-sectional area. The sample, in the form of a rod or bar, is placed so that it can be pressed through the die using a plunger. The nature of the imposed deformation is simple shear, which occurs as the sample passes through the die. As shown schematically in Fig. 1, the theoretical shear zone lies between two elements within the sample numbered 1 and 2, and these elements are transposed by shear as depicted in the lower part of the diagram [1]. Despite the introduction of a very intense strain as the sample passes through the shear plane, the sample ultimately emerges from the die without experiencing any change in its cross-sectional dimensions. The ECAP process also introduces non-equilibrium conditions in the microstructure of the alloys, such as a high dislocation density and a large number of low-angle grain boundaries [2]. Although equal channel angular pressing, as a method of severe plastic deformation, improves strength by decreasing grain size and increasing dislocation density, the elongation to failure associated with the movement of dislocations decreases after the first pass. Al alloys are widely used in various industrial and scientific engineering applications as structural and/or functional materials. In many industrial products, notches of U- or V-shape are unavoidable features under different loadings. The notches are weak points that may generate cracks and lead to fracture.
Several criteria are present in the literature for fracture assessment of engineering notched components. The failure criterion proposed by Novozhilov [3] and developed by Seweryn [4] called as theory of critical distances suggests that failure occurs when the average normal stress along the characteristic length scale denoted by d 0 equals a material dependent stress at failure without the presence of a notch. The successful application of theory of critical distances on the notched components is widely investigated in [5,6]. Leguillon [7,8] proposed a criterion for the failure initiation at a sharp V-notch based on a combination of the Griffith energy criterion for a crack, and the strength criterion for a straight edge. Neuber [9] first suggested the idea of linking the stress averaging to the fictitious notch rounding approach and other researchers investigated the influence of plane stress and plane strain conditions on the application of the fictitious notch rounding approach and in particular on the calculation of the multi-axiality factor s. Marsavina et al. [10] investigated the dynamic and static fracture toughness of polyurethane rigid foams and in another work [11], four fracture criteria (maximum circumferential tensile stress, minimum strain energy density, maximum energy release rate, equivalent stress intensity factor) were applied to evaluate the mixed mode fracture of polyurethane foams using asymmetric semicircular specimens. In particular, the authors showed that the equivalent stress intensity factor criterion predicted well the mixed mode fracture more precisely. The other worth mentioning approach is based on the cohesive zone model. The major advantages of cohesive zone model over the conventional methods in fracture mechanics like those including linear elastic fracture mechanics, crack tip open displacement, etc. is that it is able to adequately predict the behavior of uncracked structures, including those with blunt notches. Moreover, the size of the non-linear zone has not to be negligible in comparison with other dimensions of the cracked geometry in cohesive zone model. This approach was first proposed for concrete and later successfully extended to brittle or quasi-brittle failure of a large bulk of materials and in particular poly-methyl-methacrylate (PMMA) specimens tested at room and low temperature [12]. In those works, both sharp and blunt U-and V-notches were considered. A recent approach, based on the strain energy density (SED), was proposed and successfully applied for the fracture assessment of notched components. The strain energy density approach is based on the evaluation of the averaged strain energy density over a control volume [13,14]. The criterion was applied to assess the fracture behaviour of different materials. Radaj [15,16] made a review on the local strain energy density concept and its relation to the J-integral and peak stress method. Recently, the strain energy density criterion has been developed to fracture assessment of notched functionally graded materials numerically and experimentally [17][18][19][20][21][22][23][24][25][26]. Although the strain energy density criterion can be used to predict the brittle or quasi-brittle fracture of notched components, any applied materials must be typically engineered as ductile structures like those of metallic materials, including steel, aluminum and titanium alloys, as well as polymers or polymer-based composites, most of which may contain notches of various shapes [27]. 
Therefore, elastic-plastic analysis is needed for the fracture assessment of ductile materials. Two of the first attempts to utilize elastic analysis instead of an elastic-plastic one were made by Glinka and Molski [28] and Glinka [29]. They made use of the strain energy density approach to determine the elastic-plastic stress distribution around some notched components. In their works, first, the elastic stress concentration factor was used to formulate the elastic stresses at the notch tip; then, the strain energy density at the notch tip was equated for the elastic and elastic-plastic components, with the aim of determining the stress distribution in the notched component made of ductile material. With the aim of applying the strain energy density approach to nonlinear elastic conditions, while keeping its simple linear elastic formulation, the approach will be combined with the equivalent material concept. Based on the equivalent material concept, the strain energy density, i.e., the area under the stress-strain curve in uniaxial tension, is assumed to be the same for the ductile and virtual brittle materials with similar moduli of elasticity and fracture toughness [30,31]. In this study, the alloy is subjected to equal channel angular pressing after solution treatment. Experimental results on the effects of equal channel angular pressing on the microstructure and mechanical properties (yield stress, ultimate stress and hardness) of fine-grained 7075 Al alloy are also presented. The well-known strain energy density criterion is employed in combination with the equivalent material concept to predict the fracture behavior of U-notched ductile ECAP 7075 Al components without the need for performing an elastic-plastic analysis. Moreover, a new set of experimental data is provided by U-notched specimens made of ECAP 7075 Al with different values of notch depth and notch radius, which can be useful to engineers engaged in the static strength analysis of ECAP 7075 Al components. Our experimental data showed good agreement with those obtained via the strain energy density approach.

Material

Initial rods of 7075 Al alloy were pressed through an ECAP die immediately after solution treatment at 490°C for 4 hours and water quenching. The ECAP die consisted of channels with equal cross sections having an intersection angle of Φ = 90° and an outer angle of Ψ = 20°, so as to introduce an effective strain of approximately 1 in a single pass (a numerical check of this value is given in the sketch below). The ECAP process was carried out through only one pass at room temperature with a pressing speed of 1 mm/s. The ECAP samples were then subjected to natural aging. In order to study the effect of equal channel angular pressing on strength and ductility, hardness and tensile tests were also carried out. Tensile tests were performed at room temperature according to ASTM E8, using a tensile-compression testing machine operating at a constant rate of crosshead displacement with an initial strain rate of 4 × 10⁻³ 1/s. Microstructural examination and fracture surface characterization of the fractured tensile test specimens were performed by optical microscopy and scanning electron microscopy, respectively. In the longitudinal direction, shear lines can be detected. Comparing the micrographs of the transverse direction, a remarkable grain size reduction is obvious as a consequence of the ECAP process. Figure 3a shows the H_V hardness values of the samples at every stage of the process, while Fig. 3b shows the H_V variation of the ECAP samples versus natural aging time.
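The die angles quoted above imply an effective strain per pass of about 1 via the Iwahashi relation, ε_N = (N/√3)[2·cot((Φ+Ψ)/2) + Ψ·cosec((Φ+Ψ)/2)]. This relation comes from the general ECAP literature rather than this excerpt, but it is the standard check for the claim:

```python
import math

def ecap_strain(phi_deg, psi_deg, n_passes=1):
    """Effective von Mises strain after n ECAP passes (Iwahashi et al. form)."""
    phi, psi = math.radians(phi_deg), math.radians(psi_deg)
    half = (phi + psi) / 2.0
    return n_passes / math.sqrt(3) * (2.0 / math.tan(half) + psi / math.sin(half))

print(ecap_strain(90, 20))   # ~1.05, consistent with "approximately 1" per pass
```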
Significant enhancement in hardness, up to 165 H_V, resulted immediately after a single pass of equal channel angular pressing. The increase in hardness for the ECAP specimen can be attributed to strain-hardening effects, including an increase in dislocation density and grain refinement, as well as dynamic aging (strain-induced aging), in agreement with [32]. Furthermore, since the alloy was in a supersaturated solid-solution state before equal channel angular pressing, the hardness increased to about 195 H_V after aging at room temperature for three months, due to the age-hardening effect. A comprehensive description of the influences of equal channel angular pressing and aging on the mechanical properties of 7075 Al alloy has been presented in a previous paper [33].

Hardness and Tensile Strength

The stress-strain behavior of the alloy before and after ECAP, and of the naturally aged (three months) alloy, is shown in Fig. 4. A remarkable increase in strength was obtained after a single ECAP pass (an increase of 100% in yield stress, from 225 to 450 MPa). The ultimate stresses of the as-received and post-ECAP specimens are 340 and 530 MPa, respectively. Along with the improvement in strength, the ductility decreased after equal channel angular pressing from 15% to 12%. The decrease in ductility is attributed to the relatively small strain hardening after yielding in the ECAP-processed material [34]. Comparing these results with the strength of pre-ECAP annealed specimens, the strength of the pre-ECAP solutionised specimen after only one pass of equal channel angular pressing is significantly greater than that of the pre-ECAP annealed specimen after 4 passes, for which the ultimate tensile stress and ductility have been reported to be 425 MPa and 7.9%, respectively [34]. Figure 5 shows the details of the tensile test specimens. Table 1 presents the mechanical properties of 7075 Al alloy under different conditions.

Three Point Bending Tests

In this study, the specimens drawn from the ingots were 40 mm in length, 10 mm in width and 5 mm in thickness, as shown in Fig. 6. The effects of the notch tip radius ρ and notch depth a on the fracture of the specimens under mode I loading were investigated. Two values of the notch radius (ρ = 0.2 and 0.5 mm) and two values of the notch depth (a = 1 and 2 mm) were considered for the test specimens. The tests were carried out on a ZWICK 1494 testing device under displacement control with a constant displacement rate of 0.05 mm/min. A U-notched specimen during three-point bending and a broken specimen within the machine are shown in Fig. 7. The values of the critical load of each experimental test (F_exp1 and F_exp2) are presented in Table 2.

The Averaged Strain Energy Density Criterion

The averaged strain energy density criterion was employed to predict the fracture loads of the U-notched specimens under mode I loading. Based on this criterion, brittle or quasi-brittle fracture occurs when the averaged value of the strain energy density over a well-defined control volume reaches a critical value W_c. For brittle materials, W_c can be evaluated as W_c = σ_ut²/(2E) [14]. For a U-notch under mode I loading, the control volume assumes a crescent shape centered with respect to the notch bisector line. R_c is the critical length, measured along the notch bisector line as shown in Fig. 8.
The critical length R_c can be evaluated under plane strain conditions as R_c = [(1 + ν)(5 − 8ν)/(4π)]·(K_Ic/σ_ut)² [13,14], where K_Ic is the fracture toughness, σ_ut is the ultimate tensile stress and ν is Poisson's ratio. The outer radius of the control volume is equal to R_c + ρ/2, where ρ is the notch tip radius. In this paper, linear elastic analyses were performed to evaluate the fracture loads. Two procedures were employed for the predictions. First, by neglecting the plastic behavior of the material, fracture loads were predicted by simply using the mentioned criterion, assuming the material is ideally brittle. Second, the idea of the equivalent material concept [30,31] was used to define virtual brittle materials: considering the equivalent brittle materials instead of the ductile ones, fracture loads were predicted by the averaged strain energy density criterion. [Table 2 caption: experimental fracture loads (F_exp1, F_exp2) and theoretical ones; F_th1 is obtained assuming the material is ideally brittle, F_th2 is obtained by using the equivalent material concept.]

Equivalent Brittle Materials

In this paper, the equivalent material concept, as a novel concept, was employed to equate the ductile and virtual brittle materials from the strain energy density viewpoint. The equivalent material concept considers an imaginary, virtual brittle material instead of a ductile one, so that a linear elastic rather than an elastic-plastic fracture analysis can be performed. Simple criteria for brittle fracture can then ultimately be utilized to study the fracture phenomenon occurring in ductile materials. Based on the equivalent material concept, the strain energy density values of the virtual brittle and ductile materials with similar moduli of elasticity are assumed to be the same. The strain energy density represents the strain energy absorbed by a unit volume of material. For a ductile material under tension, the plastic part of the stress-strain curve can be written as σ = Kε^n (Eq. (3)), where σ and ε indicate the plastic stress and strain, respectively, and the parameters K and n denote the strain-hardening coefficient and exponent, respectively, which depend on the material properties. Figure 9 depicts a schematic representation of a tensile stress-strain curve for a typical ductile material, in which E, σ_Y, σ_u, and ε_f denote the elastic modulus, yield strength, ultimate tensile strength, and strain at rupture, respectively. The total strain energy density (SED) can then be expressed in a general elastic-plastic form. As defined in the equivalent material concept, the equivalent virtual brittle material has the same values of E and K_Ic, respectively representing the elastic modulus and plane-strain fracture toughness, but an undetermined value of the ultimate tensile strength. In Fig. 10, a typical uniaxial stress-strain curve is schematically shown for the virtual brittle material, in which the parameters ε_f* and σ_f* stand for the strain at final fracture, occurring with crack initiation due to brittleness, and the ultimate tensile strength, respectively. Upon crack initiation, the strain energy density for this material can be calculated accordingly. Assuming the equality of the SED values for both the virtual brittle and real ductile materials according to the equivalent material concept, we arrive at Eq. (7) for the equivalent strength. The parameter σ_f* presented in Eq.
(7) can be used together with the material fracture toughness (K_Ic or K_I) as the two necessary inputs of different brittle fracture criteria for predicting crack initiation from notches in ductile structures subjected to tension (pure mode I loading conditions) [30,31,35]. Figure 11 shows the stress-strain curves of the ductile and the equivalent brittle materials. The equivalent tensile strengths of the as-received and ECAP aluminum were determined to be 989 and 1283 MPa, respectively.

Finite Element Analysis

In order to obtain the averaged value of the strain energy density over the control volume, finite element analyses were carried out using ABAQUS software version 6.11 under plane strain conditions and the linear elastic hypothesis, with eight-node elements. Figure 12 shows sample maximum principal stress and SED contour lines. The critical fracture load can be evaluated by determining the SED mean value over the control volume using the expression F_cr = F_ap·(W_cr/W̄_ap)^(1/2) [28], where F_cr indicates the critical fracture load, W_cr is the critical strain energy density, F_ap is the applied load used in the computer simulation (deliberately less than F_ut, the ultimate force; 405 N in this work), and W̄_ap is the strain energy density averaged over the control volume for the applied load, i.e., W̄_ap = W_ap/V, where W_ap (the strain energy over the control volume) and V (the control volume) were calculated with the finite element method in the ABAQUS software. In Table 2, the theoretical values of the fracture loads are summarized. As can be seen from the table, for such a ductile material, the averaged strain energy density is able to predict the fracture loads with moderate accuracy even when the plastic behavior of the material is neglected; however, such an assumption loses the trend of the fracture loads with respect to the notch tip radius. In contrast, using the equivalent material concept results in good accuracy in the prediction of both the fracture loads and the trends. Moreover, Table 2 shows that there is a significant increase in the critical fracture load for the ECAP alloy compared to the as-received alloy.

CONCLUSION

In the present work, the strain energy density averaged over a well-defined control volume ahead of the notch edge, in combination with the equivalent material concept, was utilized to obtain the critical fracture loads of U-notched specimens made of ECAP 7075 Al under mode I loading. The main results of this investigation are summarized as follows. Processing a solutionised 7075 aluminum alloy specimen by equal channel angular pressing, even with a single pass, significantly improves the hardness; this finding may have important practical significance because of its advantage in industrial applications, with a considerable saving in time. The yield strength of the solutionised specimen increased from 240 to 451 MPa after equal channel angular pressing and two months of natural aging. The ductility decreased significantly after equal channel angular pressing and two months of natural aging. The strain energy density criterion in combination with the equivalent material concept provided a suitable approach for predicting the ductile fracture behavior of ECAP Al. The limited average deviation (7.8%) between the theoretical and experimental critical fracture loads indicates the accuracy of the model. The critical fracture load of the ECAP alloy increases by about 30% compared to the as-received alloy.
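The chain of quantities used above (W_c from σ_ut and E, R_c under plane strain, σ_f* from the equivalent material concept, and the F_cr rescaling of a single linear FE run) is easy to script. In the sketch below every material number, the hardening law, and the FE output W̄_ap are illustrative placeholders rather than the paper's measured values, and the R_c expression is the plane-strain form common in the SED literature.

```python
import math

# Illustrative inputs (NOT the paper's measured values)
E, nu = 71e3, 0.33                   # MPa, Poisson's ratio for a 7075-type Al
K_Ic = 790.0                         # MPa*sqrt(mm)  (~25 MPa*sqrt(m), assumed)
sig_y, sig_ut = 450.0, 530.0         # MPa, yield / ultimate tensile strength
K_h, n_h, eps_f = 700.0, 0.05, 0.12  # hardening law sigma = K*eps^n, rupture strain

# Brittle critical SED and control-volume radius (plane strain)
W_c = sig_ut**2 / (2 * E)
R_c = (1 + nu) * (5 - 8 * nu) / (4 * math.pi) * (K_Ic / sig_ut)**2

# Equivalent material concept: equate the tensile SED of the ductile curve
# (elastic triangle + integral of K*eps^n) with sig_f*^2 / (2E)
eps_y = sig_y / E
sed = sig_y**2 / (2 * E) + K_h / (n_h + 1) * (eps_f**(n_h + 1) - eps_y**(n_h + 1))
sig_f_star = math.sqrt(2 * E * sed)

# Linear elasticity: SED scales with load squared, so one FE run at F_ap with
# averaged SED W_ap_bar gives the critical load directly
F_ap, W_ap_bar = 405.0, 0.08         # N, MPa (W_ap_bar is a placeholder FE output)
F_cr = F_ap * math.sqrt(sig_f_star**2 / (2 * E) / W_ap_bar)
print(f"W_c={W_c:.3f} MPa, R_c={R_c:.3f} mm, "
      f"sigma_f*={sig_f_star:.0f} MPa, F_cr={F_cr:.0f} N")
```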
4,505.2
2020-11-01T00:00:00.000
[ "Materials Science" ]
Research of localization algorithm based on weighted Voronoi diagrams for wireless sensor network Wireless sensor network (WSN) is formed by a large number of cheap sensors, which are communicated by an ad hoc wireless network to collect information of sensed objects of a certain area. The acquired information is useful only when the locations of sensors and objects are known. Therefore, localization is one of the most important technologies of WSN. In this paper, weighted Voronoi diagram-based localization scheme (W-VBLS) is proposed to extend Voronoi diagram-based localization scheme (VBLS). In this scheme, firstly, a node estimates the distances according to the strength of its received signal strength indicator (RSSI) from neighbor beacons and divides three beacons into groups, whose distances are similar. Secondly, by a triangle, formed by the node and two beacons of a group, a weighted bisector can be calculated out. Thirdly, an estimated position of the node with the biggest RSSI value as weight can be calculated out by three bisectors of the same group. Finally, the position of the node is calculated out by the weighted average of all estimated positions. The simulation shows that compared with centroid and VBLS, W-VBLS has higher positioning accuracy and lower computation complexity. Introduction Wireless sensor network (WSN) is a self-organizing distributed network system including plenty of tiny sensor nodes with the ability to communicate and calculate in a specific monitoring area. In the wireless sensor network, the node position information plays a very important role in monitoring activity. The monitoring information without location message is meaningless. Therefore, the research of wireless sensor network positioning technology is the key technology of WSN [1]. Literature [11] used Voronoi diagrams in wireless sensor node localization. In this algorithm, the midperpendiculars between each beacon node and its neighbor beacon node composed the Voronoi region boundaries. According to the properties of the Voronoi diagrams, we can see that the node to be located is in its nearest beacon node Voronoi region. Therefore, in literature [11], the algorithm weighted all the nodes within this region firstly and obtained all the beacon nodes' Voronoi regions in order, then added different weight values to the obtained regions, and finally obtained the centroid of the largest weight value region as the estimated coordinate of the node to be located. However, algorithm [11] using the midperpendicular of the beacon nodes as Voronoi diagram region boundaries could not reflect the relationship between the RSSI signal strength and the distance among the nodes. Therefore, in order to improve localization accuracy and reduce complexity of the algorithm, we improved the localization algorithm based on Voronoi diagrams. In the improved algorithm, we selected two beacon nodes' weighted bisector as the region boundaries, then calculated the two weighted bisector intersection coordinates as estimate coordinates, and finally we regarded the weighted average values of all the estimate coordinates as the final estimate coordinates of the node to be located. Positioning algorithm based on Voronoi Voronoi diagram (Figure 1) refers to a point set in a given space, P = {P 1 , P 2 , P 3 ,⋯P n }, 3 ≤ n < ∞. The plane is divided by the Voronoi diagram as follows: In the previous formula, let x be any point in the plane and d(x, P i ) be the Euclidean distance [11] between x and the certain point P i . 
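Before the WSN specialization, the plain Voronoi membership rule just quoted reduces to a nearest-generator test, which the minimal sketch below implements (the coordinates are made up for illustration):

```python
import numpy as np

def voronoi_owner(x, beacons):
    """Index of the Voronoi region containing x: the nearest generator P_i."""
    d = np.linalg.norm(beacons - x, axis=1)   # d(x, P_i) for every beacon
    return int(np.argmin(d))

beacons = np.array([[10.0, 10.0], [60.0, 20.0], [30.0, 80.0]])
print(voronoi_owner(np.array([25.0, 30.0]), beacons))   # region of beacon 0
```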
In wireless sensor networks, because RSSI signal values between nodes are inversely proportional to the square of their distances, WSN node localization can, according to this property and the definition of Voronoi diagrams, be described as follows. 1. Let P_1, P_2, P_3, ⋯, P_n be the beacon nodes in the wireless sensor network area and S be the node to be located. 2. Suppose beacon nodes P_1, P_2, P_3, ⋯, P_j communicate with the node S, and the node S receives the RSSI signal strengths of the beacon nodes ordered by magnitude as RSSI_P1 > RSSI_P2 > ⋯ > RSSI_Pj. 3. According to the properties of Voronoi diagrams, the unknown node S is in the Voronoi region of beacon node P_1. We compute P_1's Voronoi region and add the weight value RSSI_P1 to all the nodes within this area. 4. Remove the node P_1, compute the Voronoi region of P_2, and assign the node weight value RSSI_P2. In this way, eventually, we obtain the Voronoi regions of all beacon nodes. 5. We find the region with the maximum weight value and take its centroid as the calculated coordinate of the unknown node S.

Positioning algorithm based on weighted Voronoi

In the Voronoi diagram algorithm, the Voronoi region of beacon node P_i is bounded by the mid-perpendiculars between this beacon node and the beacon nodes around it. In practice, however, for nodes with identical transmission characteristics, the greater the intensity of the RSSI signal a node receives, the smaller the distance between the nodes will be. Therefore, we can appropriately adjust the Voronoi region of a node through the signal value, improving the localization accuracy and reducing the computational complexity.

Algorithm basic ideas

We presume the node to be located, S, can receive the signal from beacon nodes P_1, P_2, P_3, ⋯, P_n. When S is at distances d_i, d_j (suppose d_i > d_j) from any two beacon nodes P_i, P_j, the node to be located S and the beacons P_i, P_j form a triangle SP_iP_j. Node S lies on the weighted bisector of P_iP_j, and we select this line as the region boundary between beacon nodes P_i and P_j. Then, we select beacons P_m, P_n and repeat the above method, finally obtaining a more accurate Voronoi region. Because of the assumption d_i > d_j, we know from the triangle SP_iP_j that ∠SP_jP_i > ∠SP_iP_j (the larger angle faces the longer side). For calculating the straight-line equation L of the weighted bisector, we need the slope k of L and the intersection coordinate P between P_iP_j and L. From the properties of perpendicular lines, the slope of L is the negative reciprocal of the slope of the bottom edge P_iP_j, that is, k_L = −1/k_{PiPj}. We use the following three cases to seek the intersection coordinate P(x_0, y_0):

1. ∠SP_iP_j is an acute angle (Figure 2). Using the cosine law, we first calculate the projections s_1 and s_2 of SP_i and SP_j onto the bottom edge; in this case the cosine law gives s_1 and s_2 both greater than zero. Then, we choose the proportionality coefficient l = s_2/s_1 relating |P_iP⃗| and |PP_j⃗|. Because s_1 and s_2 have been calculated and the positions of P_i and P_j are known, we can get the coordinate P(x_0, y_0). [Figure 1: Voronoi diagram.] Taking the k_L obtained above and P(x_0, y_0) into the equation y − y_0 = k(x − x_0), we obtain the equation of L.

2. ∠SP_iP_j is a right angle (Figure 3). The straight line L is P_iS in the triangle, so the slope of L is still k_L = −1/k_{PiPj}; the intersection of L and the bottom edge P_iP_j is P_i(x_i, y_i). Then, we can get the equation of the straight line L.

3.
∠SP_iP_j is an obtuse angle (Figure 4). We still calculate s_1 and s_2 as before. At this time, s_1 > 0 and s_2 < 0; the proportionality coefficient is then l = −s_2/s_1. In a similar way, we can get the coordinates P(x_0, y_0) and then the equation of L.

Algorithmic process

The localization algorithm based on weighted Voronoi diagrams works as follows: 1. The node to be located, S, broadcasts a Request message asking for location information. 2. All beacon nodes that receive the Request return a Reply message containing their own locations. 3. After node S receives all the information, the beacon nodes are sorted from big to small according to signal intensity. We assume that the sorted order of beacon nodes is P_1, P_2, P_3, ⋯, P_k. 4. The altitudes to the bottom edges of the triangles SP_1P_2, SP_2P_3, ⋯, SP_{k−1}P_k give the line equations L_1, L_2, ⋯, L_{k−1}. 5. Next, we compute the intersection Q_1 of L_1 and L_2, Q_2 of L_2 and L_3, ⋯, and Q_{k−2} of L_{k−2} and L_{k−1}. Then, we attach the RSSI signal values of P_1, P_2, ⋯, P_{k−2} to the nodes Q_1, Q_2, ⋯, Q_{k−2} as weight values. 6. Calculate the weighted average coordinate of nodes Q_1 through Q_{k−2} (see the sketch after this section).

4 Experiment and performance analysis of the positioning algorithm

In this section, we present a simulation comparison among the new algorithm, the weighted centroid algorithm (W-Centroid) and the Voronoi diagram-based localization scheme (VBLS) in MATLAB 7.0 (The MathWorks, Inc., Natick, MA, USA). There are 25 beacon nodes distributed randomly in a region of 100 m × 100 m, and the shadowing model P_r(d) = P_r(d_0) − 10β·log10(d/d_0) + X_dB is adopted to model the communication. In this equation, P_r(d_0) and d_0 represent the reference energy and reference distance, respectively, β represents the path-loss coefficient (typical values are 2 to 4), and X_dB is a zero-mean Gaussian variable.

Figure 5 describes the relation between the localization accuracy of the three algorithms and the communication radius. From Figure 5, we can see that the relative errors of the weighted Voronoi diagram-based localization scheme (W-VBLS) decrease gradually with increasing communication radius; when the communication radius is greater than 45 m, they are essentially level with the VBLS errors. As the communication radius increases, the number of beacon nodes involved in the localization increases, the unknown node gains more location information, and the localization errors decrease. Figure 6 depicts the relation between the localization accuracy of the three algorithms and the number of nodes. As can be seen from Figure 6, the localization accuracy improves gradually as the number of beacon nodes increases; when the number of beacon nodes reaches 25, the accuracy changes little. In order to locate a node, the VBLS localization algorithm must have at least four beacon nodes, while the W-VBLS algorithm needs only three. Therefore, when the beacon nodes are sparse, W-VBLS has significantly higher positioning accuracy than the other two algorithms. Figure 7 depicts the relationship between the positioning accuracy of the three algorithms and noise. W-Centroid uses the connectivity among nodes for positioning, while VBLS and W-VBLS position based on the size of the RSSI signal. As the noise increases, the W-Centroid positioning algorithm shows only small fluctuations, while VBLS and W-VBLS fluctuate more strongly with increasing noise.
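A compact end-to-end sketch of W-VBLS steps 1 to 6 follows. It is an illustrative reconstruction, not the authors' MATLAB code: the RSSI-to-distance inversion assumes the log-distance shadowing model with made-up parameters, the lines L_k are built through the foot of the altitude from S (s_1 from the cosine law, as in Section 3), and the RSSI weights are shifted to be positive, a detail the paper does not specify.

```python
import numpy as np

def rssi_to_distance(pr, pr_d0=-40.0, d0=1.0, beta=3.0):
    """Invert P_r(d) = P_r(d0) - 10*beta*log10(d/d0) (noise ignored)."""
    return d0 * 10 ** ((pr_d0 - pr) / (10 * beta))

def foot_line(Pi, Pj, di, dj):
    """Line through the foot of the altitude from S in triangle S-Pi-Pj,
    perpendicular to PiPj; returned as (point, direction)."""
    L = np.linalg.norm(Pj - Pi)
    s1 = (di**2 + L**2 - dj**2) / (2 * L)   # cosine law: d_i*cos(angle at P_i)
    P0 = Pi + (s1 / L) * (Pj - Pi)
    t = (Pj - Pi) / L
    return P0, np.array([-t[1], t[0]])      # perpendicular direction

def intersect(p1, u1, p2, u2):
    t = np.linalg.solve(np.column_stack([u1, -u2]), p2 - p1)
    return p1 + t[0] * u1

def w_vbls(beacons, rssi):
    order = np.argsort(rssi)[::-1]          # strongest beacon first
    P, d, w = beacons[order], rssi_to_distance(rssi[order]), rssi[order]
    lines = [foot_line(P[k], P[k + 1], d[k], d[k + 1]) for k in range(len(P) - 1)]
    Q = [intersect(*lines[k], *lines[k + 1]) for k in range(len(lines) - 1)]
    wq = w[:len(Q)] - w[:len(Q)].min() + 1.0   # RSSI weights, shifted positive
    return (np.array(Q) * wq[:, None]).sum(0) / wq.sum()

beacons = np.array([[0.0, 0.0], [80.0, 10.0], [40.0, 70.0], [90.0, 90.0]])
true_pos = np.array([35.0, 30.0])
rssi = -40.0 - 30.0 * np.log10(np.linalg.norm(beacons - true_pos, axis=1))
print(w_vbls(beacons, rssi))                # estimate close to true_pos
```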
Kinetics of rare events for non-Markovian stationary processes and application to polymer dynamics

How much time does it take for a fluctuating system, such as a polymer chain, to reach a target configuration that is rarely visited, typically because of a high energy cost? This question generally amounts to the determination of the first-passage time statistics to a target zone in phase space with lower occupation probability. Here, we present an analytical method to determine the mean first-passage time of a generic non-Markovian random walker to a rarely visited threshold, which goes beyond existing weak-noise theories. We apply our method to polymer systems, to determine (i) the first time for a flexible polymer to reach a large extension, and (ii) the first closure time of a stiff inextensible wormlike chain. Our results are in excellent agreement with numerical simulations and provide explicit asymptotic laws for the mean first-passage times to rarely visited configurations.

FIG. 1: What is the mean first time to reach a rare configuration? This Letter investigates this question in the case of (a) an attached flexible polymer, for which we compute the time that a large extension is reached, and (b) a stiff wormlike chain, for which we compute the time that the extremities get into contact.

The positions x_i (i is the bead index) follow from the force balance [38]

γ dx_i/dt = k(x_{i+1} - 2x_i + x_{i-1}) + f_i(t), (1)

where the thermal forces obey ⟨f_i(t) f_j(t')⟩ = 2 k_B T γ δ(t - t') δ_ij. We denote by l_0 = (k_B T/k)^{1/2} the typical bond length, and by τ_0 = γ/k the typical relaxation time of a single bond. The first monomer is fixed, x_1 = 0, and we study the mean time τ that the other polymer end r(t) = x_N(t) takes to reach a threshold value z [Fig. 1(a)]. The energy at fixed z is given by U = k z²/(2N), and we assume U ≫ k_B T, so that first-passage events to z are rare. Figure 4(a) shows the mean FPT obtained from the simulation results of Ref. [35] together with existing analytical approximations, for a fixed and relatively high value of the energy cost U ≈ 18 k_B T. Substantial disagreement, which increases with N, is found, be it for adiabatic approximations [35,52], effective one-dimensional descriptions [39,40], and even the rigorous weak-noise approach (T → 0 at fixed N, see Refs. [30,35] and SI [49]). This shows the necessity of taking into account the collective dynamics of all monomers to calculate the mean FPT, which is the main purpose of this work. In fact, the non-Markovian theory that we introduce in this paper shows an excellent agreement with simulations [Fig. 4(a)], which holds for a broad range of values of the energy barrier [Fig. 4(b)].

General expressions for the mean FPT.- We now consider the more general problem of the FPT of a stochastic (one-dimensional) variable r(t) to a rarely visited threshold z. We assume that r(t) is non-smooth [10], meaning that ⟨ṙ²⟩ = ∞, as is the case for overdamped processes. We denote by p(r, t) the probability density distribution of r at time t, starting from a given initial position r_0 that will be proved to be irrelevant. We also assume that r(t) is stationary at long times, p(r, t) → p_s(r) as t → ∞, where the stationary distribution p_s(r) is reached after a finite correlation time t_c. With these hypotheses, the following exact expression can be obtained [15]:

τ p_s(z) = ∫_0^∞ dt [p_π(z, t) - p(z, t)], (2)

where p_π(r, t) is the probability density of r at a time t after the first passage.
Now, in the case of targets that are only rarely visited, we stress the following key points: (i) as long as r_0 is not in the close vicinity of z, p(z, t) is exponentially small (with noise intensity) at all times, and (ii) the probability p_π(z, t) to revisit the target after a time t is exponentially small at long times, but finite at times that immediately follow a FPT event, when r is still close to z. The integral (2) is dominated by this short-time contribution, where p_π can be replaced by its value p_π^∞(z, t) obtained by considering the linearized dynamics around the target point. Hence, the mean FPT to a rare configuration is asymptotically (rare event limit) given by

τ p_s(z) ≃ ∫_0^∞ dt p_π^∞(z, t). (3)

Since p_π^∞(z, t) is a return probability for a particle submitted to a constant force in infinite space, it vanishes fast enough at long times so that the expression (3) is defined without any ambiguity. Note that for rare events the initial distribution of r has typically been forgotten long before the FPT, and thus does not influence τ. The above equation suggests a two-step strategy to obtain τ. The first step consists in characterizing the static quantity p_s(z); for equilibrium systems one obtains p_s(z) ∝ e^{-U(z)/k_B T}, and in particular τ follows an Arrhenius-like law [54]. The second step consists in analyzing the dynamics of r(t) in the vicinity of the target z to deduce p_π^∞(z, t). To proceed further, we assume that the dynamics of r(t) near z is Gaussian, which is valid in the vicinity of the most probable configuration. We denote by m_s(t) and ψ(t), respectively, the mean and the variance of r(0) - r(t) when the initial state is the stationary distribution conditional to r(0) = z. We adapt the theory of Ref. [15] (restricted to unbiased dynamics), based on the hypothesis that the trajectories followed by the random walker in the future of the FPT display Gaussian statistics.

[Figure caption: (a) Mean FPT at fixed energy cost U = 18.4 k_B T (threshold set by √(3N)). Symbols: simulations of Ref. [35]. The curves correspond, from top to bottom, to the Milner-McLeish reptation theory [39,40] (upper dashed line), the Minimal Action Path method [35], the pseudo-Markovian (Wilemski-Fixman [52]) approximation, the non-Markovian theory (this work, black thick line), asymptotic expansions of the non-Markovian theory [dashed blue line, Eq. (9), this work], and the weak-noise result T → 0 at fixed N [30,35]. Details on all theories can be found in SI [49]. (b) Mean FPT in rescaled variables, with supplementary simulation data of Ref. [35] (symbols); lines share the same color code as in (a). (c) Rescaled average trajectory µ(t) in the future of the FPT for a scale-invariant process with H = 1/4; the dashed red line would be the future trajectory obtained by assuming equilibrium at the initial time.]

Defining the average future trajectory as ⟨r(t + FPT)⟩ = z - µ(t) and approximating the variance in the future of the FPT by ψ(t), we can write the so-far unknown quantity p_π^∞(z, t) as

p_π^∞(z, t) = (2πψ(t))^{-1/2} exp[-µ(t)²/(2ψ(t))]. (4)

The average future trajectory µ(t) itself satisfies a self-consistent integral equation, Eq. (5) (see SI [49]). We note that our theory holds for general non-equilibrium systems. Here we focus on equilibrium ones, in which case the fluctuation-dissipation theorem imposes

m_s(t) = F ψ(t)/(2 k_B T), (6)

where F = -∂_z U(z). For the Markovian (diffusive) case with ψ(t) ∝ t, there is an obvious solution µ(t) = m_s(t).
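As a numerical sanity check of Eqs. (3), (4) and (6), the sketch below evaluates the mean FPT in the pseudo-Markovian (Wilemski-Fixman) approximation µ(t) = m_s(t), for a biased subdiffusion ψ(t) = κ t^{2H} with H = 1/4 (the case used for the flexible chain below). In reduced units it recovers the prefactor A^WF_{1/4} = 16 quoted further down in the text; the variable substitution is only a numerical convenience.

```python
import numpy as np
from scipy.integrate import quad

# Reduced units: k_B*T = F = kappa = 1; psi(t) = t**(2H) with H = 1/4.
# Eq. (6): m_s(t) = F * psi(t) / (2 k_B T); Eq. (4) with mu = m_s (WF).
def integrand(u):
    # Substitution u = sqrt(t), dt = 2u du, so psi = t**0.5 = u.
    psi = u
    m_s = 0.5 * psi
    p_return = np.exp(-m_s**2 / (2.0 * psi)) / np.sqrt(2.0 * np.pi * psi)
    return 2.0 * u * p_return

tau_ps, _ = quad(integrand, 0.0, np.inf)   # Eq. (3): tau * p_s(z)
print(tau_ps)                              # -> 16.0 = A_WF(1/4)
```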
For non-Markovian variables, this relation does not hold, and the future trajectory µ(t) reflects the state of the non-reactive degrees of freedom at the FPT. Finally, Eqs. (3), (4), (5) fully define the mean FPT to a rare configuration for general non-Markovian processes that are locally Gaussian. In the case of a biased anomalous dynamics with ψ(t) = κ t^{2H}, where 0 < H < 1, Eq. (5) predicts that µ takes the scaling form µ(t) = (k_B T/|F|) f(t/t*), with the crossover time t* set by κ t*^{2H} = (k_B T/|F|)², and the mean FPT reads

τ p_s(z) = A_H κ^{-1/(2H)} (k_B T/|F|)^{1/H - 1}. (8)

This formula provides an explicit asymptotic relation for the mean FPT as a function of the subdiffusion coefficient κ, the local force F, and the temperature k_B T; A_H depends only on H (f is defined in SI [49]). Of note, this result (8) is consistent with the scaling proposed in Ref. [56] for processes that are Gaussian (not only locally). In addition, it agrees with the more recent derivation of the prefactor for this scaling based on a perturbative scheme [57] in ε ≡ H - 1/2 (see SI [49]). We now discuss applications of these general results.

Application to the kinetics of large extension for a flexible chain.- Let us come back to the above example of an attached flexible chain. It is well known that the dynamics of the chain end is either diffusive, ψ(t) = 2 D_0 t for t ≪ τ_0, or subdiffusive, ψ(t) = κ t^{1/2} with κ = 4 k_B T/(πγk)^{1/2}, for τ_0 ≪ t ≪ t_c, where t_c = N² τ_0 is the correlation time. The mean FPT is controlled either by the short-time diffusive regime (H = 1/2) or by the intermediate subdiffusive regime (H = 1/4), for which Eq. (8) gives, at leading order,

τ p_s(z) ≃ A_{1/4} κ^{-2} (k_B T/F)³, (9)

with an (asymptotically exact) next-to-leading-order correction in the large-z limit (which coincides with the weakly non-Markovian limit) given in SI [49]. This expression incorporates non-Markovian effects that were neglected in Ref. [35]. Here we have used the value A_{1/4} = 2.0, which we obtained by numerically solving Eq. (5). This value is about 8 times smaller than in the pseudo-Markovian approximation (where µ ≃ m_s, which leads to the same expression with A^WF_{1/4} = 16). Here, the memory effects amount to nearly one order of magnitude for the mean FPT and are thus strong. This originates from the qualitative difference between the short-time behaviour of the trajectory after the first passage, µ(t) ~ t^{1/4}, and that of m_s(t) ~ t^{1/2} (which follows the stationary state with r = z) [Fig. 4(c)]. At short times, µ(t) can therefore be infinitely larger than m_s(t), which means that local equilibrium assumptions are inaccurate in this situation. All data for the mean FPT can be collapsed onto a single master curve depending only on z/(l_0 √N), with asymptotics given by Eq. (9). This is done in Fig. 4(b), where we see that the simulation data closely follow (but are slightly larger than) our theoretical predictions. Finally, our theory provides an accurate description of the kinetics with which a flexible polymer reaches a large extension.

The closure time of a stiff wormlike chain.- We now consider a thin inextensible elastic rod with bending rigidity κ_b. In the stiff limit, where the persistence length l_p = κ_b/(k_B T) is much larger than the contour length L, closure events are rare, since they require overcoming a large bending energy barrier. Here we calculate the closure time τ, defined as the mean time for the end-to-end distance r to reach a value a ≪ L. We assume the dynamics to be described by the resistive force theory, in which viscous forces apply locally on the filament with friction coefficients per unit length ζ_⊥, ζ_∥ (respectively in the perpendicular and parallel directions) [58,59].
We furthermore assume that no force and no torque are exerted at the chain ends. Determining the closure time [Eq. (3)] first requires calculating p_s(r), an equilibrium (static) statistical mechanics problem that has been studied at length by a variety of analytical and numerical methods [60-64]. One also needs to characterize the dynamics at the early times following a closure event. Such dynamics necessarily occurs in the vicinity of the closed configurations of minimal bending energy. Of note, lateral fluctuations are of order ℓ_⊥(t) ∝ t^{1/4} [59,65], which is small at short times. This key remark implies that the essential part of the dynamics after closure takes place near the extremities, where the chain can be considered as close to a straight rod. We can then calculate analytically the evolution of the end-to-end vector when the initial conditions are closed equilibrium configurations. Characterizing this dynamics in the reference frame {e_i} defined by the configuration at closure [see Fig. 1(b)] as r_ee(t) = r_ee(0) + Σ_{i=1}^{3} X_i(t) e_i, we obtain (see SI [49])

⟨X_1(t)⟩ = [F/(2 k_B T)] Var(X_1(t)), Var(X_i(t)) ∝ t^{3/4} (i = 1, 2, 3), (10)

where the anisotropic amplitudes of the variances involve α, half the opening angle of the most probable closed configurations (Fig. 1), and the force is F = 21.55 κ_b/L² = -U'(0), with U(a) the energy cost to form a closed configuration. The stationary dynamics around a closed configuration is thus a three-dimensional biased anisotropic subdiffusion. Note that ⟨X_1(t)⟩ and Var(X_1) are again linked by the ratio F/(2 k_B T), which is consistent with the fluctuation-dissipation theorem. A first estimate of the closure time can be obtained by assuming p_π(t) ≃ p(a, t | a, 0) (pseudo-Markovian approximation). This can readily be calculated from the Gaussian dynamics specified by Eq. (10), leading to an expression (11) for τ p_s(a) in terms of a scaling function Φ, calculated in SI [49] and represented in Fig. 3 (black line). This figure also displays the simulation data of Ref. [48], which collapse as in Eq. (11) onto a curve that is close to Φ for small arguments. We stress that there is no fitting parameter in the theory. However, there is a difference of a factor of about two between theory and numerics for larger capture radius, suggesting that non-Markovian effects are significant in the regime a ≫ L²/l_p, which we investigate now (while still keeping the small capture radius condition a ≪ L). In this case, the dynamics needs to be characterized only at time scales where the return probability is not exponentially small, i.e., such that ⟨X_1(t)⟩² is smaller than ⟨X_1²(t)⟩. On these time scales, X_1 ~ L²/l_p is still much smaller than a. This implies that the end-to-end distance is approximated at linear order as r = [(a + X_1)² + X_2² + X_3²]^{1/2} ≃ a + X_1 and is thus equivalent to a one-dimensional Gaussian variable. The mean closure time can be obtained by applying the formalism presented above with H = 3/8. We obtain

τ p_s(a) ≃ 0.0023 ζ_⊥ L^{10/3} (k_B T)^{1/3}/κ_b^{4/3}. (12)

Here the value of the prefactor was obtained with A_{3/8} = 2.1, which is 1.6 times smaller than its estimate in the pseudo-Markovian (Wilemski-Fixman) approximation, A^WF_{3/8} = 3.39. This explains why the pseudo-Markovian theory overestimates the simulation data. In the opposite limit a ≪ L²/l_p, the pseudo-Markovian expression (11) reduces to a force-free form [Eq. (D31) in SI], and it can be shown that this result can be found by setting F = 0, i.e., by analyzing a symmetric anisotropic three-dimensional subdiffusive walk. In a recent work [43] on a similar (but isotropic) subdiffusive process, it was shown that memory effects led to a slight reduction (15%) of the mean FPT.
We expect a similar reduction for the mean closure time, as confirmed by the comparison with numerical simulations in Fig. 3.

Conclusion.- In this Letter, we have introduced theoretical tools to determine the mean FPT to rarely visited configurations for generic non-Markovian processes. We have derived explicit asymptotic expressions for the closure kinetics of a stiff wormlike chain, and for the mean FPT to a large extension of a flexible chain. As demonstrated by the example of wormlike chain closure, the dynamics needs to be Gaussian only locally (in the vicinity of the target) for our theory to apply. This approach shows quantitatively the importance of memory effects on mean FPTs, and thereby significantly improves existing theories, whether based on a weak-noise limit, a mapping onto one-dimensional problems, or pseudo-Markovian (adiabatic) approximations. Our approach is not limited to polymers, and can apply to generic complex physical systems where the dynamics of a reaction coordinate is coupled to many other degrees of freedom.

Supplemental Material

This Supplemental Material presents details of calculations supporting the main text. We provide

• details on the equations of the non-Markovian theory for rare first passage times (Appendix A),
• details on asymptotically exact results for weakly non-Markovian processes (Appendix B),
• details on the first passage problem of the flexible chain, and on how existing results in the literature can be understood within our formalism, leading to the curves presented in Fig. 2 (Appendix C),
• calculation details for the problem of wormlike chain closure (Appendix D).

Appendix A: Non-Markovian theory for first passage times in the rare event limit

General non-Markovian theory

We consider a general one-dimensional stochastic variable r(t) evolving in continuous time t. We denote by p(r, t) the probability density distribution of r at time t. We assume that r(t) is non-smooth (this means that ⟨ṙ²⟩ = ∞: the stochastic trajectories are not differentiable, as is the case for the overdamped processes we have in mind). We also assume that r(t) is stationary at long times, p(r, t) → p_s(r) as t → ∞, where the stationary distribution p_s(r) is reached after a finite correlation time t_c. We aim to calculate the statistics of the first time that r(t) reaches a threshold z, which can be reached only at a high energy cost. We start by writing the general renewal equation

p(z, t) = ∫_0^t dτ f(τ) p(z, t | FPT = τ), (A1)

where f(τ) is the first passage time (FPT) distribution and p(z, t | FPT = τ) dr is the probability of observing r ∈ [z, z + dr] at t given that the FPT is τ. We subtract p_s(z) from both sides of Eq. (A1) and integrate over time; all the resulting integrals exist, since the propagators converge exponentially fast towards p_s due to the hypothesis of a finite correlation time. Rewriting the result through a chain of identities (each line of which is well defined because of the convergence of the propagators towards p_s(z)), using the definition of p_π as the probability density of r at a time t after the FPT, and noting that the mean first passage time τ enters through the normalization of f, Eq. (A2) becomes

τ p_s(z) = ∫_0^∞ dt [p_π(z, t) - p(z, t)]. (A5)

This exact relation [15] is the starting point of our analysis, for any non-smooth stochastic process that becomes stationary at long times.
Now, in the case of targets that are only rarely visited, we note that the probability density p_π(z, τ) of revisiting the target after a time τ is largest at the times that immediately follow the FPT events (when r is still close to z) and becomes exponentially small (with k_B T) at larger times. Furthermore, if the starting point r_0 is not too close to the target, the probability density p(z, t) of reaching the target, starting from the initial state, will be exponentially small at all times. These remarks suggest that we can evaluate the integrals in Eq. (A5) by replacing p_π by its value calculated by linearizing the dynamics around the most probable state, in which case we denote it by p_π^∞, and that we can neglect the second term p(z, t), leading to

τ p_s(z) ≃ ∫_0^∞ dt p_π^∞(z, t), (A6)

which gives the mean FPT in the limit of rare events. The fact that the mean FPT does not depend on the initial distribution p_0(r) is consistent with the intuition that the mean FPT to a rare configuration is in general much longer than the correlation time, so that the initial distribution has typically been forgotten long before the FPT. Note that when the dynamics is linearized around z, r(t) is equivalent to a particle submitted to a constant force in infinite space, so that p_π^∞(z, t) vanishes at large times, and the integral (A6) is defined without ambiguity. The rare event limit is similar to the large volume approximation for symmetric (non-compact) random walks in confinement [8,15]. Physically, one should include an upper cutoff in the integral (A6) of the order of the correlation time, beyond which the approximation of linearized dynamics no longer holds. However, since at such times p_π is already exponentially small, there is no need to introduce this cutoff explicitly. Note also that, at long times, both terms p_π(z, t) and p(z, t) approach the stationary probability p_s(z) and compensate each other, which is a supplementary argument for taking only short times into account in Eq. (A6). This fact can be illustrated by considering the simple case where r(t) is a one-dimensional Brownian motion, with diffusion coefficient D and friction coefficient γ = k_B T/D, in a potential U(r). Since this process is memoryless, p_π^∞(t) is directly given by the propagator p^∞(z, t | z, 0) calculated by considering the linearized dynamics around z, for which

p^∞(z, t | z, 0) = (4πDt)^{-1/2} exp[-F² t/(4 γ k_B T)], (A7)

which presents an exponentially fast decay when t → ∞. Here, F = -∂_r U|_{r=z} is the local potential slope. Inserting this value into Eq. (A6) leads to

τ p_s(z) ≃ γ/|F| (1D diffusive particle in a potential). (A8)

This expression is consistent with existing results for this Markovian case, which was studied early on by Kramers [29]; see Ref. [19].
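The 1D example of Eqs. (A7)-(A8) is easy to test numerically. The sketch below simulates an overdamped particle in a harmonic potential U(r) = r²/2 (so γ = D = k_B T = 1) started from equilibrium and compares the measured mean FPT with the rare-event prediction τ ≃ γ/(|F| p_s(z)); the threshold, time step, and ensemble size are arbitrary choices, and the agreement improves as the barrier U(z) grows.

```python
import numpy as np

rng = np.random.default_rng(1)
dt, z, n_walkers = 2e-3, 3.0, 300        # U(z) = 4.5 k_B T: moderately rare

# Overdamped Langevin dynamics in U(r) = r^2/2, equilibrium start p_s = N(0,1)
r = rng.normal(0.0, 1.0, n_walkers)
t = np.zeros(n_walkers)
alive = np.ones(n_walkers, bool)
while alive.any():
    n = int(alive.sum())
    r[alive] += -r[alive] * dt + np.sqrt(2.0 * dt) * rng.normal(size=n)
    t[alive] += dt
    alive &= (r < z)                     # freeze walkers that crossed z

p_s = np.exp(-z**2 / 2.0) / np.sqrt(2.0 * np.pi)   # stationary density at z
tau_pred = 1.0 / (z * p_s)               # Eq. (A8): (gamma/|F|) / p_s, F = -z
print(t.mean(), tau_pred)                # simulated vs. asymptotic mean FPT
```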
Coming back to the non-Markovian case, we are left to evaluate the probability of returning to the target at a time t after the FPT. To characterize it, we adopt the strategy of Ref. [15], where it was shown that memory effects can be taken into account by characterizing the trajectories in the future of the FPT. We write the generalization of (A1) for two points, for any fixed t_1 > 0, in terms of p(z, t; r_1, t + t_1), the joint probability density of r = z at time t and r = r_1 at time t + t_1; at large times this quantity does not depend on t and thus defines the stationary probability p_s(z, 0; r_1, t_1) of observing z at an initial time and r_1 after a subsequent time t_1. Taking the Laplace transform of this two-point equation (A9) and considering small values of the Laplace variable leads, in the limit of rare events and under the same assumptions as in Eq. (A6), to

τ p_s(z, 0; r_1, t_1) = ∫_0^∞ dτ p_π^∞(z, τ; r_1, τ + t_1). (A10)

Integrating over r_1 leads to a relation that generalizes Eq. (A6). Here, µ(τ + t_1 | τ) is the average of z - r(τ + t_1 + FPT) given that r(τ + FPT) = z. We have also defined m_s(t) so that ⟨r(t)⟩_{r(0)=z} = z - m_s(t), where ⟨···⟩_{r(0)=z} is the equilibrium average conditional on r(0) = z. Note that this equation is still exact in the rare event limit for any non-smooth stochastic process. Let us now call ψ(t) the mean squared displacement (MSD) function, i.e., ψ(t) is the variance of r(t) - r(0) conditional on r(0) = z. An explicit equation that defines µ(t) in a self-consistent way can be found by assuming that the distribution of paths p_π in the future of the FPT is Gaussian, and that its covariance in the future of the FPT is the stationary covariance [Eq. (A13)]. It is very important to note that µ(t) ≠ m_s(t), because the configurations of the monomers are not at equilibrium at the FPT instant. One exception is the case ψ(t) ∝ t (locally diffusive process), for which µ(t) = m_s(t) is a trivial solution of Eq. (A13). The mean FPT can be found from Eq. (A6) in our Gaussian closure approximation as

τ p_s(z) = ∫_0^∞ dt (2πψ(t))^{-1/2} exp[-µ(t)²/(2ψ(t))]. (A14)

Finally, in the case of equilibrium systems, when the dynamics of r in the vicinity of z is Gaussian, m_s(t) is linked to the MSD function by

m_s(t) = F ψ(t)/(2 k_B T). (A15)

This can be shown by describing the dynamics with a Generalized Langevin Equation in which the force applied on the particle is F = -U'(z),

∫_0^t dt' K(t - t') ṙ(t') = F + g(t), (A16)

where ṙ(t) is the velocity, K a memory (friction) kernel, g a Gaussian colored noise, and inertia has been neglected. Assuming the starting position to be equal to zero, this equation becomes in Laplace space

s K̃(s) r̃(s) = F/s + g̃(s), (A17)

so that the average trajectory reads, in Laplace space, ⟨r̃(s)⟩ = F/[s² K̃(s)]. In turn, the covariance of the trajectories can be expressed in Laplace space through K̃, and using Eq. (A16) together with the fluctuation-dissipation relation satisfied by g, one obtains the Laplace transform of Eq. (A15) by considering a stationary covariance σ(t, t') of the appropriate form. Eq. (A15) is thus a consequence of the fluctuation-dissipation theorem.

Scale invariant processes and Pickands' constants

Let us now illustrate our theory in the case where the local dynamics is a biased anomalous diffusion, with ψ(t) = κ t^{2H}, H being the Hurst exponent, 0 < H < 1. In this case, Eq. (A13) predicts that µ takes the scaling form given in the main text, and the mean FPT reads

τ p_s(z) = A_H κ^{-1/(2H)} (βF)^{-(1/H - 1)}, (A26)

providing an explicit asymptotic relation for the mean FPT as a function of the subdiffusion coefficient κ, the local force F, and the inverse temperature β. A_H is a constant that depends only on H. In the mathematical literature [56], the case where r(t) is a one-dimensional Gaussian process at all times (not only in the vicinity of the target) has been analyzed in the rare event limit. We find that, for this subclass of processes, our expression (A26) is compatible with the analysis of Ref. [56] upon an identification of A_H involving the so-called Pickands' constants H_α, which depend only on α = 2H. Our theory suggests that Pickands' constants could be used to characterize the FPT kinetics of non-Gaussian processes. Our theory also provides approximate expressions for these constants, which are difficult to estimate numerically. Here we note that our theory reproduces exactly the recent exact perturbative results of Delorme et al. [57] (see next section).

Appendix B: Perturbation theory for weakly non-Markovian processes

Exactness of the theory at first order

Here we give an argument suggesting that our theory is exact at first non-trivial order for weakly non-Markovian processes. We consider the generalization of Eq. (A1)
to any path y(τ) starting at y(0) = 0; the corresponding relation is exact in the rare event limit. Here the target position is z = 0, P_stat([y(τ)] | y(0) = 0) is the stationary probability of following the path [y(τ)] conditional on y(0) = 0, and Π([y(τ)], t) is the probability of following the path [y] in the future t of the FPT (i.e., the probability that x(FPT + τ + t) = y(τ) for all τ > 0). Let us define the functional F([k(τ)]), which should vanish for all [k(τ)], and evaluate it in the case where Π is a Gaussian distribution of mean µ(t) and covariance γ(t, t'). In this case, using formulas of Gaussian integration, we obtain an expression in which the stationary paths conditional on y(0) = 0 are assumed to have mean m_s(t) and covariance σ_s(t, t'), and in which two quantities A_π and B_π appear. Following an equilibrium condition, the stationary covariance and the mean satisfy the property (A15). Consider the case of weakly non-Markovian processes, for which

ψ(t) = t + ε ψ_1(t), (B5)

where ε → 0 is a small parameter, ψ_1(t) is the first-order deviation of the MSD with respect to the diffusive (Markovian) case, and we have chosen our units so that the diffusion coefficient for ε = 0 is equal to 1/2. We also set the length scale so that k_B T/F = 1. We now show that the functional F([k(τ)]) vanishes for all functions [k(τ)] at first order in ε if µ(t) and γ(t, t') admit the expansions

µ(t) = m_s(t) + ε µ_1(t), γ(t, t') = σ_s(t, t') + ε γ_1(t, t'), (B6)

with appropriately chosen functions µ_1(t) and γ_1(t, t'). Note that µ_1(t) is the first-order deviation with respect to m_s(t). Indeed, the functional F([k(τ)]) vanishes at order ε⁰, and its expansion at order ε contains terms quadratic and linear in k. The quadratic terms in k vanish if one imposes the covariance of the paths in the future of the FPT to be equal to the stationary covariance, γ_1 = σ_1. Using this result, the term linear in k in Eq. (B8) also vanishes if µ_1 satisfies an integral equation, Eq. (B10). This means that our theory becomes exact at order ε if we impose µ_1 to be the solution of Eq. (B10).

Explicit expressions at first order

We now derive the explicit solution for µ_1 (and thus for the mean FPT) for weakly non-Markovian processes. We take the derivative of Eq. (B10) with respect to τ; the formal solution of the resulting equation can be written in Laplace space [51], where K̃(s) = π/(s + 1/8) is the Laplace transform of K(t). We then use the Mellin-Bromwich formula and the definition of the Laplace transform to express µ_1 as a double integral, where we have integrated once over t and used µ_1(0) = 0. Changing the order of integration and changing ω → -ω, the result can be written in terms of the Heaviside step function Θ and of W, the inverse Laplace transform of W̃(s) = [s K̃(s)]^{-1}. Noting that W tends exponentially fast to a constant W(x → ∞) = W_∞ = 1/√(8π), and taking care not to split the integral (B15) into separately divergent parts, we obtain Eq. (B17). Using Eq. (B11) leads to Eq. (B18). We then integrate by parts over x (taking care to choose primitives that vanish at x = 0, so that the integrals exist). The reactive trajectory µ_1 can thus be obtained as integrals involving the first-order MSD ψ_1(t) and simple functions. This explicit expression of µ_1(t) can now be used to obtain an expansion of the mean FPT via the formula (B20).

Applications of the perturbation theory

Application: Pickands' constants at first order.- In the case of biased subdiffusion, the MSD is ψ(t) = t^{2H}, with H = 1/2 + ε, so that the function ψ_1 is readily identified to be

ψ_1(t) = 2 t ln t. (B21)

Inserting this expression into Eqs.
(B19, B20), we see that the mean FPT can be obtained by evaluating double and triple integrals. In this case, the normalized mean FPT is called A_H in the main text, and its first-order expression involves the Euler-Mascheroni constant γ_e. This result is compatible at first order in ε with the exact result of Ref. [57] when we identify A_H = 2^{1/(2H)}/H_{2H}.

Large deviation of the flexible chain for very large extensions.- A second application of the perturbation theory is the large deviation of the Rouse chain in the large force limit, i.e. F = kz/N ≫ k_B T/l_0. In this limit, only the short times of the MSD function are relevant, which can be seen by defining the relevant length scale L_s = k_B T/F and time scale t_s = L_s²/(2D): the rescaled MSD (C6) becomes diffusive at leading order, with first correction ψ_1(t) = -t²/4. Using 1/F² as our small parameter, we obtain from Eqs. (B19, B20) the first-order correction to µ, and, reestablishing homogeneity, an explicit expression of the mean FPT which includes exact non-Markovian effects at leading order.

Appendix C: The time to reach a large extension for a Gaussian flexible chain

Characterization of the dynamics

We now present calculation details for the first passage problem of the flexible phantom chain model. We recall the equations for the dynamics of a chain of N monomers with positions x_i(t) at time t. In the bead-spring model, with k the bond stiffness and γ the friction coefficient on each monomer,

γ dx_i/dt = k(x_{i+1} - 2x_i + x_{i-1}) + f_i(t), (C1)

where f_i(t) stands for a centered Gaussian white noise satisfying ⟨f_i(t) f_j(t')⟩ = 2 k_B T γ δ_ij δ(t - t'). We define τ_0 = γ/k and l_0 = (k_B T/k)^{1/2}, which are respectively the typical relaxation time and the typical length of one bond. In the following we use time and length units such that τ_0 = 1 = l_0. Note that in the literature the typical unit length is the three-dimensional bond length b = √3 l_0 [35]. Here we aim to calculate the average first time τ at which the last monomer x_N reaches the threshold value z. We assume that z ≫ √N, so that reaching z is a rare event. Following a scaling argument by De Gennes, we note that the motion of the end monomer at time t involves a number n(t) ∝ √t of monomers (this can be seen by considering the non-noisy terms of Eq. (C1) as a diffusion equation). Hence, after a fully extended configuration, in the rare event limit, the number of monomers involved in the dynamics of x_N(t) is small compared to N, and we can consider the chain to be infinite (but discrete). We now set a new index m = N - n starting at zero for the free extremity, with y_m = z - x_{N-m}, and we consider the average dynamics following an initial configuration which is at equilibrium conditional on x_N = z, for which x_n(0) = zn/N (i.e., y_m(0) = mz/N). In Laplace space, this dynamics reads as a linear recurrence in m whose characteristic polynomial is P(r) = r² - (s + 2) r + 1, with one root smaller than one and one root larger than one. Leaving aside the larger root (unphysical, because it would lead to infinite bond lengths at large m), and using the free-end condition y_1(t) = y_0(t), we obtain the Laplace-transformed dynamics of the end monomer. Taking the inverse Laplace transform leads to an expression for m_s(t) involving the modified Bessel functions of the first kind I_n; the previously found relation between m_s(t) and the MSD ψ(t) [Eq. (A15)] then yields ψ(t). We can check that ψ(t) ≃ 2 D_0 t at short times and ψ(t) ≃ κ t^{1/2} at larger times, in agreement with a diffusive behavior at short times and a subdiffusive behavior at larger ones, where the value of the transport coefficient agrees with that of a monomer attached to a semi-infinite chain, see e.g. Ref. [53].
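The crossover just stated, from ψ(t) ≃ 2D_0 t to ψ(t) ≃ κ t^{1/2} with κ = 4 k_B T/(πγk)^{1/2}, can be checked by direct simulation of the bead-spring equation (C1). The sketch below uses reduced units (τ_0 = l_0 = k_B T = 1); the chain length, ensemble size, and observation times are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(2)
N, M, dt = 64, 400, 0.02     # beads per chain, number of chains, time step

# Equilibrium start: i.i.d. Gaussian bonds, first monomer pinned at x_1 = 0
bonds = rng.normal(size=(M, N - 1))
x = np.concatenate([np.zeros((M, 1)), np.cumsum(bonds, axis=1)], axis=1)
x0 = x[:, -1].copy()
targets, msd = [0.1, 1.0, 10.0, 100.0], []

t = 0.0
while targets:
    force = np.zeros_like(x)
    force[:, 1:-1] = x[:, 2:] - 2 * x[:, 1:-1] + x[:, :-2]  # spring forces
    force[:, -1] = x[:, -2] - x[:, -1]                      # free end
    noise = np.sqrt(2 * dt) * rng.normal(size=x.shape)
    force[:, 0] = noise[:, 0] = 0.0                         # pinned monomer
    x += force * dt + noise
    t += dt
    if t >= targets[0] - 1e-9:
        msd.append(np.mean((x[:, -1] - x0) ** 2))
        targets.pop(0)

for tt, m in zip([0.1, 1.0, 10.0, 100.0], msd):
    print(tt, m, 2 * tt, 4 * np.sqrt(tt / np.pi))  # vs 2*D0*t, kappa*sqrt(t)
```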
Review of existing theories

Here we review existing theoretical approaches to determine the mean FPT, and we explain how we obtained the curves of Fig. 2 in the main text, which we reproduce here for clarity (Fig. 4). Note that most of these approaches are detailed in Ref. [35] and are mentioned here for completeness. Historically, the present FPT problem was considered by Milner and McLeish in their theory of reptation in star polymer melts [39,40]. There, they approximated the dynamics of the whole chain by an effective Markovian dynamics for an attached dimer, with an effective friction coefficient γ_e. Once this approximation is made, the FPT problem can be solved exactly [1,11]; in the regime z ≫ √N, the result is given by Eq. (A8) with γ replaced by γ_e (C8). In this approach, the effective friction has to be chosen; the choice of Milner and McLeish [39,40] is γ_e = Nγ/2. Another important approach in reaction kinetics involving polymers is the Wilemski-Fixman pseudo-Markovian approximation, in which one assumes that all internal degrees of freedom are at equilibrium at the first passage. In the rare event limit, this can be implemented within our framework by setting µ(t) = m_s(t) in Eq. (A14). Another approach consists in calculating the fluctuations around minimal action paths [35]. Interestingly, this approach gives almost the same results as the Wilemski-Fixman approximation, meaning that both theories share similar hypotheses. Finally, in the weak noise limit (i.e., T → 0 at fixed γ, k, z and N), one can derive an explicit expression for the mean FPT as a function of the local first and second derivatives of the multidimensional potential U({x_i}) = Σ_i k(x_{i+1} - x_i)²/2 (with a status similar to that of Langer's theory for the passage through a multidimensional saddle point). For the Rouse chain, this expression was simplified by Cao et al. [35], who showed that one obtains Eq. (C8) with γ_e = γ. This result is exact in the limit of small noise, which here translates to z ≫ N l_0, while rare-event kinetics can be assumed as soon as z ≫ √N l_0. Note also that the same expression can be obtained by applying the general formula for the mean exit time from a non-characteristic domain [Eq. (10.117) in the book [30]] in the weak-noise limit.

Appendix D: Wormlike chain closure

We now characterize the average motion following a closure event for the WLC. Such motion is localized near the closing configurations of minimal bending energy. These configurations are planar; let us call φ their local orientation in this plane. They are obtained by minimizing a functional F[φ] in which F is the Lagrange multiplier associated with the constraint r_ee = a e_x (other Lagrange multipliers, for the directions y and z, could be included but would vanish at the end of the calculation). Minimizing F leads to an equation for the optimal shape φ*(s). The full function φ*(s) is needed to compute the equilibrium probability p_s(r), which has been characterized elsewhere. Here we focus on the first passage kinetics and will only need the behavior of φ* near the ends. Using ∂_s φ*|_{s=0} = 0, we obtain the expansion of φ* near the ends, where φ_e = φ*(0) = π/2 - α is the initial angle in the closed configuration. Now, we consider the dynamics near a chain extremity when the initial conditions are equilibrium closed configurations, first in the absence of noise. We quantify the lateral motion (with respect to the local orientation at the extremity at closure) by an ansatz that decomposes the displacement into a component σ(s, t) along u*(0) and a component r_⊥(s, t) along n*(0), where u*(0) is the orientation of the optimal configuration at the extremity and n*(0) a unit vector perpendicular to u*(0). Inserting this ansatz into Eq.
(D1) (with vanishing thermal forces) and projecting onto the direction u*(0), we obtain σ = 0. Since the tension vanishes at the ends, we conclude that it is negligible near the ends, for s > 0. The dynamics in the lateral direction then reads as in Eq. (D6). Denoting by φ_e the orientation of u*(0), we measure the small deviations of the orientation at later times by φ_1(s, t); the local orientation is thus the solution of Eq. (D9), with the initial and boundary conditions (D11). We take the Laplace transform of Eq. (D9) with respect to the time t, with p the Laplace variable [Eq. (D12)]. The solution of this equation which satisfies all boundary conditions (and does not diverge exponentially as s → ∞) is φ̃_1 given in (D13). The unit tangent vector u(s, t) reads

u(s, t) = cos φ e_1 + sin φ e_2 ≃ u*(0) - (sin φ_e) φ_1 e_1 + (cos φ_e) φ_1 e_2 + O(φ_1²). (D14)

Using u = ∂_s r, we obtain the evolution of x_1 = r · e_1. Using the argument that, at fixed t, the chain remains undeformed at large s, going to Laplace space and using (D13), and finally inverting the Laplace transform, we obtain the noiseless evolution of the chain end [Eq. (D18)].

Identification of the subdiffusion coefficient

Now we add the fluctuations, so that Eq. (D7) becomes a stochastic equation (D19) associated with the boundary conditions (D11). From the work presented in Ref. [43], we can extract the variance of the lateral displacement of the ends, where the notation Var(···) represents the variance. With X_1(t) = sin φ_e [r_1^⊥(L, t) - r_1^⊥(0, t)] and X_2(t) = cos φ_e [r_1^⊥(L, t) + r_1^⊥(0, t)], we see that X_1, X_2 and X_3 are independent, and that

Var(X_1(t)) = κ (sin φ_e)² t^{3/4}, Var(X_2(t)) = κ (cos φ_e)² t^{3/4}, Var(X_3(t)) = κ t^{3/4}, κ = 4√2 k_B T/[Γ(7/4) ζ_⊥^{3/4} κ_b^{1/4}]. (D21)

It is important to note that the relation ⟨X_1(t)⟩ = [F/(2 k_B T)] Var(X_1(t)) holds, which is consistent with the interpretation of F = -∂_r E*(r) as an effective force applied from t > 0. As a supplementary check, we may compare our Eq. (D21) to other results from the literature. The subdiffusion coefficient of an interior monomer is given in Ref. [37]. In Ref. [43] it was shown that for exterior monomers the coefficient has to be multiplied by 4 [see Eq. (29) in Ref. [43]], but doing so leads to a subdiffusion coefficient that is twice smaller than in Eq. (D21). However, we think that this discrepancy comes from a typo in Ref. [37]. Indeed, there the authors show that the displacement of an interior monomer at small times satisfies an integral expression in which a factor of two comes from evaluating the integral for s < s' only. Performing the integral leads to a result that is two times larger than the one indicated after Eq. (10) in Ref. [37]. Multiplying by 4 [43], we recover Eq. (D21), confirming the validity of our analysis.

2. FPT analysis in the regime a ≫ L²/l_p

In the limit a ≫ L²/l_p (while still keeping the small capture radius condition a ≪ L), we remark that the MSD becomes comparable to ⟨X_1(t)⟩² at length scales of the order of L²/l_p. Since we assume a ≫ L²/l_p, the X_i(t) are small compared to a at the relevant length scales, so that the end-to-end distance is r_ee = [(a + X_1)² + X_2² + X_3²]^{1/2} ≃ a + X_1. This means that we can consider the motion as one-dimensional along the direction e_1, and we can apply the formalism presented above with H = 3/8. We directly apply (A26), with subdiffusion coefficient κ (sin φ_e)², obtaining Eq. (D30) for τ p_s(a). Our best estimate of the constant A_H for H = 3/8 is A_{3/8} = 2.1, obtained by numerically solving (A25). This is about 1.6 times smaller than its estimate in the pseudo-Markovian (Wilemski-Fixman) approximation, A^WF_{3/8} = 3.39.
Here, we have defined λ = -∂_r̃ E*(r̃), where E* is the dimensionless minimal bending energy to form a loop of size r̃ = a/L (the true minimal bending energy is E_b = κ_b E*/L). Hence the above formula predicts the value of the kinetic prefactor as a function of the angle φ_e and the local derivative of the minimal bending energy; these quantities are well known from the analysis of equilibrium configurations. We represent in Fig. 5 the value of this kinetic prefactor as a function of a/L. We see that it does not vary much as long as a < 0.6L (above this value we cannot really speak of closure anyway). Hence, we may approximate it by its value for a ≪ L, while still a ≫ L²/l_p, for which we use λ = 21.55 and φ_e ≈ 2.281 (radians). We now derive the limiting behaviors of Φ. For small ã, the limiting value of Φ can be obtained by performing the change of variable X_i → X_i/a. In the opposite limit a ≪ L²/l_p, the closure time in the Wilemski-Fixman approximation is given by Eq. (D31).
Quantifying Uncertainty in Transdimensional Markov Chain Monte Carlo Using Discrete Markov Models

Bayesian analysis often concerns an evaluation of models with different dimensionality, as is necessary in, for example, model selection or mixture models. To facilitate this evaluation, transdimensional Markov chain Monte Carlo (MCMC) relies on sampling a discrete indexing variable to estimate the posterior model probabilities. However, little attention has been paid to the precision of these estimates. If only a few switches occur between the models in the transdimensional MCMC output, precision may be low and assessment based on the assumption of independent samples misleading. Here, we propose a new method to estimate the precision based on the observed transition matrix of the model-indexing variable. Assuming a first-order Markov model, the method samples from the posterior of the stationary distribution. This allows assessment of the uncertainty in the estimated posterior model probabilities, model ranks, and Bayes factors. Moreover, the method provides an estimate of the effective sample size of the MCMC output. In two model-selection examples, we show that the proposed approach provides a good assessment of the uncertainty associated with the estimated posterior model probabilities.

Introduction

Transdimensional Markov chain Monte Carlo (MCMC) methods provide an indispensable tool for the Bayesian analysis of models with varying dimensionality (Sisson, 2005). An important application is Bayesian model selection, where the aim is to estimate posterior model probabilities p(M_i | x) for a set of models M_i, i = 1, ..., I, given the data x (Kass and Raftery, 1995). In order to ensure that the Markov chain converges to the correct stationary distribution, transdimensional MCMC methods such as reversible jump MCMC (Green, 1995) or the product space approach (Carlin and Chib, 1995) match the dimensionality of the parameter spaces across different models (e.g., by adding parameters and link functions). Transdimensional MCMC methods have proven to be very useful for the analysis of many statistical models, including capture-recapture models (Arnold, Hayakawa, and Yip, 2010), generalized linear models (Forster, Gill, and Overstall, 2012), factor models (Lopes and West, 2004), and mixture models (Frühwirth-Schnatter, 2001), and are widely used in substantive applications such as the selection of phylogenetic trees (Opgen-Rhein, Fahrmeir, and Strimmer, 2005), gravitational wave detection in physics (Karnesis, 2014), or cognitive models in psychology (Lodewyckx et al., 2011).

Crucially, transdimensional MCMC methods always include a discrete parameter Z with values in 1, ..., I indexing the competing models. At iteration t = 1, ..., T, posterior samples are obtained for the indexing variable z^(t) and the model parameters, which are usually continuous and differ in dimensionality (for a review, see Sisson, 2005). For instance, a Gibbs sampling scheme can be adopted (Barker and Link, 2013), in which the indexing variable Z and the continuous model parameters are updated in alternating order. Such a sampler switches between models depending on the current values of the continuous parameters, and then updates these parameters in light of the current model M_i conditionally on the value z^(t) = i (Barker and Link, 2013). Given convergence of the MCMC chain, the sequence z^(t) follows a stationary distribution π = (π_1, ..., π_I).
Due to the design of the sampler, this distribution is identical to the posterior model probabilities of interest, π_i = p(M_i | x), and, given uniform model priors p(M_i) = 1/I, also proportional to the marginal likelihoods p(x | M_i). Hence, transdimensional MCMC samplers can be used to directly estimate these posterior probabilities as the relative frequencies of samples z^(t) falling into the I categories, π̂_i = (1/T) Σ_t I(z^(t) = i), where I(·) is the indicator function. Due to the ergodicity of the Markov chain, this estimator is ensured to be asymptotically unbiased (Green, 1995; Carlin and Chib, 1995).

Usually, dependencies due to MCMC sampling are taken into account for continuous parameters (Cowles and Carlin, 1996). In contrast, however, the estimate π̂ = (π̂_1, ..., π̂_I) based on the sequence of discrete samples z^(t) is usually reported without quantifying estimation uncertainty. Often, the samples z^(t) are correlated to a substantial, but unknown, degree because of infrequent switching between models. This is illustrated in Figure 1, which shows a sequence of independent and a sequence of correlated samples z^(t) in Panels A and B, respectively. Inference about the stationary distribution π should be more reliable in the first case than in the second, in which the autocorrelation reduces the amount of information available about π. Moreover, the standard error SE(π̂_i) = [π̂_i(1 - π̂_i)/T]^{1/2} that assumes independent sampling will obviously underestimate the true variability of the estimate π̂ (Green, 1995; Sisson, 2005). To obtain a measure of precision, Green (1995) proposed running several independent MCMC chains c = 1, ..., C and computing the standard error of the estimates π̂(c) across these independent replications. However, for complex models, this method might require a substantial amount of additional computing time for burn-in and adaptation and thus can be infeasible in practice.

Assessing the precision of the estimate π̂, which depends on the autocorrelation of the sequence of discrete samples z^(t), is of major importance. In the case of model selection, it must be ensured that the estimated posterior probabilities p(M_i | x) are sufficiently precise for drawing substantive conclusions. This issue is especially important when estimating a ratio of marginal probabilities, that is, the Bayes factor B_ij = p(x | M_i)/p(x | M_j) (Jeffreys, 1961). More generally, it is often of interest to compute the effective sample size, that is, the number of independent samples that would provide the same amount of information as the given MCMC output. Besides providing an intuitive measure of precision, a minimum effective sample size can serve as a principled and theoretically justified stopping rule for MCMC sampling (Gong and Flegal, 2016). Software for convergence diagnostics such as the R package coda (Plummer et al., 2006) estimates the effective sample size for continuous parameters by computing the spectral density at zero (Heidelberger and Welch, 1981).
In summary, transdimensional MCMC is a very important and popular method for Bayesian inference (Sisson, 2005). However, little attention has been paid to the analysis of the resulting MCMC output, which requires taking into account the autocorrelation as well as the discrete nature of the model indexing variable. As a solution, we propose to fit a discrete Markov model to the MCMC output z^(t) to assess the precision of the estimated stationary distribution π̂. Note that our approach differs from diagnostics previously proposed to monitor convergence of transdimensional MCMC methods. These diagnostics usually compare the variance of z^(t) and other continuous parameters across and within chains and models (Brooks and Giudici, 2000; Castelloe and Zimmerman, 2002), similar to the widely used potential scale reduction factor (Gelman and Rubin, 1992). Other methods estimate the convergence rate of transdimensional MCMC chains, rely on Kolmogorov-Smirnov or χ² tests (Brooks, Giudici, and Philippe, 2003), or use distance-based diagnostics (Sisson and Fan, 2007). However, none of these methods estimates the precision of the point estimate π̂.

Posterior Distribution of Model Probabilities

To estimate the fixed but unknown stationary distribution π (e.g., the posterior model probabilities), we explicitly model the sequence of observed samples as a random variable Z^(t) (t = 1, ..., T) by assuming that it has emerged from a discrete, homogeneous Markov chain M_Markov. For this purpose, we only consider the marginal distribution of the discrete indexing variable and define the transition matrix P with the switching probabilities p_ij = P(Z^(t+1) = j | Z^(t) = i) for all i, j = 1, ..., I. Accordingly, P is a probability matrix with non-negative entries and rows summing to one, i.e., Σ_{j=1}^{I} p_ij = 1. For this Markov model, the resulting probability distribution at iteration t is given by multiplying the transposed initial distribution π_0 by the transition matrix t times, P(Z^(t) = i) = [π_0' P^t]_i. Within the model M_Markov, the transition matrix P is a free parameter that is to be estimated based on the sampled sequence z^(t).

Due to the construction of the transdimensional MCMC sampler, Z^(t) has π as a stationary distribution. Therefore, the transition matrix P in the Markov model M_Markov satisfies the condition

π' P = π', (1)

which implies that the probability vector π is the left eigenvector (normalized to sum to one; Anderson and Goodman, 1957) of the matrix P with eigenvalue one. This eigenvector exists if the Markov chain is finite and irreducible (Stewart, 2009, Ch. 9). Hence, given an estimate of P, we can directly compute π̂ as the corresponding eigenvector.
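The eigenvector computation in Eq. (1) takes only a few lines; the following sketch, with an arbitrary example matrix, extracts the normalized left eigenvector of P with eigenvalue one:

```python
import numpy as np

# Stationary distribution of a transition matrix P via the left
# eigenvector with eigenvalue one (Eq. 1): pi' P = pi'.
P = np.array([[0.90, 0.08, 0.02],
              [0.50, 0.45, 0.05],
              [0.40, 0.10, 0.50]])
w, v = np.linalg.eig(P.T)                       # left eigenvectors of P
pi = np.real(v[:, np.argmin(np.abs(w - 1.0))])  # eigenvalue closest to one
pi /= pi.sum()                                  # normalize to sum to one
print(pi, pi @ P)                               # pi @ P reproduces pi
```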
However, we are less interested in a point estimate π̂ of the stationary distribution than in the precision of this estimate. For this purpose, the sequence z^(t) is summarized by the sufficient statistic N (Anderson and Goodman, 1957), the observed matrix of transition frequencies N = (n_ij), where n_ij is the number of switches from z^(t) = i to z^(t+1) = j. Conditional on the MCMC output, the posterior distribution of P can be approximated by drawing r = 1, ..., R samples

P^(r) ~ p(P | N). (3)

By computing the (normalized) eigenvectors with eigenvalue one (Eq. 1), posterior samples of the stationary distribution π are directly obtained. As a prior for the transition matrix P, we assume independent Dirichlet distributions with parameter ε ≥ 0 for each of the rows,

P_i· ~ D(ε, ..., ε). (4)

Therefore, independent posterior samples P^(r) can efficiently be drawn from the conjugate posterior distribution,

P_i·^(r) | N ~ D(n_i1 + ε, ..., n_iI + ε).

With regard to the prior parameter ε, small values should be chosen to reduce its influence on the estimation of P. Note that this choice becomes less influential as the number of MCMC samples increases (especially if the row sums of N are large). Here, we use the prior ε = 1/I, which has an impact equivalent to one observation for each row of the observed transition matrix N. This prior has proved to be numerically robust in the two examples in Sections 4 and 5, and resulted in point estimates close to the default i.i.d. estimates.

Alternatively, the improper prior ε = 0 can be used, which minimizes the impact of the prior on the estimated stationary distribution. Note that this prior also ensures that the results do not hinge on the set of models that could possibly be sampled but were never actually observed in the sequence z^(t). For such unsampled models, the corresponding rows and columns of the observed transition matrix N are filled with zeros. With ε = 0, the relevant eigenvector of the posterior P | N is thus identical to that of a reduced matrix that includes only the transitions for the subset of models sampled in z^(t). Moreover, if the set of competing models is large, N is likely to be a sparse matrix because many transitions will never occur. When choosing ε = 0, P^(r) will also be a sparse matrix when sampling from the conjugate Dirichlet distribution D(n_i1, ..., n_iI), which facilitates an efficient computation of the eigenvectors π^(r). However, in our simulations, this improper Dirichlet prior proved to be numerically unstable and resulted in more variable point estimates than the proper prior ε = 1/I.

As a third alternative, the prior can be adapted to the structure of specific transdimensional MCMC implementations which only implement switches to a small subset of the competing models. For instance, in variable selection, regression parameters are often added or removed one at a time, resulting in a birth-death process (Stephens, 2000). For these kinds of samplers, the Dirichlet parameters ε_ij can be set to zero selectively. However, such adjustments will depend on the chosen MCMC sampling scheme. Therefore, we propose the weakly informative prior ε = 1/I as a default, which provides a good compromise of being very general and numerically robust while having a small effect on the posterior.

Precision

Based on the posterior samples π^(r), it is straightforward to estimate the stationary distribution by the posterior mean π̂ (alternatively, the median or mode may be used). More importantly, however, estimation uncertainty due to the transdimensional MCMC method can directly be assessed by plotting the estimated posterior densities for each π_i. To quantify the precision of the estimate π̂, one can report posterior standard deviations or credibility intervals for the components π̂_i. Note that these component-wise summary statistics are most useful if the number of models I is relatively small. For very large numbers of sampled models, the assessment of estimation uncertainty can be focused on the subset of models with the highest posterior model probabilities.
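Putting Eqs. (1), (3), and (4) together, a minimal sketch of the proposed sampling scheme reads as follows; the final print statement reports exactly the component-wise summaries just discussed (posterior means and SDs). The reference implementation is the R package MCMCprecision mentioned below; the helper name and the synthetic input here are illustrative.

```python
import numpy as np

def stationary_posterior(z, n_models, n_draws=1000, eps=None, rng=None):
    """Posterior draws of the stationary distribution pi, given a sampled
    model-index sequence z with values in 0..n_models-1, assuming a
    first-order Markov chain with independent Dirichlet(eps) row priors."""
    rng = rng if rng is not None else np.random.default_rng()
    eps = 1.0 / n_models if eps is None else eps     # default prior 1/I
    N = np.zeros((n_models, n_models))
    np.add.at(N, (z[:-1], z[1:]), 1)                 # transition frequencies

    draws = np.empty((n_draws, n_models))
    for r in range(n_draws):
        P = np.vstack([rng.dirichlet(row + eps) for row in N])  # Eq. (4)
        w, v = np.linalg.eig(P.T)
        pi = np.real(v[:, np.argmax(np.real(w))])    # eigenvalue one (Eq. 1)
        draws[r] = pi / pi.sum()
    return draws

rng = np.random.default_rng(3)
z = rng.integers(0, 3, size=5000)     # placeholder for real MCMC output
post = stationary_posterior(z, 3, rng=rng)
print(post.mean(axis=0), post.std(axis=0))   # point estimates and precision
```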
Besides summarizing the estimated posterior model probabilities, estimation uncertainty for the k best-performing models can also be assessed by computing ranks for each of the posterior samples π^(r). The variability of these model rankings across the R samples can then be summarized, for instance, by the percentage of identical rank orders for the k best-performing models, or by the percentages of how often each model is included within the subset of the k best-performing models (i.e., has a rank smaller than or equal to k).

In the case of model selection, dispersion statistics such as the posterior standard deviation are also of interest with respect to the Bayes factor B_ij (Kass and Raftery, 1995). To judge the estimation uncertainty of the Bayes factor, one can evaluate the corresponding posterior distribution by computing the derived quantities B_ij^(r) = π_i^(r)/π_j^(r) (given uniform prior model probabilities). Precision can also be assessed in model-averaging contexts when comparing subsets of models against each other (e.g., regression models including a specific effect vs. those not including it). Given such disjoint sets of model indices M_s ⊂ {1, ..., I}, the posterior probability for each subset of models is directly obtained by summing the posterior samples π_i^(r) for all i ∈ M_s. Note that it is invalid to aggregate across model subsets before applying the proposed Markov approach, because functions of discrete Markov chains (e.g., collapsing the I original states into a subset of S states) are not Markovian in general (Burke and Rosenblatt, 1958).

Effective Sample Size

Besides quantifying estimation uncertainty, the posterior samples π^(r) can be used to compute the effective sample size of the transdimensional MCMC output. For this purpose, we consider the benchmark model M_iid under the ideal scenario of drawing independent samples z̃^(t) from the categorical distribution with probabilities π (which is equivalent to sampling from a multinomial distribution). For this model, we also assume a Dirichlet prior, but this time directly on the stationary distribution, π ~ D(γ, ..., γ) with a fixed parameter γ ≥ 0. Since the prior is conjugate, the posterior for the estimated distribution is given by

π | z̃ ~ D(ñ_1 + γ, ..., ñ_I + γ), (5)

based on the observed frequencies ñ_i = Σ_t I(z̃^(t) = i) = Σ_j n_ij. By considering only the proportion of iterations in which each model is visited, the transition frequencies are rendered irrelevant in this i.i.d. approach, thereby ignoring possible dependencies in the sample sequence z^(t).

Given the output of a transdimensional MCMC chain, we can now compare the empirical posterior of π derived from the model M_Markov against the theoretically expected posterior under the model M_iid to estimate the sample size T = Σ_i ñ_i. For this purpose, a Dirichlet distribution with parameters α_1, ..., α_I is first fitted to the samples π^(r), which can be accomplished by an efficient fixed-point iteration scheme (Minka, 2000). Second, a comparison of the estimated Dirichlet parameters with the conjugate posterior in Eq. 5 yields α_i ≈ ñ_i + γ. Therefore, after subtracting the prior sample size I²ε of the I × I transition matrix P (Eq. 3), the effective sample size under the assumption of independent sampling from a multinomial distribution can be estimated as

T_eff = Σ_i α_i - I²ε. (6)

To minimize the impact of the prior, one can assume the improper prior γ = 0 as a default. Importantly, this estimate takes the discreteness of the indexing variable Z into account and does not change under permutations of the arbitrary numerical values of the model indices.
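The quantities of this section are direct functions of the posterior draws. The sketch below derives an effective sample size and Bayes factor samples from the post matrix of the previous block; as a simple stand-in for Minka's fixed-point iteration, the Dirichlet fit uses moment matching, so the resulting T_eff is only an approximation of the estimator described above.

```python
import numpy as np

def effective_sample_size(draws, eps):
    """T_eff from posterior draws of pi (one row per draw), Eq. (6).
    The Dirichlet fit uses moment matching (the paper uses Minka's
    fixed-point scheme); the improper prior gamma = 0 is assumed."""
    m, v = draws.mean(axis=0), draws.var(axis=0)
    # For pi ~ D(alpha): Var(pi_i) = m_i (1 - m_i) / (alpha_0 + 1),
    # so the total concentration alpha_0 = sum_i alpha_i satisfies:
    alpha0 = np.median(m * (1.0 - m) / v - 1.0)
    return alpha0 - draws.shape[1] ** 2 * eps   # subtract prior mass of P

# Posterior of the Bayes factor B_12 under uniform model priors,
# reusing the draws `post` from the previous sketch:
# b12 = post[:, 0] / post[:, 1]
# print(np.mean(b12), np.percentile(b12, [5, 95]))
```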
Remarks

If output from multiple independent chains c = 1, ..., C is available, the transition frequency matrices N^(1), ..., N^(C) can simply be summed before applying the method. This follows directly from Bayesian updating of the stationary distribution π. Essentially, each chain provides independent evidence about the posterior, which is reflected by using the summed frequencies Σ_c n_ij^(c) in the conjugate Dirichlet posterior based on Eq. 4. Note that this feature can be used to compare the efficiency of many short versus few long MCMC chains.

The proposed method bears a resemblance to the convergence diagnostic of Brooks, Giudici, and Philippe (2003). To assess the convergence rate of a transdimensional MCMC chain, Brooks, Giudici, and Philippe (2003) computed the second-largest eigenvalue of the maximum-likelihood estimate of the transition matrix, p̂_ij = n_ij/Σ_k n_ik. Since P is a probability matrix, this eigenvalue must be smaller than one and provides a measure of the dependency of the samples z^(t).

Note that the simplifying assumptions underlying our approach are not guaranteed to hold. Whereas samples of the full model space (z^(t), θ^(t)) necessarily follow a Markov process by construction, this does not imply that the samples z^(t) follow a Markov chain marginally (Brooks, Giudici, and Roberts, 2003; Lodewyckx et al., 2011). Intuitively, this is due to the fact that the transition probabilities depend on the exact locations of the MCMC sampler in each of the models' parameter spaces. However, in Sections 4 and 5 we show in two empirical examples that the proposed simplification (i.e., fitting a Markov chain of order one) is sufficient to account for autocorrelations in the samples z^(t) in practice.

The proposed method can be applied irrespective of the specific transdimensional MCMC implementation and requires only the sampled sequence z^(t) of the discrete parameter or the matrix N of observed transition frequencies. In the R package MCMCprecision (Heck et al., 2017), we provide an implementation that relies on the efficient computation of eigenvectors in the C++ library Armadillo (Sanderson and Curtin, 2016), accessible in R via the package RcppArmadillo (Eddelbüttel and Sanderson, 2014). On a notebook with an Intel i5-3320M processing unit, drawing R = 1,000 samples from the posterior distribution for 10 (100) sampled models requires approximately 70 milliseconds (10 seconds).

Illustration: Effect of Autocorrelation

Before applying the proposed method to actual MCMC output, we first illustrate its use in an idealized setting. To investigate the effect of autocorrelation in the case of discrete parameters, we generate sequences z^(t) of length T = 1,000 from the Markov model M_Markov for a given stationary distribution π = (.85, .13, .02)'. To induce autocorrelation, we define a mixture process for each iteration t. With probability β, the discrete indexing variable remains at the current model, z^(t+1) = z^(t). In contrast, with probability 1 - β, the value z^(t+1) is sampled from the given stationary distribution π. Thereby, increasing values of β result in larger autocorrelation of the sequence z^(t); a small simulation sketch of this process is given below.
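A compact version of this data-generating process, using the stationary distribution π = (.85, .13, .02) from the text (the seed and the naive-SE printout are illustrative):

```python
import numpy as np

rng = np.random.default_rng(4)
pi_true, beta, T = np.array([0.85, 0.13, 0.02]), 0.6, 1000

# Mixture process: stay at the current model with probability beta,
# otherwise redraw the model index from the stationary distribution.
z = np.empty(T, dtype=int)
z[0] = rng.choice(3, p=pi_true)
for t in range(1, T):
    z[t] = z[t - 1] if rng.random() < beta else rng.choice(3, p=pi_true)

pi_hat = np.bincount(z, minlength=3) / T
se_iid = np.sqrt(pi_hat * (1 - pi_hat) / T)  # i.i.d. SE: too small if beta > 0
print(pi_hat, se_iid)
# Feeding z into stationary_posterior() above yields posterior SDs that
# grow with beta, tracking the actual sampling variability.
```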
For varying levels of β = 0, 0.1, ..., 0.8, we sampled 500 replications, applied the proposed method, and computed the precision of the estimate π̂_Markov. Besides the posterior SD, we were interested in the coverage probability of the data-generating value π being in the 90% credibility interval. As a benchmark, we also computed the corresponding posterior SD under the (false) assumption that the samples z^(t) were independently drawn, by fitting the model M_iid with the Dirichlet prior parameter γ = 0. Figure 2 shows the result of this simulation. In Panel A, the estimation uncertainty of the Markov method increases for larger values of β, thereby taking the increasing autocorrelation into account. Since the model M_iid assumes independence a priori, its posterior uncertainty does not depend on β. As a result, the 90% credibility interval of the i.i.d. model is less likely to include the data-generating value π, as shown in Panel B, whereas the Markov model provides an accurate description of the estimation uncertainty.

Variable Selection in Logistic Regression

In the following, we apply the proposed method to the problem of selecting variables in a logistic regression, an example introduced by Dellaportas, Forster, and Ntzoufras (2000) to highlight the implementation of transdimensional MCMC in BUGS (see also Dellaportas, Forster, and Ntzoufras, 2002; Ntzoufras, 2002). Table 1 shows the frequencies of deaths and survivals conditional on severity and on whether patients received treatment (i.e., antitoxin medication; Healy, 1988). To emphasize the importance of considering estimation uncertainty for the posterior model probabilities, we compare the efficiency of two transdimensional MCMC approaches, which can both be implemented in JAGS (Plummer, 2003).

The full logistic regression model assumes a binomial distribution B of the survival frequencies x_jl and a linear model on the logit-transformed survival probabilities p_jl, that is, x_jl ∼ B(n_jl, p_jl) with logit(p_jl) = β_0 + β_a a_j + β_b b_l + β_ab (ab)_jl, where n_jl is the total number of patients in condition jl and the β's are the regression coefficients for the effect-coded variables a_j, b_l, and (ab)_jl. Variable selection is required to choose between I = 5 models: the intercept-only model, the three main-effect models A, B, and A+B, and the model AB that includes the interaction. For comparability, we use the same priors as Dellaportas, Forster, and Ntzoufras (2000) and assume a centered Gaussian prior with variance σ² = 8 for each regression parameter, β_k ∼ N(0, 8). Moreover, the model probabilities were set to be uniform, p(M_i) = 1/5. Note that efficiency can be increased by selecting prior probabilities that result in approximately uniform posterior probabilities (e.g., Lodewyckx et al., 2011). Moreover, nonuniform prior probabilities might be used to protect against multiple-testing issues (i.e., Bayes multiplicity; Scott and Berger, 2010).

One of the two implemented transdimensional MCMC approaches uses unconditional priors (Kuo and Mallick, 1998; KM98) and includes indicator variables γ_ik ∈ {0, 1} for each regression coefficient β_k in model M_i. The parameter γ_i determines which regression coefficients are included by removing some of the additive terms of the linear model in Equation 8. Details about the full and conditional posterior distributions are provided by Dellaportas, Forster, and Ntzoufras (2000, p. 7).

As a second transdimensional MCMC approach, we implemented the method of Carlin and Chib (1995; CC95), which stacks all model parameters into a new parameter θ = (z, β_1, ..., β_I), where β_i is the vector of regression parameters of model M_i. Thereby, this approach samples a total of 12 regression parameters along with the indicator variable z.
Note that the method of Carlin and Chib (1995) uses pseudo-priors p(β_i | M_j) for i ≠ j, that is, priors for the parameters of the models that are not currently selected. These pseudo-priors do not influence the statistical inference about p(x | M_i) and p(β_i | x, M_i). However, they determine the conditional proposal probabilities p(z | x, β_1, ..., β_I) of switching between the models and thereby affect the efficiency of the MCMC chain. In substantive applications, these pseudo-priors should be chosen to match the posterior p(β_i | x, M_i) in order to ensure high probabilities of switching between the models (cf. Carlin and Chib, 1995; Barker and Link, 2013). Here, however, we did not optimize the sampling scheme and used β_ik | M_i ∼ N(0, 8) for the pseudo-priors to illustrate that our method can correctly detect the lower precision resulting from this suboptimal choice.

Figure 3 shows the estimated posterior distribution of the posterior model probabilities using one Markov chain with 21,000 iterations (including 1,000 burn-in samples). The vertical black lines show the correct reference values, approximated by eight independent chains with one million samples each. As expected, the assumption that the z^(t) are sampled independently results in overconfidence in the point estimates of the CC95 approach. For all models, the corresponding posterior distributions miss the correct value and do not identify this estimation uncertainty. In contrast, the proposed Markov approach results in a posterior distribution that covers the target values with sufficiently high probability. Moreover, the novel estimation method reveals that the KM98 implementation has a higher precision compared to the (intentionally not optimized) CC95 approach.

To test the validity of the proposed method more rigorously, we replicated the previous analysis 100 times. Thereby, the estimated precision can be compared against the actual sampling variability of the estimated model probabilities. For both transdimensional MCMC methods, Table 2 shows the mean estimated model probabilities in percent. Across replications, the point estimates from the Markov and the i.i.d. approach were similar, with mean absolute differences smaller than 0.02% and 0.49% for the KM98 and CC95 implementations, respectively. To judge whether the estimated precision (i.e., the mean posterior standard deviations SD_i and SD_M) is valid, Table 2 also shows the empirical SD of the estimates π̂_Markov across replications. The results clearly show that the assumption of independent samples z^(t) leads to an overconfident assessment of the precision of the estimated model probabilities, whereas the proposed Markov approach provides a good estimate of the actual estimation uncertainty. Moreover, for the MCMC method by Carlin and Chib (1995), the larger SDs indicate a smaller efficiency compared to the unconditional prior approach by Kuo and Mallick (1998). This theoretically expected result is likely due to the suboptimal choice of pseudo-priors. However, note that this difference in efficiency may be overlooked when merely computing relative proportions based on the sampled indicator variable z^(t) (i.e., when implicitly assuming independent samples).
The higher efficiency of the KM98 approach becomes even clearer when assessing the mean effective sample size across replications, which was estimated to be 4,514, compared to only 163 for the CC95 method. Note that commonly used estimators of the effective sample size (e.g., Plummer et al., 2006) depend on the exact numerical labels of the model-indicator variable Z. To illustrate this, Figure 4 shows the estimated sample size for all 120 permutations of the indices (1, ..., 5) for a fixed sequence z^(t), computed by the spectral decomposition available in the R package coda (Plummer et al., 2006). This estimate varied considerably depending on the arbitrary labeling of the models. In contrast, the proposed Markov approach results in a well-defined, invariant estimate by explicitly accounting for the discreteness of Z.

Finally, we show that the posterior samples π^(r) of the model M_Markov can directly be used to assess the uncertainty of Bayes factor estimates. For instance, substantive applications could be interested in testing whether to include the interaction term of condition (A) and treatment (B) in a logistic regression model. Given the output of a single MCMC run with 20,000 samples, Figure 5 shows the resulting posterior distribution of the Bayes factor B_{A+B,AB} in favor of the absence of an interaction. Similar to the posterior model probabilities, the i.i.d. approach results in overconfidence regarding the estimate, and most of its probability mass excludes the correct value 8.50 (approximated with a precision of SD = 0.014). In contrast, the Markov approach corrects for dependencies in the samples z^(t) and includes the correct value. The same pattern emerged across 100 replications; that is, the mean estimated SD of the Bayes factor matched the corresponding empirical SD of the Bayes factor estimates (KM98: 0.40 vs. 0.47; CC95: 6.93 vs. 6.24). When using transdimensional MCMC, Bayes factors cannot be expected to be reliably estimated if models are never or very infrequently sampled (e.g., Model 1 in Table 2). For instance, the Bayes factor B_{A,B} ≈ 44.4 was estimated very imprecisely even in the KM98 approach (mean SD = 7.5; empirical SD = 14.8). To obtain more precise Bayes factor estimates in the presence of infrequently sampled models, it is recommended to rerun the transdimensional MCMC chain including only the two relevant models of interest (Lodewyckx et al., 2011).
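Since the Bayes factor is a deterministic function of π under uniform prior model probabilities, its posterior summaries follow directly from the stationary-distribution samples; a minimal sketch (the function name and the choice of summaries are ours):

```python
# For each posterior draw pi^(r), the Bayes factor is B_ij = pi_i / pi_j
# (uniform prior model probabilities assumed).
import numpy as np

def bayes_factor_posterior(pi_samples, i, j):
    bf = pi_samples[:, i] / pi_samples[:, j]
    return {"mean": bf.mean(),
            "sd": bf.std(ddof=1),
            "ci90": np.quantile(bf, [0.05, 0.95])}
```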
5. Log-Linear Models for a 2^6 Contingency Table

The application of the proposed method is also feasible in realistic scenarios with hundreds of sampled models. To illustrate this, we reanalyzed the 2^6 complete contingency table by Edwards and Havránek (1985), which includes six risk factors for coronary heart disease (i.e., smoking, strenuous mental work, strenuous physical work, systolic blood pressure, ratio of α and β lipoproteins, and family anamnesis of coronary heart disease). We are interested in finding the most parsimonious log-linear model that accounts for the cell frequencies y_j of cell j (j = 1, ..., 2^6) by assuming a Poisson distribution with mean µ_j and log µ_j = φ + x_j^T β, where φ is the intercept, β the vector of regression parameters, and x_j the (transposed) design vector, which selects the elements of β included for modeling cell j. Here, we consider the class of hierarchical log-linear models that only allow the inclusion of an interaction if the corresponding marginal effects and lower-order interaction terms are included in the model as well (e.g., Overstall and King, 2014a).

To select between all 7.8 million possible hierarchical log-linear models (Dellaportas and Forster, 1999), we use the reversible jump algorithm proposed by Forster, Gill, and Overstall (2012), which is implemented in the R package conting (Overstall and King, 2014b). Assuming a unit information prior (Ntzoufras, Dellaportas, and Forster, 2003), we sampled 100,000 iterations, discarded 10,000 as burn-in, and applied the proposed Markov chain method by drawing 2,000 samples for the posterior model probabilities. To assess whether the estimated uncertainty accurately quantifies sampling variability, we ran 100 replications initialized with randomly chosen models.

Table 3 shows the 10 models with the highest posterior probabilities. The relatively large posterior standard deviations of the estimated posterior model probabilities indicate that the samples z^(t) are autocorrelated to a substantial degree, despite the large number of iterations. This is also reflected by the effective sample size, which was estimated to be T_eff = 4,484 on average (SD = 186), approximately 5% of the number of iterations after burn-in.

Table 3 also shows means and standard deviations of the sampled rank R for the models with the highest posterior probabilities, indicating that estimation uncertainty (i.e., the posterior SD) increased for models with smaller posterior probabilities. Moreover, the proportion of posterior samples is shown for which the sampled rank R was identical to the rank across all replications (R = #) and smaller than or equal to 10 (R ≤ 10). According to these proportions, exact ranks were estimated precisely only for the two best models, whereas the set of the 10 models with the highest posterior probabilities was relatively stable across posterior samples (with the exception of model 10). Importantly, the mean estimated probabilities P(R = #) and P(R ≤ 10) matched the corresponding empirical proportions across replications.

Note that these results regarding estimation uncertainty are in line with our expectations: if models have small posterior probabilities, they are also sampled infrequently, which in turn results in estimation uncertainty. To quantify this variability, the proposed Markov chain approach provides an estimate of the achieved precision, both to assess the quality of the results and to find an appropriate stopping rule for MCMC sampling.
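A sketch of the rank-based summaries used here (the modal rank serves as a stand-in for the reference rank "#" obtained across replications):

```python
import numpy as np

def rank_summaries(pi_samples, k=10):
    # Rank 1 = largest posterior model probability within each posterior draw.
    R, I = pi_samples.shape
    order = np.argsort(-pi_samples, axis=1)
    ranks = np.empty_like(order)
    ranks[np.arange(R)[:, None], order] = np.arange(1, I + 1)
    modal = np.array([np.bincount(ranks[:, i]).argmax() for i in range(I)])
    p_modal = (ranks == modal).mean(axis=0)   # P(R = #) per model
    p_topk = (ranks <= k).mean(axis=0)        # P(R <= k) per model
    return ranks, p_modal, p_topk
```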
Conclusion

We proposed a novel approach for estimating the precision of transdimensional MCMC output. Essentially, a Markov model is fitted to the observed model-indicator variable z^(t) to obtain posterior samples of the corresponding stationary distribution. We showed that the method corrects for autocorrelation in a given sequence z^(t) and provides a good assessment of estimation uncertainty. Importantly, the method does not require output from multiple independent MCMC chains and thus reduces the computational costs of adaptation and burn-in. Besides being useful for transdimensional MCMC output, the method provides an estimate of the precision and effective sample size of discrete parameters in MCMC samplers in general. Thereby, researchers can easily decide whether the obtained precision is sufficiently high for the substantive application of interest.

Figure 1: Illustration of the MCMC iterations of the model-index parameter z^(t). Whereas samples are independent multinomial samples in Panel A, samples are drawn from a Markov chain with positive autocorrelation in Panel B.

Figure 2: Estimation uncertainty for the stationary distribution π. (A) The Markov method (black dots) correctly indicates that the estimation error of the posterior model probabilities increases as autocorrelation increases. When assuming i.i.d. samples (gray triangles), the estimated precision does not depend on the autocorrelation. (B) Proportion of 500 replications for which the 90% CI intervals include the data-generating stationary distribution π.

Figure 3: Posterior distribution of the posterior model probabilities π in the logistic regression example based on the Markov and the i.i.d. model. Vertical black lines show the target values (CC95 = Carlin and Chib, 1995; KM98 = Kuo and Mallick, 1998).

Figure 4: Effective sample size as estimated by the spectral density at zero (Plummer et al., 2006) for all permutations of the model-indicator labels of the MCMC output z^(t) (based on 20,000 samples of the method by Kuo and Mallick, 1998).

Figure 5: Posterior distribution for the Bayes factor in favor of Model A+B vs. AB. The vertical black line shows the target value (CC95 = Carlin and Chib, 1995; KM98 = Kuo and Mallick, 1998).

Table 2: Estimated posterior model probabilities in percent. Mean(π̂) and SD(π̂) were computed across 100 replications. As a measure of the estimated precision, means of the posterior SD are shown (SD_i assumes independent sampling; SD_M assumes a Markov chain model).

Table 3: Models with the highest posterior probability for the 2^6 contingency table. Posterior model probabilities π̂ are shown in percent. Mean(π̂) and SD(π̂) were computed across 100 replications. All models include the six main effects, A: smoking, B: strenuous mental work, C: strenuous physical work, D: systolic blood pressure, E: ratio of α and β lipoproteins, F: family anamnesis of coronary heart disease, and the first-order interactions AC, AD, AE, BC, and DE.
7,905.8
2017-01-17T00:00:00.000
[ "Computer Science" ]
Random Movements Generation in Western and Eastern Cultures

Recent investigations document a space-number association effect in the decisional processes of left-to-right reading cultures. Here, we expanded on this issue by studying motor decisional processes in a group of bilingual Iranians (who read text from right to left and numbers from left to right) and a group of monolingual Australians, exposed to four different numerical cues (i.e., digits written in Arabic, digits written in East Arabic, English number words, Farsi number words). Consistent with previous evidence, we found that both Arabic digits and English number words affect the performance of the Australian participants; on the contrary, no effect was found for any of the four codes in the performance of the Iranian participants. The current findings are discussed in terms of the inconsistency in reading direction between numbers and words (i.e., for the Iranian participants) as well as the specific Inter Stimulus Interval (ISI) adopted for displaying all four codes.

Introduction

The SNARC (spatial-numerical association of response codes) effect reflects the finding that numbers are spatially coded along a mental line ranging from left to right, from lowest to highest, respectively (Dehaene, Bossini, & Giraux, 1993). A consistent number of studies refer to the scanning (i.e., oculomotor) strategies linked to reading habits to explain the origin of this phenomenon. Scanning strategies might determine an overall visuospatial bias (Chokron & Imbert, 1993), which might make a left-to-right or a right-to-left SNARC effect more likely to be observed. The first evidence in support of this assumption is provided in the study by Dehaene et al. (1993). In this study, the authors performed nine experiments to examine how numerical magnitudes are associated with spatial response codes. The results of this work show that the SNARC effect did not vary with handedness or hemispheric dominance, but was linked to the direction of writing. Specifically, a weaker SNARC was found in Iranian participants (who read words from right to left), which correlated with the number of years they had spent in France. Subsequent studies have explored the SNARC effect in other populations not characterized by a left-to-right reading habit, such as Mainland Chinese and Taiwanese (Chan & Bergen, 2005), Chinese (Hung, Hung, Tzeng, & Wu, 2008), Russian-Hebrew (Shaki & Fischer, 2008), and Palestinians and Israelis (Shaki, Fischer, & Petrusic, 2009). These studies suggest that different notations of the same quantity have flexible mappings within space, plausibly shaped by the dominant context in which the numerical notations appear. However, the study by Shaki et al. (2009) showed that the SNARC effect is only present when the reading directions for numbers and words are consistent. In fact, the authors found a SNARC effect only in the Canadian (i.e., left-to-right) and Palestinian (i.e., right-to-left) groups, while they failed to find this effect in the Israeli group, where the reading directions of words and digits are inconsistent (i.e., words are read from right to left and numbers from left to right). SNARC also seems sensitive to finger counting strategies. In fact, several works (e.g., Fischer, 2008; Fischer & Brugger, 2011) have shown that sensorimotor practices, such as finger counting routines, play a fundamental role in determining spatial-numerical associations.
Interestingly, there is evidence suggesting a difference in counting strategies between Western and Eastern populations. The study by Lindemann, Alipour, and Fischer (2011) showed that 68% of the examined Western participants reported mapping the numbers 1 to 5 onto fingers of the left hand, whereas 63.4% of the Middle Eastern participants reported an overall preference to start counting with the right hand. A space-number association (SNA) like-effect has also been documented at the decisional level. For example, Loetscher, Schwarz, Schubiger, and Brugger (2008) showed that lateral head turning, which is known to affect spatial attention (Schindler & Kerkhoff, 1997; Vicario, Martino, Pavone, & Fuggetta, 2011), influences numerical selection in Western participants asked to perform a Random Number Generation (RNG) task. Specifically, while facing left, participants produced relatively small numbers, whereas while facing right, they tended to produce larger numbers. Further works on Western participants (Daar & Pratt, 2008; Vicario, 2012; Shaki & Fischer, 2013) have documented an SNA like-effect even in motor decision-making tasks, such as the generation of random movement sequences. For example, Daar and Pratt (2008) showed that low digits biased participants toward voluntarily typing with their left hand, whereas high digits biased them toward voluntarily typing with their right hand. Similar results were found (Vicario, 2012) using a random movement generation (RMG) paradigm (Jahanshahi & Dirnberger, 1998). All these findings demonstrate that both task-irrelevant sensorimotor manipulations (Loetscher et al., 2008) and the exposure to numbers (Daar & Pratt, 2008; Vicario, 2012) predictably shift both abstract (i.e., numerical) and motor decisional processes, against the participants' efforts to be random. In the current study, we investigated the effect of the reading direction habit, suggested by a particular numerical code, on the execution of a motor decisional task such as the RMG. In particular, we tested the RMG (i.e., finger movement selection) performance of a group of bilingual Iranians, who are natively used not only to digits written in East Arabic and to the Farsi language but also to digits written in Arabic and to English (i.e., as their second language). In fact, Iranian people habitually read Farsi texts from right to left, but digits written in East Arabic from left to right. The choice of testing RMG in Iranian participants also allowed us to investigate any eventual effect related to their counting strategy (i.e., right to left), which is the most likely in this culture (see Lindemann et al., 2011). Moreover, our Iranian participants were familiar with Arabic numbers and fluent in the English language (see the "Participants" section for more details), implying some familiarity with left-to-right reading. This is particularly relevant in the light of a recent study (Rashidi-Ranjbar, Goudarzvand, Jahangiri, Brugger, & Loetscher, 2014) showing that lateral head turning does not affect random number selection in Iranian participants with a low familiarity with the left-to-right reading direction habit (i.e., participants who were "beginners" in the English language). We also tested, with the same paradigm, a group of monolingual Western participants (i.e., Australian participants). See Figure 1 for details about the visual codes used in the study.
Through this study, we wanted to explore some theoretical aspects not previously addressed: (a) investigate whether the familiarity with the left-to-right reading direction habit, as might be argued from the results provided by Dehaene et al. (1993), affects the RMG performance of Iranian participants exposed to Arabic digits; (b) investigate whether the exposure to English number words influences the RMG performance as much as the exposure to number digits does (Daar & Pratt, 2008; Vicario, 2012); (c) investigate whether the right-to-left finger counting strategy, which is more likely in the Iranian sample (Lindemann et al., 2011), influences the RMG performance, with particular regard to the exposure to Eastern numerical codes (i.e., the East Arabic code and the Farsi number code). This last issue appears particularly relevant given the adopted experimental paradigm (i.e., finger movement selection).

Participants

Thirteen right-handed Iranian participants (6 men, 7 women, M age = 27.92 ± 2.46 years) and 17 right-handed Australian participants (control sample, 8 men, 9 women, M age = 23.5 ± 3.22) with normal or corrected vision participated in the research after providing written informed consent. Initially, it was planned to recruit an equal number of participants for both groups. However, this proved to be unachievable due to the limited availability of Iranian participants at the International School for Advanced Studies (Trieste, Italy). Most of the Iranian participants were PhD students. All participants were born in Iran to Iranian parents and had initially studied in their own country of origin before living in Italy. All participants were familiar with Arabic numbers and fluent in the English language (the average number of years elapsed since they started studying English was 19.54 ± 2.50). Moreover, three participants had "beginner" knowledge of the Italian language. The control group was recruited at the University of Queensland, Brisbane (Australia), among undergraduate and graduate students. We did not adopt statistical procedures for determining the sample size of the Australian participants. We decided to test 17 participants in agreement with the previous study by Vicario (2012), which adopted the same task. Verbal informed consent was obtained from all participants. The investigation was conducted according to the principles expressed in the Declaration of Helsinki.

Procedure and Instruments

Iranian participants were positioned 50 cm from an Olidata 21″ computer monitor configured at a refresh rate of 60 Hz. Australian participants were positioned 50 cm from a Dell 21″ computer monitor configured at a refresh rate of 60 Hz. Visual stimuli were presented in four separate blocks (counterbalanced design), each composed of six numerical codes (low numbers: 1-2; middle numbers: 4-6; high numbers: 8-9; size: 0.8° × 0.1°) written in Arabic, East Arabic, English, or Farsi. Visual stimuli were presented in random order with an Inter Stimulus Interval (ISI) of 300 ms. This ISI, which marked the temporal pace for the random finger movement selection task, was chosen according to the evidence provided in a recent work (Vicario, 2012) showing that numbers affect RMG performance only when the ISI is set at 300 ms. Participants were explicitly asked to synchronize their responses with the visual stimulus displacement.
In particular, they were asked to respond to the numerical cues presented on the computer screen by pressing one of eight keys of the keyboard (left keys: A, S, D, F; right keys: H, J, K, L) with one of their eight fingers (the index, the middle, the ring, and the pinkie of both the left and right hands). Thus, the "go" signal to move a finger was represented by the numerical cue itself. The numerical cue disappeared once the participant pressed the selected key. Each block consisted of a total of 60 trials (10 per numerical cue) displayed at the center of the computer screen. The dependent variable was the frequency with which a finger movement selection was made following the presentation of low (1, 2), middle (4, 6), and high numbers (8, 9). All participants completed a training block of 30 trials before starting the experimental session.

Data Analysis

The number of finger movements generated with the left and the right hand while looking at low, middle, and high numbers was analyzed via repeated-measures ANOVA. In particular, we were interested in investigating the effect of the four codes on the RMG performance of our participants. Thus, we performed a within-subjects analysis to address the questions discussed in the "Introduction" section. The within-subjects analysis consisted of two separate ANOVAs (one for each group), in which we compared the RMG performance for all types of visual stimulus. For each single ANOVA, we considered the following factors: Numerical magnitude (i.e., low, middle, high), Type of code (i.e., Arabic numbers, East Arabic numbers, English number words, Farsi number words), and Response selection (left, right). Post hoc comparisons were performed using t tests. For all tests, the level of statistical significance was set at p < .05. Data analysis was performed using Statistica software, version 8.0, StatSoft, Inc., Tulsa, OK, USA.

Turning to the interaction effects, no significant results were found for the Type of code × Numerical magnitude interaction, F(6, 12) = 0.0, p > .05, η² = −0.409, α = 2, or for the Response selection × Type of code interaction, F(3, 64) = 2.34, p > .05, η² = 0.099, α = 0.563. However, we detected a significant Numerical magnitude × Response selection interaction, F(2, 128) = 3.63, p = .029, η² = 0.053, α = 0.662. Post hoc analysis revealed that the right fingers were used more frequently than the left fingers when looking at high (p < .001) and middle (p < .001) numbers, while this difference was not significant for low numbers (p > .05). Finally, we detected a significant Numerical magnitude × Type of code × Response selection interaction, F(6, 128) = 2.58, p = .023, η² = 0.106, α = 0.830. Post hoc analysis showed that, in the Arabic digits block, left fingers were used more frequently than right fingers when looking at low quantities (p = .007); vice versa, right fingers were moved more frequently than left fingers when looking at high numbers (p = .007). A similar difference was found for the English number words block: left fingers were used more frequently than right fingers when looking at low quantities (p = .038); vice versa, right fingers were moved more frequently than left fingers when looking at high numbers (p = .038). No significant differences were detected for the East Arabic and Farsi number codes. See Figure 3 for details about the average movement frequency detected for all codes.
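For concreteness, the tallying of the dependent variable could be sketched as follows; the trial-log format is hypothetical, while the key-to-hand mapping follows the keyboard layout described above:

```python
# Count finger-movement selections per (code, magnitude, hand) cell.
from collections import Counter

LEFT_KEYS, RIGHT_KEYS = set("ASDF"), set("HJKL")

def movement_frequencies(trials):
    counts = Counter()
    for code, magnitude, key in trials:       # e.g., ("Arabic", "low", "F")
        hand = "left" if key in LEFT_KEYS else "right"
        counts[(code, magnitude, hand)] += 1
    return counts
```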
Discussion

Several works assign a central role to the reading direction habit in the mental assessment of numerical magnitude. The SNARC effect has been considered the most relevant experimental evidence in support of this view. However, this role has been re-evaluated in the light of recent works (see Fischer & Brugger, 2011, for a discussion of this argument). For example, a recent study involving bilingual Russian-Hebrew readers (Fischer, Shaki, & Cruise, 2009) has shown that merely reading a single Cyrillic or Hebrew word changed their SNA from one moment to the next, clearly indicating that the effects of reading are much more fragile than originally thought. In two recent studies (Daar & Pratt, 2008; Vicario, 2012), the authors investigated random movement selection performance, respectively, in Canadian and Italian participants looking at Arabic digits. The results showed that the left hand/fingers were moved more frequently while looking at low with respect to high numerical digits. Vice versa, a higher frequency of moving the right hand/fingers while looking at high with respect to low numerical digits was documented. In the current study, we used an RMG task to explore the effect of four different numerical codes (i.e., Western vs. Eastern) on the motor decisional processes of two groups of participants (i.e., Iranians and Australians) to address the following questions: i. whether the RMG performance is affected by number words as well as by digits (Daar & Pratt, 2008; Vicario, 2012); ii. (for the Iranian participants only) whether the high familiarity with the Arabic number code and with a language characterized by a left-to-right reading direction (i.e., the English language) induces an SNA effect in the RMG performance while looking at Arabic numerical codes. This question is timely in the light of the results recently provided by Rashidi-Ranjbar et al. (2014), showing that the modulation of the spatial attention focus, via lateral head turning, does not affect the RNG performance of Iranian participants with a low familiarity with the left-to-right reading habit, despite previous evidence of such effects in Western populations (Loetscher et al., 2008); iii. whether the right-to-left finger counting strategy, which is the most likely in the Iranian sample (Lindemann et al., 2011), might exert some kind of influence on the current decision-making task, which involves finger movements. The results suggest: i. The interaction between numbers and motor decisional processes is specific to Western cultures and does not affect the performance of the examined Eastern culture (i.e., Iranians), despite their familiarity with the Arabic code and the English language. ii. Even number words affect RMG, although only in the Western sample. iii. The right-to-left finger counting strategy, which is the most common in the Iranian sample (Lindemann et al., 2011), does not affect performance in the execution of the current finger movement selection task.

Figure 3 note: The graph plots the average number of finger movements on exposure to high, middle, and low numbers. Vertical bars indicate the standard error. RMG = random movement generation. *Significant difference at p < .05.

In detail, regarding the performance of the Australian sample, we found that digits written in the Arabic code affect the RMG performance, as in previous works (Daar & Pratt, 2008; Vicario, 2012).
In particular, we found that the left-hand fingers were moved more frequently while looking at low quantities and the right-hand fingers were moved more frequently while looking at high quantities. This supports the reliability of the quantity/motor decision-making interaction across different Western cultures (i.e., Canadian people, Daar & Pratt, 2008; Italian people, Vicario, 2012; Australian people, in the current work). Moreover, the effect sizes (i.e., medium effect sizes) reported in the current work suggest that the current effect of number on RMG has a fair reliability. A novel result is the evidence of a similar effect on RMG performance in association with the exposure to English number words. This agrees with previous evidence showing visuo-motor effects of both digits and number words (Calabria & Rossetti, 2005) on the performance of healthy humans. No effects on the RMG performance were found for the other two codes (i.e., digits written in East Arabic and Farsi number words). Finally, regarding the RMG performance of the Iranian participants, no significant effects were found in association with any of the four codes. This result leads to the following remarks: i. The familiarity with a left-to-right reading direction habit (as in the case of our Iranian participants) does not induce an SNA like-effect in the RMG, although previous evidence (Dehaene et al., 1993) suggests that familiarity with a left-to-right reading direction habit would likely lead to the occurrence of a SNARC effect; ii. the right-to-left counting strategy, which is the most likely in Iranian people (see Lindemann et al., 2011), appears irrelevant to the execution of the current finger movement selection task. Overall, the absence of a numbers/motor decisional processes interaction in the Iranian sample corroborates the reading direction inconsistency hypothesis proposed by Shaki et al. (2009), who documented no SNA effect in populations (i.e., Israelis) characterized by an inconsistency in the reading direction of words and digit numbers. In fact, a similar reading direction inconsistency is also present for the Eastern codes used by Iranian populations (i.e., digits written in the East Arabic code are read from left to right; number words written in Farsi are read from right to left). Alternatively, the absence of effects in the RMG performance of the Iranian sample could be due to the particular ISI (i.e., 300 ms) selected for presenting the four different numerical codes on the computer screen. In fact, this particular ISI was chosen according to the result provided by Vicario (2012) with Western participants (i.e., Italian participants). In particular, one could hypothesize that the Western codes (i.e., Arabic numerical codes and English number words) did not affect the RMG of Iranian participants because the processing of these two codes might have required a longer ISI to affect the RMG performance. This suggestion is motivated by the relatively lower familiarity of the Iranian participants with the Western codes (their exposure was limited to a school context) compared with the Eastern codes. In this sense, we cannot exclude that the current ISI might not be appropriate to disclose any eventual effect played by the exposure to Western codes on the RMG performance in the Iranian sample. Future work investigating this research topic in right-to-left reading cultures might clarify this issue by using longer ISIs.
Moreover, it would be critical to adopt statistical procedures for determining, in advance, the size of the samples to be tested in order to verify the research hypothesis. The absence of this procedure is currently a limitation of our work.

Declaration of Conflicting Interests

The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.

Funding

The author(s) received no financial support for the research and/or authorship of this article.
4,625.6
2014-09-01T00:00:00.000
[ "Psychology", "Linguistics" ]
A Contention-Based Routing Protocol for VANET

In VANETs, vehicles act as self-organized nodes that inter-communicate without a centralized authority. The topology formed by the vehicles changes quickly, which makes routing unstable. Position-based routing, compared with traditional routing, is more scalable and feasible, and it has proven more stable for VANETs than conventional routing. However, the frequently changing topology and node density can break the path a packet is following. Designing a robust multi-hop routing protocol for VANETs is therefore challenging. This paper proposes an enhanced position-based routing protocol called CBGR, which takes into account the velocity and direction of the vehicles in a VANET. Simulation results show that CBGR achieves a high level of routing performance in terms of hop count, network latency, and packet delivery ratio in both dense and sparse vehicular ad-hoc networks.

Introduction

A Vehicular Ad hoc Network (VANET) [1] is a means of inter-vehicle communication used for safe driving, driver assistance, and traffic management. In VANETs, vehicles act as self-organized nodes; they communicate in a distributed fashion with no centralized authority. Moreover, the vehicles move fast, so the topology formed by them changes quickly. A highly dynamic topology makes data transmission less reliable, since it may cause the network to disconnect frequently. So the key problem in designing a routing protocol is keeping it as stable as possible, trying to maintain the link between two vehicles while they are transmitting information. Besides, nodes in VANETs are not subject to the storage and power limitations found in wireless sensor networks: vehicles, as VANET nodes, are supposed to have ample computing power and energy. Nevertheless, efficient multi-hop routing in VANETs is challenging due to the frequently changing topology. VANET routing protocols must operate reliably in scenarios embracing high-speed nodes.

Many VANET routing protocols have been proposed, and they are classified as topology-based and position-based [2]. Topology-based routing protocols perform packet forwarding using link information they collect from the network; they find a path to the destination before relaying the packet. Position-based (geographic) routing protocols perform packet forwarding using the position information of the ultimate destination and of the one-hop neighbors. Usually they are stateless, with no need to keep link information about the whole network: an intermediate node forwards a packet to the neighbor closest to the position of the destination. These days, it is common for vehicles to have a GPS unit on board, from which they can obtain their location information; this has made position-based routing easy to implement and popular. The literature includes evaluation studies of topology-based and position-based routing protocols [3, 4]. Unable to cope with the nodes' high mobility and the need for minimal transmission delay, topology-based routing protocols perform unacceptably in VANETs [5, 6].
Position-based routing, as used by protocols like Greedy Perimeter Stateless Routing (GPSR) [7] and the Geographic Routing Protocol (GRP) [8], is well suited for highly dynamic environments such as VANETs. An intermediate node in GPSR forwards a packet to the radio neighbor closest to the destination; this approach is called greedy forwarding. It is assumed that each node can determine its own position using a GPS. Nodes exchange their locations with neighbors and obtain the position of the destination via a location service. In some cases, there might be no next hop available for greedy forwarding; for this case, GPSR introduces a strategy called perimeter routing. Much research [9-12] has improved GPSR, most of it working on planarized graphs, which are required in perimeter mode. However, these schemes always involve significant overheads and thus reduce the efficiency of position-based routing.

A node in GRP maintains a neighbor table by periodically broadcasting Hello messages. It is assumed that each node can determine its own position using a GPS. The positions of other nodes are determined through flooding [13]. The whole network is divided into quadrants of different levels, deployed in a hierarchical manner: four low-level quadrants compose a high-level quadrant, thus providing a distributed location service. This scheme lets GRP outperform GPSR and many other position-based protocols. GRP forwards packets in greedy mode until it gets stuck in a local optimum, where there is no new node to forward the packet to. It then uses a backtracking mechanism, in which the packet is returned to the previous hop so that a new next-hop selection can be made.

The aforementioned working scheme of GRP makes it a suitable routing protocol for VANETs, but there are still some shortcomings. The packet forwarding algorithm is too simple, in that it only makes use of location information to forward packets. The mobility of the packet's source and destination is considered, while the intermediate node's movement is not well considered when selecting the next hop. As the intermediate node moves, the link between the nodes will probably vanish, which results in routing instability. This triggers a route rebuild, which brings more network overhead and cost. Thus GRP may not perform as efficiently as expected when deployed in a VANET, where vehicles always have relatively high mobility.

In the next section, a geographic routing protocol called CBGR is proposed, which improves forwarding decisions through contention among nodes. CBGR is based on GRP and optimizes its efficiency by redesigning GRP's forwarding and backtracking algorithms. The purpose of CBGR is to improve the packet delivery ratio while reducing network delay. CBGR keeps the advantages of GRP. Furthermore, it incorporates geographic location together with each node's velocity and direction as the preconditions for choosing the next hop. Simulation results show that CBGR achieves a high level of routing performance in terms of hop count, network latency, and packet delivery ratio in both dense and sparse vehicular ad-hoc networks.
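For contrast with the contention mechanisms introduced in the next section, a minimal sketch of plain greedy forwarding as used by GPSR and GRP (the data layout is an assumption of the sketch):

```python
# Pick the radio neighbor geographically closest to the destination,
# ignoring each neighbor's speed and heading.
import math

def greedy_next_hop(neighbors, dest):
    # neighbors: list of (node_id, (x, y)); dest: (x, y)
    if not neighbors:
        return None                           # dead end: greedy mode is stuck
    def dist(p, q):
        return math.hypot(p[0] - q[0], p[1] - q[1])
    return min(neighbors, key=lambda n: dist(n[1], dest))[0]
```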
Contention-Based Geographic Routing (CBGR)

2.1. Contention-Based Forwarding

We note that location services and the forwarding algorithm are the two key problems in routing protocols based on geographic location, and both affect routing performance. The hierarchical location service employed in GRP outperforms those of many other routing protocols by using the destination zone instead of the exact destination location; this makes GRP one of the most efficient position-based routing protocols, as it decreases the location indeterminacy caused by the mobility of the destination. CBGR keeps this advantage and redesigns the forwarding algorithm, imposing an additional contention on speed and direction. Meanwhile, CBGR proposes a repair strategy for when a packet gets stuck in a local optimum, whereas GRP simply gives the packet back to the previous forwarder.

Contention Based on Speed

Priority is introduced in CBGR. A node calculates the priority of each neighbor before forwarding to it. We use the priority to select the most suitable node to relay to, rather than making a pure greedy forwarding selection.

Stability is an important requirement for routing in a wireless network. High mobility may cause route failures if we use pure greedy forwarding. To increase reliability, CBGR reads the velocity of each node and selects reliable nodes with lower velocity, abandoning others even if they are closer to the destination. In addition, nodes with lower velocity make smaller location changes between Hello message exchanges, which prolongs the neighbor table's lifetime.

There are two approaches to obtaining a neighbor's velocity. A node can write its own speed in its Hello message, so that the speed becomes known to others along with the broadcasting of the Hello message. Alternatively, each node can calculate the velocity of a neighbor from two successive position reports as

velocity = sqrt((node_lat − nbr_lat)² + (node_long − nbr_long)²) / (timestamp_j − timestamp_i),    (1)

where node_lat, node_long, and timestamp_j are the latitude, longitude, and time of the neighbor at time j, respectively, and nbr_lat, nbr_long, and timestamp_i are the latitude, longitude, and time of the neighbor at time i, read from the neighbor table. We add an additional field named velocity to the neighbor table. To minimize the computing cost when relaying a packet, each node performs this calculation when receiving a Hello message and updates the neighbor table accordingly. When forwarding packets, we use P_v, the velocity priority defined by Formula 2, to decide the priority of the neighbors. Thus CBGR selects the next hop based on a contention of speed: the neighbor with the lowest speed gets the highest velocity priority.

Contention Based on Direction

Because of obstacles, nodes on different roads cannot communicate with each other unless one road is an extension of the other. Consider the scenario in Figure 1. Node N1 is at the crossing. At that moment, the current forwarder C would pick N1 as the next hop even though N1 is moving in a different direction. However, since nodes move fast in a VANET, N1 may soon be out of C's radio range; in fact, it would be better if C chose N2 at that time. So it is necessary for the neighbors to contend based on their moving direction: for a long-lasting link, the neighbor moving in the same direction as the current forwarder is a good candidate.
The moving direction Dir of a node is derived from its coordinates node_lat and node_long reported at two different time points t1 and t2, i.e., from the direction of the displacement between the two positions. To reduce the impact on packet relaying, a node calculates Dir when receiving a Hello message and then updates its neighbor table. Let P_d stand for the priority based on the node's moving direction; P_d is defined (Formula 4) such that a neighbor moving in the same direction as the current forwarder receives the highest priority. A node reads P_d from its neighbor table to decide on a better next hop before forwarding.

2.2. Dealing with the Dead-End Problem

Sometimes greedy forwarding can lead to blocked routes, where there is no next hop to forward the packet to. GRP adopts backtracking, which returns the packet to the previous node, where a new next-hop selection can be made [8]. This mechanism will probably increase the number of hops to the destination and therefore increase the transmission delay. The long-known dead-end problem is depicted in Figure 2.

Figure 2: Node x's void with respect to D. The distance between D and x is the radius of the arc around node D. The arc around x stands for its radio range. Although there are two existing paths to D, x-y-z-D and x-w-v-D, x will choose neither of them using greedy forwarding [7]. There needs to be some other mechanism for x to forward packets in this situation.

The classical protocol GPSR uses perimeter forwarding to escape the local optimum. When there is no new node to relay to in greedy mode, the long-known right-hand rule is introduced to find a path around the void. Perimeter forwarding switches back to greedy forwarding as soon as a new node appears that is geographically closer to the packet's destination than the node that switched to perimeter mode. The shortcoming of perimeter forwarding is that planar graphs are required: an algorithm for removing the edges that are not part of the Relative Neighborhood Graph or the Gabriel Graph is needed to yield a network with no crossing links [14, 15]. This definitely increases the routing cost.

GRP, as mentioned before, uses a backtracking mechanism to return the packet to the previous node, where a new next-hop selection in greedy mode can be made. Thus no new mechanism is introduced to solve the problem: a node eliminates locally optimal neighbors one by one until a valid neighbor is found or the packet's lifetime ends. This trial-and-error method will probably increase the hop count and the network latency.

Figure 3: C is the local optimal node of s with destination D.

The intuition behind CBGR is that even if we cannot get any closer to the destination, we should not move far away from it. So we go back, but neither all the way to the previous hop as in GRP, nor along a fixed counterclockwise sequence around the void as in GPSR. Instead, we let the neighbors of the local optimal node contend again: the one closest to the destination gets the highest priority, and if C has more than one neighbor at the same closest distance to the destination, the one furthest from C wins the contention. This is because there is a void ahead: the further a node is away from C, the more easily it can bypass the void. See Figure 3 for an example. N1 and N2 lie on the arc at the same distance from node D. Note that, in the figure, N2 is more suitable than N1 to be the next hop.

2.3. Combining All Contentions

We now present the full CBGR routing algorithm, which combines all the above-mentioned contentions.
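Before walking through the steps, the following sketch illustrates the neighbor-table bookkeeping that the contentions rely on, assuming a flat-plane position model; p_v and p_d are illustrative stand-ins consistent with the stated behavior of Formulas 2 and 4 (lower speed and a heading closer to the forwarder's yield higher priority), not the paper's exact expressions.

```python
# Velocity and heading from two successive position reports, plus illustrative
# priority functions for the speed and direction contentions.
import math

def velocity_and_heading(lat1, lon1, t1, lat2, lon2, t2):
    dx, dy = lon2 - lon1, lat2 - lat1
    speed = math.hypot(dx, dy) / (t2 - t1)    # Formula 1: displacement / time
    heading = math.atan2(dy, dx)              # direction of the displacement
    return speed, heading

def p_v(speed, v_max):
    return max(0.0, 1.0 - speed / v_max)      # slowest neighbor -> priority 1

def p_d(neighbor_heading, own_heading):
    diff = abs(neighbor_heading - own_heading) % (2 * math.pi)
    diff = min(diff, 2 * math.pi - diff)      # wrap the angle to [0, pi]
    return 1.0 - diff / math.pi               # same direction -> priority 1
```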
Step 1. When a source node or an intermediate node sends a packet, it checks its neighbor table for an entry for the packet's ultimate destination. If there is an entry, the packet is delivered directly to the destination, and CBGR ends. If there is no entry, the sender selects, among its single-hop radio neighbors, the node geographically closest to the destination. This step loops and exits when one of the following two cases arises. Case 1: if more than one node satisfies the rule, these nodes form a set called set_1, and the algorithm goes to step 2. Case 2: if no node satisfies the rule, the algorithm goes to step 4.

Step 2. Contend based on velocity. The sender calculates P_v, using Formula 2, for each candidate in set_1 and chooses the one with the highest P_v as the next hop; the algorithm then goes back to step 1. If two or more nodes tie, they form a set called set_2, and the algorithm goes to step 3.

Step 3. Contend based on direction. The sender calculates P_d, using Formula 4, for each candidate in set_2 and chooses the one with the highest P_d as the next hop; the algorithm then goes back to step 1. If more than one node still satisfies the rule, the sender picks one randomly, and the algorithm goes back to step 1.

Step 4. Break through the dead end. The sender relays the packet using the scheme presented in Section 2.2. If there is no neighbor except the one the packet came from, the sender will carry the packet until a fresh Hello message arrives or the TTL expires; the algorithm then goes back to step 1.

Through these contentions, a node can pick a better next hop than GRP does in a vehicular ad hoc network. We must clarify that there is little chance of our algorithm degenerating to pure greedy forwarding without contention: we regard two nodes as being at the same distance from the sender if the difference is within 10 meters, so there will almost always be contentions, especially at crossroads, where the direction contention is particularly needed.

Recall that all nodes maintain a neighbor table, which stores the locations, velocities, and directions of their single-hop radio neighbors. This table provides all the information required for CBGR's forwarding decisions beyond the packets themselves. From this point of view, CBGR incurs a negligible space cost to improve the routing performance of GRP.
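Putting steps 1-3 together, a minimal sketch of the contention-based next-hop selection; it reuses p_v and p_d from the previous sketch, the neighbor record fields are assumptions, and step 4's carry/repair behavior is only signalled here by returning None:

```python
import math

SAME_DIST_M = 10.0                            # the 10 m same-distance tolerance

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def select_next_hop(neighbors, dest, own_heading, v_max):
    # neighbors: list of dicts {"id", "pos", "speed", "heading"}
    if not neighbors:
        return None                           # step 4: dead end, carry the packet
    best = min(dist(n["pos"], dest) for n in neighbors)
    set_1 = [n for n in neighbors if dist(n["pos"], dest) - best <= SAME_DIST_M]
    if len(set_1) == 1:                       # step 1: unique closest neighbor
        return set_1[0]
    top_pv = max(p_v(n["speed"], v_max) for n in set_1)
    set_2 = [n for n in set_1 if p_v(n["speed"], v_max) == top_pv]
    if len(set_2) == 1:                       # step 2: speed contention decides
        return set_2[0]
    # step 3: direction contention (any remaining tie is broken by max's ordering)
    return max(set_2, key=lambda n: p_d(n["heading"], own_heading))
```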
Simulation Results

In this section we compare the performance of CBGR with GRP. The experiments were conducted in OPNET Modeler. The medium access control (MAC) is IEEE 802.11g, with a radio range of 250 m. The mobility traces were generated for an area of 3000 m × 1000 m using VANetMobiSim [16], a vehicular traffic generator; the micro-mobility is controlled by IDM_IM [17]. For each simulation run, 20% of the nodes were randomly selected to form sender-receiver pairs, and each pair exchanged 20 packets over 5 seconds. We measured the packet delivery ratio (PDR) versus the number of participating vehicles (Figure 4). Each point in the graphs comes from 10 independent runs. The PDR shows good results for our approach compared to GRP. This is because CBGR selects the most suitable node, using mobility information, to relay the packet to. In the leftmost part of the figure, when the network is relatively sparse, CBGR has an overwhelming advantage over GRP in PDR, thanks to allowing a node to carry the packet when there is no node to deliver it to. The figure illustrates that CBGR is stable with a high PDR above 90%, whether the network is dense or sparse. Figure 5 shows the average number of hops for a packet during routing to its ultimate destination, compared with GRP. The reduction in hop count for CBGR is due to not backtracking at dead ends: CBGR keeps the progress the packet has made by the last hop, while GRP obliterates it. Figure 6 depicts the end-to-end delay. The shorter latency of GRP early in the simulation is the result of its small PDR: packets that do not get delivered to the destination do not contribute to the latency, whereas CBGR achieves a larger PDR, and it is better to receive something late than nothing at all. When the node count exceeds 100, CBGR achieves lower latency because of its more reasonable choice of intermediate nodes under a PDR comparable to GRP's. Recall that, although this choice is based on some extra computation, we schedule the calculation at the time a Hello message is received rather than at the time a packet is relayed; thus the computational cost contributes little to the latency.

Conclusion

This work aims at improving route stability in vehicular ad-hoc networks. The simulation results show that the proposed algorithm adapts itself to VANETs with varying node densities. In future work, we plan to make the forwarding decision with an overlay of the city map, which should make our algorithm perform better in various scenarios.

Figure 1: Junction nodes moving in different directions.
3,893.4
2016-03-01T00:00:00.000
[ "Computer Science" ]
On Gower Similarity Coefficient and Missing Values

The Gower similarity coefficient is a popular measure for comparing objects with possibly mixed-type attributes and missing values. One of its characteristics is that it calculates the coefficient value without considering attributes with missing values. In this article, we explore the properties of the coefficient in detail, including the consequences of omitting attributes with missing values. We also introduce strict lower and upper bounds on the actual similarity value on an attribute and strict lower and upper bounds on the actual value of the Gower similarity coefficient, derive a number of their properties, and propose a new coefficient as a solution to the identified problem with the Gower similarity coefficient.

INTRODUCTION

The Gower similarity coefficient is a popular measure for comparing objects with possibly mixed-type attributes (quantitative, qualitative and/or dichotomous) and missing values. One of its characteristics is that it calculates the coefficient value without considering attributes with missing values. The approach is easy and intuitive and finds many applications (see, e.g., [1], [2], [3], [5], [6], [8]). It is also considered an easily extensible template for calculating (dis)similarities of objects with mixed-type attributes [2], [5], [7]. However, as we show in this article, the Gower similarity coefficient has some deficiencies. In particular, we show that in the case of objects with missing values, the coefficient may take a similarity value that is impossible to obtain with any replacement of the missing values with values from the domains of the attributes.

Our main contribution in the article includes:
• Introduction of strict lower and upper bounds on the actual similarity value on an attribute and strict lower and upper bounds on the actual value of the Gower similarity coefficient, which are obtainable after replacing missing values with respective attribute domain values.
• Showing that in the case of a pair of objects, one of which has a missing value for at least one quantitative attribute, the Gower similarity coefficient may take an incorrect value, which will be less than the lower bound on the actual value of the Gower similarity coefficient.
• Derivation of a number of properties of the similarity value of objects on an attribute, the Gower similarity coefficient, and the introduced bounds.
• Proposing a new similarity coefficient G' as a correction of the Gower similarity coefficient, which eliminates the problem found for quantitative attributes with missing values.
The layout of the article is as follows: First, we recall the definitions of attribute value similarities, their weights, and the Gower similarity coefficient, and introduce additional basic notions that are used throughout the article. Then, we show example objects for which the Gower similarity coefficient takes an incorrect value, caused by the occurrence of a missing value of a quantitative attribute for one of them. We also illustrate the consequences of the occurrence of missing values for qualitative and dichotomous attributes. Next, we introduce strict lower and upper bounds on the actual similarity value on an attribute and on the actual value of the Gower similarity coefficient, and derive a number of their properties. In addition, the coefficient G', a modification of the Gower similarity coefficient, is proposed, which, unlike the original Gower similarity coefficient, always returns similarity values that do not exceed the presented lower and upper bounds.

BASIC NOTIONS RELATED TO GOWER SIMILARITY COEFFICIENT

Gower proposed a measure of objects' similarity which can be applied in the case of qualitative attributes, quantitative attributes, dichotomous attributes, or their mixtures [4]. In the measure, only the attributes for which it is possible to determine their similarity are taken into account; the others are ignored. In particular, if for a pair of objects an attribute value for at least one of these objects is missing, then the two objects are treated as not comparable on this attribute and the Gower similarity coefficient is calculated without taking this attribute into account.

In the remainder of the article, we assume that objects are characterized by n, where n ≥ 1, attributes whose domains contain at least two different values. The missing value will be denoted by *. The value of attribute i of object u will be denoted by u_i.

The function w_i(. , .) is used to indicate whether two objects are comparable on attribute i or not. Let u and v be objects under consideration. If u and v are comparable on attribute i, then w_i(u,v) = 1; otherwise w_i(u,v) = 0. We already mentioned that two objects u and v are incomparable on attribute i if the value of at least one of the objects is missing, and so w_i(u,v) = 0. However, in the case of a dichotomous attribute (indicating whether or not a feature is present), the objects may also be incomparable even if their values are known (this happens when the two objects do not have the feature represented by the dichotomous attribute).

The Gower similarity coefficient [4] for objects u and v is denoted by G(u,v) and is defined as follows:

G(u,v) = [sum over i = 1..n of w_i(u,v) * s_i(u,v)] / [sum over i = 1..n of w_i(u,v)],

where s_i(u,v) is a coefficient determining the similarity of the two objects on attribute i, i = 1..n, taking values from the interval [0,1] ∪ {undefined}. It is assumed that whenever w_i(u,v) = 0, then w_i(u,v) × s_i(u,v) = 0. Thus, the Gower similarity coefficient is the average similarity of the two objects on the attributes on which they are comparable.

In the case when the values of attribute i are not missing for both objects u and v, w_i(u,v) and the coefficient s_i(u,v) are determined as follows:
• If attribute i is quantitative: w_i(u,v) = 1 and s_i(u,v) = 1 - |u_i - v_i| / range_i, where range_i = max_i - min_i, max_i is the maximal value of attribute i, and min_i is the minimal value of attribute i.
• If attribute i is qualitative: w_i(u,v) = 1, and s_i(u,v) = 1 if u_i = v_i, while s_i(u,v) = 0 otherwise.
• If attribute i is dichotomous: w_i(u,v) = 1 and s_i(u,v) = 1 if u_i = v_i = +; w_i(u,v) = 1 and s_i(u,v) = 0 if u_i ≠ v_i; and w_i(u,v) = 0 (with s_i(u,v) = 0) if u_i = v_i = −.

In the case when the value of attribute i is missing for at least one of the objects u or v, w_i(u,v) and the coefficient s_i(u,v) are determined in the same way for any type of attribute i: w_i(u,v) = 0 and s_i(u,v) is undefined.

Now, we are ready to formally define comparable and incomparable objects on an attribute. Objects u and v are defined as incomparable on attribute i if:
• either the value of attribute i is missing for at least one of the two objects,
• or attribute i is dichotomous and the values of both objects are equal to −.
Otherwise, objects u and v are comparable on attribute i. In the remainder of the article, we will use the following notation:
• CMP_ATT(u,v) denotes the set of attributes on which u and v are comparable; that is, CMP_ATT(u,v) = {attribute i | w_i(u,v) = 1}.
• INCMP_ATT(u,v) denotes the set of attributes on which u and v are not comparable; that is, INCMP_ATT(u,v) = {attribute i | w_i(u,v) = 0}.
• INCMP*_ATT(u,v) denotes the set of attributes on which either u or v or both have missing values.
• INCMP_d_ATT(u,v) denotes the set of dichotomous attributes on which both u and v have value −.

Objects u and v are defined as comparable if they are comparable on at least one attribute; that is, if the sum over i of w_i(u,v) > 0 (equivalently, |CMP_ATT(u,v)| ≥ 1). Otherwise, objects u and v are defined as incomparable; that is, when the sum over i of w_i(u,v) = 0 (or equivalently, |CMP_ATT(u,v)| = 0). Please note that the value of G(u,v) is not defined for incomparable objects. Otherwise, if u and v are comparable, then G(u,v) ∈ [0, 1].

A. What's Wrong with the Gower Similarity Coefficient?

Though the Gower similarity coefficient is appreciated for the ease and intuitiveness of dealing with attributes on which objects are incomparable, we will show that it may take an unacceptable value if the values of attributes are missing (see Example 1).

Objects u and v are comparable and different on attribute 1 (so, w_1(u,v) = 1 and s_1(u,v) = 0) and are not comparable on attribute 2 (so, w_2(u,v) = 0 and s_2(u,v) is undefined); hence G(u,v) = 0. Now we will consider what would be the Gower similarity coefficient of objects u and v_i, where v_i represents v after replacing its missing value of attribute 2 with some value from the domain range [0, 100]. Objects v_1, …, v_11 in Table I represent object v under the assumption that its actual value of attribute 2 is 0, 10, …, 100, respectively. Clearly, each instance v_i of object v is comparable with u on both attributes and is different from u on attribute 1, which is qualitative (so the similarity of v_i to u on attribute 1 equals 0). Hence, G(u,v_i) = (1 × 0 + 1 × s_2(u,v_i)) / 2 = s_2(u,v_i) / 2. Clearly, G(u,v_i) reaches its maximum for the greatest value of s_2(u,v_i). This happens for object v_5, for which s_2(u,v_5) = 1 and, in consequence, G(u,v_5) = 0.5. G(u,v_i) reaches its minimum for the least value of s_2(u,v_i) (that is, for the largest absolute value of the difference between the age of u and v_i). This happens for object v_11, for which s_2(u,v_11) = 0.4 and so, G(u,v_11) = 0.2. Please note that this least achievable value of 0.2 of G(u,v_i) is greater than G(u,v), which equals 0.

As shown in Example 1, G(u,v) may take a value that is not obtainable for any actual completions of missing values of quantitative attributes of objects u and v. In the further part of the article, we introduce strict lower and upper bounds on the actual similarity value of any objects u and v on an attribute from the set INCMP*_ATT(u,v) and on the actual value of the Gower similarity coefficient for these objects. The bounds will make it possible to check when the Gower similarity coefficient takes values unattainable for any completions of missing values.
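The definitions above translate directly into code. The following is a minimal Python sketch of the Gower similarity coefficient, assuming the attribute descriptors and the missing-value marker * introduced in the text; the function names are illustrative. It reproduces Example 1: only attribute 1 counts, so G(u,v) = 0.

```python
MISSING = "*"

def s_and_w(ui, vi, kind, rng=None):
    """Per-attribute similarity s_i and weight w_i, per the definitions above."""
    if ui == MISSING or vi == MISSING:
        return 0.0, 0                       # incomparable: attribute is ignored
    if kind == "quantitative":
        return 1.0 - abs(ui - vi) / rng, 1
    if kind == "qualitative":
        return (1.0 if ui == vi else 0.0), 1
    if kind == "dichotomous":
        if ui == "-" and vi == "-":
            return 0.0, 0                   # both lack the feature: incomparable
        return (1.0 if ui == vi else 0.0), 1
    raise ValueError(kind)

def gower(u, v, attrs):
    """G(u,v) = sum(w_i * s_i) / sum(w_i); None for incomparable objects."""
    num = den = 0.0
    for ui, vi, (kind, rng) in zip(u, v, attrs):
        s, w = s_and_w(ui, vi, kind, rng)
        num += w * s
        den += w
    return num / den if den else None

# Example 1: u and v differ on qualitative attribute 1; attribute 2 is missing.
attrs = [("qualitative", None), ("quantitative", 100.0)]
print(gower(["blue", 40.0], ["red", MISSING], attrs))   # 0.0
```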
B. Lower and Upper Bounds on the Actual Similarity Value on an Attribute

Let us recall that objects u and v are not comparable on attribute i either because at least one of the objects has a missing value for this attribute (i.e., i ∈ INCMP*_ATT(u,v)) or because the attribute is dichotomous and both objects have value − for it (i.e., i ∈ INCMP_d_ATT(u,v)). If u and v are incomparable on attribute i, then w_i(u,v) = 0, and so attribute i does not contribute to the value of G(u,v). Nevertheless, in the case of attribute i ∈ INCMP*_ATT(u,v), u and v may become comparable on attribute i if the actual values of attribute i become known for both objects. Then, w_i(u,v) can become equal to 1, and so s_i(u,v) can contribute to the value of G(u,v). Example 1 illustrates how replacing the missing value of a quantitative attribute i affects the values of w_i(u,v), s_i(u,v) and G(u,v). This influence is also illustrated for a qualitative attribute and a dichotomous attribute in Examples 2 and 3, respectively.

Objects u and v are comparable on attributes 1 and 2 (w_1(u,v) = w_2(u,v) = 1, s_1(u,v) = 0, s_2(u,v) = 0.9) and are not comparable on attribute 3 (w_3(u,v) = 0, s_3(u,v) = undefined). Hence, G(u,v) = (1 × 0 + 1 × 0.9 + 0 × undefined) / (1 + 1 + 0) = 0.9 / 2 = 0.45. Objects v_1 and v_2 in Table III present instances of object v after replacing its missing value of attribute 3 with either − or +. Since both u and v_1 have value − on attribute 3, they are not comparable on this attribute (so w_3(u,v_1) = 0) and s_3(u,v_1) = 0. This means that attribute 3 does not contribute to the value of G(u,v_1) even though its value is known for both u and v_1. Now, since u and v_2 have values − and +, respectively, on attribute 3, they are comparable on attribute 3 (so w_3(u,v_2) = 1) and their similarity on this attribute is the least possible; namely, s_3(u,v_2) = 0.

In Examples 1, 2 and 3, we considered instances of an example object u, with known values for all attributes, and object v, with a missing value for only one given attribute i. We considered all or some instances of object v in which the missing value was replaced by possible actual values, including those instances of object v whose similarity on attribute i was the least and the greatest, respectively. Clearly, these least and greatest values are lower and upper bounds, respectively, on the similarity values of objects u and v on the examined attributes.

Let i ∈ INCMP*_ATT(u,v). The lower bound on the actual similarity value of u and v on attribute i will be denoted by s̲_i(u,v), while the upper bound on the actual similarity value of u and v on attribute i will be denoted by s̄_i(u,v). The associated weights for the bounds will be denoted by w̲_i(u,v) and w̄_i(u,v), respectively.

In Table IV, we provide the values of the similarity bounds s̲_i(u,v) and s̄_i(u,v) and their weights, respectively, under the assumption that the value of attribute i is missing for at least one object. In fact, the bounds are symmetric in u and v; thus, without loss of generality, we assume that the value of attribute i is missing for object v. The results are provided for quantitative, qualitative and dichotomous attributes. We also indicate for which possible actual values of v (and, eventually, u) s_i(u,v) = s̲_i(u,v) and s_i(u,v) = s̄_i(u,v), respectively. Thus, we show that s̲_i(u,v) and s̄_i(u,v) are strict lower and upper bounds on the actual similarity value of objects u and v on each attribute i ∈ INCMP*_ATT(u,v).
TABLE IV. STRICT SIMILARITY BOUNDS s̲_i(u,v), s̄_i(u,v) AND THEIR ASSOCIATED WEIGHTS w̲_i(u,v) AND w̄_i(u,v) FOR MISSING VALUE OF OBJECT v AND KNOWN OR MISSING VALUE OF OBJECT u.

Please note that the upper bound s̄_i(u,v) equals 1 for each attribute i ∈ INCMP*_ATT(u,v) (with one dichotomous exception discussed below). On the other hand, the lower bound s̲_i(u,v) = 0 in all cases considered in Table IV except for a quantitative attribute i whose value is missing for only one of the two compared objects. In that exceptional case, s̲_i(u,v) depends on the known value x of the other object and can be greater than 0 (as shown in Table IV; in this case, s̲_i(u,v) = min{(x - min_i), (max_i - x)} / range_i).

Property 3. Let i be a quantitative attribute. Let the value of attribute i be missing for object v and be equal to x for object u. Then: a) s̲_i(u,v) reaches its maximum, which is equal to 0.5, for x = (min_i + max_i) / 2. b) s̲_i(u,v) reaches its minimum, which is equal to 0, for x = min_i or x = max_i.

Proof: Follows from the formula for s̲_i(u,v) for a quantitative attribute (see Table IV).

Note also that for each attribute i ∈ INCMP*_ATT(u,v), the upper bound s̄_i(u,v) = 1 and w̄_i(u,v) = 1, unless attribute i is dichotomous and its value is equal to − for one object, say u, and is missing for the other object, say v. In that exceptional case, s̲_i(u,v) = 0 and w̲_i(u,v) = 0 (which corresponds to the situation when the actual value of v is also equal to −), while w̄_i(u,v) = 1 and s̄_i(u,v) = 0 (which corresponds to the situation when the actual value of v equals +). In the former case, attribute i does not contribute to the Gower similarity coefficient, while in the latter case, attribute i contributes to it with the least possible value of 0.

C. Lower and Upper Bounds on the Actual Value of the Gower Similarity Coefficient

We start by defining lower and upper bounds on the actual value of the Gower similarity coefficient, which are achievable after replacing all missing values in the compared objects with some values from the domains of the corresponding attributes. The lower bound on the actual value of G(u,v) is denoted by G̲(u,v) and is defined as the minimum of G(u,v) over all such replacements. The upper bound on the actual value of G(u,v) is denoted by Ḡ(u,v) and is defined as the maximum of G(u,v) over all such replacements.

Example 4 allows us to conclude what follows:

Property 6. Let u and v be comparable objects. Let i be a quantitative attribute with a missing value for object u and a known value for object v. Then: a) It is possible that s̲_i(u,v) > G(u,v). b) If s̲_i(u,v) > G(u,v), then it is possible that G̲(u,v) > G(u,v).

Corollary 1. It is possible that G̲(u,v) > G(u,v) when there is a missing value in u or v. If G̲(u,v) > G(u,v), then G(u,v) takes an incorrect value, which cannot be obtained for any possible actual value of attribute i of object u.

To avoid the problem stated in Corollary 1, one may use, depending on the application, the lower bound G̲(u,v), the upper bound Ḡ(u,v), or an appropriately modified version of G(u,v) instead of G(u,v) itself. Below we introduce the new similarity coefficient G'(u,v), defined as follows:

G'(u,v) = [sum over i ∈ CMP_ATT(u,v) of s_i(u,v) + sum over i ∈ INCMP*_ATT(u,v) of w̲_i(u,v) × s̲_i(u,v)] / [|CMP_ATT(u,v)| + sum over i ∈ INCMP*_ATT(u,v) of w̲_i(u,v)].

In fact, G'(u,v) can be regarded as an improved version of G(u,v).
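A small Python sketch of the quantitative-attribute bounds from Table IV, for the case where u_i = x is known and v_i is missing (the only case in which the lower bound can exceed 0). The function name is an illustrative assumption; the snippet also checks Property 3 numerically.

```python
def quant_bounds(x, min_i, max_i):
    """Lower/upper bounds on s_i(u,v) when u_i = x is known and v_i is missing."""
    rng = max_i - min_i
    # Worst case: v_i lies at the domain endpoint farthest from x,
    # giving s_i = 1 - max(x - min_i, max_i - x)/rng = min(...)/rng.
    lower = min(x - min_i, max_i - x) / rng
    upper = 1.0                    # best case: v_i = x
    return lower, upper

# Property 3: the lower bound peaks at 0.5 for x at mid-range and
# vanishes at the domain endpoints.
for x in (0.0, 50.0, 100.0):
    print(x, quant_bounds(x, 0.0, 100.0))
# 0.0   (0.0, 1.0)
# 50.0  (0.5, 1.0)
# 100.0 (0.0, 1.0)
```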
3,974.6
2023-09-17T00:00:00.000
[ "Computer Science", "Mathematics" ]
Elevation of Inflammatory Cytokines and Proteins after Intra-Articular Ankle Fracture: A Cross-Sectional Study of 47 Ankle Fracture Patients

Introduction: Intra-articular fractures are the leading etiology of posttraumatic osteoarthritis (PTOA) in the ankle. Elevation of proinflammatory cytokines following intra-articular fracture may lead to synovial catabolism and cartilage degradation. We aimed to compare cytokine levels in injured and healthy ankle joints, examine the longer-term cytokine levels in fractured ankles, and investigate the association between cytokine levels in fractured ankles and plasma. Materials and Methods: In this cross-sectional study, synovial fluid (SF) and plasma of forty-seven patients with acute intra-articular ankle fractures and eight patients undergoing implant removal were collected prior to surgery. We determined the concentrations of sixteen inflammatory cytokines, two cartilage degradation proteins, and four metabolic proteins and compared the levels in acutely injured ankles with those of the healthy contralateral side or during metal removal. Cytokine levels in injured ankles were also compared to serum cytokine levels. Nonparametric Wilcoxon rank-sum and Spearman tests were used for statistical analysis, and a p value below 0.05 was considered significant. Results: Compared to the healthy ankles, the synovial fluid in ankles with acute intra-articular fracture had elevated levels of several proinflammatory cytokines and proteases (IL-1β, IL-2, IL-6, IL-8, IL-12p70, TNF, IFNγ, MMP-1, MMP-3, and MMP-9) and anti-inflammatory cytokines (IL-1RA, IL-4, IL-10, and IL-13). The levels of cartilage degradation products (ACG, CTX-2) and metabolic mediators (TGF-β1 and TGF-β2) were also significantly higher. Synovial concentrations of ACG, IL-12p70, IFNγ, IL-4, and bFGF correlated with serum levels. While most of the examined synovial cytokines were unchanged after implant removal, IL-4 and IL-6 levels were upregulated. Conclusions: We show that an acute ankle fracture is followed by an inflammatory reaction and cartilage degeneration. These data contribute to the current understanding of the protein regulation behind the development of PTOA and are a further step towards supplementing the current surgical treatment. This cross-sectional study was "retrospectively registered" on 31 October 2017 at ClinicalTrials.gov (NCT03769909). The registration was carried out after inclusion of the first patient and prior to finalization of patient recruitment and statistical analyses: https://clinicaltrials.gov/ct2/show/NCT03769909?term=NCT03769909&draw=2&rank=1.

Introduction

Up to 80% of cases of posttraumatic osteoarthritis (PTOA) in the ankle are caused by joint injury [1,2], and the main predisposing factor is an intra-articular ankle fracture [3][4][5]. The current gold standard for treating unstable ankle fractures is surgery [6][7][8]. However, up to 36% of intra-articular fractures in the lower extremity develop osteoarthritis [9,10]. Unlike degenerative osteoarthritis (OA), PTOA has a known time of onset, allowing early intervention. Recent studies indicate that the initial inflammatory response following intra-articular fracture may lead to synovial catabolism and cartilage degradation [11][12][13], but this has so far been ignored in standard therapy. Elevations of proinflammatory cytokines such as interleukin (IL)-1β, tumor necrosis factor (TNF), IL-6, and IL-8 have been documented in the early course of anterior cruciate ligament (ACL) injuries [14][15][16][17].
However, only a few studies have described the upregulation of proinflammatory cytokines after intra-articular ankle fracture [12,18,19]. In addition, it is unclear whether cytokine levels and inflammatory cells are also elevated in whole blood after ankle fracture. Adams et al. [20] suggested that the initial inflammatory response is preserved for several months after ankle joint injury and may play an important role in the cartilage degradation leading to PTOA development [21]. In this study, therefore, we aimed to identify and quantify the cytokines that are elevated in acute intra-articular ankle fractures and compare these to the levels in the contralateral healthy ankle joints. Specifically, we investigated whether synovial cytokine levels differed between the fractured and healthy ankle joints and whether cytokine levels in the fractured ankle joints correlated with plasma levels. In addition, we quantified synovial cytokine levels after implant removal. Finally, we investigated whether postinjury inflammation was reflected in an increase in synovial immune cells.

Materials and Methods

This cross-sectional study was registered at ClinicalTrials.gov (NCT03769909) and approved by the National Committee on Health Research Ethics (J. No. S-20170139). The study is reported according to the STROBE guidelines [22]. Patients with acute intra-articular ankle fracture admitted to Odense University Hospital and Svendborg Hospital from October 2017 to March 2019 were enrolled in the study. The inclusion criteria were an acute intra-articular fracture involving the ankle joint, a need for internal or external fixation within 14 days, patient age between 18 and 65 years, the ability to read and understand Danish, and written informed consent. The exclusion criteria were open fractures, associated arterial and nerve injuries, multiple-injury patients with an Injury Severity Score > 15, primary or secondary infections, systemic inflammatory diseases such as rheumatoid arthritis, anti-inflammatory medication, and injuries associated with a Charcot foot. Patients were also excluded if they had any sign of radiographic OA in the fractured or healthy contralateral ankle joint (Figure 1). A second cohort with implant removal was recruited at our orthopedic outpatient clinic. The eligibility criteria corresponded with those of the primary cohort, except that these patients had a consolidated ankle fracture instead of an acute fracture. Our aim with this cohort was to obtain longer-term follow-up in view of the limitations of the cross-sectional study design. Patients with clinical signs of inflammation in the ankle were excluded. For the technical reason of centralized and automatic counting, specimens used for cytokine measurement could not be used for differential cell counting. Therefore, a third cohort was included to analyze synovial cell composition in the injured ankle joints alongside the leucocyte blood count.

Collection of Synovial Fluids and Blood Samples from Ankle Fracture Patients. Prior to surgery, synovial fluid (SF) was collected from the healthy contralateral ankle joint and then from the fractured joint (n = 47) by puncture using the anteromedial portal. SF aspiration was performed mainly by two independent surgeons (TMP/HS). As it can be difficult to obtain sufficient volumes of SF from the ankle joint [23,24], 5 ml saline was injected prior to aspiration in both the fractured and healthy ankle joints, including in the implant removal group. SF samples were stored in 15 mL conical centrifuge tubes.
Blood samples for cytokine analysis and white blood cell count were collected simultaneously from the elbow vein according to the standard procedure. Within 2 hours after sampling, SF and blood samples were centrifuged at 2,000 revolutions per minute (RPM) for 15 minutes, aliquoted, and stored at -80°C until further chemiluminescence analysis.

Chemiluminescence Analysis. SF levels of IL-1α, IL-1RA, IL-1β, IL-2, IL-4, IL-6, IL-8, IL-10, IL-12p70, IL-13, interferon gamma (IFN-γ), TNF-α, and TNF-β were measured by an electrochemiluminescence immunoassay using a human customized U-Plex (Mesoscale, Rockville, MD). Matrix metalloproteinase (MMP)-1, MMP-3, and MMP-9 were measured using a human MMP-3 Plex Ultrasensitive kit (Mesoscale, Rockville, MD); transforming growth factor (TGF)-β1, TGF-β2, and TGF-β3 were measured using a human U-PLEX TGF-β Combo kit (Mesoscale, Rockville, MD); and basic fibroblast growth factor (bFGF) was measured using a Human V-PLEX bFGF kit (Mesoscale, Rockville, MD). Prior to measurement, the samples were diluted in Diluent 41, and MSD Discovery Workbench software was used for analysis (MESO QuickPlex SQ 120). Samples were run in duplicate, and coefficient of variation (CV) values above 20% in individual analyses were considered high (Additional file 1). SF levels of C-terminal telopeptides of type 2 collagen (CTX-2) and Aggrecan (ACG) were analyzed by enzyme-linked immunosorbent assay (ELISA) (MyBiosource, VersaMax™). Samples were run in duplicate, and the assays were performed according to the manufacturer's instructions.

2.3. Statistical Analysis. The quantile-quantile (q-q) plots of nearly all cytokines indicated a nonparametric pattern. To identify differences in cytokine levels between the fractured and healthy ankles, we used the paired Wilcoxon (signed-rank) test. The unpaired Wilcoxon rank-sum test was used to compare cytokine levels in fractured ankles with those at implant removal. The correlation analyses were performed using Spearman's test for nonparametric data and Pearson's test for parametric data. Cytokine values below the lower limit of detection (LLOD) were replaced by a value of ½ LLOD for statistical analysis. The percentage of cytokines below the LLOD is presented in Additional file 1. Results are presented as medians and interquartile ranges, and a p value <0.05 is considered significant. All statistical analyses were performed using STATA MP 16.
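A minimal sketch of the testing pipeline just described: non-detects are replaced by ½ LLOD, and paired fractured vs. healthy levels are compared with scipy's paired Wilcoxon (signed-rank) test. The numeric values below are placeholders, not study data.

```python
import numpy as np
from scipy.stats import wilcoxon

llod = 0.8                                                    # pg/mL, hypothetical
fractured = np.array([120.5, 88.0, 240.1, 95.7, 0.3, 150.2])  # hypothetical levels
healthy   = np.array([1.2,   0.5,  2.1,   0.9,  1.0, 1.5])

# Replace non-detects with half the detection limit before testing.
fractured = np.where(fractured < llod, llod / 2, fractured)
healthy   = np.where(healthy   < llod, llod / 2, healthy)

stat, p = wilcoxon(fractured, healthy)                        # paired test
print(f"Wilcoxon signed-rank: W={stat:.1f}, p={p:.4f}")       # significant if p < 0.05
```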
Results

Data from 47 ankle fracture patients were included in the chemiluminescence analysis: 22 men and 25 women with a mean age of 42 ± 14.4 years. The mean BMI was 27.6 ± 4.1, and nearly all fractures were classified as malleolar fractures (97.9%), with only one tibial plafond fracture (2.1%). SF was collected at a mean of 4.3 days post-fracture, ranging from 0 to 13 days. Eight patients undergoing implant removal were included from our outpatient clinic. In all cases, the indication for surgery was a complaint related to the implant. Removal was carried out after 9.3 months on average (range 3 to 13 months). A further nine ankle fracture patients were included for mononuclear cell (PBMC) counts and had baseline characteristics as shown in Additional file 2.

Comparison of Cytokine Levels in Ankle Joints of Patients Undergoing Implant Removal vs. Healthy Control Ankles and Acute Fracture Ankles. Articular levels of cytokines measured in ankles after implant removal were significantly lower than those in acute fracture ankles (Table 2). The differences were more than 100-fold for IL-1β, IL-8, MMP-1, MMP-3, MMP-9, IL-1RA, and IL-10. In contrast, the levels in the implant removal group were similar to those in healthy control ankles. In these two groups, the levels of several measured cytokines were below the LLOD (Additional file 1). Consequently, we excluded all cytokines with more than 50 percent of values below the LLOD and report only the results of the remaining cytokines (IL-6, IL-8, MMP-1, MMP-3, IL-1RA, IL-4, ACG, CTX-2, bFGF, and TGF-β1). Under this condition, we found significantly higher levels of IL-4 and IL-6 in the implant removal group compared to healthy control ankles. However, levels of MMP-1, MMP-3, and CTX-2 were significantly higher in the control ankle group (Table 2). Table 2 notes: IQR: interquartile range. Data in bold indicate a significant difference in cytokine levels. Acute fracture ankles had significantly elevated levels of nearly all cytokines except IL-1α and TGF-β1 compared to levels in implant removal ankles. The cytokine levels in implant removal and healthy contralateral ankles were overall significantly different; however, the ratio illustrated a bidirectional pattern. The cytokine levels in implant removal and healthy contralateral ankles were nearly all below the lower limit of detection, as illustrated in Additional file 1.

White Blood Cell Analysis of Synovial Fluid in Fractured Ankle Joints and Blood Samples. White blood cell analysis of SF in acutely fractured ankle joints showed an initial upregulation of neutrophils after injury. The neutrophil level then decreased over the following days. In contrast, the monocyte level was initially low and increased over the following days (Figure 2). The number of leucocytes in serum remained constant after acute ankle joint fracture (R² < 0.0001). However, the number of leukocytes in SF was initially high and then decreased over the following days (R² = 0.842) (Figure 3).

Discussion

To the best of our knowledge, this is the largest study to report on cytokine levels in acute intra-articular ankle fractures using the contralateral joint for comparison, and the first study to report an association between cytokine levels in the synovial fluid and plasma. We found that synovial fluid in ankles with acute intra-articular fracture had elevated levels of several pro-inflammatory cytokines (IL-1β, IL-2, IL-6, IL-8, IL-12p70, TNF-α, IFN-γ, MMP-1, MMP-3, and MMP-9) and of the anti-inflammatory cytokines IL-1RA, IL-4, IL-10, and IL-13. In addition, synovial levels of ACG, IL-12p70, IFN-γ, IL-4, and bFGF in acute ankle fractures correlated with the levels found in serum. Finally, we found that IL-4 and IL-6 levels were still upregulated in previously fractured joints nine months after surgery. This study may supplement our knowledge about the regulation of the initial inflammatory cascade after acute intra-articular ankle fracture and the longer-term conditions in the joint space that follow. In addition, it supports the current understanding of the interactions behind the development of PTOA and offers potential avenues for supplementing the surgical treatment of fractures [11,25,26]. However, with a small sample size, these results need to be interpreted with caution.
IL-1β and TNF-α have previously been identified as principal mediators of the acute inflammatory response after joint trauma, and they are upregulated by several cell types including chondrocytes, cells forming the synovial membrane, and infiltrating inflammatory cells such as mononuclear cells [27]. IL-1β and TNF-α upregulation stimulates cartilage matrix degradation by interfering with the synthesis of collagen type 2 and ACG and by inducing a group of metalloproteinases (MMPs) that have a destructive effect on cartilage components [26]. Upregulation of IL-1β and TNF-α in synovial fluid also stimulates the synthesis of other pro-inflammatory cytokines, including IL-6 and IL-8. Production of IL-6 and IL-8 is mainly implemented by chondrocytes and macrophages and plays a major role in OA [26,27]. Our data show that the concentrations of IL-6, IL-8, and IL-1RA increased up to 400-fold following an acute ankle fracture when compared to levels in the healthy joints. Interestingly, nearly all cytokines returned to non-measurable levels nine months after injury, except for IL-4 and IL-6. This differs from a previously published study by Adams et al. [20], who found that not only IL-6 levels but also IL-8, MMP-1, MMP-2, and MMP-3 levels continued to be elevated in the ankle joint six months after injury. This may be explained by differences in patient characteristics and the methodology used, but it may also indicate that in a subgroup of patients, the imbalance of metabolism leads to long-term inflammation in the joint, resulting in cartilage degradation. We also found elevated levels of anti-inflammatory cytokines, such as IL-1RA, IL-4, IL-10, and IL-13, in the acutely fractured ankle joints. IL-10 is a potent anti-inflammatory cytokine that has shown a chondroprotective effect in the course of OA by stimulating the synthesis of type 2 collagen and ACG [28]. Furthermore, IL-10 is involved in the inhibition of MMPs and of chondrocyte apoptosis, as well as in the downregulation of IL-1β and TNF-α by stimulating the production of IL-1RA [27]. IL-1RA can bind to the IL-1R receptor, thereby blocking the connection to IL-1β and indirectly inhibiting the pro-inflammatory effect of IL-1β. In this study, we found that after acute ankle fracture, the levels of nearly all cytokines were elevated simultaneously after injury. While some cytokines remained at the same increased level during the first two weeks (IL-1β, IL-2, IL-8, TNF-α, TNF-β, IL-1RA, IL-10, IL-13, ACG, CTX-2, TGF-β1, and TGF-β2), others decreased (IL-1α, IL-6, IL-12p70, IFN-γ, MMP-9, IL-4, and bFGF), and some cytokines even increased (MMP-1 and MMP-3). This reflects the dynamic process of inflammation interfering with cartilage metabolism. Interestingly, we found that the levels of the anti-inflammatory cytokines IL-1RA and IL-10 remained constant, while the levels of the pro-inflammatory cytokines IL-1α and IL-6 decreased during the first two weeks. This may indicate a downregulation of the inflammatory cascade during this period. However, some other pro-inflammatory cytokines remained elevated or even increased (MMP-1, MMP-3). This could simply be because MMPs lie downstream in the inflammatory cascade compared to initiators such as IL-1β. The ratio of pro- and anti-inflammatory cytokine levels in the joint at a certain time point after injury may play an important role in the imbalance of metabolism leading to PTOA development. However, this cannot be clarified in this study.
Fracture Severity Does Not Correlate with Cytokine Levels in Synovial Fluid. Fracture severity has previously been correlated with an increased risk of PTOA development [29,30]. However, this study shows nearly no correlation between fracture severity and protein levels, except for MMP-1. These findings are similar to those of the study by Adams et al. [20], who reported cytokine levels and fracture lines in ankle joints, while a study on tibia fractures did find some correlation between cytokine levels and the degree of fracture comminution [25]. This could indicate that there are, not unexpectedly, also biomechanical factors that determine the outcome after fracture treatment. It is possible that the inflammatory response does not differentiate much between severe and simple fracture patterns. Furthermore, fracture classification based on X-ray images may not be the most appropriate measure of fracture severity and of the energy level affecting the joints.

Correlation of Cytokines in Plasma and Ankle Joint. We believe that no previous study has examined whether cytokine elevation in acutely fractured ankle joints is reflected in elevated plasma cytokine levels. It is possible that the initial impact of an ankle fracture has a systemic influence, which would result in a correlation between local and systemic levels. This may be limited, however, as each cytokine may be secreted and degraded at other locations. In this study, we found that elevated ACG levels in the fractured ankle joint were positively correlated with plasma ACG levels. ACG has been shown to be a reliable marker for cartilage degradation in previous studies [31]. The other cytokines showed no significant or compelling correlations, however, perhaps due to the timing of sample collection. We collected both blood and SF samples prior to surgery, but an elevation of cytokines in the fractured joint may not immediately cause an elevation of cytokines in the blood. Sequential blood sampling after surgery could reveal other results. The level of cytokines in the fractured ankle joint changes dynamically after joint injury. However, the ratio of pro- and anti-inflammatory cytokine levels in the joint may play an important role in the imbalance of metabolism leading to PTOA development. Evaluating the level of cytokines in the ankle joint by SF aspiration is associated with a greater risk of joint infection and discomfort for the patients. Therefore, if a correlation of cytokines between the ankle joint and plasma were found, a blood sample could be used as a substitute. Unfortunately, in our study, only ACG was positively correlated with plasma ACG levels. A positive correlation of cytokines could indicate which cytokine might be suitable as a diagnostic biomarker in the future.

Limitations. A limitation of this study is that we included a relatively small number of implant removal patients and ankle fracture patients for cell analysis. Furthermore, implant removal was performed only when patients had symptoms and was thus not routine. The procedure of SF aspiration from the ankle joints is quite complex, and even though a limited number of surgeons performed this procedure, possible failures cannot be excluded. Finally, the cross-sectional study design meant that we were not able to monitor temporal concentrations in the ankle joints and serum. Therefore, we included the second time point related to implant removal.
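For the joint-versus-plasma analysis described above, the Spearman rank correlation can be computed as in the sketch below; the values are hypothetical placeholders, not the study's measurements.

```python
import numpy as np
from scipy.stats import spearmanr

acg_sf     = np.array([310.0, 150.2, 560.9, 240.3, 410.8, 190.5])  # joint levels
acg_plasma = np.array([12.1,   7.4,  19.8,   9.9,  15.2,   8.1])   # blood levels

rho, p = spearmanr(acg_sf, acg_plasma)   # rank-based, robust to non-normality
print(f"Spearman rho={rho:.2f}, p={p:.4f}")
```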
Clinical Relevance. This study contributes information about the interaction of the initial inflammatory cascade after acute intra-articular ankle fracture and the longer-term conditions in the joint space that follow. It supports the current understanding of the mechanism behind the development of PTOA and is a further step towards supplementing the surgical treatment of fractures.

Data Availability

All data are hosted online at OPEN (Open Patient data Explorative Network) and are available from the corresponding author on reasonable request and approval from The Regional Committees on Health Research Ethics for Southern Denmark and the National Danish Data Protection Agency: https://www.sdu.dk/da/om_sdu/institutter_centre/klinisk_institut/forskning/forskningsenheder/open.Aspx.

Ethical Approval

This cross-sectional study is registered at ClinicalTrials.gov (NCT03769909), was approved by the local committee on health ethics (The Regional Committees on Health Research Ethics for Southern Denmark: J.No. S-20170139), and was reported to the National Danish Data Protection Agency (17/28505). Written informed consent was obtained from all patients.

Disclosure

None of the sponsors or funders had any specific role in developing the protocol or in finalizing the manuscript.

Conflicts of Interest

All authors declare no competing interests.

Authors' Contributions

TMP and HS performed the study design, data acquisition, statistical analysis, and writing of the manuscript. LHF and KLL performed the chemiluminescence analysis and contributed to writing the manuscript. SO performed the study concept and design, data interpretation, and writing of the manuscript. All authors read and approved the final manuscript.
4,638.2
2021-01-08T00:00:00.000
[ "Medicine", "Biology" ]
MapReduce Performance in MongoDB Sharded Collections

In the modern era of computing and the countless online services that gather and serve huge data around the world, processing and analyzing Big Data has rapidly developed into an area of its own. In this paper, we focus on the MapReduce programming model and its associated implementation for processing and analyzing large datasets in a NoSQL database such as MongoDB. Furthermore, we analyze the performance of MapReduce on sharded collections with a huge dataset, and we measure how the execution time scales when the number of shards increases. As a result, we try to explain when MapReduce is an appropriate processing technique in MongoDB and to give some measures and alternatives to consider when MapReduce is not the appropriate choice.

I. INTRODUCTION

We live in the era of the Information Age. Everything is connected, and online services are increasingly oriented towards gathering user data. Major companies process hundreds of petabytes daily on their servers, and the computations have to be distributed across hundreds or thousands of machines in order to finish in a reasonable amount of time. The issues of how to parallelize the computation, distribute the data, and handle failures obscure the originally simple computations behind large amounts of complex code for dealing with them. With these problems in mind, engineers borrowed ideas from functional programming languages, using the map and reduce primitives as an abstraction that expresses the simple computations while hiding the complex details of parallelization, fault tolerance, data distribution, and load balancing; hence MapReduce was introduced. The main purposes of this paper are:
• Analyzing MongoDB sharding capabilities
• Explaining what MapReduce is and why to use it
• Presenting the results of using MapReduce on sharded collections by the number of shards used.

In this paper we measure MapReduce time performance in MongoDB and try to find out how the MapReduce execution time changes with an increased number of MongoDB shards. We describe the environment, define a mini cluster of three virtual machines on which MongoDB is run, and experiment with a collection with a relatively large number of documents. At the end, the results and conclusions are shown, answering questions such as when the use of MapReduce within MongoDB is a good option, what needs to be done to speed up the processing, and what alternatives to consider.

The rest of this paper is organized as follows: Section 2 presents a summary of related work in this area. Section 3 contains a short description of MongoDB, where the main points are the sharding techniques and possibilities. Section 4 provides the testing results and a MapReduce performance evaluation implemented on MongoDB using different numbers of shards. Finally, Section 5 provides some conclusions.

II. RELATED WORK

Big companies started facing issues of how to handle and process the huge amounts of data they were receiving. Google, as the pioneer in search technologies, needed computations that process large amounts of data such as crawled documents, web request logs, the graph structure of web documents, etc. According to the authors Jeffrey D. and Sanjay G., Google needed a simple solution that was easy to understand, fault tolerant, cheap, and reliable. In their paper [1] they analyze MapReduce in large, highly scalable clusters where hundreds of programs are run and thousands of MapReduce jobs are executed, which is Google's daily reality.

The authors Smita A.
et al., in their paper [2], introduce an explanation of MongoDB, its features, advantages, and disadvantages. In particular, they address MongoDB features such as MapReduce, autosharding, MongoDump, etc. They continue with analyses of dealing with small and large amounts of unstructured/semi-structured data and conclude that if the amount of data is big and permanently increasing, and high performance and availability are required, then MongoDB should be considered as the database to use.

The authors Zeba Khanam and Shafali Agarwal, in their paper [3], explore large-scale data processing using MapReduce and its various implementations to help the database, research, and other communities develop a technical understanding of the MapReduce framework. They continue by exploring different MapReduce implementations: the most popular Hadoop implementations and other similar implementations on other platforms, comparing them on different parameters. The authors A. Elsayed et al., in their paper [4], look back at MapReduce and try to find out its strengths and weaknesses, how it deals with failures, and the enhancements that could be made to it. Furthermore, they argue that MapReduce does not match the expressiveness of query languages like SQL, and that it needs improvement on limitations such as collocation of related data, implementing efficient iterative algorithms, and managing data skew.

Another study shows an attempt to analyze user data with MapReduce in real time [5]. The authors Ian B. and Joe D. show a system which uses the information state collected during a person-machine conversation and a case-based analysis to derive preferences for the person participating in that conversation. They use MapReduce in their processing model to achieve near real-time generation of user preferences regardless of total case memory size. The authors Michael T. G. et al., in their paper [6], study the MapReduce framework from an algorithmic standpoint and demonstrate the usefulness of the approach by designing and analyzing efficient MapReduce algorithms for fundamental sorting, searching, and simulation problems.

At a time when not only big data but also fast data have exploded in volume and availability, the authors Wang L. et al., in their paper [7], address the key challenges that MapReduce is not well suited for and provide solutions using MapUpdate, a framework like MapReduce but specifically developed for fast data.

In the studies [8]-[10], MongoDB, NoSQL databases and the reasoning behind choosing them, query optimizations, and comparisons between NoSQL and SQL databases are analyzed. The authors Shuai Z. et al., in their paper [11], analyze MongoDB clusters and introduce how to partition spatial data to distributed nodes in a parallel environment, using the spatial relationships between features.
Mohan and Govardhan, in their papers [12], [13], analyze MapReduce as a paradigm and combine it with online aggregation as used in MongoDB. Online aggregation, according to them, is useful when data are collected from massive clusters, and it can be very advantageous when the data are collected and estimated from sensors, various social media, or Google search. Combining these two areas (MapReduce and online aggregation), they introduce a new methodology that uses the MapReduce paradigm along with online aggregation. The authors of [14] evaluate the combination of the MapReduce capabilities of Hadoop with the schemaless database MongoDB, as implemented by the mongo-Hadoop plugin. Their study provides insights into the relative strengths and weaknesses of using the MapReduce paradigm with different storage implementations under different usage scenarios. They conclude that, in general, if the workload uses MongoDB as a database that occasionally needs to serve as a source of data for analytics, then MongoDB is an appropriate solution; however, it is not appropriate when using MongoDB as an analytics platform that sometimes must act like a database. They also show that using Hadoop for MapReduce jobs is several times faster than using the built-in MongoDB MapReduce capability, due to the Hadoop file management system (HDFS).

III. MONGODB

Document-oriented databases are designed to work without the use of SQL, using a different language to communicate instead. A document can have any number of fields listed in any order, like in a relational database. Unlike in relational databases, a row inside a document-oriented database need not have the same number and types of fields as any other row inside the database. This is because there is no schema that restricts a row to be identical in the number and sequence of fields. While there are many document-based databases, MongoDB stands out due to its high performance and ease of setup.

MongoDB as a document-based database uses BSON to store data, which is the binary-encoded serialization of the JSON format. JSON currently supports the following data types: string, number, Boolean, array, and object. BSON supports string, int, double, Boolean, date, byte array, object, array, and others. BSON's only restriction is that data must be serialized in little-endian format. Since BSON is the format in which the data are sent/retrieved and stored, it has to be decoded to text. In an analogy with a relational database, a table in MongoDB is a collection of documents and a database is a group of collections. A document is the most basic entity where MongoDB stores information, similar to a row inside a table in a relational database, except that the data structure is schemaless. One of the best features of a document is that it may contain other documents embedded inside it, as illustrated in the sketch below.

Indexes in MongoDB work almost the same as in relational databases. MongoDB uses a B-Tree to implement indexes and also allows two-dimensional geospatial indexing, which is very useful when dealing with location-based services.
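For illustration, the following sketch shows how a schemaless document with an embedded document maps naturally onto a Python dict; the field names here are made up.

```python
# A MongoDB document as a Python dict; one document may embed another.
city = {
    "_id": 2988507,                       # stored as a BSON int
    "asciiname": "Paris",                 # BSON string
    "population": 2138551,
    "location": {                         # embedded document
        "country": "FR",
        "coordinates": [48.8566, 2.3522], # BSON array of doubles
    },
}
# Two documents in the same collection need not share fields or field order:
village = {"_id": 1, "asciiname": "Smallville", "country": "US"}
print(city["location"]["country"], village["asciiname"])
```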
A. Sharding

MongoDB solves the problem of huge amounts of data effectively with horizontal data distribution, known as horizontal scaling. Horizontal scaling has proved to be a very good solution and means distributed and balanced work across machines. This way of working in MongoDB is known as sharding. Sharding in MongoDB is designed to partition the database into smaller pieces accommodated on different machines, so that no single machine has to store all the data or handle all the load. MongoDB handles sharding easily and transparently, which means that the interface for querying a sharded cluster is exactly the same as the interface for a single MongoDB instance. Usually, there are collections that need to stay together and others that may need to be distributed across several machines. So, not all collections need to be sharded, but only those whose data need to be distributed over some shards to improve read and/or write performance. All unsharded collections are held in only one shard, called the primary shard (e.g., Shard A in Fig. 1). The primary shard can also contain sharded collections. In the case of a more complex application, we should shard only the collections that would benefit from the added capacity of sharding while leaving the smaller collections unsharded for simplicity. Because sharded and unsharded collections can be accommodated in the same system, all of this works together, completely transparently to the application. In fact, if we later find that an unsharded collection is growing larger and larger, we can shard it; it is allowed, at any time, to enable sharding on a collection [15].

Manual sharding can be done with almost any database software. Manual sharding is when an application maintains connections to several different database servers, each of which is completely independent. The application manages storing different data on different servers and querying the appropriate servers to get data back. This approach works well, but there are difficulties when nodes need to be added to or removed from the cluster, or in the face of changing data distributions or load patterns.

MongoDB supports autosharding, which tries to abstract this architecture away from the application and simplify the administration of such a system. MongoDB allows the application to ignore, to some extent, the fact that it isn't talking to a standalone MongoDB server. On the operations side, MongoDB automates data balancing across shards and makes it easier to add and remove capacity.

A sharded cluster consists of shards, mongos routers, and config servers, as shown in Fig. 2.
B. Shard Key

To shard a collection, we have to choose at least one field which will be used to split up the data. This field (or fields) is called a shard key. Once there are a few shards, it is almost impossible to change the shard key, so it is important to choose a correct one from the start. To choose a good shard key, good knowledge of the workload and of how the shard key is going to distribute the application's requests is needed, and this is often difficult to anticipate. There are three most common distribution patterns for splitting the data: ascending key, random, and location-based. There are also other types (with other key types), but most of them fall into one of the mentioned categories:

Ascending key distribution: The shard key field is usually of the data type Date, Timestamp, or ObjectId. With this pattern, all writes are routed to one shard. MongoDB maintains the distribution and spends a lot of time migrating data between shards to keep the data distribution relatively balanced across the shards. This pattern shows weaknesses in write scaling.

Random distribution: This pattern is more appropriate when the fields taken for the shard key do not have an identifiable pattern within the dataset, for example if the shard key includes a username, UUID, email address, or any field whose value has a high level of randomness. This is a preferable pattern for write scaling, since it enables a balanced distribution of write operations and data across the shards. However, this pattern performs poorly for query isolation if the critical queries must retrieve large amounts of "close" data based on range criteria, in which case the query will be spread across most of the shards of the cluster.

Location-based distribution: The idea behind the location-based data distribution pattern is that documents with some location-related similarity will fall into the same range. The location-related field could be a postal address, IP, postal code, latitude and longitude, etc.

MongoDB supports three types of sharding strategies:

Range-based sharding: MongoDB divides the dataset into ranges determined by the shard key values.

Hash-based sharding: MongoDB creates chunks via hash values computed from the values of the shard key field. In general, range-based sharding provides better support for range queries that need query isolation, while hash-based sharding supports write operations more efficiently.

Tag-aware sharding: users associate shard key values with specific shards. This type of sharding is usually used to optimize the physical locations of documents for location-based applications. In Table I, guidance on how to select the shard key is shown.
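A minimal sketch of the idea behind hash-based sharding: a hash of the shard-key value selects the shard, spreading documents evenly regardless of any pattern in the keys. The hash function and shard count here are illustrative; MongoDB uses its own hashing internally.

```python
import hashlib

def shard_for(key_value, n_shards):
    """Pick a shard from a hash of the shard-key value."""
    digest = hashlib.md5(str(key_value).encode()).hexdigest()
    return int(digest, 16) % n_shards

counts = [0, 0, 0]
for doc_id in range(469660 // 1000):      # a scaled-down sample of document ids
    counts[shard_for(doc_id, 3)] += 1
print(counts)  # roughly equal chunk counts across the three shards
```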
C. MapReduce

MapReduce is a programming model capable of processing a huge amount of data with a parallel, distributed algorithm on a cluster. It is a programming paradigm that allows for massive scalability across hundreds or thousands of servers in a Hadoop cluster. MapReduce is also a powerful and flexible tool for aggregating data, solving some problems that are too complex to express with the aggregation framework's query language. In our case we use MapReduce with JavaScript as its "query language" to express arbitrarily complex logic.

MapReduce processes different problems across large datasets using a large number of computers (or computing nodes) in parallel. Basically, it takes a set of input key/value pairs and produces a set of output key/value pairs [15], and this operation is executed in three steps. Map is the first step: it takes the input pairs, applies the "map" function at each node, and writes the temporary output. To prevent the same data being processed twice, a master node ensures that only one copy of redundant input data is processed. Shuffle is the second step, where the shards redistribute the data based on the output keys, reaching a stage where all data with the same key value belong to the same shard. Finally, reduce is the last step, which takes the shuffled data and processes each group of data per key.

MapReduce uses a finalize function to clean up the temporary results produced by the last reduce phase and to manipulate the MapReduce output. The finalize function is called before the MapReduce output is saved to a temporary collection. Returning large result sets is less critical with MapReduce, so the call of the finalize function is a good chance to take averages or remove temporary or unnecessary information in general [16]. MongoDB allows the user to define which shards will execute the map function, the shuffle, and the reduce; we can use the same shards for the map function and the reducer function, or define other shards to do that job.

By default, MongoDB creates a temporary collection while MapReduce processes the data. The temporary collection name is unlikely to clash with an existing collection name: it is a dot-separated string containing mr, the name of the collection being processed, a timestamp, and the job's ID in the database. It looks something like mr.geonames.1525765769.2. MongoDB automatically destroys this temporary collection when the job is finished and/or the MapReduce connection is closed. To keep the temporary collection after the job finishes and the connection closes, we have to set keeptemp to true as an option parameter. In case the result is to be kept and used, MongoDB also allows naming the output collection via the out option, which takes the desired collection name as a string; if out is specified, there is no need to specify keeptemp, since it is implied. Even if a name for the output collection is specified, MongoDB still uses the auto-generated collection name for MapReduce's intermediate steps. When the computation has finished, the temporary collection is automatically and atomically renamed from the auto-generated name to the chosen name. This means that if MapReduce is run multiple times with the same target collection, it will never use an incomplete collection in performing operations. The output collection created by MapReduce is a normal collection, which means that there is no problem with doing a MapReduce on it, or a MapReduce on the results from that MapReduce.
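The three phases just described can be mirrored in a few lines of pure Python. This sketches the programming model only, not MongoDB's distributed implementation, and the function names are illustrative.

```python
from collections import defaultdict

def map_fn(doc):
    yield doc["country"], 1                  # emit(key, value)

def reduce_fn(key, values):
    return sum(values)

def map_reduce(docs, map_fn, reduce_fn, finalize=None):
    shuffled = defaultdict(list)
    for doc in docs:                         # map phase
        for k, v in map_fn(doc):
            shuffled[k].append(v)            # shuffle: group values by key
    out = {k: reduce_fn(k, vs) for k, vs in shuffled.items()}   # reduce phase
    if finalize:                             # optional cleanup of the results
        out = {k: finalize(k, v) for k, v in out.items()}
    return out

docs = [{"country": "XK"}, {"country": "MK"}, {"country": "XK"}]
print(map_reduce(docs, map_fn, reduce_fn))   # {'XK': 2, 'MK': 1}
```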
IV. MAPREDUCE PERFORMANCE ANALYSIS

To analyze MapReduce performance under MongoDB, we created a mini cluster of a few virtual servers on which MongoDB and the geonames database are run. The geonames database is an open-source database, taken here as an example. It contains detailed information on the world's countries, such as population, size, geolocation, rivers, villages, capitals, etc. [17]. It contains around 11 million records, rendered as a tab-separated text file. For easier manipulation, we converted those data to CSV format, which can easily be imported into Mongo. We scaled the database down to only the documents with population larger than zero, which left 469660 documents. From the CSV file we took into consideration and imported only geonameid as id, asciiname, country, and population.

Next, we built a sharded cluster running MongoDB 3.2 under Ubuntu 16.04, based on Fig. 2, with three virtual machines named mongo-c1, mongo-c2, and mongo-c3, one VM for the configuration server, and one query server VM. We indexed the id with "hashed", which allows us to create a shard key on the hashed id and ensures an even distribution of the documents of our geonames collection. The hostnames and IP addresses of the VMs were assigned accordingly.

In our tests a simple map function was used, which finds the country code and returns a value of 1, and a reduce function which iterates through the values to count the number of documents in the collection belonging to each country. The number of documents included in our tests was 469660, and the id was used as the shard key to distribute the documents to the different shards.

On the above-mentioned architecture, we executed three tests. In the first test we used only one shard (mongo-c1). The number of documents was 469660 and the total import time was 11.28. In the second test, we used the same number of documents but sharded onto two shards (adding the second shard, mongo-c2). The total import time in this case was 08.25. The collection was sharded successfully, and the achieved distribution was as follows: 234349 documents on the first shard (mongo-c1) and 235311 documents on the second shard (mongo-c2). In the third test we included another shard (mongo-c3); the same number of documents was distributed onto three shards. In this case the total import time was 8.47 and the collection was successfully sharded as follows: 156646 documents on the first shard (mongo-c1), 156693 documents on the second shard (mongo-c2), and 156321 documents on the third shard (mongo-c3). We executed the same MapReduce job (with the same map and reduce functions) three times for each test, and the results are shown in Table II. To better express the dependence of the MapReduce job execution time on the number of nodes used, i.e., on the number of shards across which the documents are distributed, Fig. 3 shows a curve which clearly expresses the decrease of the execution time as the number of shards increases.
Next, we performed the last test again, but this time with a slightly more complex shard key: we chose the pair (id, population). The total import time was 8:08. Since the shard key cannot be changed afterwards, we dropped the previous collection shards and re-created the shards using the new shard key. With the new shard key, the sharding placed 349699 documents on the first shard, 119947 documents on the second shard, and only 14 documents on the third shard; that is, it produced a non-uniform distribution. We executed the same MapReduce job as in the previous tests, and the results are shown in Table III. Regarding the above tests, we can clearly conclude that MongoDB MapReduce performs better and faster as the number of shards increases. The only trouble, as shown in the last test, is that we have to be very careful in choosing the shard key: we need an appropriate shard key (one that provides a documents distribution as uniform as possible) that will not slow MapReduce down.

V. CONCLUSIONS

Big data has indeed reshaped the way we deal with data. The problems that arise when trying to manipulate huge amounts of data grow every day, and solutions are sought by scientists and companies alike. MongoDB and other NoSQL databases have seen growth by providing an alternative to SQL databases. Their design, high availability, and fault tolerance have attracted usage in projects where SQL databases cannot be used, such as handling huge amounts of unstructured raw data.

MapReduce as a framework is designed to solve many problems involving huge amounts of data; MapReduce has little significance when dealing with small data, but it has an impact when the collections grow. Our tests also clearly show that as the number of shards scales up, MapReduce jobs are executed faster, especially if we take precautions and use a good shard key. However, the MongoDB 3.2 documentation [18] recommends avoiding the use of MapReduce; instead, the aggregation pipeline is preferred for better performance.

Fig. 3. Average time of MapReduce job execution as a function of the number of shards, executed on 469660 documents.
TABLE I. KEY CONSIDERATIONS FOR SHARD KEY SELECTION REGARDING THE QUERY ISOLATION AND WRITE SCALING REQUIREMENTS.
TABLE III. MAPREDUCE JOB EXECUTION TIME WITH THREE SHARDS USED; NUMBER OF DOCUMENTS USED IS 469660, SHARD KEY (ID, POPULATION).
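For reference, given the documentation's recommendation mentioned in the conclusions, the aggregation-pipeline equivalent of the count-by-country MapReduce job is a single $group stage; a minimal sketch, assuming the same hypothetical geo.geonames collection:

```python
from pymongo import MongoClient

client = MongoClient("mongodb://mongos-host:27017")
db = client.geo  # hypothetical database name

# $group counts documents per country, replacing map/reduce/finalize.
pipeline = [
    {"$group": {"_id": "$country", "count": {"$sum": 1}}},
    {"$sort": {"count": -1}},
]
for doc in db.geonames.aggregate(pipeline):
    print(doc)
```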
5,367.6
2018-01-01T00:00:00.000
[ "Computer Science" ]
Examining the Asymmetric Effects of Third Country Exchange Rate Volatility on Trade between the US and the EU: This paper aims to examine the symmetric and asymmetric effects of third country exchange rate volatility on the trade flow between the US and EU from January 2003 through March 2021. The monthly disaggregated data of the top twelve export and import industries form the sample frame. We find that separating increased volatility from declines and introducing a nonlinear adjustment to the volatility yields a more significant outcome than symmetric analysis. Different industries exhibit distinctive behaviors regarding exchange rate risk, and the third country effect plays a vital role in trade. Moreover, increased CNY/USD real exchange rate volatility increases bilateral trade between the US and EU.

Introduction Cushman (1986) proposed that bilateral trade is affected by the direct risks of bilateral exchange rates and the indirect risks of third country exchange rates. Empirical findings have shown that ignoring third country effects leads to exchange rate volatility prejudice in bilateral trade. Academia did not pay much attention to this issue until Bahmani-Oskooee and Hegerty (2007) emphasized the third country effects when researching the impact of exchange rate volatility on trade. Empirical studies on the third country effect then became more widespread. Following in the footsteps of Cushman (1986), Bahmani-Oskooee et al. (2013) found strong evidence of a "third country" exchange rate volatility effect by comparing the results with and without consideration of a third country. Other studies have revealed that the third country exchange rate risk does indeed play a significant role in bilateral trade (Bahmani-Oskooee et al. 2015). Wang et al. (2016) noticed that exports from China to the US (or Europe or Japan) rise concurrently with an increase in the third country exchange rate volatility. In similar research, Tunc et al. (2018) found that high volatility in the external exchange rate, compared to the volatility of the bilateral exchange rate between the exporting country and the destination country, leads the exporting country to shift its exports from the third country to the bilateral country. Moreover, Usman et al. (2021) confirmed that nonlinear models generate more significant results in both the short and long run. Other empirical studies suggest that the asymmetric assumption alone is insufficient (e.g., Choudhry et al. 2014). We also note that it is crucial to consider CNY/USD exchange rate fluctuations while evaluating bilateral trade between the US and EU. In 2020, the EU was the largest trading partner of the United States, followed by China (647.6 billion USD of trade flows between the US and EU, compared to 560.1 billion USD between the US and China; US Census Bureau). Exchange rate volatility significantly impacts international trade, since it influences trade decisions and overall economic performance. Therefore, the prolonged debate on the influence of exchange rate volatility on international trade continues. Previous studies on this issue have reached different verdicts through different assumptions that cannot be validated in all cases. The primary rationale for the idea that exchange rate volatility decreases international trade is that volatility increases trade risk between countries. An early study by Arize (1997) argued that an increase in exchange rate volatility would decrease trade with foreign countries.
The effect of exchange rate fluctuations on imports and exports is widely accepted. Similar studies, for example those of Sukar and Hassan (2001), Ozturk (2006), Hayakawa and Kimura (2009), Ekanayake et al. (2011), Yakub et al. (2019), Dada (2020), Sugiharti et al. (2020), and Bahmani-Oskooee and Karamelikli (2021), reinforce this causal relationship. McKenzie and Brooks (1997) explored the effect of exchange rate volatility on trade between the United States and Germany. Their conclusions differ from those of previous works, as the effects of volatility were positive and statistically significant. Broll and Eckwert (1999) showed a positive impact of exchange rate volatility on export production in countries such as the US, where companies benefit from a large domestic market that permits them to compensate for exchange rate volatility more easily. The empirical findings of Perée and Steinherr (1989) and McKenzie (1998) also support exchange rate risks being positively related to trade. Earlier research thus showed either a negative or a positive link between exchange rate volatility and international trade, and some studies find an ambiguous connection between the two variables (e.g., Bredin et al. 2003; Wong and Lee 2016; Nyambariga 2017; Bahmani-Oskooee and Gelan 2018). The ambiguous impacts of exchange rate risk expressed in previous studies remain, as in papers by Serenis and Tsounis (2013), Bahmani-Oskooee and Harvey (2017), Senadza and Diaba (2017), Sharma and Pal (2018), and Bahmani-Oskooee and Nouira (2020). Traditionally, studies have used aggregated data in investigating the impact of exchange rate volatility on the trade of one country with the rest of the world or on the trade between two countries, including McKenzie and Brooks (1997) for the United States and Germany, Choudhry (2003) for Canada and Japan, and De Vita and Abbott (2004) for the effects of exchange rate volatility on US exports. Bahmani-Oskooee and Karamelikli (2019) employed aggregated data and found that the impact of exchange rate volatility on exports is insignificant; however, the volatility impact became significant when using disaggregated data, which may be due to aggregation bias. Recently, Shin et al. (2014) argued that traders' reactions to increased volatility could differ from their reactions to decreases. Chien et al. (2020) conducted an asymmetric analysis and found that the long-run asymmetric effect of exchange rate volatility showed far higher impacts on Taiwan's exports to Indonesia than on Taiwan's imports. Similar asymmetric results can be found in Bahmani-Oskooee and Arize (2020). The disagreement among previous studies may be due to different methodologies, data sources, and economic development patterns. However, most researchers have used aggregated data and symmetric assumptions, and only a few have employed disaggregated data (Bahmani-Oskooee and Wang 2007) or asymmetric analysis (Chien et al. 2020). Additionally, most studies have focused on the impact of the exchange rate fluctuations between a country and its relevant trading partners. However, a country has more than one trading partner in the real world, and bilateral trade flows can be interfered with by another important trade partner. Our research attempts to discover the asymmetric effects of real exchange rate volatility on trade flows while accounting for third countries.
We employ disaggregated data and focus on the three largest economies in the world: the United States, the European Union (EU), and China. Our first contribution in this paper is to add to the literature on the asymmetric effect of exchange rate volatility on international trade and to compare the nonlinear autoregressive distributed lag (NARDL) outcomes with linear autoregressive distributed lag (ARDL) outcomes. The second contribution allows traders to gain better insights into the role of third country exchange rate volatility in US-EU bilateral trade. To our knowledge, few studies have used asymmetric analysis to examine the effect of the CNY/USD real exchange rate volatility on trade between the US and EU. Lastly, our paper contributes to using disaggregated data in the NARDL model. We test industries' responses to exchange rate volatility and find that different industries exhibit distinctive behaviors regarding exchange rate risk. In order to achieve our goal, we provide the model specifications in Section 2. Section 3 describes the data sources and empirical results. Finally, we present the major conclusions.

Model Specifications Conventional ARDL models using bilateral trade aggregated data cannot distinguish between the different export and import effects of exchange rate changes across sectors because different industries are subject to different prices and trade contracts. They face different price rigidities and thus reveal relatively asymmetric effects of exchange rate volatility on trade flows at the industry level. Recently, Shin et al.
(2014) modified the ARDL model so that it could be used to assess the possibility of asymmetric effects of the exogenous variables on the dependent variable. A nonlinear ARDL model allows us to decompose the real CNY/USD exchange rate volatility into its positive and negative changes to examine the short-run and long-run asymmetric effects on trade flow between the US and EU. In the beginning, it is assumed that exports and imports are autoregressive processes that depend on lagged values of other economic variables. The critical variables are the importing country's income, the real exchange rate, the real exchange rate volatilities, and the lagged export and import volumes. Following McKenzie and Brooks (1997), we use the Industrial Production Index (IPI) as a proxy for income, and we use models by Bahmani-Oskooee et al. (2013) augmented with the volatility measure of a third country. The US export and import demand equations are Equations (1) and (2), where $EXP^{US}_{tj}$ is the export volume of commodity j from the US to the EU at time t, $IPI^{EU}_{t-i}$ is the EU income proxy at time t − i, $V^{CN}_{t-i}$ is the volatility of the real CNY/USD exchange rate at time t − i, ε and v are error terms, n1-n10 are the optimum lag orders of the variables, and $IMP^{US}_{tj}$ is the US import volume of commodity j from the EU at time t. Bollerslev (1986) proposed the generalized autoregressive conditional heteroskedasticity (GARCH) model, which extends the ARCH model's lag structure. It is more flexible and reasonable than the ARCH model, as the GARCH model can better capture the dynamics of the time series. Hence, we employ the GARCH model to estimate the real exchange rate volatility. The theoretical specification of a GARCH(p,q) model is

$$\Delta \ln REX_t = \eta + u_t, \qquad u_t \sim N(0, V_t) \quad (3)$$

$$V_t = a_0 + \sum_{i=1}^{q} a_i u_{t-i}^2 + \sum_{j=1}^{p} b_j V_{t-j} \quad (4)$$

where $REX_t$ follows a first-order autoregressive process, $\Delta \ln REX_t$ represents the change in the logarithm of the real exchange rate between time t − 1 and t, η is the mean, and $u_t$ is white noise that stands for the error term in period t and follows the normal distribution with volatility $V_t$. p and q are the optimum lag orders of the variables, as defined above. Equation (3) is the mean equation; Equation (4) is the variance equation and captures the volatility. We develop an ARDL model as in Pesaran et al. (2001) by extending Equations (1) and (2) to the cointegration model and present them as Equations (5) and (6), which are error-correction models. Trade flows are assumed to respond to changes in the independent variables in a symmetric manner. Recently, Shin et al. (2014) modified the linear models and assessed the possibility of asymmetric effects of the independent variables on the dependent variable. Our purpose is to assess the asymmetric effects of the third country exchange rate volatility. We follow Shin et al. (2014) and separate the increased and decreased volatilities through the partial sums

$$POS^{CN}_t = \sum_{i=1}^{t} \max(\Delta \ln V^{CN}_i, 0), \qquad NEG^{CN}_t = \sum_{i=1}^{t} \min(\Delta \ln V^{CN}_i, 0) \quad (7)$$

where $POS^{CN}$ is the partial sum of positive changes in $\Delta \ln V^{CN}$, which reflects only increased volatility, and $NEG^{CN}$ is the partial sum of negative changes in $\Delta \ln V^{CN}$, which reflects only decreased volatility. We return to Equations (5) and (6) and replace $\ln V^{CN}_t$ with $POS^{CN}_t$ and $NEG^{CN}_t$; the new error-correction models, Equations (8) and (9), follow, where α and b are the short-run coefficients and π and ρ are the long-run coefficients.

Data Sources This paper examines the asymmetric effects of the volatility of the third country (China) real exchange rate on commodity trade between the US and EU from January 2003 to March 2021.
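Before turning to the data, a minimal sketch of how the volatility series and the partial-sum decomposition in Equations (3), (4), and (7) can be constructed, assuming the arch and pandas packages; the CSV file name and column names are hypothetical:

```python
import numpy as np
import pandas as pd
from arch import arch_model  # assumes the 'arch' package is installed

# Hypothetical monthly series of the real CNY/USD exchange rate.
rex = pd.read_csv("rex_cny_usd.csv", index_col=0, parse_dates=True)["REX"]

# Mean equation: log-difference of the real exchange rate (x100 for scaling).
dlog_rex = 100 * np.log(rex).diff().dropna()

# Variance equation: GARCH(1,1) conditional volatility V_t.
res = arch_model(dlog_rex, mean="Constant", vol="GARCH", p=1, q=1).fit(disp="off")
vol = pd.Series(res.conditional_volatility, index=dlog_rex.index, name="V_CN")

# Shin et al. (2014) decomposition: partial sums of positive and
# negative changes in ln(V_CN), i.e., POS and NEG in Equation (7).
dlnv = np.log(vol).diff().dropna()
pos_cn = dlnv.clip(lower=0).cumsum().rename("POS_CN")
neg_cn = dlnv.clip(upper=0).cumsum().rename("NEG_CN")

print(pd.concat([vol, pos_cn, neg_cn], axis=1).tail())
```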
In order to avoid aggregation bias, the monthly disaggregated data of the top 12 export and import industries are employed. These top 12 export and import industries account for 81.90% of exports and 77.36% of imports. The data sources are reported in Table 1.

Empirical Results The advantage of ARDL is that there is no need to consider the order of integration of each variable. Whether variables are I(1) or I(0) does not affect the results; moreover, it is possible to identify which variables are independent and which are dependent (Bahmani-Oskooee and Aftab 2017). However, the presence of I(2) could cause spurious estimates. Therefore, we start with a unit root test and report the results. The Estimation Results of the Unit Root Tests There are several types of unit root tests in the literature. This paper applies the augmented Dickey-Fuller (ADF) test with trend and intercept, which evaluates whether variables are stationary or nonstationary, and adopts the Phillips-Perron (PP) test as a robustness check. Table 2 shows that our data are stationary either in level (for example, code 29) or in first difference (for example, REX), and that I(2) is absent. This means that we can apply ARDL in our estimates. Note: ***, **, and * denote significance at the 1%, 5%, and 10% levels, respectively. Our main purpose is to examine the asymmetric effects of the third country exchange rate volatility on trade flows using Equations (8) and (9). However, we also estimate the linear models in Equations (5) and (6) for comparison. Akaike's information criterion (AIC) is used to select the optimum model. The Estimation Results of the Linear ARDL Model for Exports We begin by estimating the linear export demand model (5) and report the results in Tables 3 and 4. Table 3. Short-run coefficient estimates of volatility for US exports to the EU with third country effects using the linear ARDL model (5); lags on ∆ln V EU and ∆ln V CN by industry code. Note: ***, **, and * denote significance at the 1%, 5%, and 10% levels, respectively. Table 3 reports the short-run coefficient estimates of real exchange rate volatility in the linear ARDL model for USD/EUR and CNY/USD. Of the 12 industries, 4 show at least 1 significant coefficient for USD/EUR exchange rate volatility, while 7 out of 12 sectors show at least 1 significant coefficient for CNY/USD exchange rate volatility. The third country effect generates more significant outcomes than bilateral exchange rate volatility, meaning that this effect plays an important role in US exports to the EU. Bounds testing is conducted to check whether a cointegration relationship exists. When the F-test is used to investigate cointegration, there are two thresholds: the upper critical bound I(1) and the lower critical bound I(0). If the F-statistic is higher than I(1), the null hypothesis of no cointegration is rejected. If it is lower than I(0), there is no significance, and the null hypothesis cannot be rejected. If the F-statistic falls between the lower and upper critical bounds, then we implement the error correction model (ECM t−1) and re-estimate Equations (5) or (6) and (8) or (9). If the estimated coefficient is significant, we use this as evidence of cointegration (Bahmani-Oskooee et al. 2014). In Table 4, the ARDL bounds testing results show that the F-statistics are significant at the 10% level for industry codes 84, 88, 85, 27, 87, 29, 39, and 98.
The results support a cointegration relationship between the variables in the long run, and the volatility of the real CNY/USD exchange rate has implications for trade between the US and EU. The third country real exchange rate volatility estimation results show that industry codes 27 (Mineral Fuel, Oil, etc.; Bitumen Substances; Minerals) and 98 (Special Classification Provisions, Nesoi) have significant negative effects, with coefficients of 1.16 and 0.43 in absolute value, respectively. This implies that if the CNY/USD volatility increases by 1%, the US decreases its exports to the EU of codes 27 and 98 by 1.16% and 0.43%, respectively. The bilateral real exchange rate volatility estimation results show that industry codes 85 (Electric Machinery, etc.; Sound Equipment; TV Equipment) and 87 (Vehicles, Except Railway or Tramway, and Parts, etc.) have significant negative coefficients. This suggests that higher bilateral exchange rate volatility significantly decreases their exports from the US to the EU. Arize (1997) also found that exchange rate risks are negatively related to trade. The coefficients of EU income in Table 4 show a significantly positive effect in eight industries at the 10% significance level, namely codes 84, 88, 85, 27, 87, 39, 98, and 38. This means that higher income boosts United States exports to the EU. The real exchange rate can be viewed as a relative price level compared to the trading partner, and local currency depreciation should boost exports. The long-run empirical results indicate that the real exchange rate has a significantly negative impact on US exports of code 29. The Estimation Results of the Linear ARDL Model for Imports Table 5 reports the short-run coefficient estimates of bilateral volatility for US imports from the EU with third country effects using the linear ARDL model. Six industries, coded 87, 90, 98, 88, 22, and 71, carry at least one significant coefficient for ∆ln V EU. However, eight industries, coded 84, 87, 29, 85, 27, 22, 71, and 39, are affected by third country exchange rate volatility. The estimation outcomes indicate that the third country effect has more significant results than bilateral exchange rate volatility, suggesting that the volatility of the real CNY/USD exchange rate cannot be ignored in trade between the US and EU. Table 5. Short-run coefficient estimates of volatility effects on US imports from the EU with third country effects using the linear ARDL model (6). Note: ***, **, and * denote significance at the 1%, 5%, and 10% levels, respectively. Table 6 presents the long-run estimates of US imports from the EU with third country effects in the linear ARDL model. The coefficients of the F-statistic and ECM t−1 are significant and meaningful at the 10% level for seven industries, i.e., 84, 87, 29, 27, 88, 22, and 71. The empirical results confirm a cointegration relationship among these variables in the long run. Table 6. Long-run coefficient estimates on US imports from the EU with third country effects using the linear ARDL model (6). Note: ***, **, and * denote significance at the 1%, 5%, and 10% levels, respectively. Long-Run Coefficient Estimates The coefficients of real third country exchange rate volatility are significant in three industries, with one positive effect, code 29 (Organic Chemicals), and two negative effects, codes 87 (Vehicles, Except Railway or Tramway, and Parts, etc.) and 22 (Beverages, Spirits, and Vinegar).
This shows that for import industry code 29, a 1% increase in the real CNY/USD exchange rate volatility stimulates US traders to increase their imports from the EU by 0.10%. However, for industry codes 87 and 22, a 1% increase in the CNY/USD volatility induces the US to lower its EU imports by 0.106% and 0.114%, respectively. The bilateral real exchange rate volatility estimation results show that industry code 87 (Vehicles, Except Railway or Tramway, and Parts, etc.) carries a significant positive coefficient. This implies that higher bilateral exchange rate volatility significantly increases US imports from the EU and that industry code 87 benefits from the increase in volatility. The income coefficients are significantly positive in 9 out of 12 industries, coded 84, 87, 90, 85, 27, 88, 22, 71, and 39. This means that higher income in the US raises these industries' imports from the EU. As for the long-run effects of the real exchange rate on US imports from the EU with third country effects in the linear ARDL model (6), one industry, coded 27, is found to have a significantly negative effect. This implies that USD depreciation against the EUR increases the import volume. This could be due to price inelasticity, or the increase in demand may exceed the decrease in price caused by the depreciation, increasing the total import volume from the EU instead. For comparison, we also conducted long-run export and import estimates of the bilateral trade between the US and EU without including third country exchange rate volatility and found the results quite similar to Tables 4 and 6. Hence, we can infer that including the third country effect does not distort the estimated effects of USD/EUR volatility. The Estimation Results of the Nonlinear ARDL Model for Exports To distinguish between the linear and nonlinear models, we next examine the nonlinear ARDL results for US exports and report the estimates in Tables 7-9. Table 7 reports the short-run coefficient estimates of volatility on US exports to the EU with third country effects using the nonlinear ARDL model. At least one significant short-run estimate related to the increased or decreased CNY/USD volatility measures is found in nine industries, coded 84, 88, 90, 85, 27, 87, 29, 39, and 98. This is two more industries than were found to be affected by the previous estimation of the linear export model. This increase in the number of affected industries may be due to the separation of positive and negative changes in the nonlinear model, which captures positive and negative exchange rate fluctuations simultaneously. Moreover, only three industries, coded 88, 30, and 87, are affected by USD/EUR volatility in this model. This means that the volatility of the real CNY/USD exchange rate plays an important role in trade between the US and EU. Table 8 reports the long-run coefficient estimates on US exports to the EU with third country effects using the nonlinear ARDL model. The positive and negative volatilities of the CNY/USD rate have a significant effect on three industries, which is one more than in the linear model. The second-largest industry is code 88 (Aircraft, Spacecraft, and Parts Thereof), with an export share of 11.59%. Its POS CN coefficient is −0.0814. This result confirms that a 1% increase in the CNY/USD volatility hurts bilateral export volumes by 0.08%.
Moreover, the NEG CN coefficient is −0.0954, implying that a 1% decrease in the CNY/USD volatility increases bilateral export volumes by 0.10%; for a similar study, see Usman et al. (2021). As for codes 27 (Mineral Fuel, Oil, etc.; Bitumen Substances; Minerals) and 98 (Special Classification Provisions, Nesoi), the POS CN coefficients are −0.4200 and −0.2158, which means that a 1% increase in the CNY/USD volatility hurts bilateral exports by 0.42% and 0.22%, respectively. The NEG CN coefficients are −0.4943 and −0.2296, which suggests that a 1% decrease in the CNY/USD volatility benefits bilateral exports by 0.49% and 0.23%, respectively. This implies that the third country exchange rate volatility positively correlates with US exports to the EU: if CNY/USD exchange rate fluctuations increase, US exports to the EU will be boosted in the long run. Similar results can be found in Tunc et al. (2018). The effect of the volatility of the USD/EUR rate on industry 87 (Vehicles, Except Railway or Tramway, and Parts, etc.) is significantly negative (coefficient = −0.0894). This means that a 1% increase in bilateral volatility leads to a 0.09% decrease in US exports to the EU. The expected sign of ln IPI EU is positive, i.e., increasing EU income leads the US to export more goods to the EU. Our empirical analysis shows significantly positive effects in ten industries, including code 84 (Nuclear Reactors, Boilers, Machinery, etc.; Parts). Local currency depreciation makes domestic goods cheaper than foreign goods, thus pushing up exports; we therefore expect the coefficient of ln REX to be negative. Our empirical results support this expectation, with significantly negative effects found in eleven industries, again including code 84. Table 9 reports the diagnostics of the nonlinear ARDL export demand model. The empirical bounds testing and error correction model results reveal a cointegration relationship among the variables: out of 12 industries, 11 are cointegrated and meaningful. LM is the Lagrange Multiplier statistic used to test autocorrelation, and a maximum of 12 lags is adopted. Most LM statistics are insignificant, implying that most models are autocorrelation-free. Most RESET statistics are insignificant, which means that most models are correctly specified. The CUSUM and CUSUMSQ tests are significant at the 5% critical bound and reject structural instability of the estimates in most industries. Furthermore, the Wald test provides strong evidence of asymmetry, rejecting the null hypotheses $\sum \hat{a}_{6i} = \sum \hat{a}_{7i}$ for the short run and $\pi_5 = \pi_6$ for the long run in model (8), and $\sum \hat{b}_{6i} = \sum \hat{b}_{7i}$ for the short run and $\rho_5 = \rho_6$ for the long run in model (9). The outcomes of Table 9 indicate that both the short-run and long-run Wald statistics are significant, at least at the 10% level, and the symmetry assumption can be rejected in eight and ten industries, respectively. The Estimation Results of the Nonlinear ARDL Model for Imports In this section, we present the results of the nonlinear ARDL import model (9), following the nonlinear ARDL export results above, and report them in Tables 10-12. Table 10 shows the short-run coefficient estimates of volatility on US imports from the EU with third country effects in the nonlinear ARDL model. At least one significant short-run estimate attached to the increased or decreased CNY/USD volatility measures is found in nine industries, coded 84, 87, 29, 90, 85, 27, 88, 22, and 39, which is one more than in the previous estimate of the linear import model.
Furthermore, three industries, coded 87, 98, and 88, are affected by USD/EUR volatility. It is clear that the impact of the CNY/USD volatility on US imports from the EU is considerable. The long-run coefficient estimates on US imports from the EU with third country effects in the nonlinear ARDL model are reported in Table 11. We notice that increased or decreased third country volatility of the CNY/USD rate significantly affects imports in seven industries, namely codes 84 (Nuclear Reactors, Boilers, Machinery, etc.; Parts), 87 (Vehicles, Except Railway or Tramway, and Parts, etc.), 29 (Organic Chemicals), 90 (Optic, Photo, etc., Medic or Surgical Instruments, etc.), 88 (Aircraft, Spacecraft, and Parts Thereof), 22 (Beverages, Spirits and Vinegar), and 39 (Plastics and Articles Thereof). By contrast, only three industries carry a significant estimate in the linear model. The asymmetric estimate results show that the significant coefficients are negative in codes 84, 87, 90, 88, 22, and 39. We illustrate them with the largest industry, code 84, which has an import share of 16.28%. The estimated POS CN coefficient is −0.0777, which implies that a 1% increase in the CNY/USD volatility reduces bilateral import volumes by 0.08%. Furthermore, the estimated NEG CN coefficient is −0.0885, suggesting that a 1% decrease in the CNY/USD volatility increases bilateral import volumes by 0.09%. For industry 29, both significant coefficients are positive. The POS CN coefficient is 0.0942, meaning that a 1% increase in the CNY/USD volatility boosts bilateral imports by 0.09%. Moreover, the estimated NEG CN coefficient is 0.0919, representing that a 1% decrease in the CNY/USD volatility increases bilateral import volumes by 0.09%. These estimates suggest that decreasing CNY/USD exchange rate volatility would cause the US to reduce imports from the EU. Wang et al. (2016) showed a similar result. The bilateral volatility coefficients of USD/EUR are significant in two industries, namely 87 (Vehicles, Except Railway or Tramway, and Parts, etc.) and 98 (Special Classification Provisions, Nesoi). Both significant coefficients are positive. This suggests that increased bilateral exchange rate volatility boosts US imports from the EU. The coefficient of US income is expected to be positive, and the empirical analysis shows that it is significantly positive in ten industries. The long-run real exchange rate estimates indicate that the USD's depreciation against the EUR increases the import volume. We can infer that, due to price inelasticity, the increase in demand is more significant than the decrease in the price caused by depreciation. 3.2.7. Diagnostic Test Results in the Nonlinear ARDL Model for Imports Table 12 reports the diagnostics of the nonlinear ARDL import demand model. In order to avoid spurious estimates, we perform bounds testing and use an error correction model. According to the F-statistics and the coefficients of ECM, the results reveal a meaningful cointegration relationship between these variables for all twelve industries, at least at the 5% level of significance. Most LM and RESET statistics are insignificant, meaning that most models are autocorrelation-free and correctly specified. The CUSUM and CUSUMSQ tests reject structural instability of the estimates in most industries. Our empirical symmetry test results show that the Wald tests are significant, at least at the 10% level.
Long-run symmetry is rejected in nine sectors and short-run symmetry in eight; most of our empirical analyses thus provide strong evidence in favor of the asymmetric assumption. Our sample runs from January 2003 through March 2021, a period that includes the Global Financial Crisis. In order to investigate the effect of the financial crisis, we added a dummy variable (D08) for 2008 to our estimates. We found that the estimated results with and without the dummy did not exhibit a significant difference in exchange rate volatility. Appendices A and B give the long-run coefficient estimates of US exports to and imports from the EU with dummy variables in the nonlinear ARDL models.

Conclusions This paper demonstrates the symmetric and asymmetric effects of third country real exchange rate (CNY/USD) volatility on commodity trade between the US and EU. Nonlinear ARDL models are employed and compared with the linear models. Our research reveals that the third country exchange rate risk plays a significant role in bilateral trade between the US and EU. The bounds-testing approach to cointegration and the error correction terms are significant, and the Ramsey statistics are insignificant in most industries, which provides robust evidence. This implies that bilateral trade has a long-run equilibrium relationship with income, the real exchange rate, and bilateral and third country exchange rate volatility, and that most models are specified correctly. Furthermore, substantial evidence of asymmetry according to the Wald test is identified in most industries. When comparing the linear and nonlinear ARDL models, our results show that the inclusion of the third country exchange rate effect does not disturb the results for bilateral trade, and that the nonlinear adjustment of the volatility yields more significant outcomes than the linear ARDL model, as it prevents positive changes from being canceled out by negative changes. Different industries exhibit distinctive behaviors regarding exchange rate risk. In the export sector, US exports to the EU are hurt by an increase in bilateral real exchange rate volatility, in which case the US exports less to the EU. Third country exchange rate volatility has significant negative impacts, and the coefficient of negative changes is greater than that of positive changes. This implies that an increase in the real exchange rate (CNY/USD) volatility will boost US exports to the EU in the long run. In the import sector, US imports from the EU benefit from increasing bilateral exchange rate volatility, and the US imports more goods from the EU to create more profit in the future. In addition, for most industries, increased CNY/USD exchange rate volatility raises the trade volume between the US and EU. The nonlinear ARDL model provides more significant results than the linear ARDL model, since increased and decreased volatility may cancel each other out in the symmetric ARDL model. We decompose the volatility into positive and negative changes and present more detailed results that better reflect the actual circumstances. Our empirical analysis provides valuable advice to policymakers, who should not ignore third country exchange rate fluctuations when dealing with US-EU trade friction. US traders who manage potential risks in global trade should pay attention to both the bilateral and third country exchange rate risks simultaneously.
A limitation of this paper is that the trade flows cover the whole EU, while the volatility measures are based on the USD-EUR exchange rate only; this volatility measure may therefore not accurately reflect the currencies of all EU countries. In future research: (1) we may consider the impact of exchange rate volatility on the trade flows of the EUR-originating countries; (2) we should try to aggregate the predictions over all the industries and generate a predicted change in overall US-EU trade. Conflicts of Interest: The authors declare no conflict of interest.
7,856.8
2022-07-22T00:00:00.000
[ "Economics" ]
Study of electric conduction mechanisms, dielectric relaxation behaviour and density of states in zinc sulphide nanoparticles: Zinc sulphide (ZnS) nanoparticles were synthesized by the solid-state reaction method at 190°C. The dielectric and electrical properties and the conduction mechanism of the ZnS nanoparticles were investigated. The average crystallite size and interplanar spacing of the ZnS nanoparticles were approximately 4.47 nm and 1.92 Å, respectively. The nanoparticles were spherical, with sizes in the range of 10-20 nm. Complex impedance spectroscopy (CIS) of the ZnS nanoparticles was performed at 20 Hz to 2 MHz and 236-320 K. The ZnS nanoparticles have a negative temperature coefficient of resistance (NTCR). The AC measurements of the ZnS nanoparticles from 236 K to 320 K revealed that the conduction in ZnS nanoparticles is due to correlated barrier hopping (CBH). The density of states (DOS) of the ZnS nanoparticles has been calculated with the CBH model as a function of temperature, using a photon frequency (f₀) of 10^13 Hz and a localized wave function decay constant (α) of 10^10 m^−1, which confirmed hopping as the dominant conduction mechanism in the ZnS nanoparticles.

Introduction Zinc sulphide (ZnS) is an important non-toxic, chemically stable II-VI semiconductor with a direct bandgap of 3.5-3.7 eV in the cubic form and 3.7-3.8 eV in the hexagonal form [1,2]. Due to its wide bandgap, it has been extensively studied and explored for different applications such as photoluminescence, electroluminescent displays, photocatalysis, solar cells and blue light-emitting diodes [3]. Zinc sulphide has structural and chemical properties similar to those of zinc oxide (ZnO) [4]; however, its wider bandgap gives it an advantage over ZnO (∼3.4 eV), making it suitable for visible-blind ultraviolet-light-based devices such as sensors and photodetectors [5]. Due to the innate defects in ZnS, it is difficult to grow ZnS thin films and single crystals with high conductivity. Different dopants, such as Al, Cd, In, Ag, Mn, Cu, etc., are used to improve the properties of ZnS; it can easily be doped with Mn^2+ ions [6]. Different techniques, such as the sol-gel method [7], the co-precipitation method, chemical vapour deposition [8], the solid-state reaction method [9], the wet chemical method [10], the hydrothermal method [11], microwave-assisted synthesis [12], etc., are used for the synthesis of ZnS nanoparticles. Each of the above techniques has its own benefits and limitations; among them, the solid-state reaction method is an easy, economical, and fast way to synthesize ZnS nanoparticles. ZnS has a negative temperature coefficient of resistance (NTCR) and exhibits semiconducting behaviour. Therefore, the investigation of the temperature-dependent properties of ZnS nanoparticles at higher and lower temperatures is of prime importance. To the best of our knowledge, the frequency- and temperature-dependent density of states (DOS), hopping distance (R_min), binding energy (W_m), the detailed CBH conduction mechanism, and calculations on experimental data applying the CBH model have not been reported for ZnS nanoparticles. These parameters are of great potential importance for electrical devices. Therefore, we report a one-step synthesis of ZnS nanoparticles without using surfactants. Relevant characterizations confirm the physical features, purity and chemical composition of the ZnS nanoparticles.
Low-temperature (236-320 K) and frequency-dependent (20 Hz to 2 MHz) AC electrical measurements were carried out to determine the AC conductivity, the dielectric and modulus properties, and the dominant conduction mechanism of the ZnS nanoparticles.

Solid-state reaction method ZnS nanoparticles were synthesized by the solid-state reaction method. Zinc acetate dihydrate (Zn(CH3COO)2·2H2O) and thiourea (NH2CSNH2) were used as precursors. The precursors were mixed in stoichiometric ratios and ground with a mortar and pestle for half an hour. The grinding of the mixture released heat, which started the reaction immediately. The ground mixture was placed in a box furnace and heated at 190°C for 4 h. The temperature was increased gradually in steps of 10°C/min to minimize thermal stresses, and cooling to room temperature was carried out at a rate of 10°C/min. The mixture was kept in the oven for 24 h to avoid moisture uptake. Pellet formation The mixture was ground to prepare pellets of ZnS nanoparticles. The pellets were prepared in a uniaxial hydraulic press with a die of 10 mm diameter under an applied pressure of 700 bar. The thickness of the prepared pellets was 2 mm. The pellets were sintered at 170°C for two hours to enhance their strength; they were then removed from the furnace and cooled naturally before the contacts were applied. The contacts were made on the pellets with copper wires using silver paste. Characterizations The X-ray diffraction (XRD) pattern of the ZnS nanoparticles was obtained using a Bruker D8 Advance powder X-ray diffractometer with monochromatic X-rays of wavelength 1.5418 Å (Cu Kα). The XRD pattern was taken over the Bragg angle (2θ) range 25°-80° in steps of 0.013°. Field emission scanning electron microscopy (FESEM, JEOL JSM-6340F) and transmission electron microscopy (TEM, JEOL 2100F) were carried out for the morphological analysis of the sample. Energy-dispersive spectroscopy (EDS, JEOL JSM-6340F) was used for the elemental and compositional analysis. Impedance spectroscopy The impedance analysis was carried out in the frequency and temperature ranges of 20 Hz to 2 MHz and 236-320 K, respectively, with an Agilent E4980A LCR meter.

XRD analysis The crystallographic structure of the prepared ZnS was investigated using a Bruker D8 Advance X-ray diffractometer operated at 40 kV and 40 mA with Cu Kα radiation (λ = 1.5418 Å). The XRD pattern of the ZnS nanoparticles matched JCPDS card number 00-005-0566, as illustrated in Figure 1, which shows the sphalerite or zinc blende cubic structure. The diffraction peaks were observed at positions 2θ = 28.60°, 32.78°, 47.69°, 56.47°, 69.99° and 76.96° for the planes (111), (200), (220), (311), (400) and (331), respectively. The broadening of the peaks in Figure 1 indicates the nanocrystalline nature of the particles. The average crystallite size calculated by the Debye-Scherrer formula, $D = \frac{K\lambda}{\beta \cos\theta}$, is ∼4.47 nm; here K is the Scherrer constant with a value of 0.89, λ is the wavelength of the incident X-rays (1.5418 Å, Cu Kα), β is the full width at half maximum of the peak and θ is the Bragg diffraction angle [13]. The interplanar distance (d) calculated by Bragg's formula [14], 2d sin θ = nλ with n = 1, is ∼1.92 Å. The lattice constant was calculated for the cubic cell by the relation $a = d\sqrt{h^2 + k^2 + l^2}$, and the texture coefficient along a plane (hkl) by $TC(hkl) = \frac{I(hkl)/I_r(hkl)}{(1/n)\sum I(hkl)/I_r(hkl)}$ [15], where I is the measured intensity of a peak, I_r is the reference intensity along that plane obtained from the JCPDS card and n is the number of peaks.
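A short numerical sketch of the XRD relations above (Scherrer size, Bragg spacing, cubic lattice constant, and texture coefficient); the peak parameters below are illustrative, not the paper's measured data:

```python
import numpy as np

WAVELENGTH = 1.5418e-10  # Cu K-alpha wavelength, m
K = 0.89                 # Scherrer constant

def scherrer_size(two_theta_deg, fwhm_deg):
    """Crystallite size D = K * lambda / (beta * cos(theta))."""
    theta = np.radians(two_theta_deg / 2.0)
    beta = np.radians(fwhm_deg)
    return K * WAVELENGTH / (beta * np.cos(theta))

def bragg_spacing(two_theta_deg, n=1):
    """Interplanar distance from 2 * d * sin(theta) = n * lambda."""
    theta = np.radians(two_theta_deg / 2.0)
    return n * WAVELENGTH / (2.0 * np.sin(theta))

def cubic_lattice_constant(d, hkl):
    """a = d * sqrt(h^2 + k^2 + l^2) for a cubic cell."""
    h, k, l = hkl
    return d * np.sqrt(h**2 + k**2 + l**2)

def texture_coefficients(I, I_ref):
    """TC(hkl) = (I/I_r) / ((1/n) * sum(I/I_r)) over all n peaks."""
    ratio = np.asarray(I, float) / np.asarray(I_ref, float)
    return ratio / ratio.mean()

# Illustrative (111) peak near 2-theta = 28.60 deg with ~2 deg FWHM.
d111 = bragg_spacing(28.60)
print("D =", scherrer_size(28.60, 2.0) * 1e9, "nm")
print("d =", d111 * 1e10, "Angstrom")
print("a =", cubic_lattice_constant(d111, (1, 1, 1)) * 1e10, "Angstrom")
```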
The texture coefficients along the different planes are listed in Table 1. The texture coefficient has a maximum value of 2.30 along the (200) plane; hence, the preferred growth direction of the ZnS nanoparticles is along the (200) plane. FESEM and TEM analysis FESEM (JEOL JSM-6340F) and TEM (JEOL 2100F) were used for the morphological analysis of the ZnS nanoparticles. FESEM and TEM images of the ZnS nanoparticles are shown in Figure 2(a, b). The ZnS nanoparticles are spherical, with particle sizes in the range of 10-20 nm. Figure 3 confirms the presence of zinc and sulphur in the EDS analysis of the ZnS nanoparticles; no peak for any impurity has been observed. The peak at 0 keV is caused by the noise of the electronics of the EDS detector, mainly the electron shot noise in the preamplifier behind the silicon drift detector. The inset of Figure 3 shows the atomic and weight percentages of zinc and sulphur in the ZnS nanoparticles.

Impedance analysis Complex impedance spectroscopy was performed to investigate the electrical and dielectric properties, the conduction mechanism and the DOS of the ZnS nanoparticles. The measurements were carried out in the frequency range from 20 Hz to 2 MHz and the temperature range from 236 to 320 K using an Agilent E4980A LCR meter. The complex impedance is mathematically expressed as

$$Z^* = Z' - iZ''$$

where Z′ and Z″ are the real and imaginary parts of the impedance, respectively. In Figure 4(a), Z′ is plotted as a function of frequency at various temperatures. Initially, Z′ decreases as the frequency increases, and a frequency-independent region is observed at higher frequencies. Hence, the release of space charges reduces the impedance of the ZnS nanoparticles. Z′ also decreases as the temperature increases from 236 K to 320 K, indicating NTCR behaviour in the ZnS nanoparticles and thus their semiconducting character. Thus, the overall real impedance of the ZnS nanoparticles decreases with increasing frequency and temperature. ZnS nanoparticles synthesized by the co-precipitation method at 400°C also show a decrease in Z′ with increasing temperature and frequency [16]. The variation of −Z″ as a function of frequency and temperature is shown in Figure 4(b). The imaginary part of the impedance is minimal at lower frequencies and increases with increasing frequency. The real part of the impedance corresponds to the resistive behaviour, while the imaginary part corresponds to the capacitive behaviour of a material; hence, the ZnS nanoparticles exhibit resistive impedance at lower frequencies and capacitive impedance at higher frequencies [17]. The Nyquist plot of the ZnS nanoparticles in the temperature range 280-320 K, in steps of 10 K, was drawn to further investigate the impedance and conduction mechanism of the ZnS nanoparticles, as shown in Figure 4(c). The impedance spectrum takes the form of semicircles, whose radius gives the magnitude of the impedance of the circuit. The radius of the semicircles decreases with increasing temperature, which shows that the impedance opposing the mobility of charge carriers varies inversely with temperature and confirms the NTCR behaviour of the ZnS nanoparticles. The decrease in impedance with rising temperature is due to thermally activated hopping and the release of charge carriers from trapped states. In this case, the Nyquist plot is a single depressed semicircle. Depressed semicircular arcs represent the presence of multiple relaxation phenomena in the sample.
The semicircles in the impedance spectrum are modelled with an equivalent circuit (R_gC_g)(R_gbQ_gb), where R_g and C_g represent the grain resistance and grain capacitance, respectively, while R_gb and Q_gb represent the resistance and the constant phase element (CPE) of the grain boundaries. The CPE is used to compensate for the non-ideal capacitive behaviour arising from non-uniform grain boundaries and the presence of more than one relaxation phenomenon. The value of the capacitance can be estimated by $C = (R^{1-m} Q)^{1/m}$, where m is an exponent whose value depends upon the temperature and the nature of the material: it is 1 for a pure capacitor and 0 for a pure resistor, and intermediate values quantify the deviation from ideal Debye behaviour. The total resistance (R_t) of the sample is the sum of the grain and grain boundary resistances. The values of R_g, R_gb, C_g, C_gb and R_t in the temperature range 246-320 K are listed in Table 2. The values of R_g, R_gb and R_t decrease with temperature due to the release of trapped charge carriers and thermally activated hopping. The value of C_g first decreases as the temperature increases from 246 K to 266 K and increases above 266 K. The value of C_gb increases with increasing temperature. The increase in C_gb is due to an increase in charge carrier hopping from the bulk to the interfaces and the piling up of charges at the interfaces. The piling up of charges gives rise to polarization, which results in an increase in the grain boundary capacitance [18,19].

Modulus analysis The complex electric modulus (M*) is the inverse of the complex relative permittivity. Mathematically,

$$M^* = \frac{1}{\varepsilon^*} = M' + iM'' \qquad \text{and} \qquad M^* = i\omega C_0 Z^*$$

By simplifying, we have

$$M' = \omega C_0 Z'', \qquad M'' = \omega C_0 Z'$$

here ω = 2πf is the angular frequency of the applied electric field, ε* is the complex relative permittivity, Z* is the complex impedance, and C₀ = ε₀A/t is the geometric capacitance, where ε₀ is the permittivity of free space, A is the area of the electrode and t is the thickness of the pellets. The variation of the real part of the electric modulus with frequency and temperature is shown in Figure 5(a). At lower frequencies the real electric modulus is approximately zero; it increases with increasing frequency at all temperatures. The increase in the real electric modulus is due to the short-range mobility of charge carriers [17]. As the frequency increases, the polarization at the grain boundaries decreases, and the dielectric permittivity also decreases. Therefore, from the relation M* = 1/ε*, the increase in the real electric modulus with increasing frequency is associated with a decrease in the electric polarization and permittivity. The variation of the imaginary part of the electric modulus as a function of frequency and temperature is shown in Figure 5(b). A peak is observed at all temperatures, which corresponds to the dielectric relaxation phenomenon in the ZnS nanoparticles. The frequency at which the relaxation peak occurs is called the relaxation frequency (ω_r). Dielectric relaxation occurs when the polarization lags the frequency of the applied AC field. Due to this lag, the dielectric permittivity decreases to its minimum and hence the electric modulus increases to its maximum. The time over which dielectric relaxation occurs is called the relaxation time (τ_r). An increase in temperature shifts the relaxation peak to a higher frequency, as shown in Figure 5(b).
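A minimal sketch of the (R_gC_g)(R_gbQ_gb) equivalent circuit used in the impedance analysis above, together with the CPE-to-capacitance conversion C = (R^(1−m)Q)^(1/m); the parameter values are illustrative, not the fitted values from Table 2:

```python
import numpy as np

def circuit_impedance(freq, Rg, Cg, Rgb, Q, m):
    """Z of (Rg || Cg) in series with (Rgb || CPE), CPE = 1/(Q*(j*w)^m)."""
    w = 2 * np.pi * np.asarray(freq, float)
    z_grain = Rg / (1 + 1j * w * Rg * Cg)   # grain arc
    z_cpe = 1.0 / (Q * (1j * w) ** m)       # constant phase element
    z_gb = Rgb * z_cpe / (Rgb + z_cpe)      # depressed grain-boundary arc
    return z_grain + z_gb

def cpe_capacitance(R, Q, m):
    """Effective capacitance of a CPE: C = (R^(1-m) * Q)^(1/m)."""
    return (R ** (1 - m) * Q) ** (1 / m)

# Illustrative parameters (not the paper's fitted values).
f = np.logspace(np.log10(20), np.log10(2e6), 200)   # 20 Hz to 2 MHz
Z = circuit_impedance(f, Rg=1e5, Cg=5e-11, Rgb=8e5, Q=2e-10, m=0.9)

# A Nyquist plot would show Z.real on x and -Z.imag on y.
print("C_gb =", cpe_capacitance(8e5, 2e-10, 0.9), "F")
```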
The imaginary part of the electric modulus was fitted with the Kohlrausch-Williams-Watts (KWW) function (Figure 5(b)), where M″_gmax and M″_gbmax are the peak maxima due to grain and grain boundary relaxation, respectively; the two are mutually dependent and difficult to separate individually in the M″ spectra [20]. The height and broadening of the peak increase with temperature. The increase in height, the broadening, and the shifting of the peaks indicate that the dielectric relaxation phenomenon in the ZnS nanoparticles is temperature-dependent. The distribution of dielectric relaxation is consistent with the condition ω_r × τ_r = 1. The values of the relaxation frequencies (ω_r) and relaxation times (τ_r = 1/ω_r) at different temperatures are listed in Table 3. The variation of the relaxation frequency with temperature obeys the Arrhenius relation

$$f_r = f_0 \exp\!\left(\frac{-E_{aM}}{k_B T}\right)$$

where E_aM is the activation energy of the relaxation process, k_B is the Boltzmann constant and f_0 is the pre-exponential factor. The value of the activation energy for the relaxation phenomenon, calculated from the slope of the ln(f_r) versus 1000/T graph shown in Figure 5(c), is 0.524 eV. Figure 5(d) shows the Cole-Cole (M′ vs M″) plots of the electric modulus of the ZnS nanoparticles, used to investigate the grain and grain boundary contributions to the relaxation process. The graph shows two depressed adjoining semicircular arcs that tend to enlarge as the temperature decreases. The tiny relaxation arcs on the low-frequency side correspond to the weak grain boundary effect, and the large relaxation arcs in the high-frequency zone are related to the dominant bulk or grain effect [20].

Dielectric analysis Dielectric analysis is the first tool for studying electrical properties such as permittivity, conductivity, and conduction mechanisms in materials. It is also used to understand the dynamics of complex materials and to quantify their response to an applied electric field. The complex permittivity or dielectric constant is a function of frequency and of physical conditions such as temperature. It is defined as [21]

$$\varepsilon^*(\omega, T) = \varepsilon' - i\varepsilon'' \quad (9)$$

here ε′ is the real part of the permittivity and ε″ is the imaginary part. The real part of the permittivity represents the alignment of dipoles and corresponds to energy storage, while the imaginary part represents ionic conduction and corresponds to energy loss. The permittivity is calculated from the measured impedance through the relations

$$\varepsilon' = \frac{Z''}{\omega C_0 (Z'^2 + Z''^2)}, \qquad \varepsilon'' = \frac{Z'}{\omega C_0 (Z'^2 + Z''^2)}$$

To investigate the dielectric response and energy loss in the ZnS nanoparticles, the real permittivity and the loss tangent (tan δ) are plotted as functions of frequency (20 Hz to 2 MHz) and temperature (236-320 K). The ε′ decreases with an increase in frequency, as shown in Figure 6. At lower frequencies the permittivity is maximal; with an increase in frequency, however, the permittivity decreases quickly to a minimum value, and at higher frequencies (greater than 10^6 Hz) the real permittivity is approximately zero. This can be attributed to the polarization in the ZnS nanoparticles. Different polarizations, such as ionic, space charge, dipolar, electronic and atomic polarization, are expected in different temperature and frequency regimes [22,23]. At lower frequencies, space charges accumulate at the inner boundary layers separating the dielectric components and at the interface between the sample and the electrodes. This accumulation of space charges gives rise to a strong Maxwell-Wagner-Sillars (MWS) interfacial polarization [21].
According to this model, the dielectric structure is visualized as well-conducting grains separated by poorly conducting grain boundaries [22]. At lower frequencies, MWS interfacial polarization is the main contribution to ε′; ionic polarization and dipoles at the sample-electrode interface also contribute to the permittivity. Therefore, at lower frequencies ε′ is maximal. With an increase in frequency, ε′ decreases because at higher frequencies the electric dipoles can no longer align themselves with the varying electric field: the AC field changes quickly and the dipoles relax into non-aligned positions, which results in a decrease in the electric polarization and the real permittivity. The ε′ increases with temperature, as depicted in Figure 6. The increase in ε′ with temperature is due to an increase in the thermal energy of the space charges: the space charges hop from the bulk to the interfaces and accumulate at the grain interfaces as the thermal energy rises with temperature. This accumulation also results in an increase in polarization and hence in ε′. The ε′ curves merge at higher frequencies for all temperatures. The real permittivity represents the alignment of dipoles, i.e., the energy storage component. In conclusion, the dielectric and energy storage ability of the ZnS nanoparticles decreases with increasing frequency but increases with increasing temperature. The capacitance of the ZnS nanoparticles is given by the relation [24]

$$C = \frac{\varepsilon' \varepsilon_0 A}{t}$$

The variation of the capacitance of the ZnS nanoparticles with frequency and temperature is shown in Figure 7. At lower frequencies the different types of electric dipoles align completely with the applied AC field; therefore, at lower frequencies the ZnS nanoparticles have maximum permittivity and capacitance and store the maximum possible energy. At higher frequencies, due to the dielectric relaxation phenomenon, only a small number of dipoles align with the applied AC field, and the energy storage is minimal. For this reason, at higher frequencies the ZnS nanoparticles have a minimum dielectric constant and capacitance. The capacitance of ZnS is also temperature-dependent: at lower frequencies, the capacitance of the ZnS nanoparticles at 320 K is almost 100 times the capacitance at 236 K. At higher frequencies the capacitance curves merge and become almost frequency-independent. The temperature-dependent permittivity and capacitance make ZnS nanoparticles an important material for technologically critical dielectric applications. The variation of the loss tangent (tan δ = ε″/ε′) as a function of frequency in the temperature range 236-320 K is shown in Figure 8. The tan δ of the ZnS nanoparticles is frequency- and temperature-dependent. As the frequency increases, tan δ decreases, and at higher frequencies it becomes approximately frequency-independent; however, tan δ increases with an increase in temperature. The frequency and temperature dependence of tan δ can be explained by Koop's theory, which considers a dielectric system as an inhomogeneous medium of two layers of the Maxwell-Wagner type [25]. According to this model, the dielectric system consists of semiconducting grains separated by insulating layers. Through hopping, charges reach the grain boundaries and pile up because of the insulating layers separating the grains. This piling up of charges gives rise to electric polarization. As the frequency of the applied field increases, the charges reverse their direction more often before reaching the grain boundaries.
As a result, the piling up of charges and the electric polarization decrease, and hence Tanδ decreases with an increase in frequency. On the other hand, with the increase in temperature the thermal energy of the charge carriers increases, which increases the probability of the charges reaching the grain boundaries. This gives rise to the piling up of charges, and consequently the electric polarization increases. As a result, Tanδ increases with an increase in temperature.

Electrical conductivity analysis

The conduction mechanism in ZnS nanoparticles was investigated as a function of frequency in the 236-320 K temperature range, as shown in Figure 9 on (a) linear and (b) log-log scales. It can be observed in Figure 9(a) that each AC conductivity curve has two regions: the first region is frequency-independent, while in the second region the conductivity increases with increasing frequency. Direct leakage current accounts for the dc conductivity in the frequency-independent region. This behaviour shows that the conductivity follows Jonscher's power law [26]:

σ_ac(ω) = σ_dc + Aω^n,

where σ_dc is the DC conductivity, A is a pre-factor, ω is the angular frequency (the slope of the curves changes at the hopping frequency), and n is the slope of the conductivity curves. Jonscher's power law deals with the mobility of charge carriers and is based on hopping between energy states near the Fermi level. The variation of n as a function of temperature determines the type of carrier hopping and the conduction mechanism in a material. Based on the variation of n with temperature, different theoretical hopping models have been proposed for conduction in materials: firstly, if n increases with an increase in temperature, then conduction is due to small polaron hopping (SPH) [27]; secondly, if n decreases with an increase in temperature, then conduction is due to correlated barrier hopping (CBH) [28]; thirdly, if n decreases to a minimum value with temperature and increases with a further increase in temperature, then conduction is due to overlapping large polaron hopping [29]; and fourthly, if the value of n remains approximately constant at about 0.8, then conduction is due to quantum mechanical tunnelling [30]. The values of σ_dc, the pre-factor A, and the slope n for the conductivity curves of ZnS nanoparticles at different temperatures, obtained by nonlinear curve fitting (a fitting sketch appears below), are listed in Table 4. The variation of σ_dc as a function of temperature is shown in Figure 10(a). It increases with an increase in temperature. Below room temperature (300 K) there is a slight increase in σ_dc with temperature, while above room temperature the increase is significant. This variation of dc conductivity with temperature indicates thermally activated transport in ZnS nanoparticles. The variation of dc conductivity with temperature follows the Arrhenius equation

σ_dc = σ_o exp(−E_dc / K_B T),

where σ_o is the pre-exponential factor, E_dc is the activation energy for dc conductivity, K_B is the Boltzmann constant, and T is the temperature in kelvin. The activation energy for dc conductivity, determined from the slope of the ln(σ_dc) versus 1000/T graph shown in Figure 10(b), is 0.69 eV. The variation of the slope n of the conductivity curves with temperature is shown in Figure 11(a). With the increase in temperature, n decreases. A decrease in n with temperature indicates that conduction in ZnS nanoparticles follows the correlated barrier hopping (CBH) model.
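A minimal Python sketch of the nonlinear curve fitting used to extract σ_dc, A, and n from a single conductivity spectrum could look as follows; the synthetic data are placeholders standing in for the measurements behind Figure 9 and Table 4.

```python
import numpy as np
from scipy.optimize import curve_fit

def jonscher(omega, sigma_dc, A, n):
    """Jonscher's power law: sigma_ac = sigma_dc + A * omega**n."""
    return sigma_dc + A * omega**n

# Synthetic conductivity spectrum at one temperature (placeholder for
# the measured data); 20 Hz to 2 MHz converted to angular frequency.
rng = np.random.default_rng(0)
omega = 2 * np.pi * np.logspace(np.log10(20), np.log10(2e6), 60)
sigma = (1e-8 + 1e-12 * omega**0.8) * (1 + 0.02 * rng.standard_normal(omega.size))

# Nonlinear least squares yields sigma_dc, the pre-factor A, and slope n.
popt, _ = curve_fit(jonscher, omega, sigma, p0=(sigma.min(), 1e-12, 0.8))
sigma_dc, A, n = popt
print(f"sigma_dc = {sigma_dc:.2e}, A = {A:.2e}, n = {n:.3f}")
```

Repeating this fit at each temperature gives the n(T) trend from which the hopping model is identified.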
The increase in AC conductivity with temperature is thus due to the increase in hopping between localized states. The pre-factor A is temperature-dependent; its variation with temperature is shown in Figure 11(b). Below room temperature, A is approximately independent of temperature, but above room temperature A increases rapidly with an increase in temperature. In the CBH model, conduction occurs through the hopping of charge carriers between localized states separated by an energy barrier. According to the CBH model [31],

n = 1 − 6K_B T / [W_m − K_B T ln(1/ωτ_0)], (14)

where K_B is the Boltzmann constant, W_m is the binding energy, and τ_0 is the characteristic time constant. For W_m >> K_B T ln(1/ωτ_0), Equation (14) becomes

n = 1 − 6K_B T / W_m. (15)

The variation of W_m with temperature is shown in Figure 12(a). It can be observed that the binding energy of ZnS nanoparticles decreases with an increase in temperature; in other words, the number of free charge carriers increases with temperature. Therefore, conduction in ZnS nanoparticles increases with a rise in temperature. Based on the CBH model, the minimum hopping distance R_min is [32]

R_min = 2e² / (π ε′ ε_0 W_m), (16)

where e is the charge of an electron, ε′ is the real dielectric permittivity, ε_0 is the permittivity of free space, and W_m is the binding energy. The variation of R_min with temperature at a frequency of 632.5 Hz is shown in Figure 12(b). The minimum hopping distance decreases with an increase in temperature. The variation of the minimum hopping distance as a function of frequency at all temperatures (236-320 K) is shown in Figure 13. At lower frequencies R_min is minimum, and its value is a few nanometers. As the frequency increases, R_min increases continuously. This dispersion in R_min with frequency is associated with conduction due to short-range carrier hopping. At higher frequencies R_min approaches a saturation value, which is associated with long-range carrier hopping.

The AC conductivity can be used to calculate the DOS. In the CBH model, σ_ac and the DOS, N(E_f), are related as [32]

σ_ac(ω) = (π/3) e² ω K_B T [N(E_f)]² α⁻⁵ [ln(f_0/ω)]⁴,

where α is the decay constant of the localized wave function and f_0 is the photon frequency. The values of α and f_0 used to calculate the DOS are 10¹⁰ m⁻¹ and 10¹³ Hz, respectively [26]. The variation of the DOS as a function of temperature at different frequencies is shown in Figure 14. The DOS decreases with an increase in frequency, while it increases with an increase in temperature. The calculated values of the DOS were of the order of 10²¹. Such high values of the DOS indicate that the dominant conduction mechanism in ZnS nanoparticles is hopping between pairs of localized states.

Conclusions

ZnS nanoparticles were synthesized by the solid-state reaction method. XRD confirmed the zinc blende (sphalerite) structure with space group F-43m of the prepared ZnS nanoparticles. The ZnS nanoparticles have an average crystallite size of ∼4.47 nm, an average interplanar spacing of ∼1.92 Å, an average lattice strain of ∼3.45 × 10⁻² due to lattice dislocation, and an average dislocation density of ∼1.73 × 10¹⁷ lines/m². FESEM and TEM revealed that the ZnS nanoparticles have a spherical morphology with sizes in the range 10-20 nm. The EDS analysis confirmed the purity of the ZnS nanoparticles, with the presence of Zn and S peaks only. The impedance, dielectric, and electrical properties of ZnS nanoparticles were studied as functions of frequency (20 Hz to 2 MHz) and temperature (236-320 K). The impedance of ZnS nanoparticles decreases with an increase in frequency and temperature.
ZnS nanoparticles show a negative temperature coefficient of resistance (NTCR) and hence semiconducting behaviour in the temperature range 236-320 K. The dielectric response of ZnS nanoparticles decreases with an increase in frequency but increases with an increase in temperature. The AC conductivity of ZnS nanoparticles obeys Jonscher's power law, and conduction in ZnS nanoparticles follows the CBH model. The binding energy of ZnS nanoparticles is much less than 1 eV, and it decreases with an increase in temperature. The minimum hopping distance (R_min) in ZnS nanoparticles is of the order of nanometers; it decreases with an increase in temperature while increasing with an increase in frequency. At lower frequencies, short-range charge-carrier hopping dominates at all temperatures, while at higher frequencies long-range charge-carrier hopping dominates. The DOS of ZnS nanoparticles increases with an increase in temperature and decreases with an increase in frequency. The calculated values of the DOS are of the order of 10²¹; such high values of the DOS confirm hopping as the dominant conduction mechanism in ZnS nanoparticles.
6,400.8
2021-01-01T00:00:00.000
[ "Materials Science", "Physics" ]
ETHNOPRED: a novel machine learning method for accurate continental and sub-continental ancestry identification and population stratification correction

Background

Population stratification is a systematic difference in allele frequencies between subpopulations. This can lead to spurious association findings in the case-control genome wide association studies (GWASs) used to identify single nucleotide polymorphisms (SNPs) associated with disease-linked phenotypes. Methods such as self-declared ancestry, ancestry informative markers, genomic control, structured association, and principal component analysis are used to assess and correct for population stratification, but each has limitations. We provide an alternative technique to address population stratification.

Results

We propose a novel machine learning method, ETHNOPRED, which uses the genotype and ethnicity data from the HapMap project to learn ensembles of disjoint decision trees capable of accurately predicting an individual's continental and sub-continental ancestry. To predict an individual's continental ancestry, ETHNOPRED produced an ensemble of 3 decision trees involving a total of 10 SNPs, with a 10-fold cross validation accuracy of 100% using the HapMap II dataset. We extended this model to involve 29 disjoint decision trees over 149 SNPs, and showed that this ensemble has an accuracy of ≥ 99.9%, even if some of those 149 SNP values were missing. On an independent dataset, predominantly of Caucasian origin, our continental classifier showed 96.8% accuracy and improved genomic control's λ from 1.22 to 1.11. We next used the HapMap III dataset to learn classifiers to distinguish European subpopulations (North-Western vs. Southern), East Asian subpopulations (Chinese vs. Japanese), African subpopulations (Eastern vs. Western), North American subpopulations (European vs. Chinese vs. African vs. Mexican vs. Indian), and Kenyan subpopulations (Luhya vs. Maasai). In these cases, ETHNOPRED produced ensembles of 3, 39, 21, 11, and 25 disjoint decision trees, respectively involving 31, 502, 526, 242 and 271 SNPs, with 10-fold cross validation accuracies of 86.5% ± 2.4%, 95.6% ± 3.9%, 95.6% ± 2.1%, 98.3% ± 2.0%, and 95.9% ± 1.5%. However, ETHNOPRED was unable to produce a classifier that can accurately distinguish Chinese in Beijing vs. Chinese in Denver.

Conclusions

ETHNOPRED is a novel technique for producing classifiers that can identify an individual's continental and sub-continental heritage, based on a small number of SNPs. We show that its learned classifiers are simple, cost-efficient, accurate, transparent, flexible, fast, applicable to large-scale GWASs, and robust to missing values.

Single nucleotide polymorphisms

Single nucleotide polymorphisms (SNPs), single-base substitutions in DNA, are the most common type of genetic variation in humans. SNPs are evolutionarily conserved and heritable. They give rise to one or more allelic variations at a locus and may confer phenotypic variance. Polymorphisms result from evolutionary processes and are modified by natural selection. They are common in nature and are related to biodiversity, genetic variation, and adaptation [1]. To date, millions of human SNPs have been identified and recorded in public databases such as dbSNP [2] or Ensembl [3].
Genome wide association studies

A genome wide association study (GWAS) is an examination of a large set of common genetic variants, such as SNPs, over a set of "labeled" individuals, seeking variants that are associated with a phenotype, such as disease susceptibility, disease prognosis, or drug response, under the "Common Disease-Common Variant" hypothesis [4,5]. A GWAS normally compares the DNA of two groups of participants: subjects who expressed a phenotype (cases) versus subjects who did not (controls). Here, the researcher compares the values of each individual feature (e.g., a specific SNP) in the cases with the corresponding values for this feature in the controls. If the range of values in these subgroups is significantly different, this feature is said to be associated with the phenotype. In contrast to candidate gene polymorphism studies, which test only a few pre-defined genetic regions, GWASs investigate the entire genome [6,7]. The database of genotypes and phenotypes (dbGaP) [8] and the catalogue of published GWASs [9] archive and distribute the findings from GWASs to the broader scientific community.

Population stratification

Population stratification (aka population structure) is the presence of a systematic difference in allele frequencies between populations or subpopulations, possibly due to different ancestry. We observe population stratification because of differences in social history, ancestral patterns of geographical migration, mating practices, and reproductive expansions and bottlenecks of different human subpopulations [10].

Population stratification in GWASs

While conducting a GWAS, a major concern is the possibility of inducing false positive or false negative associations between a SNP and the phenotype due to population stratification. This has motivated many researchers to consider techniques to address the population stratification problem. As a pre-processing step in a GWAS, these techniques either exclude some of the study subjects to alleviate the problem or adjust some of the SNPs to correct for population structure [11]. Here we review some of the standard techniques used to deal with the population stratification problem in GWASs and discuss their limitations.

Self-declared ancestry

Many studies ask subjects to identify their own ethnicity, by reporting their ancestry and country of origin. They then address the problem of population stratification by including the cases and controls that have the same self-reported ancestry and excluding other subjects from the GWAS. However, this method is sometimes misleading, as some people might not know their full lineage information or may simply be mistaken. Furthermore, self-declared ancestry is not always sufficient to control population stratification, as nearly all populations are confounded by genetic admixture at some level [12].

Ancestry informative markers

Some projects attempt to estimate ancestry using a panel of ancestry informative markers (AIMs) that show the highest absolute difference in allele frequency between two ancestral populations. A small set (typically tens to hundreds) of well-established AIMs can perfectly distinguish continental differences between individuals [13-16]; however, the panels of AIMs described thus far are less informative in detecting sub-continental differences in closely related populations such as Europeans [17-25].
Genomic control

A widely used approach to evaluate whether a dataset is confounded by population stratification involves computing the genomic control λ, which is defined as the median χ² (1 degree of freedom) association statistic across SNPs, divided by its theoretical median under the null distribution. A value of λ ≈ 1 indicates no stratification, whereas λ > 1 indicates population stratification or other confounders [26-29]. Despite its widespread application, the genomic control method has a fundamental limitation: in the real world, some markers differ in their allele frequencies across ancestral populations more than others, while genomic control corrects for stratification by adjusting the association statistic at each marker using a uniform overall inflation factor. This uniform adjustment cannot adequately handle both markers with strong differentiation across ancestral populations and markers with smaller differentiation.

Structured association

Structured association techniques are unsupervised learning (clustering) methods, such as STRUCTURE [30], which is based on a Bayesian framework, and latent class analysis [31], which is based on maximum likelihood, that assign subjects of a case-control study cohort to discrete subpopulations based on their inter-cluster similarities and intra-cluster dissimilarities [32,33]. Although structured association methods have the advantage of assigning samples to meaningful population groups, they cannot be applied to GWAS datasets because of their intensive computational cost on the large datasets produced by recent high-throughput measurements.

Principal component analysis

Techniques based on principal component analysis (PCA) [34-36], like EIGENSTRAT [34], are currently the state-of-the-art methods used in GWASs for population stratification correction. The EIGENSTRAT algorithm applies PCA to genotype data to infer continuous axes of genetic variation, represented by principal component vectors, and then adjusts genotypes and phenotype by the amounts attributable to ancestry along each axis. Despite the widespread application of such PCA-based techniques, they have some disadvantages. First, they are not cost-efficient, since they require genotyping thousands to millions of markers to be able to calculate the principal component vectors. Second, to infer the ancestry of subjects they apply PCA, a black-box model, which is not human readable (i.e., not transparent). Third, as high-throughput measurements produce many missing values, straightforward PCA does not apply, leading EIGENSTRAT to use missing value imputation. However, such imputation techniques can be problematic in population genetics, as they ignore inter-individual and inter-ethnic variations, meaning such imputed datasets can lead to spurious association findings [37]. Fourth, the genotyping errors (GEs) that arise in high-throughput SNP measurements are a major issue in association studies [38-44] and substantially affect the efficiency of PCA-based methods like EIGENSTRAT [45].

The purpose of our research study

In this paper, we introduce a novel method, ETHNOPRED, for producing models that can accurately place subjects within continental and sub-continental populations, by applying a supervised learning (classification) technique to datasets from the second and third phases of the international HapMap project [46].
The resulting classifiers can help correct population stratification in association studies, overcoming some of the limitations of the conventional methods listed above. First, self-declared ancestry information is often problematic, except possibly for isolated populations with extensive inbreeding; ETHNOPRED does not rely on self-declared ancestry information but analyzes an individual's genome to properly identify his/her ancestry. Second, while small panels of AIMs for continental population identification have been designed, panels of AIMs for sub-continental population identification, where they can be designed at all, are either less informative or require a large set of markers; ETHNOPRED, by contrast, produces accurate classifiers not only for continental population detection but also for sub-continental population detection using a small number of markers. Third, ETHNOPRED does not rely on the assumption made by the genomic control method that all markers contribute equally to population stratification, and instead benefits from the fact that different markers contribute to population differences to different degrees. Fourth, unlike structured association methods, ETHNOPRED classifiers are fast and easily applicable to the large GWAS datasets generated by high-throughput measurement techniques like microarrays and next generation sequencers. Fifth, ETHNOPRED classifiers require genotyping of only tens to hundreds of SNPs for accurate population identification; hence they are simpler and more cost-efficient than PCA-based methods, which require genotyping of thousands to millions of SNPs. Sixth, PCA-based methods like EIGENSTRAT are substantially affected by the genotyping errors that arise in high-throughput SNP measurements, whereas the low-throughput measurements of the tens to hundreds of SNPs required by ETHNOPRED classifiers may easily be validated on independent genotyping platforms to rule out genotyping errors and assess concordance of genotype calls across independent platforms. Once these criteria are established, the selected SNP panels could be used to identify population stratification across projects sharing similar case and control cohorts in molecular epidemiological studies. Seventh, ETHNOPRED classifiers are a set of easy-to-read rules; thus, unlike PCA-based methods, these classifiers are transparent and can provide insight into the population classification problem they are dealing with. Eighth, unlike PCA-based methods, ETHNOPRED classifiers do not require any kind of imputation to handle missing values; they are robust to missing values, as their ensemble structure allows them the flexibility to deal with missing SNPs by simply removing some decision trees while still accurately identifying ancestry.

Datasets

Our objective is to build predictive tools to determine an individual's continental and sub-continental ancestry based on the values of a small set of his/her SNPs. We develop this tool by applying supervised learners to datasets from the second and third phases of the international HapMap project. The HapMap project is a multi-country effort to identify and catalogue genetic similarities and differences in human beings and to determine the common patterns of DNA sequence variations in the human genome.
It is developing a map of these patterns across the genome by determining the genotypes of more than a million sequence variants, their frequencies, and the degree of association between them, in DNA samples from subpopulations with ancestry from East and West Africa, East Asia, North and West Europe, and North America. The HapMap phase II datasets, released in 2007, contained 270 subjects, including 90 Utah residents with ancestry from Northern and Western Europe (CEU), 90 Yorubans from Ibadan, Nigeria (YRI), and a mixture of 45 Japanese in Tokyo and 45 Han Chinese in Beijing (JPT/CHB), each genotyped on an Affymetrix SNP array 6.0 platform measuring 906,600 SNPs. We utilize the HapMap II datasets to build a predictive model for inferring the continental ancestry origins (West Africa vs. East Asia vs. North-West Europe) of an individual. We apply the resulting classifier to a dataset of 696 breast cancer study subjects (348 breast cancer cases and 348 apparently healthy controls) from Alberta, Canada, genotyped on the same Affymetrix SNP array platform. We have self-declared ancestry for these 348 control individuals. These study subjects provided written informed consent, and the study was approved by the Alberta Cancer Research Ethics Committee of the Alberta Health Services [47].

Pre-processing

The allele with the dominant occurrence within a population is called the major allele (A), while the allele occurring less frequently is called the minor allele (B). Together, the alleles from the paternal and maternal chromosomal loci can produce three distinct genotypes: when both alleles (i.e., those inherited from both parents) are major alleles (A_A), the genotype is called wild-type homozygous; when both inherited alleles are minor (B_B), the genotype is called variant-type homozygous; and when the two alleles are different (A_B), the genotype is called heterozygous. To build our continental population classifier, we first identified the relevant SNPs from the HapMap II dataset by removing a SNP if (a) it has a NoCall for any of the 270 subjects; (b) it is located on the X, Y, mitochondria (MT), or an unknown chromosome; or (c) its genotype frequency deviates significantly from Hardy-Weinberg equilibrium (HWE) proportions, tested with Pearson's chi-squared (χ²) test (nominal p-value < 0.05) [48]. We used criterion (a) to train our model using SNPs without missing values; (b) so the tool would be applicable to anyone, regardless of gender; and (c) by reasoning that significant deviation from HWE may indicate genotyping errors.

To build our sub-continental population classifiers, we followed similar filtering criteria on the HapMap III dataset. These pre-processing steps respectively removed 841790, 565554, 575492, 931993, 677326, and 629023 SNPs, leaving 616597, 892833, 882895, 526394, 781061, and 829364 SNPs amenable for further analysis for the African, East Asian, European, North American, Kenyan, and Chinese population classification problems. Table 2 summarizes the statistics of the SNPs removed in the pre-processing steps applied to the HapMap III datasets.

Predictive modelling

Machine learning provides a variety of statistical, probabilistic, and optimization techniques to analyze and interpret data, which allow computers to autonomously learn from past examples by finding patterns to form predictive models, often finding hard-to-discern patterns in noisy and complex datasets [49-51]. Machine learning has been applied successfully in many areas: Baldi and Brunak [52], Larranga et al. [53], and Tarca et al.
[54] each surveyed various applications of machine learning in biology, medicine, and genetics, including gene finding [55], eukaryote promoter recognition [56], protein structure prediction [57], pattern recognition in microarrays [58], gene regulatory response prediction [59], and protein/gene identification in text [60]. Herein, we learn a sequence of CART decision trees for continental and sub-continental population identification [61,62]. While machine learning provides many systems for learning classifiers, we focus on decision trees, as these learners are easy to use (they do not require the user to provide any input parameters) and relatively fast to train, and the resulting classifiers run quickly and are easy to interpret (which may explain why they are widely applied in biological/medical domains). "Ensemble learning" refers to a class of machine learning methods that combine the individual decisions of a set of learned "base predictors" to obtain a better predictive performance [63]. In general, an ensemble of predictors will be more accurate than any of its individual members if the constituent predictors are individually accurate and collectively diverse [64]. Ensemble models have been successfully applied to high-dimensional datasets generated by novel "omics" measurements, such as gene expression microarrays [65,66]. Many ensemble techniques, such as bagging, boosting, AdaBoost, and stacking, rely on manipulation of the input dataset by sampling of subjects or sampling of features, then learning individual base classifiers on these subsets of the input dataset [67]. While the main goal of ensemble predictors is to produce an accurate classifier (as the ensemble can sometimes overcome the over-fitting problem reported for decision trees in high-dimensional problems [68]), we used this approach to produce a classifier that is robust to missing SNP values. Our system therefore learns a set of disjoint trees; we later explain how this allows the classifier to predict the label of a subject even if that subject is missing many SNP values. Here we explain how ETHNOPRED learns an ensemble of disjoint decision trees, focusing on the continental population classifier. It first applies the CART learning algorithm to the dataset of 270 subjects over the 611146 SNPs mentioned above, to produce the decision tree (Figure 2) with 3 internal nodes (each a condition on a specific SNP) and 4 leaf nodes (class labels), corresponding to the 4 rules shown in Figure 2. It then removes these 3 SNPs from the list of 611146 SNPs and applies the same CART decision tree learning algorithm to the dataset of 270 subjects and the remaining 611143 SNPs, to produce a second decision tree. We repeat this algorithm, each time removing the SNPs used in the previous trees, to produce the next decision tree (a minimal sketch of this iterative procedure is given below). The ETHNOPRED continental population classifier learns N = 29 disjoint decision trees. We explain below that N = 29 guarantees that this system is robust against missing SNP values; that is, based on some simple assumptions, we anticipate that at least 99.9% of the subjects will include calls on the SNPs needed to "match" several decision trees, enough trees that the resulting sub-ensemble will be at least 99.9% accurate. This analysis appears below. Additional file 1: Appendix A and Figure 3 show the estimated accuracies of ensembles over the first k decision trees: the first tree, alone, is 97.41% accurate, and the ensemble classifier using the first 3 decision trees is 100% accurate.
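A minimal sketch of the iterative disjoint-tree procedure, using scikit-learn's CART-style decision tree as a stand-in for the CART learner used in the paper (function and variable names are ours, not the authors'):

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def learn_disjoint_trees(X, y, n_trees):
    """Learn an ensemble of decision trees over disjoint SNP sets:
    after each tree is trained, the SNPs (features) appearing in its
    internal nodes are removed before the next tree is learned."""
    available = np.arange(X.shape[1])  # SNP columns still usable
    ensemble = []                      # (tree, columns it was trained on)
    for _ in range(n_trees):
        tree = DecisionTreeClassifier()
        tree.fit(X[:, available], y)
        internal = tree.tree_.feature[tree.tree_.feature >= 0]
        used = available[np.unique(internal)]      # SNPs used by this tree
        ensemble.append((tree, available.copy()))
        available = np.setdiff1d(available, used)  # enforce disjointness
    return ensemble
```

At prediction time, each stored tree is applied to its own column subset; trees whose SNPs include NoCalls for a subject are simply skipped.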
If accuracy were our only concern, our ensemble classifier would just use these 3 decision trees, involving their 10 SNPs. However, this 3-decision-tree system can only classify a subject if that subject includes values for (essentially) all 10 SNPs. Missing genotype data is a common problem in genotyping experiments, due to assay design failures, platform-specific differences in the SNPs analyzed, or hybridization artifacts in these high-throughput array platforms [69]. Here, we show that N = 29 decision trees are sufficient, under mild assumptions, to obtain an accuracy (Acc) of ≥ 99.9% with 99.9% confidence (C), even considering missing SNPs. We trained 30 disjoint decision trees and found the average number of SNPs used in these 30 decision trees to be n = 154/30 ≈ 5.13. We then assumed that, for the Affymetrix genome wide SNP array 6.0 platform, NoCalls are independent from one SNP to another, and that the probability that a SNP value will be a NoCall is at worst u = 0.1 (based on assessment on the HapMap II dataset). This means that the probability that a subject will include all of the SNPs for a decision tree is p = (1 − u)^n = 0.9^5.13 = 0.59049, and so the probability that a subject will not include all of the SNPs of a decision tree is q = 1 − p = 0.40951. We now ask how many decision trees (m) are needed to ensure that the average accuracy (Acc) of any subset of m trees is at least 99.9%. We therefore considered a sampling of ensembles of size 1 (i.e., individual decision trees) and calculated the average 10-fold cross validation accuracy. We next computed the average 10-fold cross validation accuracy over a sample of pairs of decision trees, then over triples, and so forth, for i = 1..30 (Table 3). We found that m = 9 is sufficient to obtain an average 10-fold cross validation accuracy (Acc) of 99.9%. The next challenge was to determine how many trees (N) are necessary to be confident that, for 99.9% of all subjects, the SNP calls will cover all of the SNPs of at least 9 trees. The probability of having at least m decision trees with no missing SNPs, given N decision trees, each of which includes only specified SNPs with probability p, is

C = Σ_{i=m}^{N} [N! / (i! (N − i)!)] p^i (1 − p)^(N−i).

Table 4 shows the values of C for different values of N (the confidence of having m = 9 decision trees without missing SNPs for N = 1..30 in the continental population classification problem); here, we see that N = 29 decision trees is sufficient to have 99.9% confidence (C) that a subject will include all of the SNPs in at least m = 9 decision trees, which our earlier experiments show is sufficient to produce an accuracy of ≥ 99.9%. Additional file 2: Appendix B summarizes this analysis.

Models' usage for population stratification correction

For each continental and sub-continental ancestry identification problem, the pre-processing and predictive modeling steps produce a model (in the case of the continental classification problem, the model is an ensemble of 29 decision trees) that can be used to classify novel subjects. For example, in continental population identification, we need only find the values {A_A, A_B, B_B, NoCall} of the relevant 149 SNPs, then hand this set of 149 values to each of the 29 decision trees. Each tree involves a small number of SNPs (typically 3-7); if they are all specified (that is, none are NoCall) for a novel subject, this tree will produce a predicted label, one of the three ethnicity groups: CEU, YRI, or CHB/JPT. If not, the tree makes no prediction.
This will lead to a set of at-most-29 predicted ethnicity values for this subject. As no human population is homogeneous, given a novel subject of unknown ancestry, our model can provide a vector of population inclusion probabilities. For example, when classifying a novel person with the initial continental classifier, imagine 15 trees vote for CEU, 4 for YRI, 8 for JPT/CHB, and 2 are silent; this would produce the vector (15/27, 4/27, 8/27). These vector-valued predictions provide flexibility for researchers conducting a GWAS, as they can then, for example, define a cut-off criterion for including a subject within a population under study. For each subject, the continental classifier then returns, as the ethnicity label, the ethnicity with the largest number of trees (a minimal sketch of this vote aggregation and of the confidence computation above is given below). In the Results section, we explain such panels for resolving the population stratification problem in closely related populations within a continent or a country as well.

Evaluation

We built the ETHNOPRED classifiers using the HapMap II and HapMap III datasets as training data. Before using each classifier, we estimated its quality using 10-fold cross validation (CV) [70]. This meant partitioning the training dataset into 10 disjoint folds. Each time, we used nine of these folds (9/10th of the data) as the training set for learning a sequence of decision trees, applying the algorithm explained in the Predictive Modeling section. We then used the remaining fold (1/10th of the data) as a test set, computing, for each subject, the class labels (one from each decision tree) and also the majority vote over these models (corresponding to the ensemble classifier). As we knew the true label for these subjects, we then obtained an accuracy score (the percentage of correct predictions over the total number of predictions) for each of the disjoint decision trees and for the final ensemble. We repeated this process 10 times, each time measuring the accuracy of the predictors on a different fold. We estimated the final accuracy of the decision trees and the ensemble model as the average over these 10 folds, with variance based on the spread of these 10 numbers. We evaluated the quality of the ETHNOPRED(k) classifiers in the same way, where each such classifier returns the majority vote over the subsequence of the first k individual decision trees.

Results and discussion

Continental ancestry identification

Table 1 summarizes the statistics of the SNPs removed in the pre-processing step, which, recall, filtered out each SNP with a call rate of less than 100%, located on the X, Y, MT, or an unknown chromosome, or deviating from HWE; this removed 295454 SNPs, leaving 611146 SNPs for further analyses. The final ensemble model, learned from all 270 subjects of the HapMap Phase II datasets, was composed of 29 disjoint decision trees, each involving between 3 and 7 SNPs and between 4 and 8 leaf nodes/rules. This corresponds to a total of 178 rules involving 149 SNPs in the ensemble model (see Additional file 3: Appendix C, Additional file 4: Appendix D, and Additional file 5: Appendix E). Additional file 1: Appendix A and Figure 3 present the 10-fold cross validation (CV) accuracy of the disjoint decision trees built by the ETHNOPRED algorithm, showing that the mean 10-fold CV accuracy of these models was between 90.7% and 99.3%. We see that the ensemble over only the first tree had a mean accuracy of 97.4%; the accuracy decreased (albeit insignificantly) to 95.9% by adding the second tree; the ensemble over 3 (or more) trees was 100% accurate.
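A minimal sketch of the vote aggregation and of the binomial robustness computation described in the previous sections (the function names are ours; the numbers reproduce the continental analysis):

```python
from collections import Counter
from math import comb

def vote_vector(predictions, labels=("CEU", "YRI", "JPT/CHB")):
    """Turn per-tree predictions (None = tree silent because of
    missing SNPs) into a population-inclusion probability vector."""
    votes = Counter(p for p in predictions if p is not None)
    total = sum(votes.values())
    if total == 0:
        return {lab: 0.0 for lab in labels}  # all trees silent
    return {lab: votes[lab] / total for lab in labels}

def confidence(N, m, p):
    """Probability that at least m of N disjoint trees have all their
    SNPs called, when each tree is complete with probability p."""
    return sum(comb(N, i) * p**i * (1 - p)**(N - i) for i in range(m, N + 1))

# Worked example from the text: 15 CEU, 4 YRI, 8 JPT/CHB, 2 silent.
votes = ["CEU"] * 15 + ["YRI"] * 4 + ["JPT/CHB"] * 8 + [None] * 2
print(vote_vector(votes))          # CEU: 15/27, YRI: 4/27, JPT/CHB: 8/27
print(confidence(29, 9, 0.59049))  # ~0.999, consistent with Table 4
```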
While adding additional trees to the ensemble did not improve the accuracy, our approach did increase its robustness to missing SNP values, as it means ETHNOPRED can produce a classification label even if the subject does not have calls on all 149 SNPs. Recall that ETHNOPRED can classify most subjects with missing SNP values, as it can ignore any tree that includes missing SNPs and return as label the majority vote of the remaining trees. To further assess the accuracy of ETHNOPRED, we also used a hold-out set of 696 breast cancer subjects (348 breast cancer cases and 348 controls) genotyped in Alberta, Canada. We had self-declared ethnicity labels for the control subjects. Here, we compared our ETHNOPRED against the commonly used EIGENSTRAT system in terms of prediction accuracy and genomic control inflation factor (λ) improvement.

(Table 7, discussed below, summarizes the results of our studies on the various sub-continental classification problems. The "Number of Subjects, Split" column shows the total number of subjects, followed by the list of (ethnic group; number) pairs, giving the name and size of each subgroup. The "Number of SNPs" column gives the number of SNPs used for each study. The "Baseline" column gives the baseline accuracy of just using the majority class. The "DT1 (Number of SNPs), Accuracy" column provides the number of SNPs in the first decision tree and its estimated 10-fold cross-validation accuracy. The "Minimal Number of DTs (Number of SNPs), Accuracy" column gives the minimal number of disjoint decision trees required to achieve the highest accuracy, and the number of SNPs involved in these trees. The "Number of Robust DTs (Number of SNPs)" column gives the number of decision trees required to achieve robustness and the number of SNPs involved.)

Here, we extracted the values of ETHNOPRED's 149 SNPs for each subject. Note that 17 of these 149 SNPs had NoCalls for at least one subject. For each subject, each of ETHNOPRED's 29 decision trees predicted the subject's ethnicity to be one of "CEU", "YRI", "JPT/CHB", or "Missing". The continental classifier then calculates the covariate probability vector and returns the ethnicity with the majority vote as the predicted label for that subject. Additional file 6: Appendix F summarizes the ETHNOPRED output for the test dataset of 696 subjects. Prior knowledge of the subjects' ethnicity labels, when available, would help assess the predictive accuracies of ETHNOPRED (or EIGENSTRAT); e.g., many previously published studies (including ours [45]) have used the HapMap subjects' self-declared ethnicity labels to evaluate their ethnicity classifiers. We extrapolated this logic to calculate the prediction accuracies of ETHNOPRED over the 348 control subjects, based on their self-declared ethnicity. Additional file 7: Appendix G summarizes the subjects' ethnicity labels as classified by ETHNOPRED (with the number of decision trees involved), by EIGENSTRAT, and by self-declared ethnicity. Table 5 shows that ETHNOPRED's ethnicity classification matched closely with the subjects' self-reported ethnicity (96.8%); Table 6 provides similar statistics for EIGENSTRAT (97.4%). The ETHNOPRED classifier labels 677 subjects as "CEU"; we could therefore use only these subjects and exclude the other 19 subjects for which either "YRI" or "CHB/JPT" is the majority ancestry covariate. We then computed the inflation factor for these subjects using the Genomic Control method (a minimal sketch of this computation follows).
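A minimal sketch of the genomic-control computation, assuming one 1-degree-of-freedom χ² association statistic per SNP (names are ours):

```python
import numpy as np
from scipy.stats import chi2

def genomic_control_lambda(chi2_stats):
    """Inflation factor: median of the observed 1-d.o.f. chi-squared
    association statistics over its theoretical null median (~0.4549)."""
    return np.median(chi2_stats) / chi2.ppf(0.5, df=1)

# Usage: chi2_stats would hold one association statistic per SNP.
# lambda ~ 1 indicates no stratification; lambda > 1 indicates inflation.
```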
For the entire sample of 696 unclassified subjects in the association study, the computed inflation factor was 1.22, whereas the inflation factor computed for the 677 subjects classified as "CEU" by ETHNOPRED was 1.11, and the inflation factor for the 623 subjects classified as "CEU" by EIGENSTRAT was 1.10. While ETHNOPRED's learned classifier gives roughly the same improvement to the inflation factor as EIGENSTRAT, it offers the advantage of using a set of only 149 SNPs to classify the ethnicity label (CEU), which is significantly smaller than the 906,600 SNPs used by EIGENSTRAT.

Table 7 summarizes the results of our studies on the sub-continental population classification problems, respectively for the European, East Asian, African, North American, Kenyan, and Chinese cases. Additional file 1: Appendix A and Figures 4-9 show the 10-fold CV accuracy of the individual disjoint decision trees and of ensembles of varying size built over those trees using the ETHNOPRED algorithm. The baseline accuracy, calculated by simply assigning every subject to the majority class in each of these sub-continental identification problems, is as follows: 61.8%, 54.8%, 40.8%, 30.1%, 62.6%, and 55.7%. In each of these problems, the accuracy of a single decision tree, using 10, 12, 23, 19, 11, and 15 SNPs, is as follows: 79.0% ± 5.6%, 74.4% ± 7.9%, 66.2% ± 5.3%, 82.7% ± 5.4%, 79.2% ± 3.5%, and 47.2% ± 9.1%. These accuracies are significantly better than the baseline accuracy in every case except the Chinese one. Setting aside the Chinese case, ensembles of 3, 39, 21, 11, and 25 decision trees using 31, 502, 526, 242, and 271 SNPs have accuracies of 86.6% ± 2.4%, 95.6% ± 3.9%, 95.6% ± 2.1%, 98.4% ± 2.0%, and 95.9% ± 1.5%, all statistically significantly better than the accuracies of the corresponding individual decision trees. While adding additional trees to these ensembles does not improve the accuracy, by the arguments described in the Predictive Modelling section these additional trees do increase robustness to missing SNP values; our analysis shows that ensembles of 15, 67, 157, 70, and 31 decision trees using 180, 877, 4236, 1643, and 341 SNPs guarantee both accuracy and robustness to missing values in these cases. Additional file 2: Appendix B summarizes this analysis, and Additional file 4: Appendix D and Additional file 5: Appendix E give information on the SNPs used for the sub-continental population identification problems under the accuracy-satisfaction and the robustness-to-missing-values paradigms. As mentioned above, ETHNOPRED is unable to produce a classifier that can distinguish between Chinese in Beijing and Chinese in Denver. We believe this is not a limitation of our algorithm, given that Chinese immigration to the U.S. began less than 200 years ago.

Conclusions

This paper presents a new algorithm, ETHNOPRED, that learns classifiers (each an ensemble of disjoint decision trees) to identify the continental and sub-continental ancestry of a person. While this task is motivated by the challenge of addressing population stratification, it might be useful in and of itself, to help determine a person's ancestry.
Applying this approach to downstream association tests/analyses may reduce false positive and false negative findings by (i) removing the confounding subjects or, alternatively, (ii) treating the population classification probabilities as a covariate. Our results show that our machine learning approach is able to find distinctions between populations whenever such distinctions exist. Unlike AIMs, our method can accurately distinguish genetically close populations, such as subgroups within Europe, East Asia, Africa, North America, and Kenya. Unlike many structured association methods, ETHNOPRED is fast and easily extended to large-scale GWASs. Furthermore, ETHNOPRED uses decision trees, which are much simpler and easier to understand than models based on principal component analysis, such as EIGENSTRAT. Note also that decision trees can be easily translated into a set of comprehensible rules, which renders the model completely transparent to the user. While EIGENSTRAT typically uses data from genome-wide scans, often involving hundreds of thousands of SNPs, ETHNOPRED uses a small number of SNPs to accurately determine the ancestry of subjects. This means our method is especially useful even in the absence of whole-genome (high-density) SNP data (e.g., during Stage 2 or Stage 3 of a GWAS). Moreover, as it requires genotypes of only a small number of SNPs, it is less affected by genotyping errors than methods such as EIGENSTRAT, since there is typically a smaller percentage of genotyping errors when dealing with such a small number of probes. ETHNOPRED's ensemble structure makes it robust to missing values, as its multiple trees include enough redundancy that it can return accurate predictions even after discarding some decision trees because of missing SNPs. We believe that this property makes ETHNOPRED preferable to commonly used methods that rely on imputation for missing values, as imputation may introduce bias or imperfect estimates. These points all argue that future GWAS studies should consider using ETHNOPRED to estimate the ethnicity of their subjects, towards addressing possible population stratification. While our ETHNOPRED system is focused on predicting ethnicity, it sits within the general machine learning framework of using training information from a group of subjects to produce a personalized classifier that can provide useful information about subsequent subjects. This paper shows that this framework can work effectively to solve important problems.
7,809.2
2013-02-22T00:00:00.000
[ "Computer Science" ]
Selected configuration interaction with truncation energy error and application to the Ne atom

Selected configuration interaction (SCI) for atomic and molecular electronic structure calculations is reformulated in a general framework encompassing all CI methods. The linked cluster expansion is used as an intermediate device to approximate CI coefficients B_K of disconnected configurations (those that can be expressed as products of combinations of singly and doubly excited ones) in terms of CI coefficients of lower-excited configurations, where each K is a linear combination of configuration state functions (CSFs) over all degenerate elements of K. Disconnected configurations up to sextuply excited ones are selected by Brown's energy formula, ΔE_K = (E − H_KK) B_K² / (1 − B_K²), with B_K determined from the coefficients of singly and doubly excited configurations. The truncation energy error from disconnected configurations, ΔE_dis, is approximated by the sum of the ΔE_K of all discarded Ks. The remaining, connected configurations are selected by thresholds based on natural orbital concepts. Given a model CI space M, a usual upper bound E_S is computed by CI in a selected space S, and E_M = E_S + ΔE_dis + δE, where δE is a residual error which can be calculated by well-defined sensitivity analyses. An SCI calculation on the Ne ground state featuring 1077 orbitals is presented. Convergence to within near-spectroscopic accuracy (0.5 cm⁻¹) is achieved in a model space M of 1.4 × 10⁹ CSFs (1.1 × 10¹² determinants) containing up to quadruply excited CSFs. Accurate energy contributions of quintuples and sextuples in a model space of 6.5 × 10¹² CSFs are obtained. The impact of SCI on various orbital methods is discussed. Since ΔE_dis can readily be calculated for very large basis sets without the need of a CI calculation, it can be used to estimate the orbital basis incompleteness error. A method for precise and efficient evaluation of E_S is taken up in a companion paper. © 2006 American Institute of Physics. DOI: 10.1063/1.2207620

I. INTRODUCTION

For atoms and small molecules, Schrödinger's equation can be approximated by a matrix-eigenvalue equation,

H C_k = E_FCI,k C_k, (1)

where H is the representation of the Hamiltonian in terms of the Slater determinants or N-electron symmetry eigenfunctions constructed from a given orbital basis. Equation (1), which can be applied to the complete range of quantum mechanical problems associated with the given system, defines the full configuration interaction (CI) method,1 and E_FCI is the full CI (FCI) energy. In terms of FCI quantities, the exact eigenvalues E of Schrödinger's equation may be expressed as

E_k = E_FCI,k + ΔE_OBI, (2)

where ΔE_OBI is the error due to orbital basis incompleteness.2-5 Henceforth the subscript will be dropped, in the understanding that the following also applies to excited states. ΔE_OBI shall be further discussed in Sec. VII B. Full CI, on the other hand, is the central referent of all orbital methods based on Hamiltonians obtained from first principles:6,7 highly correlated CI (HCCI),8 symmetry-adapted cluster (SAC)9 and SAC-CI,10-12 size-consistent CI,13 coupled cluster (CC) methods,14,15 many-body perturbation theory (MBPT),16,17 electron propagator theory,18,19 and, more recently, density matrix variational theory,20,21 the density matrix renormalization group method,22 iterative CI (Refs. 23 and 24) and extended CC,25 and ab initio density functional theory.26
(Quantum Monte Carlo methods27 are becoming increasingly competitive but use an entirely different methodology.) Traditional FCI is an impossible task, except to test ab initio electronic structure methods28 with (necessarily) too small orbital bases lacking predictive value. The new FCI methods22,23,29,30 considerably extend the scope of traditional FCI but will continue to be limited by the size of the orbital bases. This paper addresses HCCI methods in general.8 Let us convene to call HCCI any CI method which, despite a formal lack of size extensivity,31-35 competes on a par with the better founded coupled cluster methods, such as CCSD,34 CCSD(T),36,37 CCSDT,38,39 CCSDT(Q),40 CCSDTQ,41,42 or even CCSDTQQn,43 for a given problem at hand. (S, D, T, Q, Qn, Sx, etc., refer to singles, doubles, triples, quadruples, quintuples, sextuples, etc.) Comprehensive studies on the water molecule44,45 in which the HOH angle is fixed at 110.6° and the OH distance is varied between R_e and 3R_e, with R_e ≈ 1.84345 a.u., show that in order to compete with CCSDT, a CISDTQ treatment is adequate: CISDTQ ≈ CCSDT.

The previous assertion applies, in general, to similar situations, atoms and small molecules. The reason for the obstinate endurance of CISDTQ vis-à-vis CCSDT is its variational character, not shared by standard CC methods. This is consistent with recent results46,47 in variational CC calculations48 achieving close to FCI quality. Accordingly, whereas nonvariational CC methods for N-electron systems include up to N-excited determinants that are normally absent in HCCI, the latter provides the best expansion coefficients up to the level of excitations actually incorporated, and that is enough, at least energy-wise, to compensate for the lack of size consistency. The problem with straight CISDTQ, nevertheless, has been the excessive computer resources required, for current and even future computational power.

Continuing with the water molecule, a recent application of the ultimate CC tool, namely CCSDTQQn,43 suggests that CISDTQQnSx is a clear match: CISDTQQnSx ≈ CCSDTQQn. This paper presents a complete and efficient approach to approximate CISDTQQnSx for atoms and small molecules, to within reliable and acceptable errors, by means of selected CI (SCI) and its corresponding truncation and residual energy errors relative to the full CISDTQQnSx treatment. (The residual energy error accounts for inaccuracies in the truncation energy error itself and for other errors requiring sensitivity analyses.) SCI calls for three methodological requirements: (i) a priori selection of configurations,8,49-51 (ii) a priori estimation of truncation energy errors, and (iii) a posteriori assessment of all other errors not calculated in (ii).
The present SCI method differs from its predecessors in two important aspects: (i) truncation energy errors are quantitatively assessed all along, making use of Brown's energy formula,52 and (ii) the selection scheme targets configurations rather than configuration-state functions (CSFs) or determinants; both advances, in combination, lead to orders-of-magnitude improvements in accuracy and precision. CI notation and Brown's formula are given in Sec. II. In Sec. III, the linked cluster expansion is compared with the determinantal cluster expansion to obtain n-excited determinantal CI coefficients in terms of those of lower-excited determinants, as already known7,53 but unexploited. The expressions for determinantal coefficients are then generalized to approximate configurational coefficients in a quick and reliable way, therefore opening the way for large-scale a priori applications of Brown's formula. Selection of configurations involves additional conceptualizations discussed in Sec. IV. Truncation and residual energy errors are taken up in Sec. V, and an application to the Ne atom is presented in Sec. VI. Present achievements, their impact on other ab initio methods, and conclusions are given in Sec. VII. Various details are given elsewhere.54 Efficiency requirements in connection with the matrix-eigenvalue problem demand the development of yet another variational method, presented in a companion paper.55

II. CI NOTATION AND BROWN'S FORMULA

A general HCCI model wave function can be written as56

Ψ = Σ_K Σ_g C_gK F_gK, (3)

where K and g label configurations and degenerate elements, respectively, and C_gK denotes a CI coefficient. Triply and higher-excited configurations can be classified into disconnected and connected ones. Disconnected configurations are those that can be expressed as products of combinations of singly and doubly excited ones, whereas connected configurations are all others. F_gK is an N-electron symmetry eigenfunction or CSF, expressed as a linear combination of n_K Slater determinants D_iK obtained with a symmetric projection operator O(Γ, γ)57 over all pertinent symmetry operators Γ for a given (N-electron) irreducible representation γ.58-61

Let Ψ(−F_gK) denote N(Ψ − F_gK C_gK), where N is a normalization factor; viz., let us assume that after deletion of F_gK, the new wave function Ψ(−F_gK) has the same remaining expansion coefficients except for renormalization. The energy contribution ΔE_gK of F_gK can then be approximated by ΔE_gK ≈ E − ⟨Ψ(−F_gK)|H|Ψ(−F_gK)⟩, which readily yields Brown's formula,52

ΔE_gK = (E − H_gK,gK) C_gK² / (1 − C_gK²). (6)

In Eq. (6), E = ⟨Ψ|H|Ψ⟩. Approximation (6) is particularly good for small values of ΔE_gK, viz., for expansion terms F_gK eventually to be discarded, like triply and up to sextuply excited configurations. As pointed out in Ref. 62, similar equations of perturbational lineage have been used by other authors. Equation (6) requires previous knowledge of the C_gK coefficients, which so far could only be obtained after making a calculation.63 Quick prediction of the C_gK s for each g of a given K is probably hopeless. Fortunately, as shown in Sec. III E, it is possible to predict the configurational B_K coefficients defined below. First, Eq. (3) is rewritten as

Ψ = Σ_K B_K G_K (7)

in terms of normalized symmetry configurations G_K,

G_K = B_K⁻¹ Σ_g C_gK F_gK, (8)

B_K = (Σ_g C_gK²)^(1/2). (9)

Similarly to ΔE_gK in Eq.
(6), ΔE_K for expansion (7) is given by

ΔE_K = (E − H_KK) B_K² / (1 − B_K²), (10)

to be used just for estimating an approximate truncation energy error. The variational calculations are still carried out via Eq. (3), but the selection process targets configurations G_K instead of the F_gK s, whereby the need to predict C_gK coefficients is eliminated. In the next section, predictive formulas for the B_K coefficients of triply and up to sextuply excited configurations will be discussed. Returning to Eq. (10), for highly excited configurations the term (E − H_KK) is generally of the order of several hartree; thus E can initially be approximated by any correlated energy, viz., a singles and doubles CI (CISD) energy. Also, H_KK can be well approximated by

H_KK ≈ ⟨D_iK|H|D_iK⟩, (11)

where D_iK is any determinant of K. In atomic work, where the degeneracies g_K may easily reach several thousands, simplification (11) allows Brown's formula to be used before generating the very expensive F_gK s, making it possible to decide at this early stage whether to incorporate these explicitly in an ensuing variational treatment or to leave them out in the form of a contribution ΔE_K to the truncation energy error. The final expression for the total truncation and residual energy errors is postponed to Sec. V.

A. Determinantal CI and Oktay Sinanoğlu

A CI expansion in terms of n-excited determinants and a single reference determinant D₀ can be expressed in cluster form as64

Ψ = c₀ D₀ + Σ_{i,a} c_i^a D_i^a + Σ_{i<j, a<b} c_ij^ab D_ij^ab + ⋯ . (12)

In a landmark paper,65 Sinanoğlu argued that the coefficients of disconnected quadruply excited determinants are well approximated by products of the coefficients of doubly excited ones. In general, however, the energy contributions of triples cannot be neglected, since they are about equally as important as those of quadruples.66 Analogously, in going to a higher order of approximation, quintuples and sextuples, rather than just sextuples, must be incorporated, even for closed-shell systems.44

B. Exponential ansatz for the wave function

Following the linked cluster theorem,67-69 the introduction of an exponential wave function of a cluster operator T,70,71

Ψ = exp(T) D₀, (13)

established a powerful theoretical framework free from so-called CI traps, namely, the CI limitation to a given and necessarily low level of spin-orbital excitations. Let the cluster operator be defined as70

T = T₁ + T₂ + T₃ + ⋯, (14)

with

T₁ = Σ_{i,a} t_i^a â_a† â_i, T₂ = Σ_{i<j, a<b} t_ij^ab â_a† â_b† â_j â_i, etc., (15)-(17)

in terms of creation operators â_a† for unoccupied orbitals a and annihilation operators â_i for occupied orbitals i. Developing the exponential and collecting terms,31 the following exact relationships between the determinantal CI coefficients c_{ijk⋯}^{abc⋯} and the cluster amplitudes t_{ijk⋯}^{abc⋯} are obtained:33

c_i^a = c₀ t_i^a, (18)

c_ij^ab = c₀ (t_ij^ab + t_i^a t_j^b − t_i^b t_j^a), (19)

c_ijk^abc = c₀ [t_ijk^abc + (t_i^a t_jk^bc − t_i^b t_jk^ac + ⋯; nine signed terms) + (t_i^a t_j^b t_k^c − t_i^a t_j^c t_k^b + ⋯; six signed terms)], (20)

and so on for c_ijkl^abcd and higher-excited CI coefficients. Apart from the coefficient c₀, Eqs. (18)-(20) are particular cases of Eq. (A4) of Ref. 53. A hierarchy of coupled-cluster methods may be derived by replacing the right-hand side (rhs) of (18)-(20) and similar equations into the full CI equations.33 Instead, we shall move in the opposite direction.

C. Predictor of determinantal coefficients

By replacing the rhs of (18) in (19) one gets

t_ij^ab = c_ij^ab / c₀ − (c_i^a c_j^b − c_i^b c_j^a) / c₀². (21)

When (18) and (21) are replaced into the rhs of (20), it follows that

c_ijk^abc = c₀ t_ijk^abc + (1/c₀)(c_i^a c_jk^bc − c_i^b c_jk^ac + ⋯; nine signed terms) − (2/c₀²)(c_i^a c_j^b c_k^c − c_i^a c_j^c c_k^b + ⋯; six signed terms). (22)

Equation (22) shows that the c_ijk^abc coefficients are given exactly in terms of coefficients of lower-excited detors plus the irreducible amplitude t_ijk^abc. Apart from the occurrence of c₀, these and similar equations for the coefficients associated with higher-excited determinants are particular cases of Eq. (A4′) of Ref. 53. The exciting promise of the above equations stems from the reasonable hypothesis that, distinct from the c_{ijl⋯}^{abc⋯} coefficients, the t_{ijk⋯}^{abc⋯}
In general one has

c_(n-excited) = t_(n-excited) + (products of lower-excited coefficients).   (23)

Equation (23) is a shorthand for a predictor of CI coefficients of the n-excited determinants in terms of coefficients of lower-excited detors, provided that the triply and higher-excited irreducible t_ijk...^abc... amplitudes on the rhs can be neglected. If t_ijk...^abc... ~ 0, as first envisioned by Sinanoğlu, Eq. (22) and similar ones can be used to estimate the c_ijk...^abc... coefficients for the evaluation of approximate truncation energy errors of disconnected determinants through an equation similar to (6) or (10). This is different from Sinanoğlu's original proposal65 to use Eq. (22) and similar ones as part of a scheme to calculate the total energy itself. Also, the need for large-scale CI is here anticipated as unavoidable. Moreover, as is well known,72 t_ijk^abc ~ 0, and even t_ijkl^abcd ~ 0, is not always justified, creating the need for the additional considerations discussed in Sec. IV.

D. From determinants to configurations

Simplifications that are essential for large-scale application of Brown's formula will now be considered for the first time. In molecules there is not much of an incentive to contract determinantal expansions into CSFs.8 But the situation changes, even in molecules, when the final purpose becomes to contract sums of determinants into symmetry configurations G_K, Eq. (8), embracing all degenerate elements in a single term. Here the effective contraction factor becomes 1/n_K, viz., it is equal to the reciprocal of the number of determinants for a given configuration K, which is around 0.05 for CISDTQ with the Abelian point-symmetry groups, falling under 0.0001 in atomic CISDTQ with orbitals of high angular momentum, and continuing to decrease for higher excitations. Therefore, the configurational counterpart of equations such as (22) is considered next.

The configurational cluster expansion is given by

Psi = B_0 (Phi_0 + Sum_{i<=j..., a<=b...} B_ij...^ab... Phi_ij...^ab...),   (24)

where <= has now taken the place of < in Eq. (12), symmetry orbitals replace spin-orbitals, and the summation over the degeneracy index g has already taken place, thus hiding the linear variational coefficients C_gK through Eqs. (8) and (9). Formally, other than for calculation purposes, (24) is entirely equivalent to (12), as well as to (7) and (3); thus any Phi_ijk...^abc... is identical with some G_K of (7).

E. Predictor of configurational coefficients

A priori prediction of the C_gK coefficients of Eq. (3) was discussed by Pipano and Shavitt,73 but the lengthy calculations of their proposal were never implemented. Rather than deriving equations similar to (23), approximate equations to predict the configurational coefficients B_ijk...^abc... of Eq. (24) shall be guessed. The correctness of the guessed equations will be tested by means of actual calculations.

When there are no equal signs among the participating orbitals and all concerned degeneracies are equal to one, the predictor equations for the configurational coefficients B_K of (9), or B_ijk...^abc... of (24), should be identical to those for the c_ijk...^abc... coefficients of determinantal expansions, Eq. (22) and similar ones. The question to be answered, then, is how Eq. (22), for example, is to be modified when there are equal orbital indices. Let us consider the extreme case when all occupied orbitals i are equal among themselves, as well as the excited orbitals a.
Since the expansion in Eq. (24) does not contain repeated coefficients, the recipe must be to drop all terms with repeated coefficients. Consequently, for configurational coefficients, Eq. (22) changes into Eq. (25); the term B^_iii^aaa on the rhs of (25) is a linked or irreducible coefficient (in Sinanoğlu's nomenclature), which will be neglected in the evaluation of the left-hand side (lhs) of (25). In this way, of the 15 terms of Eq. (22) only two survive. The codes expressing the 1440 formulas for up to sextuply excited coefficients [2(2q - 2) formulas for coefficients of q-excited configurations] were produced by FORTRAN programs and are further discussed elsewhere.54 Moreover, the irreducible components B^_ijk...^abc... are significant in many triple excitations, and also in those instances where the remaining terms on the rhs of (25) are zero, as discussed in Sec. IV.

A. Disconnected and connected configurations

The selection process described so far may be summarized as follows: given a model space M, all disconnected configurations K with energy contributions greater than an energy threshold T_egy,

|Delta-E_K| > T_egy,   (26)

are included in a selected space S that will subsequently be subjected to a variational treatment. However, other configurations also require systematic incorporation, since there are two instances in which the above criterion is inadequate:76 (i) it fails for connected configurations, namely those for which, for a given set of indices abc...ijk..., only t_ijk...^abc... on the rhs of (23) is different from zero74 (all others are called disconnected ones; examples of connected configurations are pdf in Li 2S and pdfg in Be 1S75), independently of the magnitude of the disconnected terms in (22); and (ii) it is not sufficient for triply excited disconnected configurations: here, the largest part of the energy contributions comes from non-negligible irreducible t_ijk^abc amplitudes and the corresponding B^_ijk^abc coefficients.

Connected configurations do not exist when using orbital bases lacking spatial symmetry. They necessarily occur when at least one irrep does not appear as a fully occupied orbital in the reference configuration Phi_0, namely, in all atomic and linear-molecule states and in few-electron molecules with spatial symmetry. Our aim shall just be to guarantee that all deleted connected configurations, together with the disconnected triples that were discarded by Brown's energy criterion, contribute less than a given amount of energy.

B. Additional selection criterion

The occurrence of connected configurations makes it necessary to introduce a new requisite: the correlation orbitals a, b, c, ... must be approximate natural orbitals,77 viz., eigenfunctions of the reduced first-order density matrix or, better yet, average natural orbitals,78 so that orbital symmetry is preserved.

Let gamma(1,1') be the average reduced first-order density matrix with eigenfunctions phi_a and eigenvalues (occupation numbers) n_a,

gamma(1,1') = Sum_a n_a phi_a*(1) phi_a(1').   (27)

In studies on atomic electron correlation79 it was found that configurations can be chosen by the following criterion: for each q-excited configuration K, the product P(q, K) of the corresponding occupation numbers is calculated,

P(q, K) = Prod_{a in K} n_Ka,   (28)

where each factor corresponds to a correlation natural orbital of K. If g is the symmetry degeneracy of natural orbital a, then n_Ka = g n_a. The whole configuration (all corresponding degenerate elements) is incorporated if P(q, K) is greater than some occupation-number threshold T_on,

P(q, K) > T_on.   (29)

A functional form for T_on can be expressed in terms of the excitation level q and of a parameter m as

T_on(m) = 10^(-mq),   (30)

where m is shown explicitly on the lhs of (30) for later purposes. Thus, 10^(-m) may be interpreted as an average occupation number below which configurations involving a given natural orbital are deleted from an original model space M.
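A minimal sketch of the occupation-number criterion of Eqs. (28)-(30); the orbital labels, occupation numbers and degeneracies below are invented placeholders.

import math

def passes_occupation_test(occ, deg, orbitals, q, m):
    """Keep a q-excited configuration K if the product of its
    degeneracy-weighted natural-orbital occupation numbers, Eq. (28),
    exceeds the threshold 10**(-m*q) of Eqs. (29)-(30)."""
    P = math.prod(deg[o] * occ[o] for o in orbitals)
    return P > 10.0 ** (-m * q)

# Illustrative natural-orbital occupations (not the paper's values):
n = {"3d": 5.0e-3, "4f": 3.0e-4, "5g": 4.0e-5}
g = {"3d": 5, "4f": 7, "5g": 9}

# A triple (q = 3) exciting into 3d, 3d, 4f, tested with m = 2:
print(passes_occupation_test(n, g, ["3d", "3d", "4f"], q=3, m=2))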
In practice, starting from a sufficiently small energy threshold T_egy, the value of m is increased until successive energy lowerings start to converge to within a prescribed residual energy error. Since the actual value of m in (30) guaranteeing a given contribution to the residual energy error depends on the holes i, j, k, ... of the configuration involved, there is ample room for enriching Eq. (30).54

C. Strategy for configuration selection

The following strategy for configuration selection is adopted (a code sketch of this five-step pass is given below):

(i) All triples with P(3, K) >= 10^(-3m) are selected. This criterion is applied to all triply excited configurations alike, disconnected and connected ones. The value of m must be sufficiently high to guarantee that the energy contribution of all deleted connected configurations is negligible. This is all that is to be done to select connected triples.
(ii) As to the disconnected triples that were not selected in (i), all those with |Delta-E_K| >= T_egy are selected, while the energy contributions of the discarded ones are accumulated into the total truncation energy error Delta-E_dis, Sec. V A.
(iii) All connected quadruples with P(4, K) >= 10^(-4m) are selected. This is the mechanism used to incorporate the t_ijkl^abcd associated with connected configurations.
(iv) All disconnected quadruples with |Delta-E_K| >= T_egy are selected, while the energy contributions of the discarded ones are accumulated into Delta-E_af^dis. This implies neglecting all t_ijkl^abcd associated with disconnected configurations deleted by the T_egy test, no matter how significant they might be.
(v) Quintuply and sextuply excited configurations are selected according to (iii) and (iv).

V. ENERGY EXPRESSION

The discussions in Secs. II and IV allow the development of an appropriate notation and a general equation for the CI energy in terms of the usual energy upper bound, a computable (rather than formal) truncation energy error, and a residual energy error. The latter two contain a part corresponding to disconnected configurations, which can be estimated a priori, and another one due to connected configurations, which can be evaluated after studying energy convergence as a function of the parameter m of Eq. (30).

A. Effect of truncating disconnected terms

The a priori computable truncation energy error Delta-E_dis comes from truncations of disconnected configurations, with Delta-E_K given by Eqs. (10) and (11) and predictor equations for CI coefficients such as Eq. (25) and similar ones.54 Delta-E_dis decreases monotonically with the threshold T_egy introduced in (26). Delta-E_dis is an approximation to an exact, usually unknown truncation energy error Delta-E_exact^dis,

Delta-E_exact^dis = Delta-E_dis + delta-E_dis.   (32)

For large values of Delta-E_dis, the unknown quantity delta-E_dis is comparatively small. As T_egy is made smaller, Delta-E_dis becomes tiny, and delta-E_dis, which may end up being larger than Delta-E_dis, can be interpreted as a residual error obtainable through sensitivity analyses.

In atoms, Delta-E_dis has two sources: Delta-E_bf^dis from truncations before CSF evaluation and Delta-E_af^dis from truncations afterwards.

B. Effect of truncating connected terms

The existence of connected configurations, and the need to truncate most of them, brings in a new kind of error, to be denoted delta-E_con. Distinct from Delta-E_dis, and analogously to delta-E_dis, delta-E_con cannot be computed a priori; it can only be estimated after studying suitable patterns of energy convergence; see Ref. 54.
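The five-step pass of Sec. IV C can be condensed into the following Python sketch. The configuration records (fields q, connected, P and dE) are a hypothetical data layout for illustration, not the AUTOCL/ATMOL implementation.

def select_configurations(configs, T_egy, m):
    """One pass over triply to sextuply excited configurations,
    following steps (i)-(v). Returns the selected list and the
    accumulated truncation energy error from discarded disconnected
    configurations (a.u.)."""
    selected, dE_trunc = [], 0.0
    for K in configs:
        q, P = K["q"], K["P"]
        if q == 3:
            if P >= 10.0 ** (-3 * m):           # (i): all triples
                selected.append(K)
            elif not K["connected"]:
                if abs(K["dE"]) >= T_egy:       # (ii): Brown's test
                    selected.append(K)
                else:
                    dE_trunc += K["dE"]         # accumulate truncation error
        else:                                   # q = 4, 5, 6, per (iii)-(v)
            if K["connected"]:
                if P >= 10.0 ** (-q * m):       # (iii): connected by P-test
                    selected.append(K)
            elif abs(K["dE"]) >= T_egy:         # (iv): disconnected by Brown
                selected.append(K)
            else:
                dE_trunc += K["dE"]
    return selected, dE_trunc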
For sufficiently small thresholds, delta-E_con can also be understood as a residual error. The sign of delta-E_con is always negative, since it is made up of bona fide variational energy contributions which have not been incorporated into the final calculation.

C. Energy in a model space M

The energy E_M in a model space M is written as

E_M = E_S + Delta-E_dis + delta-E_dis + delta-E_con,   (34)

where E_S is the variational upper bound in the selected space S and the delta-E values are conditioned by various thresholds T.54 Since E_M is well defined, its value can in principle be obtained by a limiting process, letting all thresholds in T become sufficiently small, thus

E_M = lim_{T -> 0} E_S(T).   (35)

In very precise calculations, however, one must always settle for threshold values in T that are still too large to qualify as sufficiently small, and therefore the use of the residual errors delta-E_dis and delta-E_con, Eq. (36), is inevitable. Before the delta-E values become known, convergence studies necessarily monitor the gross energy

E_gross = E_S + Delta-E_dis,   (37)

eventually converging to the net value E_M. Equation (37) can easily be applied and may well be all that is needed if the precision requirements on E_M are not too tight. Otherwise, one must fall back on the more detailed Eq. (36).

A. Choice of system

As a numerical test, the Ne ground state is chosen because it is the simplest well-known example66,79-83 exhibiting many of the complexities of a highly correlated CI. The basis set consists of 103 energy-optimized radial orbitals79 up to l = 13, 12s12p11d10f10g9h8i7k6l5m4n3o3q3r, amounting to 1077 orbitals.

Use was made of two programs: AUTOCL (106 000 lines of code and comments), for the calculation of pruned lists of CSFs together with the corresponding truncation energy error Delta-E_bf^dis, and ATMOL (159 000 lines of code and comments), for atomic and molecular SCI. The relatively large sizes of the above codes come from the formulas used to predict energy contributions from quintuply and sextuply excited configurations. Both programs can be downloaded from a website.54

Full CI with the chosen basis calls for 2.4 x 10^25 CSFs (Ref. 84) and 1.4 x 10^26 determinants disregarding spatial symmetry. CISDTQQnSx up to l = 7 demands 6.5 x 10^12 CSFs (4.2 x 10^15 distinct determinants), whereas CISDTQ only requires 1.4 x 10^9 CSFs containing 1.1 x 10^12 determinants. Thus the size of the calculation to be presented exceeds by orders of magnitude the size of any calculation previously attempted. Despite the neglect of relativistic effects, cm^-1 precision within the CISDTQQnSx model is sought in order to exhibit various challenges and opportunities.

B. Beginning of calculation

A complete calculation starts as follows: (i) A CISD is run to obtain approximate natural orbitals. (ii) Using these approximate natural orbitals, a CISDT is carried out, and its energy as well as the CI coefficients of single, double, and leading triple excitations are saved on a file for later use in the prediction of the configurational B_K coefficients needed for the a priori evaluation of estimates of energy contributions of disconnected configurations. (iii) With data from (ii), a pruned list of CSFs for CISDTQQnSx is calculated using suitable pruning parameters.54 (iv) Using the list of CSFs obtained in (iii), an approximate CISDTQ wave function is obtained for the purpose of improving upon the coefficients of single, double, and triple excitations first calculated in (ii). (v) These are used to run (iii) once more, yielding a very similar list of CSFs; the new CI coefficients of singles, doubles, and triples yield more accurate truncation energy errors Delta-E_bf^dis and Delta-E_af^dis. In all, the said CI coefficients were iterated fourfold.

The eigenproblem is solved variationally by a method whose accuracy can be controlled,55 presently to within less than one microhartree in the largest reported calculations.

C. General strategy

Convergence of connected configurations was first studied in detail54 as a function of m until reaching very small values of T_on(m), using triples and quadruples truncated after l = 7, for the purpose of gaining an idea about convergence behavior and expected values of m for l = 13. The final parameters for pruning the configuration list before CSF evaluation were obtained from various studies54 aiming at both a sufficiently small truncation energy error Delta-E_bf and a negligible residual error delta-E_bf. In particular, the maximum value of degenerate elements g_K per configuration was set at g_K = 261, whereas a maximum value of n_K = 70 000 was chosen for reasons explained in Ref. 54.
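Before turning to the results, a bookkeeping sketch of the Sec. V energy decomposition as reconstructed above. For illustration, the entire -373 microhartree difference between the E_S and E_M values quoted in Sec. VI is assigned here to the computable truncation term, with the residual entries taken from the bounds estimated below; this lumping is a simplification for illustration only.

E_S = -128.936541          # variational upper bound in S (a.u.)
dE_dis = -373e-6           # computable truncation error (a.u., illustrative)
delta_dis = -0.05e-6       # residual, disconnected (estimated bound, a.u.)
delta_con = -0.55e-6       # residual, connected (estimated bound, a.u.)

E_gross = E_S + dE_dis                     # Eq. (37)
E_M = E_gross + delta_dis + delta_con      # Eq. (34)
print(f"E_gross = {E_gross:.6f} a.u.")     # -128.936914
print(f"E_M     = {E_M:.6f} a.u.")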
D. CISDTQQnSx results for the Ne ground state

After the several studies outlined in the previous subsection, CISDTQQnSx calculations are presented as a function of T_egy in Table I. The occupation-number thresholds are set at T_on,dis = T_on,con = 3 x 10^(-8q). The second and third columns show the number of determinants, N_sd (in 10^9), and of CSFs, N_csf (in millions), respectively. These numbers, occurring here in many routine calculations, are comparable to those of state-of-the-art full-CI prowess,28 except that in the present case the orbital bases are considerably larger, while the sparseness of the CI matrices is greatly reduced by the selection process. The fourth column holds the number of nonzero Hamiltonian matrix elements, N_hme (in 10^12). In the penultimate row, this number amounts to 9.59 x 10^12, entailing 153 terabytes of disk storage in a traditional application of Davidson's eigensolver,85 in which the matrix elements are expensive to evaluate, thus precluding their recalculation at each iteration. Fortunately, this demand is obviated by the use of a select-divide-and-conquer method55 to solve the eigenproblem.

Neglecting residual errors delta-E coming from quintuples and sextuples, the following conclusions are obtained. (i) For T_egy = 10^-8 a.u., Delta-E_af^dis is larger than the actual energy lowering, yielding a too-low gross energy. However, as T_egy becomes 10^-10 a.u. and smaller, the Delta-E_af^dis values achieve remarkable accuracy, allowing a reliable converged energy to be produced as far as disconnected configurations are concerned; it is estimated that |delta-E_af^dis| <= 0.05 microhartree. (ii) In order to estimate delta-E_af^con, a final calculation with T_egy = 10^-11 a.u. and T_on,dis = T_on,con = 10^(-8q) was carried out and is reported in the last row of Table I. Considering patterns of energy convergence of connected configurations from previous studies,54 it may be estimated that -delta-E_af^con <= 0.55 +/- 0.15 microhartree. (iii) Studies of Delta-E_bf^dis values in various circumstances54 indicate that delta-E_bf^dis is negligible, around +/-0.15 microhartree. (iv) From pilot calculations it was estimated that -delta-E_bf^con <= 0.5 microhartree. The significance of connected configurations for still higher values of g_K and n_K not yet considered is deemed to be equally negligible. Adding both contributions, -delta-E_bf^con <= 1 microhartree.

As to the truncation error Delta-E_bf^dis(QnSx) from quintuples and sextuples, it amounts to 373 microhartree.

E. Comparison with previous Ne results

The best previous variational calculation79 used the same orbital set and consisted of a multireference CISD (MRCISD) supplemented with connected configurations selected according to Eqs. (33) and (35), including 0.35 x 10^6 CSFs and 34 x 10^6 determinants. Unknown and unsuspected by the authors at the time,79 its CI energy error amounted to 739 microhartree, as may be deduced from Table I after subtracting the energy contribution from quintuples and sextuples.

From the previous subsection, E_S = -128.936541 a.u. (Ne) and E_M = -128.936914(2) a.u. (Ne) (Table II). The energy error Delta-E_OBI due to orbital-basis incompleteness was computed previously as -643 +/- 20 microhartree (this value may be deduced from Table VI of Ref. 79) through studies of successive saturation with radial functions for a given l value at the CISD level of approximation, together with patterns of convergence of angular energy limits.2,66
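The 153-terabyte figure above can be checked with a one-liner, assuming roughly 16 bytes per stored nonzero element (an 8-byte value plus an 8-byte packed index); the per-element size is an assumption for illustration, not a figure from the paper.

n_hme = 9.59e12             # nonzero Hamiltonian matrix elements
bytes_per_element = 16      # assumed: 8-byte value + 8-byte index
total_tb = n_hme * bytes_per_element / 1e12
print(f"{total_tb:.0f} TB")  # ~153 TB, matching the quoted storage demand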
Adding E_M to Delta-E_OBI, an upper bound E_u = -128.937557 a.u. (Ne) is obtained, 13 microhartree above the "exact" value estimated by Chakravorty et al.,86 and probably fortuitously close to it, since septuples and higher excitations are deemed to contribute slightly more than the observed 13 microhartree. More accurate estimates of Delta-E_OBI and of the energy contributions beyond quadruples are needed to test the reliability of this exact-energy prediction.86

A. Achievements

A priori SCI, together with truncation and residual energy errors, Eq. (36), has been generally formulated, and a practical approach to approximate CISDTQQnSx has been given. SCI rests upon (i) Brown's formula52 (Sec. II), (ii) the use of predictors for configurational CI coefficients to select and assess disconnected configurations via Brown's formula, (iii) the use of natural-orbital concepts to select connected configurations and disconnected triples, and (iv) sensitivity analyses to determine residual errors.

Predictors for configurational rather than determinantal coefficients are essential to reduce computational requirements by several orders of magnitude. Gross energies converge from below; however, as T_egy becomes sufficiently small, the Delta-E_af^dis values become remarkably accurate, well under 1 microhartree (see Table I).

As implemented here, the method can be applied quite generally to an important range of electronic systems, including all atoms, for CISDTQQnSx calculations in model CI spaces exceeding 10^12 CSFs and quadrillions of determinants.

SCI has been tested on the Ne ground state using a single computer processor. A rapidly convergent sequence of energies and wave functions (Table I), together with calculated truncation and residual energy errors, is used to achieve a precision of 2 microhartree within a CISDTQ model. A less precise result within a CISDTQQnSx model is also given. The final energy result still needs to be complemented by similar analyses with septuply and higher-excited configurations, and also by more accurate estimates of the error Delta-E_OBI due to orbital-basis incompleteness,3,4 as discussed earlier in the Introduction in connection with Eq. (2).

The largest previous CI calculation28 involved ten electrons, 34 orbitals, 9.68 x 10^9 determinants, and 128 processors, and attained absolute convergence to within 5 microhartree. For comparison, the largest calculation in this paper (penultimate row of Table I) also involves ten electrons, with 1077 energy-optimized orbitals and 88 x 10^6 CSFs expanded in 35 x 10^9 determinants in the selected space S, with all corresponding CI coefficients being calculated variationally at least once, while energy convergence in S attains a fraction of 1 microhartree. (The last entry of Table I features 41 x 10^9 determinants.) A recent FCI calculation87 on the eight valence electrons of the C2 molecule uses 68 orbitals, 65 x 10^9 determinants, and 432 processors, achieving convergence with a residual norm of 10^-5.
B. Estimate of the orbital-basis incompleteness error Delta-E_OBI

Whatever ab initio method is being used, Eqs. (31)-(33) can be applied to quickly estimate the orbital-basis incompleteness error Delta-E_OBI without ever carrying out a major calculation, if connected configurations can be neglected or do not occur altogether: Delta-E_OBI can be approximated by the difference between the Delta-E_af^dis values for one very large basis set and for the original basis. To do so one only needs (i) the CI coefficients of singly, doubly, and triply excited configurations, from CISDT, CCSD(T) or HCCI wave functions, (ii) one diagonal matrix element between the Slater determinants for each configuration involved, together with the corresponding one- and two-electron integrals, and (iii) the configurational coefficients calculated from the predictor equations, Eq. (25) and similar ones.

A sequence of calculations with increasing basis-set size can be used to yield increasingly small Delta-E_OBI values (in magnitude), which may be extrapolated (a sketch is given at the end of this section).

C. Impact on other methods and outlook

In order to formulate a theoretical model,88 one must settle for (a) accuracy with respect to the Breit-Dirac-Schrödinger theory or experiment, (b) precision with respect to the model itself (truncation and residual errors for energies, and sensitivity tests for all properties, in general), and (c) the method to be used, for example CISDTQQnSx, or any of the suggestions below.

Selected CI is too general to constitute a theoretical model on its own; however, it can be used to formulate new theoretical models or to improve upon existing ones. Brown's formula, Eq. (10), used in conjunction with the predictor equations for CI coefficients of higher than double excitations, can smoothly replace perturbation theory in all so-called PT2 methods89,90 since it is more accurate and about as efficient, thanks to Eq. (11). In principle, it can also be applied beyond PT2.91

Multireference CI (Refs. 92-96) continues to be actively developed.97 In carrying out the transition to SCI, MRCI can first be supplemented with the truncation energy error Delta-E_af^dis, Eq. (33), and with connected configurations,79 given the case. Next, the configuration generator in MRCI can be extended from MRCI-SD to MRCI-SDTQ. If Qn and Sx excitations are considered only at the level of evaluation of truncation energy errors, the corresponding effort (which increases linearly with the number of configurations) is a small fraction of the one required for a selected CISDTQ calculation. In any case, the introduction of leading selected quintuples and sextuples into the wave function is straightforward.

General incorporation of higher than sextuply excited SCI is not a good idea, in general, although septuples and octuples are feasible in atoms.54 In molecules, the variational expansion coefficients c_ijk...^abc... can be used to obtain accurate t_ijk...^abc... amplitudes, which in turn can be fed into a CC ansatz,7 hopefully improving the energy and the efficiency of CC methods in a single step without the need for CC iterations. Perhaps more interesting, accurate CI wave functions may be used to tailor CCSD.98 The density-matrix renormalization-group method29 and the growing family of iterative CI methods23-25,99,100 can be used with the largest possible bases to estimate the energy errors due to truncations beyond quadruples, thus enhancing SCI capabilities.
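A hedged sketch of the Delta-E_OBI recipe of Sec. B above; the basis labels and Delta-E_af^dis values are hypothetical placeholders used only to show the differencing and the convergent sequence to be extrapolated.

def dE_OBI_estimate(dE_af_dis_original, dE_af_dis_large):
    """Delta-E_OBI ~ Delta-E_af_dis(large basis) - Delta-E_af_dis(original
    basis), per the recipe above (values in a.u.)."""
    return dE_af_dis_large - dE_af_dis_original

original = -1.10e-3                                 # original basis (a.u.)
larger = {1500: -1.60e-3, 2200: -1.72e-3, 3000: -1.76e-3}

for size, val in larger.items():
    est = dE_OBI_estimate(original, val) * 1e6
    print(size, f"{est:+.0f} microhartree")
# The resulting sequence (-500, -620, -660, ...) would then be
# extrapolated toward the basis-set limit.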
By advancing reliable CISDTQQnSx, the present SCI method considerably extends the scope of accurate atomic and molecular ab initio electronic-structure applications. If the orbital bases are well chosen101,102 or well developed (for example, through energy optimization79,103), one may envision unprecedented accuracy for problems tractable by CISDTQQnSx or by the SCI-improved methods mentioned above.

Needless to emphasize, SCI applies mutatis mutandis to the Dirac-Schrödinger equation,104 and also to the Breit-Dirac-Schrödinger equation,105 provided only positive-energy orbitals are used, viz., within the no-pair Hamiltonian approximation.106 In fact, the computer program ATMOL54 has precisely that capability for general atomic states.

After what has been said, there remains the input/output bottleneck107 inherent to large-scale applications of Davidson's eigensolver85 when applied to CI matrices expressed in terms of CSFs. In the following paper,55 this bottleneck is overcome by a select-divide-and-conquer variational procedure based on the present configuration selection scheme.

ACKNOWLEDGMENTS

I am indebted to Professor Ramon Carbó-Dorca for stimulating conversations and lasting friendship during my sabbatical year at the Institute of Computational Chemistry of the University of Girona (Catalonia, Spain) in 2001-2002. Discussions with various colleagues stirred further motivations, particularly with Professors Ignacio Garzón (Mexico, D.F.), Ingvar Lindgren (Goteborg, Sweden), and Alejandro Ramírez-Solís (Cuernavaca, Mexico), and Dr. Oliverio Jitrik. The sharp and kind criticism of Professor Isaiah Shavitt (Urbana, Illinois) is deeply appreciated. The Dirección General de Servicios de Cómputo Académico (DGSCA) of the Universidad Nacional Autónoma de México and the Computing Center at my own institute are thanked for their excellent and free services. Support from CONACYT through Grant Nos. E-26726 and 44363-F is gratefully acknowledged.
TABLE I. Convergence of the CISDTQQnSx ground-state energy of Ne with a 12s12p11d10f10g9h8i7k6l5m4n3o3q3r basis, as a function of T_egy = 10^-n a.u., using fourfold-iterated CI coefficients.

TABLE II. Comparison with the best previous calculation using the same orbital basis; energies in a.u. (Ne).
Anisotropy in the magnetic interaction and lattice-orbital coupling of single crystal Ni3TeO6

This investigation reports on anisotropy in the magnetic interaction, lattice-orbital coupling and degree of phonon softening in single crystal Ni3TeO6 (NTO) using temperature- and polarization-dependent X-ray absorption spectroscopic techniques. The magnetic field-cooled and zero-field-cooled measurements and temperature-dependent Ni L3,2-edge X-ray magnetic circular dichroism spectra of NTO reveal a weak Ni-Ni ferromagnetic interaction close to ~60 K (TSO: temperature of the onset of spin ordering) with a net alignment of Ni spins (the uncompensated components of the Ni moments) along the crystallographic c-axis, which is absent from the ab-plane. Below the Néel temperature, TN ~ 52 K, NTO is stable in the antiferromagnetic state with its spin axis parallel to the c-axis. The Ni L3,2-edge X-ray linear dichroism results indicate that above TSO, the Ni 3d eg electrons preferentially occupy the out-of-plane 3d3z2-r2 orbitals and switch to the in-plane 3dx2-y2 orbitals below TSO. The inherent distortion of the NiO6 octahedra and anisotropic nearest-neighbor Ni-O bond lengths between the c-axis and the ab-plane of NTO, followed by anomalous Debye-Waller factors and orbital-lattice in conjunction with spin-phonon couplings, stabilize the occupied out-of-plane (3d3z2-r2) and in-plane (3dx2-y2) Ni eg orbitals above and below TSO, respectively.

The three inequivalent Ni sites differ in their Ni-O-Ni bond distances and bond angles, as mentioned in ref. 14. Wu et al.14 also found that magnetic dipole-dipole interactions, which are stronger than spin-orbit coupling, are responsible for the orientation of the Ni spin axis parallel to the c-axis. Figure 1(c) displays all these spin exchange paths. The unique noncentrosymmetric structure and the variation in exchange interaction among the various Ni spin sites together give rise to the favorable magnetic field-driven electric polarization properties of NTO15-17. Yokosuk et al.15,16 and Kim et al.17 used extremely high magnetic fields to elucidate the spin-induced electric polarization properties between 9 and 52 T. In a nominal magnetic field, NTO is a collinear antiferromagnet below ~52 K (Néel temperature: TN)15, as revealed by anisotropic magnetization and specific-heat-capacity measurements12,13. Yokosuk et al.16 and Skiadopoulou et al.18 performed a combination of infrared, Raman and THz spectroscopic measurements and explained the anomalous temperature-dependent behavior of spin excitation by spin-phonon coupling below TN, whereby local lattice distortion in the ab-plane is induced by magnetic ordering, which subsequently modifies the AFM interaction between Ni ions owing to their displacements from their mean positions in the unit cell16. Moreover, numerous reports on related AFM systems have suggested a strong correlation among spin, orbital, charge and lattice degrees of freedom, which are responsible for the intriguing material properties of transition metal oxides19-26. Specifically, Ling et al.19 reported that a structural transition in lanthanum manganate can trigger Mn 3d eg orbital ordering, causing AFM spin ordering. Deshpande et al.20 also found temperature- and substrate-driven preferential electron occupancy of the Mn 3d eg orbital in La0.85Zr0.15MnO3 (LZMO) thin films epitaxially grown on SrTiO3 (STO) and MgO substrates.
Experimental results further suggested that strong tensile strain stabilizes the 3dx2-y2 orbital by inducing lattice distortions of the MnO6 octahedra in LZMO/MgO20. Furthermore, in t2g systems such as the rare-earth vanadates, cooperative orbital ordering induces local lattice distortion below a certain transition temperature21. Therefore, the origin of all of the aforementioned exotic properties of NTO is believed to be closely associated with the intriguing interplay between the Ni 3d electron's spin, orbital, charge and lattice-related degrees of freedom, which could lead to intriguing magnetic properties, such as the existence of FM and AFM interactions with the spin axis parallel to the crystallographic c-axis in NTO27-30. Although the spin orientations of Ni12,14,16 and the spin-phonon coupling in NTO15,16,18 have recently been studied, the spin-orbit-lattice-charge intercorrelations have not yet been fully explored, to the best of our knowledge. Therefore, this investigation is a detailed study of the temperature- and polarization-dependent electronic and atomic structure of NTO, as well as of the preferential orbital and anisotropic magnetic behaviors, to elucidate correlations among the aforementioned degrees of freedom across the transition temperature of the NTO single crystal.

Results and Discussion

Figure 2(a) displays the X-ray diffraction (XRD) pattern of the NTO single crystal at room temperature, showing the [003] Bragg reflection. The diffraction line shape is symmetrical, and Gaussian fitting yields a small full width at half maximum (FWHM = 0.11°), as observed in the θ scan of the (006) Bragg reflection in the inset of Fig. 2(a). This very small FWHM indicates good crystallinity and chemical homogeneity of the NTO single crystal. Figure 2(b) also shows the temperature-dependent XRD pattern (all indexed peaks are tabulated in Table 1 of the Supplementary Information) of crushed and finely ground NTO powder obtained from its single crystal. The data are similar at all temperatures of interest, with no appearance or disappearance of peaks, which shows that NTO exhibits no structural phase transition in the investigated temperature range of 11-300 K. However, the intensities of some peaks [such as (1 0 1), (0 1 2) and (1 1 0)] relative to the most intense (1 0 4) peak vary with temperature; the implications of this finding will be discussed later in this manuscript. Sankar et al.13 conducted XRD analyses of NTO using the general structure analysis system code31 and identified a trigonal crystal structure in a rhombohedral lattice with space group R3 and cell parameters a = b = 5.11 Å and c = 13.74 Å at 300 K.

Figure 3(a) plots the magnetic susceptibility (M/H) as a function of temperature [in the field-cooling (FC) and zero-field-cooling (ZFC) modes] using a nominal external magnetic field H = 100 Oe aligned parallel and perpendicular to the c-axis, thus revealing the anisotropic magnetic properties of NTO. The FM interaction [mostly a result of the J2 exchange interaction, as depicted in Fig. 1(c)] is rather weak and appears as a hump close to ~60 K (TSO: spin-ordering temperature) when the magnetic field is applied parallel to the c-axis [inset in Fig. 3(a)], and it is absent when the magnetic field is applied perpendicular to the c-axis (H⊥c). The weaker FM interaction suggests that the Ni spins may not be collinear.
Instead, the uncompensated component of the Ni spin moment is aligned parallel to the c-axis (hereafter referred to simply as FM Ni spins), and the interaction between these components of the moments will be referred to as the FM interaction in this manuscript. Below ~52 K (TN), the M/H curves (for H//c and H⊥c) turn downward, revealing AFM ordering (caused by the J3-J5 exchange interactions). The different magnetization features revealed by the H//c and H⊥c curves below TN (~52 K) suggest that the AFM spin axis is primarily parallel to the c-axis12. This alignment of the magnetic spin axis parallel to the c-axis in the FM and AFM phases is consistent with the earlier calculations of Wu et al.14. Sankar et al.13 claimed that in a high magnetic field, most of the Ni ions in NTO are in the Ni3+ state in either the high-spin (S = 3/2; t2g5eg2) or low-spin (S = 1/2; t2g6eg1) configuration, with the remaining minority in the Ni2+ state with the S = 1 (t2g6eg2) spin configuration. Their analysis reflects that at low magnetic fields, however, the Ni2+ spin state in NTO is responsible for magnetization.

Temperature- and polarization-dependent Ni K-edge X-ray absorption near-edge structure (XANES) analyses of the NTO single crystal, as well as Ni K- and L3,2-edge XANES analyses of powdered NiO, Ni2O3 and NTO at room temperature, have also been performed, as described in Fig. S1 of the Supplementary Information. The areas under the Ni L3,2-edge and K pre-edge features [Fig. S1(a,b)] in Ni2O3 are higher than those in NiO (20.08 ± 0.05 and 0.05 ± 0.02 at the Ni L3,2- and K-edge, respectively). Clearly, for NTO, the areas under the Ni L3,2-edge (21.05 ± 0.05) and K pre-edge (0.05 ± 0.01 and 0.06 ± 0.01 for E//c and E⊥c, respectively) features are close to those for NiO, suggesting that most of the Ni ions in NTO are in the 2+ valence state. Additionally, the Ni L3,2-edge absorption line shapes of NTO are consistent with those of NiO in the results of Hu et al.35 and Abbate et al.36, who compared the Ni L3,2-edge absorption line shapes of NiO with those of other Ni compounds to estimate the valence states. To determine the valence state of the Ni ions in NTO at various temperatures, the derivative of the threshold feature of the Ni K-edge absorption spectrum is used, based on the position of the threshold feature, as shown in the bottom panels of Fig. S1(a,b). Apparently, the Ni K-edge threshold energy and the line shapes of the NTO single crystal, as well as the area under the pre-edge peak, do not vary with temperature for the two electric polarizations (E//c and E⊥c), confirming that the valence state of the Ni ions is insensitive to temperature and orientation and is mostly 2+ in NTO. Furthermore, we have also carried out temperature-dependent Te K-edge XANES analyses [Fig. S2(a,b) for E//c and E⊥c, respectively] as complementary evidence to support the Ni2+ state. The threshold/peak position of the Te K-edge XANES is 31825.0 ± 0.5 eV for both the E//c and E⊥c polarizations, which is consistent with the XANES spectra for the Te6+ state as reported by Grundler et al.37 for their Te(OH)6 sample. Assuming oxygen in the O2- state, the 6+ valence state of Te evidently suggests that Ni will be in the 2+ valence state to satisfy charge compensation in NTO. Moreover, similar to the Ni K-edge XANES, the Te K-edge XANES is also insensitive to changes in temperature within the measured 40-300 K range. This result further indicates that Te and Ni are stable in their respective 6+ and 2+ states and do not vary with temperature within this range.
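The threshold-derivative criterion used above for locating the Ni K-edge position can be expressed compactly as below; the synthetic arctan edge (centered near a nominal Ni K-edge energy) stands in for measured data.

import numpy as np

def edge_position(energy, mu):
    """Locate the absorption threshold as the maximum of the first
    derivative of the XANES signal, the criterion described above."""
    return energy[np.argmax(np.gradient(mu, energy))]

# Synthetic, illustrative edge step at 8333 eV; real data are noisier.
E = np.linspace(8300.0, 8380.0, 801)
mu = 0.5 + np.arctan((E - 8333.0) / 2.0) / np.pi
print(f"threshold = {edge_position(E, mu):.1f} eV")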
Evidently, the temperature- and polarization-dependent Ni K-edge XANES studies do not support any substantial effect of the disproportionation of Ni3+ and Ni2+ on the anisotropic magnetic properties of NTO below the transition temperature TN, because the Ni ions in NTO are primarily not in the Ni3+ state and therefore do not exhibit high-spin or low-spin configurations at various temperatures. In the FC and ZFC curves of NTO for H//c in Fig. 3(a), the FM region is barely observable relative to the dominant AFM feature in NTO. Therefore, for a better understanding of the FM feature in NTO, magnetization measurements were performed in various magnetic fields, and the first derivatives (dM/dT) for H//c and H⊥c are plotted in Fig. 3(b,c), respectively. dM/dT is useful in the sense that it can be used to identify minor features or fluctuations of the magnetization with temperature, which are difficult to observe from raw data38,39. Although the measured absolute intensity of the FM feature for H//c gradually increases with the magnetic field, as shown in Fig. 3(b), the corresponding normalized dM/dT plots [normalized dM/dT = (dM/dT)/(dM/dT at TN)] in NTO gradually decrease, as shown in the inset of Fig. 3(b). The normalized dM/dT plots enable the identification of the variation in the FM feature with respect to the AFM feature in NTO under various magnetic fields. The plots in the inset of Fig. 3(b) indicate that in a stronger magnetic field, the AFM feature dominates the FM feature, and since they are close to each other on the temperature scale, the FM feature is generally invisible. This fact may explain why the results of Sankar et al.13 indicate several different magnetic behaviors of NTO in high and low magnetic fields. In contrast, the field-dependent normalized dM/dT plots for H⊥c do not show any features close to ~60 K, as presented in Fig. 3(c), suggesting a lack of alignment among the FM Ni spins of NTO in the ab-plane. To further understand the correlation of the anisotropic FM and AFM phases with the preferential orbital and lattice degrees of freedom in the NTO single crystal, temperature-dependent X-ray magnetic circular dichroism (XMCD), X-ray linear dichroism (XLD) and extended X-ray absorption fine structure (EXAFS) analyses were performed, as described below.

Figure 4(a) displays the temperature-dependent Ni L3,2-edge XANES spectra of the NTO single crystal, with the photohelicity of the incident X-rays parallel (μ+) and antiparallel (μ-) to the direction of magnetization under a magnetic field of H = 100 Oe applied parallel and antiparallel to the c-axis, respectively. All temperature-dependent Ni L3,2-edge XANES spectra of NTO exhibit two broad features in the ranges of 852-857 eV and 868-873 eV, which are attributed to Ni 2p3/2 → 3d5/2 and 2p1/2 → 3d3/2 dipole transitions, respectively. Figure 4(b) also shows the corresponding Ni L3,2-edge XMCD spectra [(μ- − μ+)/(μ- + μ+)] obtained at various temperatures. From the field-dependent normalized dM/dT in the inset of Fig. 3(b), the FM feature gradually becomes obscured by the AFM feature at higher magnetic fields. Therefore, a considerable XMCD signal from NTO can be obtained at an applied magnetic field of 100 Oe without being buried under the dominating AFM feature. XMCD reveals the expectation value of the magnetic moment, <M>40,41, and thereby provides information about individual magnetic ion spins and magnetic orbital moments in the overall FM interactions42,43.
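The XMCD definition quoted above, (μ- − μ+)/(μ- + μ+), is a simple point-wise operation on the two helicity-resolved scans; the arrays below are invented numbers for illustration only.

import numpy as np

def xmcd(mu_minus, mu_plus):
    """XMCD spectrum as defined in the text: (mu- - mu+)/(mu- + mu+),
    from two XANES scans taken with the field antiparallel/parallel to
    the photon helicity."""
    mu_minus = np.asarray(mu_minus, dtype=float)
    mu_plus = np.asarray(mu_plus, dtype=float)
    return (mu_minus - mu_plus) / (mu_minus + mu_plus)

# Two illustrative L3-peak scans with a tiny dichroic asymmetry:
mu_m = np.array([1.00, 1.42, 2.10, 1.38, 1.01])
mu_p = np.array([1.00, 1.40, 2.04, 1.36, 1.00])
print(xmcd(mu_m, mu_p))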
Clearly, the temperature-dependent Ni L3,2-edge XMCD spectra obtained by switching the magnetic field from parallel to antiparallel to the c-axis reveal a weak but clear XMCD signal at 59 K [close to TSO (60 K), marked by a green arrow in Fig. 4(b)] that diminishes as the temperature varies upward or downward. The directionality of the FM spins of the Ni ions along the c-axis in NTO is consistent with the magnetization measurements herein (Fig. 3) and in other studies12,13. Importantly, in contrast, the temperature-dependent Ni L3,2-edge XMCD measurements made when a magnetic field of 100 Oe is applied perpendicular to the c-axis (two opposite magnetic field directions) do not reveal an XMCD signal (see Fig. S3 in the Supplementary Information). In magnetic materials, the FM moment is typically correlated with the degree of long-range magnetic order of the magnetic ions, which is determined by competitive exchange coupling between electron spins and thermal fluctuation. However, the XMCD measurements in Fig. 4(b) clearly demonstrate a weak FM interaction between the Ni 3d electron spins in NTO at ~59 K with the magnetic field along the c-axis. Typically, XMCD is sensitive only to the expectation value of the local magnetic moment, <M>, and therefore disappears in the AFM regime40,41 at or below TN ~ 52 K, as shown in Fig. 4(b). The simultaneous determination of the spin orientation in an AFM system, unlike in an FM system, and of the local orbital structure is a challenging experimental task because of their compensated magnetic dichroism nature. However, with the development of synchrotron sources with high photon fluxes, soft XLD spectroscopy has emerged as a powerful tool for detecting the spin axis and orbital symmetry of all uniaxial magnetic systems, because linearly polarized photons directly probe the anisotropy of the local charge distribution around the absorbing ion (and, through it, the spin axis).

The sign of the XLD spectra is negative in the paramagnetic regime (above 60 K), indicating that the Ni eg holes preferentially occupy the 3dx2-y2 orbitals (that is, preferential occupancy of the Ni eg electrons in 3d3z2-r2). However, upon cooling below TSO, the sign of the XLD spectra is reversed to positive, suggesting that the Ni eg holes preferentially occupy the out-of-plane 3d3z2-r2 orbitals (that is, preferential occupancy of the Ni eg electrons in 3dx2-y2). This result is reproducible and consistent with similar temperature-dependent Ni L3,2-edge XANES and corresponding XLD measurements with electrically polarized X-rays on a different beamline [BL-20A at the NSRRC (not presented here)]. To elucidate this phenomenon, the integrated intensity of the area under the Ni L3-edge of the XLD spectra (AL3, in the region from 848-858 eV) is plotted in Fig. 5(b), which clearly indicates the temperature dependency of the preferential electron occupancy of the Ni 3d eg orbitals. Negative and positive values of AL3 at various temperatures demonstrate preferential Ni eg electron occupancy in the 3d3z2-r2 and 3dx2-y2 orbitals, respectively. Above TSO (60 K), the negative AL3 values are fairly independent of temperature, as shown in Fig. 5(b). The evolution of this anisotropic behavior can also be attributed to an additional local crystal-field (CF) effect and ligand-metal p-d hybridization, which stabilize electron occupancy in the 3d3z2-r2 orbitals in NTO.
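Likewise, the XLD and the integrated area AL3 used in Fig. 5(b) reduce to a difference and a windowed integral. The sign convention in the comments follows the text; the Gaussian "spectra" below are placeholders, not measured data.

import numpy as np

def xld_AL3(energy, mu_par, mu_perp, lo=848.0, hi=858.0):
    """XLD = mu(E//c) - mu(E⊥c); A_L3 integrates the XLD over the Ni L3
    window (848-858 eV). Per the convention above, A_L3 < 0 indicates
    preferential eg-electron occupancy of 3d(3z2-r2), A_L3 > 0 of
    3d(x2-y2)."""
    xld = np.asarray(mu_par, float) - np.asarray(mu_perp, float)
    mask = (energy >= lo) & (energy <= hi)
    return xld, np.trapz(xld[mask], energy[mask])

E = np.linspace(845.0, 862.0, 171)
mu_par = np.exp(-0.5 * ((E - 853.0) / 1.2) ** 2)     # illustrative peaks
mu_perp = 1.05 * np.exp(-0.5 * ((E - 853.0) / 1.2) ** 2)
xld, A_L3 = xld_AL3(E, mu_par, mu_perp)
print(f"A_L3 = {A_L3:+.3f} (negative: 3d3z2-r2 occupancy favored)")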
Although the origin of this CF effect and ligand-metal p-d hybridization is not considered here in detail, the distortion of the crystal structure is primarily responsible for the lowering of the energy of either the in-plane or out-of-plane eg orbitals of transition metal oxides45. We believe that the inherent distortion of the NiO6 octahedra in an environment of trigonal symmetry also plays an important role, just as VO6 octahedral distortion contributes importantly to orbit-lattice coupling in the rare-earth vanadates24,25. Previous studies6,14 have revealed that the NiO6 octahedra in NTO are distorted and that the Ni-O-Ni bond distances and angles vary substantially among the three Ni sites (Fig. 1). This phenomenon may be responsible for additional CF and ligand-metal p-d hybridization effects and may thereby stabilize Ni eg electron occupancy in the 3d3z2-r2 orbitals above TSO. Since no structural phase transition occurs in the measured temperature range [11-300 K, Fig. 2(b)], the switching of the preferred orbital is not caused by a structural transition, unlike in lanthanum manganite19. Several reports have verified strong orbital-lattice coupling in rare-earth perovskites of the families RVO3 24,25 and RTiO3 (R = La, Pr, Sm, Yb and Lu)26, in which VO6 octahedral distortion is most likely responsible for the orbital ordering. Therefore, the spin-orbit-lattice couplings that accompany local lattice distortion, owing to static disorder and the breakdown of lattice symmetry, certainly seem to favor (or strongly correlate with) preferential orbital occupancy and orbital ordering in various transition metal oxide systems20,22-24,26,49.

To elucidate the correlation between the Ni 3d electron spin and the lattice, as well as the factors that drive the switching of the preferential orbital occupancy from out-of-plane to in-plane in the NTO single crystal below TSO, temperature-dependent Ni K-edge EXAFS measurements were performed with linearly polarized X-ray beams oriented along E//c and E⊥c. Figure 6(a,b) display the temperature-dependent Fourier transform (FT) plots (solid lines) of the Ni K-edge EXAFS spectra of the NTO single crystal at θ = 70° (E//c) and θ = 0° (E⊥c), respectively, within a k range of 3.3-11.6 Å-1. The insets present the corresponding EXAFS k3χ data. The first main feature of the FT plots of the Ni K-edge EXAFS spectra corresponds to the nearest-neighbor (NN) bond length of Ni-O in the NTO single crystal. We have analyzed the EXAFS region of the spectra by using the ATHENA and ARTEMIS program packages50 to extract quantitative local information, such as the mean NN Ni-O bond length (R), its mean-squared fluctuation, called the Debye-Waller factor (DWF), and the coordination number (N) around the Ni sites in NTO for E//c and E⊥c. Figure 6(a,b) also show the fitted plots (circles) of the temperature-dependent Ni K-edge FT for the two polarization orientations, E//c and E⊥c, respectively. This work primarily focuses on the NN oxygen coordination number around the Ni sites, the variation in the NN Ni-O bond length, and the corresponding DWFs at different temperatures in NTO, for both the E//c and E⊥c polarizations. Therefore, we fit the first main FT feature within an R range of 1.26-2.30 Å, which represents the first shell around the Ni site. The goodness of fit, i.e., the R-factor, lies between 0.009 and 0.014 (see SI Table 2), which indicates an excellent match between the experimental data and the model system (constructed with a calculated NN oxygen coordination number and known lattice parameters13) used in fitting.
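A schematic of the k3-weighted Fourier transform underlying Fig. 6, using a plain rectangular window over the quoted k range; the actual analysis (ATHENA) applies smoother windows and phase-shift corrections, so this is only a sketch of the transform, not of the fitting itself.

import numpy as np

def exafs_ft(k, chi, kmin=3.3, kmax=11.6, rmax=6.0, npts=600):
    """k^3-weighted Fourier transform of chi(k) over the k-window quoted
    above (3.3-11.6 1/Angstrom), returning R and |FT|(R)."""
    m = (k >= kmin) & (k <= kmax)
    kw, chiw = k[m], chi[m] * k[m] ** 3
    R = np.linspace(0.0, rmax, npts)
    ft = np.array([np.trapz(chiw * np.exp(2j * kw * r), kw) for r in R])
    return R, np.abs(ft)

# Single synthetic shell at R0 = 2.05 Angstrom (no scattering phase is
# included, so the peak lands at R0; in real data it sits slightly lower).
k = np.linspace(2.0, 13.0, 551)
R0, s2 = 2.05, 0.005
chi = np.sin(2 * k * R0) * np.exp(-2 * s2 * k**2) / (k * R0**2)
R, mag = exafs_ft(k, chi)
print(f"first-shell peak near R = {R[np.argmax(mag)]:.2f} Angstrom")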
Table 2 of the Supplementary Information and Fig. 7 present the fitted results for the first main FT feature shown in Fig. 6(a,b). Notably, because the NN O atoms around each Ni site are not aligned strictly along or perpendicular to the c-axis [Fig. 1(a)], the projections of the O atoms onto the c-axis and the ab-plane are calculated based on the angles they make with the c-axis. This results in fractional coordination numbers of NN O atoms for the E//c and E⊥c polarizations. Figure 7(a,b) plot the NN Ni-O bond lengths (weighted according to the angle between the electric polarization and the bond direction) and the DWFs, respectively, as functions of temperature. Clear anisotropy is observed between the two orientations. The out-of-plane mean NN Ni-O bond length is fairly independent of temperature in the 60-300 K range, but compression is observed upon cooling below TSO (60 K). However, the in-plane mean NN Ni-O bond length is rather temperature-independent throughout the entire measured temperature range (40-300 K). Consistent with our earlier discussion of the preferential orbital occupancy of the Ni 3d eg electrons in NTO, this anisotropy of the NN Ni-O bond lengths above TSO may lead to additional CF and ligand-metal p-d hybridization effects, causing preferential Ni eg electron occupancy in the 3d3z2-r2 orbitals in the paramagnetic (PM) region, as mentioned above. Below TSO, a tensile-like strain (compression of the Ni-O bond lengths projected out-of-plane) in the NiO6 octahedra can favor preferential Ni eg electron occupancy in the 3dx2-y2 orbitals51. Additionally, as shown in Fig. 7(b), the DWFs at Ni sites in NTO (for E⊥c) deviate from their expected decreasing trend at and below TSO. The variation in the DWF can be understood with reference to the intensity of the first main FT feature (better views of the variation in intensity of the first Ni K-edge FT features for both polarizations are shown in Fig. S4 in the Supplementary Information), which is determined by the oxygen coordination number N and the DWF [σ2(T)] at the Ni sites51,52. Given the absence of structural phase transitions in the measured temperature range [11-300 K, see Fig. 2(b)], it is fair to consider the oxygen coordination number N as independent of temperature. Therefore, a change in σ2(T) is associated with the variation in FT intensity with temperature. The DWF has two components, σ2s and σ2v(T), which are associated with static disorder in the atomic structure and with thermal lattice vibration, respectively52. Since σ2s is independent of temperature, thermal variation in the DWF is commonly associated with variation in σ2v(T), which, according to the Einstein and Debye models51,52, decreases as the temperature decreases. As expected, the DWFs under both polarizations decrease (the intensity of the first main FT feature increases, as shown in Fig. S4) as the temperature decreases from 300 to 80 K because of the σ2v(T) component. Interestingly, below TSO, the intensity of the FT feature decreases as the temperature decreases, causing the DWF to increase (for E⊥c). This anomalous result clearly indicates that the static-disorder component, σ2s, dominates over the decrease in the σ2v(T) component caused by the disorder produced by thermal vibration of the lattice. Therefore, the overall DWF, σ2(T), increases meaningfully as the temperature decreases below TSO (60 K).
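The decomposition σ2(T) = σ2s + σ2v(T) invoked above can be made concrete with a correlated-Einstein-model sketch; the Einstein temperature and static offset below are assumed values for a Ni-O pair, not fitted parameters from this work.

import numpy as np

HBAR = 1.054571817e-34   # J*s
KB = 1.380649e-23        # J/K
U = 1.66053907e-27       # atomic mass unit, kg

def sigma2(T, theta_E, sigma2_static, m1=58.69, m2=16.0):
    """Einstein-model DWF for an absorber-scatterer pair:
    sigma^2(T) = sigma^2_s + hbar^2/(2 mu kB theta_E) * coth(theta_E/(2T)).
    Masses in amu (defaults: Ni and O); returns sigma^2 in Angstrom^2."""
    mu = (m1 * m2) / (m1 + m2) * U               # reduced mass, kg
    s2v = HBAR**2 / (2.0 * mu * KB * theta_E) / np.tanh(theta_E / (2.0 * T))
    return sigma2_static + s2v * 1e20            # m^2 -> Angstrom^2

for T in (40.0, 80.0, 150.0, 300.0):
    print(f"T = {T:5.0f} K, sigma^2 = "
          f"{sigma2(T, theta_E=600.0, sigma2_static=2e-3):.4f} A^2")
# The vibrational part decreases monotonically on cooling; an increase
# below T_SO therefore signals growth of the static term, as argued above.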
The large static DWF, particularly in the ab-plane (E⊥c), can be understood to arise from the large static distortion of the octahedral oxygen network around the Ni sites in NTO below TSO, which strongly influences the intensity of the FT feature associated with the NN Ni-O bonds. The evolution of this static disorder can be understood with respect to phonon-assisted behavior53,54, which is related to a decrease in or breaking of the crystal lattice symmetry, especially in the ab-plane of NTO. Infrared, Raman and time-domain THz spectroscopic studies have previously revealed phonon-softening modes in NTO in the ab-plane below TN16,18, which were explained by spin-phonon coupling. Spin-phonon coupling induces local lattice distortion in the ab-plane, and the corresponding local displacements modulate the magnetic interactions among the Ni ions in NTO, causing them to align along the c-axis, thus resulting in low-temperature polarization16. As stated earlier, the theoretical analysis of Wu et al.14 suggested that the magnetic dipole-dipole interaction is stronger than the spin-orbit coupling and is therefore responsible for the orientation of the Ni spins parallel to the c-axis in NTO. Their calculations also revealed that the J2 interaction between the Ni spins (NiII-NiIII) along the c-axis is more strongly FM than the J1 interaction (NiI-NiII in the ab-plane). Consistent with these calculations, weak FM (~60 K) interactions are observed herein, with the uncompensated component of the Ni spins aligned only along the c-axis, as shown in the magnetization plots (Fig. 3) and the XMCD spectra [for H//c, as shown in Fig. 4(b), at 59 K, but not for H⊥c, as shown in Fig. S3 of the Supplementary Information]. These interactions are followed by strong AFM (TN ~ 52 K) interactions with the Ni spin axis aligned parallel to the c-axis, as presented in the magnetization plots in Fig. 3(a). Notably, in NTO the NiO6 octahedra are inherently distorted with off-centered Ni atoms above TSO, so the evolution of phonon-softening modes below TSO in the lattice is expected to correlate with the (phonon-mediated) interaction between additionally off-centered Ni atoms with respect to O atoms in the NiO6 octahedra in the NiI-NiII and NiIII-Te layers. Although the EXAFS analysis does not reveal the exact Ni sites responsible for the phonon-softening modes, earlier studies16,18 have established that the phonon-softening modes in the ab-plane of NTO are associated with the displacement of NiI atoms with respect to NiII and NiIII atoms and with the octahedral stretching mode, which involves the shifting of O atoms with respect to Ni atoms. Nevertheless, the Ni displacement and octahedral stretching modes in the ab-plane are evidently associated with the variation in Ni-O bond lengths. Figure 7(a) indicates that the NN Ni-O bond lengths are, on average, compressed along the c-axis (E//c) upon cooling below TSO. These variations in Ni-O bond lengths below TSO may explain the additional off-centering/displacement of the Ni ions, which causes phonon softening below TSO. Typically, the process of off-centering of metal ions preserves the overall crystal symmetry, but it may change the structure factor, resulting in a modification of the intensities of the powder XRD features53. Variation in the intensities of the powder XRD features of NTO is observed in Fig. 2(b), as mentioned earlier, while the crystal structure is maintained throughout the range of measurement temperatures.
The unusually high DWFs in the ab-plane relative to those along the c-axis throughout the specified temperature range further suggest anisotropic local structural ordering of NTO. In addition, the higher slope of the DWF as a function of temperature between 80 and 300 K suggests that the in-plane (E⊥c) NN Ni-O bond length is more sensitive to temperature than the out-of-plane (E//c) Ni-O bond length. This anomalous variation in the DWFs at in-plane Ni sites and the compression of the apical projection (E//c) of the Ni-O bonds, which are correlated with lattice-orbit coupling and occur in conjunction with spin-phonon coupling, stabilize the preferential occupancy of the Ni eg electrons in the 3dx2-y2 orbitals in NTO below TSO. In contrast, the inherent distortion of the NiO6 octahedra stabilizes the Ni 3d3z2-r2 orbital above TSO. Phonon-softening behavior and, consequently, anomalous DWF behavior in the ab-plane below a transition temperature were recently observed in our study of a SrFeO3-δ single crystal45. A detailed theoretical investigation must be conducted in the future to develop our understanding of the correlation between local lattice distortion and preferential orbital occupation and its effect on the spin-spin correlation function in the NTO single crystal.

In summary, the electronic/atomic structure, preferential Ni 3d-orbital occupation and magnetic properties of the NTO single crystal were elucidated through magnetization measurements and temperature-dependent Ni L3,2-edge XANES, XLD and XMCD and K-edge EXAFS spectroscopic techniques. The magnetization measurements reveal a transition from the PM phase at high temperature to the FM phase close to TSO (~60 K), followed by an AFM transition close to TN (~52 K). Consistent with theory, the FM interactions associated with NiII-NiIII spins (exchange interaction J2) are responsible for most of the evolution of the XMCD spectra along the c-axis close to TSO, whereas the corresponding signal is absent when a magnetic field is applied perpendicular to the c-axis. The Ni L3,2-edge XLD spectra above TSO reveal that the Ni 3d eg electrons preferentially occupy the out-of-plane 3d3z2-r2 orbital and switch to the in-plane 3dx2-y2 orbital at and below TSO. The inherent distortion of the NiO6 octahedra and the anisotropic NN Ni-O bond lengths stabilize the preferential Ni eg electron occupation of the out-of-plane (3d3z2-r2) orbital above TSO. However, at and below TSO, a large static distortion of the NiO6 octahedral network around the Ni sites (a tensile-like strain due to the compression of the Ni-O bond length projected out-of-plane), associated with phonon-softening behavior, stabilizes the preferential Ni eg electron occupation of the in-plane orbital (3dx2-y2). These strong anisotropic lattice-orbital and spin-phonon couplings are responsible for the evolution of the anisotropic magnetic properties and orbital switching in the NTO single crystal.

Methods

Sample preparation and characterization. Single crystal NTO with a (001) plane was synthesized using the flux slow-cooling method; details on the sample dimensions (~3 × 2 mm surface dimensions) and preliminary characterizations can be found elsewhere13. A detailed structural study of NTO was carried out using temperature-dependent powder and room-temperature single-crystal XRD. Magnetic and electronic properties were obtained from temperature-dependent magnetization, Ni L3,2-edge XANES, XMCD and XLD, and Ni K-edge XANES and EXAFS measurements.
Single-crystal XRD measurements were made in 2θ and θ scan modes at room temperature on a four-circle X-ray diffractometer with the X-ray beam aligned with the (003) Bragg reflection of NTO. Cu Kα X-ray radiation with a spot size of ~2 mm in diameter was focused onto the sample surface. Temperature-dependent powder XRD was also performed at beamline BL-07A of the NSRRC in Taiwan. The energy of the X-ray beam was 14 keV (wavelength ~0.8856 Å), which was later converted to the wavelength corresponding to Cu Kα (1.5406 Å) to maintain consistency with the single-crystal XRD data. Temperature-dependent FC and ZFC magnetization measurements were made using a Quantum Design superconducting quantum interference device magnetometer in the temperature range of 2-300 K, with external magnetic fields applied parallel and perpendicular to the crystallographic c-axis of the NTO single crystal. XANES measurements with a circularly polarized X-ray beam at the Ni L3,2-edge were carried out at beamline Dragon-BL-11A of the NSRRC (the X-ray beam spot size was ~0.3 × 0.1 mm on the sample surface), and the spectra were obtained in the total fluorescence yield (TFY) and total electron yield (TEY) modes after a magnetic field of H = 100 Oe was applied to NTO. By switching the direction of the magnetic field, NTO's magnetization direction was made parallel (μ+) and antiparallel (μ−) to the helicity of the incident X-rays. The normalized difference between these two measurements, (μ− − μ+)/(μ− + μ+), is referred to as XMCD. For the linearly polarized X-ray beam, Ni L3,2-edge measurements (at Dragon-BL-11A and BL-20A in the TEY mode at the NSRRC) and K-edge measurements (at BL-17C in the TFY mode at the NSRRC) were made at two angles of X-ray photon incidence (θ) with respect to the normal to the NTO surface: θ = 0° (normal incidence, with the electric field E of the linearly polarized photons perpendicular to the c-axis, E⊥c) and θ = 70° (grazing incidence, with E almost parallel to the c-axis, E//c). L3,2-edge XLD denotes the difference between these two measurements for different θ (E//c − E⊥c). The Ni K-edge EXAFS spectra were analyzed using the ATHENA and ARTEMIS program packages [50] to extract quantitative local information, such as the mean NN Ni-O bond length (R), its mean-squared fluctuation, called the DWF, and the coordination number (N) for E//c and E⊥c.
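For readers connecting the reported DWF values to the fitting model, the following is the textbook single-scattering EXAFS equation as commonly implemented in ATHENA/ARTEMIS-type analyses, not a parameterization taken from this paper:

\[
\chi(k) = \sum_{j} \frac{N_j S_0^2 F_j(k)}{k R_j^2}\, e^{-2k^2\sigma_j^2}\, e^{-2R_j/\lambda(k)}\, \sin\!\bigl(2kR_j + \phi_j(k)\bigr),
\]

where N_j is the coordination number, R_j the mean path length (the NN Ni-O bond length for the first shell), F_j(k) and φ_j(k) the scattering amplitude and phase, S_0² the amplitude reduction factor, and λ(k) the photoelectron mean free path. The damping factor exp(−2k²σ_j²) shows directly why a larger DWF (σ_j²) suppresses the corresponding FT feature, as discussed above for the in-plane Ni-O bonds.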
7,845.6
2018-10-25T00:00:00.000
[ "Physics", "Materials Science" ]
THE IMPORTANCE OF HUMAN CAPITAL IN THE STRATEGIC DEVELOPMENT OF AN ORGANIZATION In this new world, where the amount of available information has become overwhelming thanks to new technologies, human capital, embodied in a stock of values, skills and knowledge, becomes the main factor of production in the new economy. In this paper I have tried to emphasize that this relatively new concept, human capital, has become the main motor of organizational development, representing one of the most important advantages firms have in operating effectively in their environments. Economic growth is conditioned by human capital development; current trends are changing, and organizations are changing their outward forms of thought and action and enhancing their human capital (an intangible asset that is too little highlighted and measured) out of the desire to be more flexible and more easily assimilated by the market in the future. For modern companies, human capital has become a "golden coin" with two sides: 1) capability (the ability to provide solutions to customers through knowledge, skills, know-how and talent); 2) attitude (the ability to profitably use these values for the organization). In conclusion, human capital is measured using IQ, and the productivity of modern economies depends largely on investment in knowledge and skills, although statistics in Romania do not include the costs and expenses of human capital.

Human capital
World technology should be constantly on the move, improving itself, and it basically stands for an indispensable tool for productivity as well as for the market; yet one should consider the fact that the success of any innovation depends greatly on the flexibility and vision of the people who make up the relevant organization. Nowadays, information and technology move fast and come "in handy" for any organization, while the main competitive advantage differentiating one company from another is rendered by the personnel within the organization, as well as by the latter's ability to adjust quickly to new changes. This may be achieved through the ability to learn constantly, to be open to new things, and to design one's own education and experience, all corroborated within a given system of skills and competences. In such an organization, competences arise out of the relevant business strategy, and they should be observed, measured, aligned to the company's strategies and generate constant benefits. All this stands for quite a challenge for both the human resources departments and for managers in general, since the latter shall be responsible for understanding and meeting customers' wishes and needs, so as to bring a relevant and truly tangible contribution to the business they run. Enhancing and maintaining competitive benefits within a company may be carried out through human capital management. Human capital stands for the knowledge that each and every individual possesses. Human capital is defined as the creative hand within an organization, and it stands for the most significant and most vital resource for developing the business activity and the production of goods and services that meet clients' needs and can be sold on the relevant market in view of getting profit.
Material means turn into a finished product by going through the production process, yet it is human capital that stands for the particular force building this entire system by means of talent and ability. Human capital springs out of companies' need to rely upon high-tech instruments, which provide support in production, since even a latest-generation machine is unable to manage by itself and requires supervisors as well as workers. Any organization is a dynamic and economic entity in which the following factors are relevant:
- economic capital
- human capital
- company management.
A company's object is strictly related to its human capital; the economic capital is related to investors, and management relates to managers and administrators. Human capital (humans-talent.blogspot.ro) has the following characteristics:
- knowledge
- skills
- virtues
All these variables turn the employee into the main pawn within any given organization, as well as into the main driving element in terms of creativity, production and vision, all of the aforementioned making such an employee an indispensable factor at any company level. With the ever more acute perception of the globalization concept, organizations require the maintenance of up-to-date trained and qualified personnel, holding decisional power, who shall bring all processes that have been started to a good end within the shortest possible period of time. The staff within an organization is very important, since it adds value to the latter, and the creativity around them is part of the company's long-term strategy. Managers should invest in the personnel, by protecting, evaluating, constantly developing, awarding and remunerating the latter. Competences within a given organization spring out of the relevant strategy; they should be observable, measurable, aligned to the day-to-day strategy of the company and generate competitive and sustainable benefits, so as not to be "copied" by any competitors (www.geopolis.com).
Human talent in any given organization is genuine, and it is far more difficult to be replicated by competitors; thus, investing in human capital stands for one of the most significant pillars of any company. The hierarchy-based criterion upon which success relied in the past has gradually changed in recent years, and at present the primary relevance in any company is granted to the competences an individual holds, and at the same time to the latter's attitude towards acquiring new skills that may be of use to the organization. The yield of human capital investments stands for one final indicator in the measuring process, which involves the collection of data for the display of outcomes at various levels of interest:
- customer satisfaction within the project, as run by the organization at that specific time
- learning for the purpose of acquiring new knowledge and skills
- applying the aforementioned on the job
- the impact of the above-described results upon the relevant business.
Human capital has created a new metaphor at the global level: the employee may act as a new investor in any given organization. People stand for the driving element of change, since they hold the know-how resource; thus, organizations should bear in mind the constant changes undergone by the business environment and align their internal productivity towards innovation, analysis and new approaches. The human capital concept has set its mark in the specialized literature as of 1961, the year of the release in the American Economic Review of the article written by Theodore W. Schultz, namely "Investment in Human Capital". A laureate of the Nobel Prize for Economics and an exponent of the Chicago School, his theoretical lead and slogan was: "Man - the most precious richness of any country". Knowledge and skills make up the relevant capital, and this capital stands for the outcome of a deliberate and willing investment. The concept of human capital corresponds to any individual's skills and knowledge enabling the latter to achieve change in action and to reach economic growth (Coleman J., 1988). Human capital may be developed by formal training and education, focusing on updating and renewing the individual's abilities. One should make a distinction between the various distinct types of human capital, as follows. Company-specific human capital consists of those particular abilities and knowledge that are relevant within one given company. For instance, some researchers have investigated the impact held by company know-how within the founding team on the growth ratio of companies that were at the beginning of their business activity. At the company level, human capital is regarded as one component of the intellectual capital (Edvinsson, 1997), together with the relevant structural capital. Human capital is defined as the value of company employees' knowledge, skills and experience, and structural capital is defined as the "body and supporting infrastructure of human capital, namely all those things that make up the human capital support in any given company and which are left aside when the employees leave the company at the end of the day" (Edvinsson, 1997).
McElroy Mark (2001) stretches out and expands the relevant scope of analysis, adding the registered capital to the human and structural capital within the composition of the intellectual capital. In reference to its structural dimension, he defines the capital as being "the general model of connections between actors". The industry-specific capital consists of the knowledge deriving out of specific experience in the relevant industry. Various researchers have investigated the impact held by experience in a specific industrial branch upon economic growth and economic performance, at both the micro- and macroeconomic level (Siegel, MacMillan, 1993; Kenney, von Burg, 1990). Individual-specific human capital refers to knowledge applicable to a wide range of companies and industries; it includes managerial and entrepreneurial experience (Pennings, Lee & van Witteloostujin, 1998), the level of academic education and professional training (Hinz & Junghauer-Gans, 1999), and the individual's age and family incomes (Kilkenny, Nalbarte & Besser, 1999). Within the most recent economic literature, one speaks of the neoclassical theory of human capital, as well as of human capital management and strategy. The latter stands for one form of active management: a plan for the insurance, management and motivation of the relevant labor force in view of business performance optimization. Human capital, as one central production factor in any given economic theory, stands for the stock of knowledge and qualifications that are useful and valuable, as integrated into the relevant labor force, resulting out of an education and professional training process. It relates to any individual's ability to mobilize other production factors, to specifically combine the same and to direct them so as to reach the expected and wished-for outcome. It is upon this human capital that our richness depends, as well as the richness of our future generations. Therefore, the establishment of such human capital should enjoy the highest priority, all the more so since, without the support of adequate knowledge and human experience, the other production factors shall not be able to produce but very little, if anything at all. The benefit following such human capital investment does not reduce itself to the net amount of incomes, as achieved during a lifetime, out of the sale of qualified labor force as opposed to the unqualified one, but it also focuses on the subjective feeling of intellectual well-being, of trust, as well as of social acknowledgment. According to estimates, between 50 and 90% of the overall capital stock in the US takes the form of human capital. Human capital has various distinct meanings for the various groups of interest:
- Employers: In the business world, human capital stands for the economic value of the set of skills that employees hold; traditionally, it has been perceived as a function of education and experience (the latter standing for both the training the employees have received and the hands-on learning gathered by means of experience). In recent years, health has also been added to the list (physical, cognitive and mental capacity);
- Society: There is also the long-term vision of human capital. This includes indicators resulting out of decisional practices and policies which shall have an impact on future generations and which shall shape the form of tomorrow's labor force. Quite often, this long-term vision does not fit the political cycles and investment-type
scopes, yet the lack of such a long-term vision may trigger the constant loss of a country's population potential, as well as a drop in productivity, just as things stand as far as our country is concerned.
- Individual: The individual is interested in being active on a labor market which enables him to achieve professional expression, the accurate acknowledgment and rewarding of his own value, mobility, a satisfactory health and well-being status, as well as belonging to a human capital of good reputation (in terms of education and productivity). Romania belongs to the upper-middle-income group of countries, which means that it pays better here than in Albania and Serbia, yet the payment is worse than in Hungary, Bulgaria and Macedonia, countries which belong to the same aforementioned group (Mocanu, 2014).
Competitiveness stands for the outcome of talent and the abilities resulting out of such human capital. Any organization, regardless of the business activity it carries out, if it wishes to maintain an adequate long-term competitiveness level, should use formal decisional and analysis procedures embedded within the context of strategic planning. The role played by this process is that of coordinating and systematizing the efforts of the various units, as conceived in view of maximizing the overall organization efficiency. Internal competitiveness refers to the organization's ability to reach maximum performance in terms of available resources, such as personnel, capital, materials, ideas, etc., as well as the relevant transformation processes: the company shall compete against itself, while sustaining its constant self-surpassing efforts (http://www.mundosigloxxi.ciecas.ipn.mx). External competitiveness is oriented towards the development of organization successes within the context of the current market: industry dynamism and economic stability. Once it has reached the relevant external competitiveness level, a company should maintain the same in the future, by generating ideas and products, and it should also look for new opportunities on the market. Nowadays, in this age of changes, companies wishing to increase their productivity, their efficiency ratios, as well as their quality of services, tend to use the human element as a central basis and to develop teams, so as to achieve competitiveness and to adequately respond to the demand for high-quality products and services at all levels concerned. Corporate management quality has just as much relevance for any given organization as technology and product innovation have. Upon consideration of this concept, we cannot help wondering whether it might be reasonable to conduct comprehensive, fruitful and development-related research on companies' management, for the purpose of creating new instruments for the organization to impose its competitiveness on the market. In the 3rd millennium, companies should turn their attention towards human capital management, through management processes which shall trigger a positive influence on the employees and which shall consolidate the required teams, so as to enhance productivity, considering two main factors. The environment stands for a challenge for any given company:
- market evolution and expansion
- product diversification
- new technologies coming up on the market
The innovating social-economic management, as elaborated and experienced by Savall H.
(Jean Moulin University), stands for a management method closely integrating the human dimension of a company and the latter's economic performance. His global-level management methods rely on the human development of the company as the main factor for short-, medium- and long-term efficiency. The efficiency and efficacy of companies and enterprises depend a great deal on their own ability to articulate traditional management methods with a human and social vision of their operation, and they closely integrate the human dimension of the company and the latter's economic performance (Savall, 2006). Human capital may conduct either manual or intellectual work, and the latter may apply to various fields of activity, such as industry, agriculture or services. The labor force, or the working ability that such human capital possesses, is represented by the set of physical and intellectual abilities that the individual holds and applies in production for the purpose of meeting the relevant needs. A study conducted in 1999 by PricewaterhouseCoopers on the best human management practices reached the conclusion that the most significant skills an individual should possess in order to survive, develop and grow within the business world in the following years are:
- leadership,
- adaptability to change,
- people management,
- teamwork.
Thus, the role played by human capital within any given organization is vital, and without its capitalization, the company's wish to align with market objectives and to enhance competitiveness would lack any meaning whatsoever.
3,677.8
2016-06-30T00:00:00.000
[ "Economics" ]
High-Throughput Multiplex SARS-CoV-2 IgG Microsphere Immunoassay for Dried Blood Spots: A Public Health Strategy for Enhanced Serosurvey Capacity

ABSTRACT Early in the pandemic, when diagnostic testing was not widely available, serosurveys played an important role in estimating the prevalence of severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) in different populations. Dried blood spots (DBS), which can be collected in nonclinical settings, provide a minimally invasive alternative to serum for serosurveys. We developed a Luminex-based SARS-CoV-2 microsphere immunoassay (MIA) for DBS that detects IgG antibodies to nucleocapsid (N) and spike subunit 1 (S1) antigens. The assay uses a 384-well plate format and automated liquid handlers for high-throughput capacity. Specificity was assessed using a large collection of prepandemic DBS and well-characterized sera. Sensitivity was analyzed using serology data from New York State SARS-CoV-2 serosurvey testing and matched diagnostic test results. For DBS, the specificity was 99.5% for the individual N and S1 antigens. Median fluorescence intensity (MFI) values for DBS and paired sera showed a strong positive correlation for N (R² = 0.91) and S1 (R² = 0.93). Sensitivity, assessed from 1,134 DBS with prior laboratory-confirmed SARS-CoV-2 infection, ranged from 83% at 0 to 20 days to 95% at 61 to 90 days after a positive test. When stratified using coronavirus disease 2019 (COVID-19) symptom data, sensitivity ranged from 90 to 96% for symptomatic and 77 to 91% for asymptomatic individuals. For 8,367 health care workers reporting detailed symptom data, MFI values were significantly higher for all symptom categories. Our results indicate that the SARS-CoV-2 IgG DBS MIA is sensitive, specific, and well suited for large population-based serosurveys. The ability to readily modify and multiplex antigens is important for ongoing assessment of SARS-CoV-2 antibody responses to emerging variants and vaccines.

IMPORTANCE Testing for antibodies to SARS-CoV-2 has been used to estimate the prevalence of COVID-19 in different populations. Seroprevalence studies, or serosurveys, were especially useful during the early phase of the pandemic when diagnostic testing was not widely available, and the resulting seroprevalence estimates played an important role in public health decision making. To achieve meaningful results, antibody tests used for serosurveys should be accurate and accessible to diverse populations. We developed a test that detects antibodies to two different SARS-CoV-2 proteins in dried blood spots (DBS). DBS require only a simple fingerstick and can be collected in nonclinical settings. We conducted a robust validation study and have demonstrated that our test is both sensitive and specific. Furthermore, we demonstrated that our test is suitable for large-scale serosurveys by testing over 56,000 DBS collected in a variety of community-based venues in New York State during the spring of 2020.

On 1 March 2020, the first case of severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) infection in New York State (NYS) was detected in a symptomatic health care worker (HCW) who had returned to New York City after travel outside the United States. Shortly thereafter, SARS-CoV-2 began spreading rapidly in NYS.
Many cases were not confirmed by diagnostic testing, especially those involving asymptomatic individuals who were unaware of their infection and individuals with mild to moderate symptoms who convalesced at home without receiving laboratory confirmation. Considering that only a subset of SARS-CoV-2 infections were reported via diagnostic testing, measuring the seroprevalence of SARS-CoV-2 through antibody testing became an important public health tool for assessing the extent of infections across NYS.

Components of an effective SARS-CoV-2 serosurvey include (i) a well-validated, accurate antibody test, (ii) a representative sampling of the target population, including underserved populations, and (iii) a sample size that is large enough to provide sufficient statistical power (1). To meet these criteria, the assay must be sufficiently sensitive and highly specific. Ideally, it is also high-throughput, low-cost, and amenable to rapid deployment in diverse community settings to allow comprehensive population sampling, including vulnerable members of the community. The use of minimally invasive biospecimen collection methods that are amenable to nonclinical settings can reduce sampling disparities and foster participant diversity. Although serum is the standard specimen for serology testing, collecting blood by venipuncture and transporting blood tubes to a laboratory is often too complex for community and outreach settings (2). Dried blood spots (DBS) collected by fingerstick onto filter paper cards are a viable alternative to serum for serology testing (3). DBS collection requires only minimal training, and transportation of DBS is simplified because they are not considered a biohazard and do not require a cold chain (4). Furthermore, DBS have been shown to be suitable for self/home collection (5-7), another factor that can promote participation in serosurveys.

To address urgent questions regarding the demographic and geographic distribution of SARS-CoV-2 infections in NYS, the New York State Department of Health (NYSDOH) initiated a series of cross-sectional serosurveys of the general community and targeted essential workers between April and June 2020 (8). The strategy included collecting as many as 3,000 DBS samples per day in nontraditional settings like grocery stores and pop-up sites at local community colleges. This necessitated the rapid development, validation, and implementation of a high-throughput assay for detecting SARS-CoV-2 antibodies in DBS samples. To achieve this, we expanded on the Wadsworth Center's extensive experience in developing Luminex-based microsphere immunoassays (MIAs) (9, 10), including the development of a SARS-CoV-2 pan-Ig MIA for serum (11), and our prior work developing high-throughput assays for detecting IgG antibodies in DBS samples from newborns (12). Building on this cumulative prior experience, we developed a high-throughput immunoassay for detecting SARS-CoV-2 IgG antibodies in DBS. In April 2020, we used an early version of this assay to test 15,101 DBS for reactivity to nucleocapsid (N) protein for a statewide assessment of SARS-CoV-2 seroprevalence in NYS (8). We have since expanded our SARS-CoV-2 IgG DBS assay to detect antibodies to both N and spike subunit 1 (S1) of SARS-CoV-2 and have analyzed data collected on 56,189 DBS samples tested using this multiplex assay. Here, we describe the performance characteristics of the multiplex SARS-CoV-2 IgG assay for DBS.
We augmented laboratory-based validation studies with data collected during the NYSDOH serosurveys and clinical laboratory-reported SARS-CoV-2 diagnostic testing data to systematically define the sensitivity characteristics of the assay. Using median fluorescence intensity (MFI) values produced separately for each antigen, we independently analyzed index values for the N and S1 antigens. We analyzed these semiquantitative antibody results along with coronavirus disease 2019 (COVID-19) symptom data collected from serosurvey participants who had prior, laboratory-confirmed SARS-CoV-2 infection to demonstrate how the assay's sensitivity differs for asymptomatic and symptomatic individuals.

RESULTS
Assay validation. To determine cutoff values and assay specificity, we tested 730 DBS and 701 serum specimens collected in NYS prior to December 2019 on three lots of beads coupled to SARS-CoV-2 N and S1 antigens (Fig. 1A). Significant differences occurred between median MFI values for N-coupled bead lots, but not S-coupled lots, when they were tested using DBS. Bead lot B was tested on both DBS and serum. MFI values were higher in serum than in DBS for S beads but were similar for N beads. Cutoffs for each bead lot were set at the mean MFI + 6 SD; results above this value were classified as reactive. Results that fell between the mean MFI + 3 SD and the mean MFI + 6 SD were classified as indeterminate, and results below the mean MFI + 3 SD were nonreactive. Cutoff values for S bead lot C were higher than for lots A and B because lot C was tested on a larger set of DBS (n = 547) than lots A and B (n = 92 and 91, respectively). Sera containing antibodies to other respiratory pathogens, including other human coronaviruses, were tested to assess cross-reactivity (Fig. 1B). In DBS, the specificity was 99.5% for each individual bead set. For the combined results from both bead sets, the specificities were 98.9% for the N or S1 criteria and 100% for the N and S1 criteria. In serum, specificity was lower in the respiratory panel than in normal serum, especially for the N bead set (95.8% in respiratory versus 99.6% in normal serum). Specificity remained at 100% using the combined N and S1 criteria (Table 1).

Concordance studies. To demonstrate the correlation between DBS and serum in our assay, we tested paired serum and spiked DBS specimens. MFI values for DBS and serum showed a strong positive correlation for both N (R² = 0.91) and S1 (R² = 0.93) (Fig. 2). To assess concordance between our assay and other SARS-CoV-2 immunoassays, we tested commercially available serum panels collected from individuals with confirmed COVID-19 infections and compared the results with those reported by the supplier for two assays with emergency use authorization (EUA) from the U.S. Food and Drug Administration (FDA) and one CE-marked assay (Fig. 3A to C). All 50 samples were reactive on our SARS-CoV-2 IgG assay (on either the N or S1 bead sets), matching reactive results on the comparator assays. The MFI index values of the N and S1 bead sets were significantly correlated with Gold Standard Diagnostics IgG units (N, R² = 0.46, P < 0.0001; S1, R² = 0.24, P < 0.0001). The Centaur results were significantly correlated with the S1 MFI index (R² = 0.56, P < 0.0001). We also tested a commercially available seroconversion panel in which 28 plasma samples were collected from the same person on various days after COVID-19 infection. The results were compared with the index values of five FDA EUA or CE-marked assays (Fig. 3D).
All assays were nonreactive for samples collected between 1 and 36 days after symptom onset. At the next time point, day 50 after symptom onset, all assays became reactive.

Sensitivity analysis. We analyzed test data from DBS collected from 56,189 individuals during the statewide serosurvey. After merging serosurvey data with laboratory-reported SARS-CoV-2 diagnostic test data from the New York State surveillance database, we identified 1,134 samples that were collected from individuals with a confirmed positive SARS-CoV-2 diagnostic test result prior to DBS collection (Table 2). The specimen collection dates for the positive diagnostic laboratory tests were available for all 1,134 members of this group. We used this date and the date of DBS collection to calculate the number of days between infection and antibody testing and analyzed reactivity against the number of days after a positive diagnostic test. Although the actual onset of infection may have been several days prior to diagnostic sample collection, this analysis provides a reasonable estimate of the assay's sensitivity relative to the time from infection. Reactivity to S1 (range, 83 to 95%) was significantly higher than reactivity to N (range, 69 to 81%) at 21 to 40, 41 to 60, and 61 to 90 days after a positive diagnostic test. Reactivity to either N or S1 did not increase significantly with days after a positive diagnostic test (Fig. 4A). At all time points, the highest sensitivity was achieved when reactivity to either N or S1 was used to classify a specimen as seropositive. Although requiring reactivity to both N and S1 resulted in 100% specificity, sensitivity was reduced to below 80% at all time points. Individuals with a positive diagnostic test were classified as asymptomatic or symptomatic based on their answer to a question in the serosurvey intake form asking if they experienced COVID-19 symptoms (Table 2). Antibody reactivity to N was significantly lower in asymptomatic individuals than in symptomatic individuals at 0 to 20 and 21 to 40 days after a positive diagnostic test (Fig. 4B). S1 reactivity was also significantly lower in asymptomatic individuals than in symptomatic individuals at 21 to 40 days after a positive diagnostic test. Reactivity to S1 was significantly higher than reactivity to N in symptomatic individuals at 21 to 40, 41 to 60, and 61 to 90 days after a positive diagnostic result. Overall assay reactivity, which was based on reactivity to N or S1, increased from 90 to 96% among symptomatic and from 77 to 91% among asymptomatic individuals from 0 to 90 days after a positive diagnostic test. At days 21 to 40, overall assay reactivity was significantly higher among symptomatic than among asymptomatic individuals (Fig. 4C).

Analysis of reactivity versus symptoms. For all serosurvey participants who reported on their COVID-19 symptom experience, there was a statistical association between experiencing symptoms and having a reactive antibody test result. Among 39,458 participants asked about COVID-19 symptoms, 55.6% (n = 2,294) of 4,122 reactive individuals reported experiencing symptoms, compared to 22.4% (n = 7,912) of 35,356 nonreactive individuals (P < 0.0001). Of the cohorts that participated in the serosurveys, the only group for which detailed information about specific COVID-19 symptoms was collected was the health care worker (HCW) cohort (n = 8,367).
Across all symptom categories, the percentages of reactive results and the mean MFI index values for S1 and N were significantly higher among those who reported experiencing the symptom than among those who did not (Fig. 5). In addition to collecting specific symptom data, the HCW questionnaire included questions about the severity of illness, including information about hospitalization and SARS-CoV-2 PCR test results. Consistent with the analysis using laboratory-reported diagnostic test data (Fig. 4), a self-reported positive diagnostic test and more severe illness were associated with statistically higher mean MFI index values (Fig. 5). The largest recorded mean N MFI index, 12.5 (95% CI, 8.5 to 16.6), was seen in those who reported being hospitalized for COVID-19, compared to 1.4 (95% CI, 1.3 to 1.5) among those who were not hospitalized. Similarly, the mean S1 MFI index values for hospitalized and nonhospitalized HCWs were 21.8 and 1.9, respectively.

DISCUSSION
To assist with NYS's public health response to the COVID-19 pandemic, we developed a high-throughput immunoassay that is both sensitive and specific for the detection of SARS-CoV-2 IgG antibodies in DBS samples. A strength of our study is the data from >1,100 individuals with confirmed SARS-CoV-2 infection prior to DBS collection, including dates of laboratory diagnosis and self-reported presence or absence of COVID-19 symptoms. This analysis showed that for both N and S1, sensitivity increased with the time from a positive diagnostic test, but S1 was significantly more sensitive than N for detecting SARS-CoV-2 antibodies at all time points.

[Fig. 4 caption: (A) Reactivity for individual nucleocapsid (N) and spike (S1) bead sets and reactivity based on "OR" and "AND" result criteria by days after positive diagnostic test. (B) Reactivity for N and S1 bead sets by days after positive diagnostic test and presence/absence of COVID-19 symptoms. (C) Overall assay reactivity (reactive = N or S1 reactive) by days after positive diagnostic test and presence/absence of symptoms. Error bars = 95% CI; *, 0.01 < P < 0.05.]

Using criteria in which reactivity to either the N or S1 antigen classifies a sample as seropositive provides the greatest sensitivity while maintaining 99% specificity. For applications where maximal specificity is needed, defining seropositivity based on a reactive result for both N and S1 raises the specificity to 100%; however, the sensitivity will be reduced. Our data also show that the assay's sensitivity is higher when testing people who experienced COVID-19 symptoms than for those who were asymptomatic, and this difference was significant for samples collected 21 to 40 days after a positive diagnostic test. This relationship between symptoms and antibody response is consistent with data from other studies (13-15). These data highlight the importance of using well-characterized samples from a heterogeneous population that includes asymptomatic as well as symptomatic individuals to obtain an accurate sensitivity assessment of SARS-CoV-2 serology assays. The suitability of DBS for SARS-CoV-2 serology has been reported (5, 6, 16-18); however, the multiplexing capacity of Luminex technology combined with the high-throughput capacity of our 384-well plate format makes our DBS assay particularly well suited for assessing population-based seroprevalence. Using a single 3.2-mm DBS punch, we can assess reactivity to multiple antigen-coupled bead sets simultaneously in a single well and produce separate, semiquantitative results for each antigen.
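The day-binned sensitivity analysis summarized above (and shown in Fig. 4) can be pictured with a short sketch. The paper does not name the method behind its 95% CIs, so the Wilson score interval here is our assumption, and all identifiers are illustrative rather than taken from the authors' software:

```python
import numpy as np

def wilson_ci(k, n, z=1.96):
    """95% Wilson score interval for k successes out of n trials."""
    p = k / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    half = z * np.sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return center - half, center + half

def binned_sensitivity(days_since_positive, reactive,
                       bins=((0, 20), (21, 40), (41, 60), (61, 90))):
    """Sensitivity (fraction reactive) among confirmed-positive individuals,
    binned by days between the positive diagnostic test and DBS collection."""
    days = np.asarray(days_since_positive)
    reac = np.asarray(reactive, dtype=bool)
    results = {}
    for lo, hi in bins:
        in_bin = (days >= lo) & (days <= hi)
        n, k = int(in_bin.sum()), int(reac[in_bin].sum())
        if n > 0:
            results[(lo, hi)] = (k / n, *wilson_ci(k, n))
    return results
```

The same routine applied separately to N-reactive, S1-reactive, and "N or S1"-reactive flags reproduces the three reactivity curves compared in Fig. 4A.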
Validation of a flow-cytometer-based multiplex microsphere immunoassay (MMIA) for DBS was recently described, and while this MMIA is likely to be suitable for large-scale SARS-CoV-2 serosurveys, the application of the assay in a serosurvey was limited to 264 participants (16). We have demonstrated sustained high-throughput use of our SARS-CoV-2 MIA by testing 56,189 DBS over an 8-week period in the spring of 2020. During that period, our assay was performed manually, and each technologist was able to test 696 samples in an 8-h shift. Our capacity has since increased to 2,784 samples per day by incorporating two automated liquid handlers into the protocol. The ease with which antigen-coupled bead sets can be added and removed is especially useful in rapidly evolving outbreak situations. The ability to multiplex and simultaneously obtain separate measurements of antibody reactivity for different antigens has proven valuable.

[Fig. 5 caption: Percent reactive and mean index values for spike S1 and nucleocapsid by symptom category for health care workers. The numbers who reported "yes" and "no" for each symptom are listed. Error bars indicate 95% confidence intervals.]

As the pandemic began to surge in NYS, we relied on an existing supply of SARS-CoV N protein, prepared in-house during the 2003 SARS outbreak, to develop an initial SARS-CoV-2 immunoassay (8, 11). As SARS-CoV-2 antigens became commercially available, we were able to multiplex new and old bead sets in the same test well, which allowed us to continue testing samples while concurrently validating new antigens. The ability to independently measure antibodies to different antigens has additional implications as the COVID-19 vaccination phase unfolds. We are currently modifying our multiplex SARS-CoV-2 assay by incorporating additional antigens and standards to enhance its ability to distinguish antibody responses due to natural infection from those induced by vaccination. Health outcome disparities have been observed during the COVID-19 pandemic (19), and the importance of adequately representing population diversity, including underserved and vulnerable members, in COVID-related research and surveillance has been emphasized. This includes serosurveys, which provide valuable data on disease prevalence, as well as insight into the characteristics of antibody response and potential immunity within the population. By limiting studies to specimens collected in clinical settings, many important populations may be excluded, leading to gaps in scientific knowledge (1). DBS provide a minimally invasive sample collection method that is amenable to nonclinical settings (3). Public health programs have turned to DBS testing in community settings as a means to increase infectious disease testing in underserved and vulnerable populations (20-22). During the NYS SARS-CoV-2 serosurveys, fingerstick DBS samples were collected from a demographically diverse group of individuals in a variety of settings, including community-based pop-up sites, grocery stores, and community college gymnasiums. Although the DBS samples tested in our serosurvey were collected by trained individuals, self-collection of DBS at one's home has been demonstrated to be a feasible method of obtaining samples for COVID-19 serosurveys (5-7). Our high-throughput SARS-CoV-2 IgG MIA offers many advantages for obtaining large-scale seroprevalence data through community sampling; however, some limitations should be noted.
Each antigen exhibits low-level cross-reactivity with other coronaviruses. False reactivity can be largely eliminated if reactivity to both N and S1 is required for seropositivity, but this may reduce sensitivity, especially for detection of asymptomatic cases. Although MFI values are considered proportional to antibody reactivity, the assay is only validated for reporting qualitative results. While serology assays can be calibrated by generating standard curves from monoclonal antibodies, this method is prone to variability, and quantitative values will not be consistent between methods. The first WHO International Standard for anti-SARS-CoV-2 immunoglobulin recently became available from the National Institute for Biological Standards and Control. We intend to incorporate this calibration standard into our assay and transition to reporting quantitative SARS-CoV-2 IgG results. This will allow us to assess the variation in antibody levels detected using our assay and will allow for a more meaningful comparison of results across studies. Our current method indicates reactivity to S1, but we do not yet have a method to further characterize antibody functionality, such as neutralization capacity, with DBS samples. However, the development of serological methods to characterize the neutralizing capacity of SARS-CoV-2 antibodies using DBS is under way.

In conclusion, we have developed a high-throughput, multiplex SARS-CoV-2 IgG immunoassay for DBS with well-defined performance characteristics. This assay was deployed at large scale during a period of surging SARS-CoV-2 infections in New York State, where the use of fingerstick-collected DBS was the key to the rapid, representative sampling of the population that was needed to help inform public health decision making. The multiplexing capacity of this system, which allows rapid modification of assay components, was critical during the early pandemic response period when reagents were limited, and it continues to be valuable as we contend with new viral variants and assessment of vaccine response.

MATERIALS AND METHODS
Specimens. Dried blood spots (DBS) collected by fingerstick and submitted to the Wadsworth Center for clinical testing before December 2019 were used to determine background reactivity and assay specificity. Mock DBS samples used in validation studies and as positive controls were created by centrifuging SARS-CoV-2 antibody-negative EDTA whole blood, removing the plasma, and adding an equal volume of SARS-CoV-2 IgG-positive serum or plasma to the blood cells. After mixing, 50 μl of spiked blood was spotted onto Whatman 903 filter paper and dried at room temperature for 4 h. Fingerstick DBS were collected as previously described (8) from serosurvey participants at pop-up sites, grocery stores, health care facilities, and educational institutions in NYS between 17 April and 12 June 2020 and transported at ambient temperature to the Wadsworth Center Laboratory (Albany, NY) by courier. All participants were at least 18 years old and provided general consent for SARS-CoV-2 IgG testing.
Serum specimens shared with us by William Lee included specimens (i) submitted to the Wadsworth Center for clinical testing, (ii) collected in New York from healthy individuals prior to December 2019 at the New York Blood Center and the American Red Cross, and (iii) collected at Weill Cornell Medical Center and Columbia University Medical Center following respiratory infections confirmed by molecular testing, as well as (iv) sera confirmed as positive for antibodies to non-SARS-CoV-2 human coronaviruses obtained from Regeneron (Tarrytown, NY). Commercial serum panels (COVID-19 30-member panel, COVID-19 20-member panel, and COVID-19 seroconversion panel) were obtained from Access Biologicals (Vista, CA).

Data collection. Data were analyzed from testing conducted on DBS collected from the (i) general public sampled at grocery stores (19 April to 28 April 2020 and 9 June to 12 June 2020), (ii) health care workers (17 April to 4 June 2020), (iii) NYC Fire Department and Police Department employees (27 April 2020), (iv) New York State Police (1 May to 4 May 2020), (v) NYS civil service employees designated essential (8 May to 18 June 2020), (vi) food service workers (8 May to 6 June 2020), (vii) grocery store workers (11 May to 6 June 2020), and (viii) pharmacy workers (23 May to 5 June 2020). The New York State COVID-19 Antibody Testing System (NYSCATS), a Microsoft Dynamics 365 customer relationship management (CRM) application developed by the NYSDOH and Microsoft Corporation, was used to collect participant data, schedule antibody testing, and report results. For samples collected from 24 April to 1 May 2020, data were first collected using Microsoft Excel spreadsheets and then loaded into the NYSCATS system once formatted and cleaned. Personal identifying information and demographic data were collected for all participants, including region of residence, age, gender, race, and ethnicity. Customized questionnaires within the NYSCATS application were used to collect additional data, including COVID-19 testing history, symptoms, and disease severity. These questionnaires changed over time and varied by cohort; thus, not all information is available across all tested cohorts. Serosurvey participant data were exported from NYSCATS and merged with clinical laboratory-reported SARS-CoV-2 diagnostic testing data from the NYSDOH Electronic Clinical Laboratory Reporting System (ECLRS) database (https://www.health.ny.gov/professionals/reportable_diseases/eclrs). The match was completed based on the last name, first name, and date of birth of those with serological data in NYSCATS to retrieve ECLRS SARS-CoV-2 diagnostic testing results and collection dates for serosurvey participants. After matching and deduplicating the merged NYSCATS/ECLRS data set, individual records were deidentified for data analysis.

Assay procedure. MagPlex-C microspheres (Luminex Corp., Austin, TX) with different bead regions were coupled to SARS-CoV-2 nucleocapsid (N) and spike subunit 1 (S1) antigens (Sino Biological, Wayne, PA). Microspheres were washed using activation buffer (0.1 M monosodium phosphate, pH 6.2) and activated by adding 50 mg/ml sulfo-NHS (N-hydroxysulfosuccinimide) and EDC [1-ethyl-3-(3-dimethylaminopropyl)carbodiimide hydrochloride] (Thermo Scientific Pierce). Beads were then incubated with antigen at a concentration of 5 μg antigen per 1 × 10⁶ beads in coupling buffer [0.5 M 2-(N-morpholino)ethanesulfonic acid, pH 5.0].
Coupled beads were diluted in storage buffer (phosphate-buffered saline [PBS] with 1% bovine serum albumin [BSA], 0.02% Tween 20, 0.05% azide, pH 7.4) to a concentration of 1 × 10⁶ beads/ml. A 3.2-mm DBS punch was added to 250 μl of elution buffer (Tris-buffered saline, 1% casein blocker) (Bio-Rad Laboratories, Hercules, CA) in round-bottom, nontreated polystyrene 96-well plates at room temperature (19°C to 22°C) for 1 h. Eluate (25 μl) was transferred to nonbinding 384-well plates (Greiner Bio-One, Monroe, NC) along with 25 μl of beads (1,250 beads/bead set/well). Eluate and beads were incubated together for 30 min at 37°C with shaking (300 rpm) in the dark. Samples were washed three times using wash buffer (PBS, 2% BSA, 0.02% Tween 20, 0.05% azide, pH 7.5) on a BioTek 405 TSUS microplate washer. After washing, samples were incubated with 50 μl of phycoerythrin-tagged goat anti-human IgG (Invitrogen, Thermo Fisher) in the dark at 37°C and 300 rpm for 30 min. Plates were washed as described above. Beads were resuspended in 90 μl of xMAP sheath fluid (Luminex Corp., Austin, TX) and incubated for 1 min at room temperature at 300 rpm in the dark. Serum specimens were tested as described above, except that serum was diluted 1:101 in PBN buffer (PBS, 1% BSA, 0.05% sodium azide, pH 7.4) and 25 μl was used in the assay. Samples were analyzed using a FlexMap 3D instrument (Luminex Corp., Austin, TX), which produces a median fluorescence intensity (MFI) for each bead set. Based on validation and optimization studies, the mean MFI of at least 92 negative DBS was used to set cutoffs. Each bead set is evaluated separately. Results less than the mean MFI + 3 standard deviations (SD) were classified as nonreactive, results between the mean MFI + 3 SD and + 6 SD as indeterminate, and results greater than the mean MFI + 6 SD as reactive. The MFI index was calculated for each bead set by dividing the MFI value by the reactive cutoff value; values of >1.0 indicate a reactive result for that bead set. Reactivity was determined separately for each bead set and for both bead sets, with "N or S1" defined by a reactive result for either the N or S1 bead set and "N and S1" defined by a reactive result for both the N and S1 bead sets.
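The cutoff derivation, three-way classification, MFI index, and combined seropositivity rules described above lend themselves to a compact sketch. The mean + 3 SD / + 6 SD thresholds and the >1.0 index rule follow the methods text; all function and variable names are ours, not taken from the authors' software:

```python
import numpy as np

def derive_cutoffs(negative_mfi):
    """Cutoffs from negative (prepandemic) DBS MFIs for one bead set/lot:
    nonreactive below mean + 3 SD, indeterminate up to mean + 6 SD,
    reactive above mean + 6 SD."""
    mu = np.mean(negative_mfi)
    sd = np.std(negative_mfi, ddof=1)
    return mu + 3 * sd, mu + 6 * sd  # (indeterminate_cutoff, reactive_cutoff)

def classify(mfi, cutoffs):
    """Three-way qualitative call for one bead set."""
    indeterminate_cutoff, reactive_cutoff = cutoffs
    if mfi > reactive_cutoff:
        return "reactive"
    if mfi > indeterminate_cutoff:
        return "indeterminate"
    return "nonreactive"

def mfi_index(mfi, reactive_cutoff):
    """MFI index: MFI divided by the reactive cutoff; >1.0 means reactive."""
    return mfi / reactive_cutoff

def seropositive(n_index, s1_index, rule="or"):
    """Combined call: 'or' favors sensitivity (~99% specificity in validation),
    'and' favors specificity (100% in validation)."""
    n_reactive, s1_reactive = n_index > 1.0, s1_index > 1.0
    return (n_reactive or s1_reactive) if rule == "or" else (n_reactive and s1_reactive)

# Example with hypothetical numbers: 92 negative-control MFIs for one bead set.
rng = np.random.default_rng(0)
negative_controls = rng.normal(50, 10, size=92)
ind_cut, reac_cut = derive_cutoffs(negative_controls)
print(classify(300.0, (ind_cut, reac_cut)),
      seropositive(mfi_index(300.0, reac_cut), 0.4))
```

This also makes visible why lot C's cutoffs could differ from lots A and B: the thresholds are functions of the particular negative panel, so panels of different size and composition (n = 547 versus 92 and 91) yield different estimated means and SDs.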
6,280.4
2021-07-28T00:00:00.000
[ "Medicine", "Biology" ]
SOLVABILITY OF NONLINEAR DIRICHLET PROBLEM FOR A CLASS OF DEGENERATE ELLIPTIC EQUATIONS We prove an existence result for solutions to a class of nonlinear degenerate elliptic equations associated with partial differential operators of the form $Lu(x) = \sum_{i,j=1}^{n} D_j\bigl(a_{ij}(x) D_i u(x)\bigr)$, with $D_j = \partial/\partial x_j$, where the $a_{ij}: \Omega \to \mathbb{R}$ are functions satisfying suitable hypotheses.

Introduction
In this paper, we prove the existence of a solution in $D(A) \subseteq H_0(\Omega)$ for the following nonlinear Dirichlet problem:
\[
-Lu(x) + g(u(x))\,\omega(x) = f_0(x) - \sum_{j=1}^{n} D_j f_j(x) \ \text{ on } \Omega, \qquad u(x) = 0 \ \text{ on } \partial\Omega, \tag{1.1}
\]
where $L$ is an elliptic operator in divergence form,
\[
Lu = \sum_{i,j=1}^{n} D_j\bigl(a_{ij}(x) D_i u(x)\bigr), \quad \text{with } D_j = \frac{\partial}{\partial x_j}, \tag{1.2}
\]
and the coefficients $a_{ij}$ are measurable, real-valued functions whose coefficient matrix $(a_{ij}(x))$ is symmetric and satisfies the degenerate ellipticity condition (1.3) for all $\xi \in \mathbb{R}^n$ and almost every $x \in \Omega \subset \mathbb{R}^n$, a bounded open set with piecewise smooth boundary (i.e., $\partial\Omega \in C^{0,1}$), with $\omega$ and $v$ two weight functions (i.e., locally integrable nonnegative functions).

Theorem 1.1. Suppose that the following assumptions are satisfied.
(H1) Dual pairs. Let the dual pairs $\{X, X^+\}$ and $\{Y, Y^+\}$ be given, where $X$, $X^+$, $Y$, and $Y^+$ are Banach spaces with corresponding bilinear forms $\langle \cdot,\cdot \rangle_X$ and $\langle \cdot,\cdot \rangle_Y$ and the continuous embeddings $Y \subseteq X$ and $X^+ \subseteq Y^+$. The dual pairs are compatible, that is, $\langle T, v \rangle_Y = \langle T, v \rangle_X$ for all $T \in X^+$ and $v \in Y$. Moreover, the Banach spaces $X$ and $Y$ are separable and $X$ is reflexive.
(H2) Operator $A$. Let the operator $A: D(A) \subseteq X \to Y^+$ be given, and let $K$ be a bounded closed convex set in $X$ containing the zero point as an interior point, with $K \cap Y \subseteq D(A)$.
(H3) Local coerciveness. There exists a number $\alpha \ge 0$ such that $\langle Av, v \rangle_Y \ge \alpha$ for all $v \in Y \cap \partial K$, where $\partial K$ denotes the boundary of $K$ in the Banach space $X$.
(H4) Continuity. For each finite-dimensional subspace $Y_0$ of the Banach space $Y$, the mapping $u \mapsto \langle Au, v \rangle_Y$ is continuous on $K \cap Y_0$ for all $v \in Y_0$.
(H5) Generalized condition (M). Let $\{u_n\}$ be a sequence in $Y \cap K$ and let $T \in X^+$. Then, from (1.6) it follows that $Au = T$.
(H6) Quasiboundedness. Let $\{u_n\}$ be a sequence in $Y \cap K$. Then, from (1.6) and $\langle Au_n, u_n \rangle_Y \le C\|u\|_X$ for all $n$, it follows that the sequence $\{Au_n\}$ is bounded in $Y^+$.

We will apply this theorem to a sufficiently large ball $K$ in the Banach spaces $X = H_0(\Omega)$, $X^+ = (H_0(\Omega))^*$, and $Y^+ = Y^*$. We make the following basic assumption on the weights $\omega$ and $v$.

The weighted Sobolev inequality (WSI). Let $\Omega$ be an open bounded set in $\mathbb{R}^n$. There is an index $q = 2\sigma$, $\sigma > 1$, such that the weighted Sobolev inequality (1.9) holds for every ball $B$ and every $f \in \mathrm{Lip}_0(B)$ (i.e., $f \in \mathrm{Lip}(B)$ whose support is contained in the interior of $B$), where $C_{B,\omega,v}$ is called the Sobolev constant, together with the normalization (1.10). For instance, the WSI holds if $\omega$ and $v$ are as in [6, Chapter X, Theorem 4.8], or if $\omega$ and $v$ are as in [1, Theorem 1.5]. The following theorem will be proved in Section 3.
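Definition 2.4 below (the paper's notion of weak solution) is truncated in this copy. For orientation, the natural weak formulation of (1.1), obtained by multiplying by a test function and integrating by parts, would read as follows; this is a plausible standard reconstruction under the stated structure of (1.1), not the paper's verbatim definition:

\[
\int_\Omega \langle \mathcal{A}(x)\nabla u, \nabla \varphi \rangle \, dx + \int_\Omega g(u)\,\varphi\,\omega \, dx
= \int_\Omega f_0\,\varphi \, dx + \sum_{j=1}^{n} \int_\Omega f_j\, D_j\varphi \, dx
\quad \text{for all } \varphi \in C_0^\infty(\Omega),
\]

with $u \in D(A) \subseteq H_0(\Omega)$, where $\mathcal{A}(x) = [a_{ij}(x)]$, and with an integrability requirement of the type $g(u)\,u\,\omega \in L^1(\Omega)$, consistent with the Fatou-lemma argument used in Step 6 below.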
Theorem 1.2. Let $L$ be the operator (1.2) and satisfy (1.3). Suppose that the following assumptions (i)-(ii) are satisfied: …

Definitions and basic results
Let $\omega$ be a locally integrable nonnegative function in $\mathbb{R}^n$ and assume that $0 < \omega < \infty$ almost everywhere. We say that $\omega$ belongs to the Muckenhoupt class $A_p$, $1 < p < \infty$, if
\[
\left( \frac{1}{|B|} \int_B \omega \, dx \right) \left( \frac{1}{|B|} \int_B \omega^{1/(1-p)} \, dx \right)^{p-1} \le C_1
\]
for all balls $B \subset \mathbb{R}^n$, where $|\cdot|$ denotes the $n$-dimensional Lebesgue measure in $\mathbb{R}^n$. If $1 < q \le p$, then $A_q \subset A_p$ (see [4,5] for more information about $A_p$-weights). The weight $\omega$ satisfies the doubling condition if $\omega(2B) \le C\,\omega(B)$ for all balls $B \subset \mathbb{R}^n$, where $\omega(B) = \int_B \omega(x)\,dx$ and $2B$ denotes the ball with the same center as $B$ but twice the radius. If $\omega \in A_p$, then $\omega$ is doubling (see [5, Corollary 15.7]). We say that the pair of weights $(v,\omega)$ satisfies the condition $A_p$ ($1 < p < \infty$, written $(v,\omega) \in A_p$) if and only if there is a constant $C_2$ such that
\[
\left( \frac{1}{|B|} \int_B v \, dx \right) \left( \frac{1}{|B|} \int_B \omega^{1/(1-p)} \, dx \right)^{p-1} \le C_2
\]
for every ball $B \subset \mathbb{R}^n$.

Given a measurable subset $\Omega$ of $\mathbb{R}^n$, we denote by $L^p(\Omega,\omega)$, $1 \le p < \infty$, the Banach space of all measurable functions $f$ defined on $\Omega$ for which $\|f\|_{L^p(\Omega,\omega)} = \bigl( \int_\Omega |f(x)|^p \omega(x)\,dx \bigr)^{1/p} < \infty$. We denote by $W^{k,p}(\Omega,\omega)$ the weighted Sobolev space of all functions $u \in L^p(\Omega,\omega)$ such that the weak derivatives $D^\alpha u \in L^p(\Omega,\omega)$ for $1 \le |\alpha| \le k$. The norm in the space $W^{k,p}(\Omega,\omega)$ is defined by
\[
\|u\|_{W^{k,p}(\Omega,\omega)} = \left( \int_\Omega |u|^p \omega \, dx + \sum_{1 \le |\alpha| \le k} \int_\Omega |D^\alpha u|^p \omega \, dx \right)^{1/p}.
\]
When $k = 1$ and $p = 2$, the spaces $W^{1,2}(\Omega,\omega)$ and $W^{1,2}_0(\Omega,\omega)$ are Hilbert spaces. We denote by $H_0(\Omega)$ the closure of $C_0^\infty(\Omega)$ with respect to the norm
\[
\|u\|_{H_0(\Omega)} = \left( \int_\Omega \langle \mathcal{A}(x)\nabla u, \nabla u \rangle \, dx \right)^{1/2},
\]
where $\mathcal{A}(x) = [a_{ij}(x)]$ (the coefficient matrix) and the symbol $\nabla$ indicates the gradient.

We introduce the following definition of (weak) solutions for problem (1.1). Definition 2.4. A function $u$ … Remark 2.5. Using that $p > 4$, we have that $v \in A_2 \subset A_{p/2}$ and …

Step 5. Global coerciveness of the operator $A$. Using condition (1.3) and hypothesis (ii), we obtain …
Step 6. Generalized condition (M). Let $T \in (H_0(\Omega))^*$ and let $\{u_n\}$ be a sequence in $Y$ with … We want to show that this implies that $Au = T$. Using that the operator $A_1$ is linear and continuous, we obtain … Because of (3.16), it is sufficient to prove that $u \in D(A)$ and … Therefore, it is sufficient to show that … Using the same argument as in Step 3, we obtain … Therefore, it is sufficient to show that … The continuity of $g$ implies that $g(u_n(x))u_n(x)\omega(x) \to g(u(x))u(x)\omega(x)$ for almost all $x \in \Omega$. Therefore, by Fatou's lemma, we have … that is, $u \in D(A)$. Now we want to show that $g(u_n) \to g(u)$ in $L^1(\Omega,\omega)$. Let $a > 0$ be fixed. For each $x \in \Omega$, we have either … Hence, for all $\varepsilon > 0$, we have … if $a$ is sufficiently large and $\omega(X)$ is sufficiently small. Therefore, for all $\varepsilon > 0$, there exists $\delta > 0$ … with $\omega(X) < \delta$. Thus, the Vitali convergence theorem tells us that (3.22) holds.
Step 7. Quasiboundedness of the operator $A$. Let $\{u_n\}$ be a sequence in $Y$ with $u_n \rightharpoonup u$ in $H_0(\Omega)$, and suppose that … for all $u, v \in D(A)$. Therefore, if $u, v \in D(A)$ and $Au = Av = T$, we obtain that $u = v$. Suppose that
\[
\langle Au_n, u_n \rangle_Y \le C\,\|u_n\|_{H_0(\Omega)} \quad \text{for all } n. \tag{3.33}
\]
We want to show that the sequence $\{Au_n\}$ is bounded in $Y^*$. In fact, the boundedness of $\{u_n\}$ in $H_0(\Omega)$ implies that
\[
\lim_{n\to\infty} \langle Au_n, u_n \rangle_Y \le C. \tag{3.34}
\]
Suppose by contradiction that the sequence $\{Au_n\}$ is unbounded in $Y^*$. Then there exists a subsequence, again denoted by $\{u_n\}$, such that
\[
\|Au_n\|_{Y^*} \to \infty \ \text{ as } n \to \infty. \tag{3.35}
\]
By the same arguments as in Step 6, we obtain that $\langle Au_n, \varphi \rangle_Y \to \langle Au, \varphi \rangle_Y$ as $n \to \infty$, for all $\varphi \in Y$.
(3.36) The uniform boundedness principle then tells us that the sequence $\{Au_n\}$ is bounded, which contradicts (3.35). Therefore, by Theorem 1.1, the equation $Au = T$, with $T \in (H_0(\Omega))^*$, has a solution $u \in D(A) \subseteq H_0(\Omega)$, and it is a solution of problem (1.1); that is, the equation $Au = T$ has a solution $u$ for each $T \in X^+$. (II) Uniqueness. If the function $g: \mathbb{R} \to \mathbb{R}$ is monotone increasing, we have that $(g(a) - g(b))(a - b) \ge 0$ for all $a, b \in \mathbb{R}$. Then
\[
\langle Au - Av, u - v \rangle_Y = \int_\Omega \langle \mathcal{A}\nabla(u - v), \nabla(u - v) \rangle \, dx + \int_\Omega \bigl(g(u) - g(v)\bigr)(u - v)\,\omega \, dx \ge \|u - v\|_{H_0(\Omega)}^2
\]
for all $u, v \in D(A)$.

(3.22) Note that it is sufficient to prove (3.22) for a subsequence of $\{u_n\}$. If $(v,\omega) \in A_2$ and $\omega \le v$, then $\omega \in A_2$ (see Remark 2.1). By Lemma 2.3, … This implies $u_n \to u$ in $L^2(\Omega,\omega)$. Using again that $\omega \in A_2$, we have $u_n \to u$ in $L^1(\Omega)$. Thus, there exists a subsequence, again denoted by $\{u_n\}$, such that $u_n(x) \to u(x)$ for almost all $x \in \Omega$. The continuity of $g$ implies that $g(u_n(x)) \to g(u(x))$ for almost all $x \in \Omega$. Moreover, since $u_n \rightharpoonup u$ in $H_0(\Omega)$, it follows that … with $C$ independent of $n$.
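Since the display (1.6) referenced in (H5) and (H6) did not survive extraction, it may help to recall the classical condition (M) from which the "generalized condition (M)" takes its name. The following is the standard Zeidler-style formulation for a dual pair; the paper's (1.6) may differ in detail:

\[
u_n \rightharpoonup u \ \text{in } X, \qquad Au_n \rightharpoonup T \ \text{in } X^+, \qquad \limsup_{n\to\infty} \langle Au_n, u_n \rangle \le \langle T, u \rangle
\ \Longrightarrow\ Au = T.
\]

In words: the operator $A$ cannot "lose" its limit under weak convergence provided the pairings do not overshoot, which is exactly the property exploited in Step 6 to pass from the approximating sequence $\{u_n\}$ to the solution $u$ of $Au = T$.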
2,181.2
2004-01-01T00:00:00.000
[ "Mathematics" ]
Bidirectional Relation Between Parkinson's Disease and Glioblastoma Multiforme Cancer and Parkinson's disease (PD) define two disease entities built on opposite concepts. Indeed, the mechanisms involved lie at different ends of a spectrum related to cell survival: one due to enhanced cellular proliferation and the other due to premature cell death. There is increasing evidence indicating that patients with neurodegenerative diseases like PD have a reduced incidence of most cancers. In support, epidemiological studies demonstrate an inverse association between PD and cancer. Both conditions apparently can involve the same set of genes; however, in affected tissues the expression is inversely regulated: genes that are down-regulated in PD were found to be up-regulated in cancer and vice versa, for example p53 or PARK7. When comparing glioblastoma multiforme (GBM), a malignant brain tumor with poor overall survival, with PD, astrocytes are dysregulated in both diseases in opposite ways. In addition, common genes that are involved in both diseases and share common key pathways of cell proliferation and metabolism were shown to be oppositely deregulated in PD and GBM. Here, we provide an overview of the involvement of PD- and GBM-associated genes in common pathways that are dysregulated in both conditions. Moreover, we illustrate why the simultaneous study of PD and GBM regarding the role of common pathways may lead to a deeper understanding of these still incurable conditions. Eventually, considering the inverse regulation of certain genes in PD and GBM will help to understand their mechanistic basis, and thus to define novel target-based strategies for causative treatments.

There is now accumulating evidence for an inverse association between Parkinson's disease (PD) and cancer (1-3). Studies suggest that people affected by a neurodegenerative disorder have a reduced incidence of most cancers (4,5). Molecular studies showed that there is an inverse correlation in the expression of shared genes in PD and cancer: genes down-regulated in PD can be up-regulated in cancer and vice versa (6,7). These inversely correlated gene expression patterns may affect the same pathways in opposite ways, involving either genetic or environmental factors (5,8,9). Shared genetic pathways deregulated in opposite ways are a major focus, particularly those favoring apoptosis and cell proliferation and influencing cell cycle control, DNA repair, and kinase signaling (4). Common mechanisms such as chronic inflammation (10) and immunosenescence, and common risk factors like diabetes and obesity, have been implicated in both conditions (11,12).

Parkinson's Disease
PD is a neurodegenerative disease characterized by three cardinal motor symptoms, tremor, rigidity and bradykinesia, resulting from the loss of dopaminergic neurons in the substantia nigra pars compacta (13). PD affects 1-2% of the population over 60 years of age (14). Onset before the age of 40 is seen in <5% of the cases in population-based cohorts and is typical of familial cases of PD with an underlying genetic cause, such as mutations in SNCA, Parkin, PINK1, DJ-1, LRRK2, or ATP13A2 (Table 1). Monogenic forms of PD are rare. In general, genetic factors are claimed to be involved in 5-10% of the cases (14). Histopathological hallmarks of PD are proteinaceous inclusions called Lewy bodies (LB) and Lewy neurites containing α-synuclein (47).
Cellular hallmarks of PD are an impairment of the proper functioning of molecular and organelle degradation pathways like the ubiquitin-proteasome system and autophagy (48). In particular, the process of removing defective mitochondria from the cell is known to be impaired in PD (49). This process is a special form of autophagy, called mitophagy (50), and is regulated by the PD-linked proteins PINK1 and Parkin (51). The impairment of autophagy, lysosomal and mitochondrial function in PD can lead to the accumulation of α-synuclein and defective mitochondria (52) and, ultimately, to neurodegeneration. The diagnosis of PD is mostly clinical, based on neurological tests performed when patients already show motor symptoms. Due to the complexity and heterogeneity of PD, the etiology is not yet fully understood. Therefore, there is no cure for PD and no treatment that stops the progression of the disease; treatment is only symptomatic, e.g., levodopa therapy. This is why it is important to investigate the underlying mechanisms of PD to pave the way for causative treatments.

Glioblastoma Multiforme

Glioblastoma multiforme (GBM) is the most malignant tumor of the central nervous system. GBM tumors most likely develop from astrocytes (53). Based on their histological and clinical features, astrocytomas are classified into four different subtypes according to the WHO classification: pilocytic astrocytoma, diffuse astrocytoma, anaplastic astrocytoma, and GBM. Pilocytic and diffuse astrocytomas are characterized by a rather low growth rate, while anaplastic astrocytoma and GBM show uncontrolled proliferation and diffuse tissue penetration (54). GBM is characterized by poor prognosis, low survival rates, and extremely limited opportunities for therapy. Symptoms of GBM are rather unspecific, such as increased intracranial pressure, including headache, and focal or progressive neurologic deficits. Seizures are the presenting symptom in 25% of patients and can occur at a later stage of the disease in 50% of patients (55). Malignant gliomas are the third leading cause of cancer death for people aged between 15 and 34, accounting for 2.5% of the global cancer death toll. GBM has a maximum incidence in patients aged more than 65 years and mainly affects the cerebral hemispheres (54). A cellular hallmark of GBM and all cancers is the so-called Warburg effect, which describes the phenomenon that cancer cells use aerobic glycolysis to produce ATP (56). GBM cells are characterized by increased glucose uptake and lactate production (57). GBM cells also use oxidative phosphorylation (OXPHOS) (57). The hypoxic GBM tumor environment allows the constant expression of hypoxia-inducible factors 1 alpha and 2 alpha (HIF-1α, HIF-2α). Hypoxia and hypoxia-stabilized HIFs regulate GBM metabolism by stabilizing genes involved in metabolism like the glucose transporters GLUT1 and GLUT3, thereby sustaining the increased glucose uptake of the GBM cells (57). Also, the enzyme catalyzing the first step in glycolysis, hexokinase, is hypoxia/HIF regulated (57). As for PD, the diagnosis of GBM is typically made when the first symptoms occur and relies on clinical examination and neuroimaging methods. In most cases, however, both diseases are diagnosed at an advanced stage of tumor growth or neurodegeneration, respectively. Treatment strategies for GBM are based on a multidisciplinary approach.
Current standard therapy is a combination of maximal safe surgical resection of the tumor and subsequent radiation and chemotherapy with temozolomide (Temodar®), an oral alkylating agent. However, even with advances in surgical resection, the prognosis for GBM patients remains poor, with a median survival of 15 months (55).

COMMON GENES IN PD AND GBM

A common set of genes, like the tumor suppressor p53, epidermal growth factor and its receptor EGF(R), and the glyoxalase and deglycase DJ-1, as well as biological processes, are deregulated in opposite directions in PD and GBM (6). In particular, there is evidence that PD-associated genes are involved in GBM pathogenesis (Table 1). A summary of publications examining and exhibiting the involvement of PD-associated genes in GBM is shown in Table 1. Consistent with PD-associated genes being involved in GBM, it is important to note that mutations in the same gene can behave differently depending on whether they are germline or somatic mutations. For example, mutations in PARK2 affecting the Parkin protein can cause neuronal cell death in PD if they are present in the germline, or increased cell survival in GBM if they are present in somatic cells like astrocytes (Figure 1) (25). Pathways that are affected in PD and GBM overlap but are regulated inversely by alternatively regulated genes. These pathways regulate cell proliferation and cell metabolism as well as mitochondrial clearance (1). In the following, examples of inversely regulated pathways in PD and GBM are illustrated, and the role of commonly involved genes in the regulation of these pathways will be outlined.

Pro-Survival Signaling

Pro-survival signaling is one of the most important pathways regulating and sustaining cell proliferation. Once dysregulated, uncontrolled cell proliferation can lead to tumorigenesis. This is why cell proliferation and apoptosis need to be in a tight equilibrium, which is well controlled by many mediators.

P53, the Master Controller of Cell Proliferation, and Its Regulation in PD and GBM

One key player in the regulation of cell proliferation is the tumor suppressor p53. p53 is upregulated in PD, but downregulated in GBM (Figure 2A) (58)(59)(60). p53 inhibits cell proliferation by both blocking cell cycle progression and promoting apoptotic cell death (Figure 2A). This way, p53 provides clear protection against tumor growth from stem cells and thereby against GBM development. p53 itself is also regulated via several stress signals occurring during malignant progression like genotoxic damage, oncogene activation, loss of normal cell contacts, and hypoxia (Figure 2A). This leads to a model where the growth-inhibitory functions of p53 are normally held dormant, to be unleashed only in nascent cancer cells (61). In PD, the level of p53 and its activity in neurons can increase not only as a result of oxidative stress and DNA damage, but

FIGURE 2 | Graphical representation of common cellular pathways described in the literature to be dysregulated in PD and GBM. Dysregulation (up- or downregulation) of commonly involved mediators and proteins in PD and GBM is illustrated with blue and red arrows; blue arrows correspond to the situation in PD, red arrows indicate the regulation in GBM. Differential regulation of discussed mediators regarding pro-survival signaling (A), immune signaling (B), and their involvement in mitochondria and metabolism (C). UPS, ubiquitin proteasome system; ox. stress, oxidative stress; mito dysfunction, mitochondrial dysfunction.
also due to aberrant regulation of its expression, for example by mutated or incorrectly cleaved proteins involved in the process of neurodegeneration (58). An increase in p53 expression and its activation results in enhanced expression of genes that are responsible for apoptosis and/or cell cycle arrest and may trigger neuronal cell death (58). In line with this, Mogi et al. found increased levels of p53 protein in the nigrostriatal dopaminergic region of PD patients compared to controls (62). It was shown that p53 regulates α-synuclein expression, since the α-synuclein promoter harbors a p53-responsive element (63). Therefore, an increase in p53 in PD could lead not only to increased apoptosis induction but also to an increase in the expression of potentially dysfunctional α-synuclein and to its subsequent aggregation (63). Kato et al. found that DJ-1 inhibits the transcriptional activity of p53 (Figure 2A) (64). Loss of DJ-1 protein in PD could thereby lead to increased expression of p53 target genes, leading to cell death. In GBM, p53 is frequently downregulated or inactivated by mutations, leading to a reduction in apoptosis induction (Figure 2A) (65), and p53 inactivation positively correlates with GBM tumor invasiveness (66). Zheng et al. showed that central nervous system (CNS)-specific deletion of p53 and Phosphatase and Tensin Homolog (PTEN) in the CNS of mice leads to a high-grade malignant glioma phenotype resembling human GBM (67). These results are in line with The Cancer Genome Atlas TCGA-GBM data set, which reports PTEN, p53, and EGFR as among the most frequently altered genes in GBM (https://portal.gdc.cancer.gov).

EGFR Signaling in PD and GBM

EGFR is downregulated in PD and upregulated in GBM (Figure 2A). EGFR activates the phosphoinositide 3-kinase (PI3K)-Akt pathway (Figure 2A). The PI3K/Akt signaling pathway is known as one of the most important kinase cascades that mediate crucial cellular functions such as survival, proliferation, migration, and differentiation (68). Activated receptor tyrosine kinases (RTKs) like EGFR activate PI3K through direct binding or through tyrosine phosphorylation of scaffolding adaptors, which can then bind and thereby activate PI3K (Figure 2A). PI3K phosphorylates phosphatidylinositol-4,5-bisphosphate (PIP2) to generate phosphatidylinositol-3,4,5-trisphosphate (PIP3), in a reaction that can be reversed by the PIP3 phosphatase PTEN. Akt can then activate its downstream targets like mTOR, eventually leading to cell proliferation (Figure 2A). It was shown that EGFR endocytosis and degradation are accelerated in Parkin-knockout cells from mouse brain, and EGFR signaling via the PI3K/Akt pathway is reduced (69). Fallon et al. propose that Parkin delays EGFR internalization and degradation, thereby promoting PI3K/Akt signaling (69). Therefore, by decreasing the efficiency of EGFR-mediated Akt signaling in neurons, the loss of Parkin leads to neuronal degeneration (69). In post-mortem brains of idiopathic PD patients, protein levels of EGF and EGFR were shown to be decreased in the prefrontal cortex and the striatum (70). Mutations in EGFR occur commonly in GBM (71). These mutations result in EGFR gene amplification and intrinsic alterations of the EGFR structure (71). Brennan et al. showed that gene amplification and mutation of EGFR result in enhanced EGFR activation and are found in about 60% of GBM (72).
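To make the direction of this regulation concrete, the following is a toy steady-state sketch (ours, with made-up rate constants; not a model from the literature reviewed here) of the EGFR/PI3K/PTEN/Akt balance: lowering PTEN or amplifying EGFR, as in GBM, drives Akt activity up, while the PD-like combination of reduced EGFR and increased PTEN drives it down.

```python
# Toy steady-state model of EGFR -> PI3K -> PIP3 -> Akt signaling with
# PTEN opposing PIP3 accumulation. All parameter values are hypothetical.

def akt_activity(egfr: float, pten: float, k_pi3k: float = 1.0) -> float:
    """Steady-state fraction of active Akt for given EGFR and PTEN levels.

    PIP3 balance: production k_pi3k * egfr equals removal pten * pip3,
    so pip3 = k_pi3k * egfr / pten; Akt activation saturates with PIP3.
    """
    pip3 = k_pi3k * egfr / pten          # PIP3 steady state
    return pip3 / (1.0 + pip3)           # saturating Akt response

# GBM-like: EGFR amplified, PTEN lost -> high pro-survival Akt signaling.
print("GBM-like :", akt_activity(egfr=5.0, pten=0.2))   # ~0.96
# PD-like: EGFR decreased, PTEN increased -> low Akt, favoring cell death.
print("PD-like  :", akt_activity(egfr=0.3, pten=2.0))   # ~0.13
```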
The most common EGFR mutation in GBM is EGFRvIII, which is caused by the deletion of exons 2-7, leading to constitutively activated EGFR (71,73,74). It was shown that EGFR is overexpressed in most primary GBMs and some secondary GBMs and that EGFR overexpression is associated with more aggressive GBM (75).

PTEN/PI3K/Akt Signaling in PD and GBM

In PD, PTEN/PI3K/Akt signaling is down-regulated and therefore causes decreased pro-survival signaling (76). In GBM, PTEN/PI3K/Akt signaling is upregulated (77)(78)(79). PTEN negatively regulates PI3K (Figure 2A), thereby inhibiting PI3K/Akt-mediated proliferation and cell survival. In PD patient-derived post-mortem brains, Sekar et al. found an increase in PTEN levels (80). Absence of PTEN protected dopaminergic neurons in PTEN knockout mice from neuronal death after neurotoxin treatment (81). In another mouse model, depletion of PTEN attenuated the loss of tyrosine hydroxylase-positive (dopaminergic) cells after neurotoxin treatment (82). An increase in PTEN in PD results in decreased pro-survival signaling, leading to increased neuronal cell death. In line with this, it was shown that the ratio of phospho-Akt/total-Akt decreases in dopaminergic neurons, indicating a decrease in the activation of the pro-survival signaling mediated by Akt upon phosphorylation (83). Overall, impaired PTEN/PI3K/Akt signaling in PD leading to neuronal cell death can be due to mutations in PD-associated genes regulating Akt signaling [e.g., DJ-1 (84), (Figure 2A)], excessive Akt dephosphorylation, inhibition of Akt activation, or oxidative stress (85). In GBM, PTEN/PI3K/Akt signaling is upregulated due to EGFR overexpression or loss of PTEN (78). Mutations or homozygous deletions of PTEN were shown in 36% of the GBM cases that were studied by McLendon et al., and 86% of the GBMs harbored at least one genetic event in the receptor tyrosine kinase PI3K pathway (86). High levels of phosphorylated Akt were shown to correlate with a poor prognosis for patients with GBM (87). Mutations in the phosphatidylinositol-4,5-bisphosphate 3-kinase catalytic subunit alpha (PIK3CA), which is one subunit of PI3K, were shown to induce gliomagenesis (77).

The PD-Associated Oncogene DJ-1 and Regulation of Cell Proliferation in PD and GBM

The protein DJ-1 was shown to be inversely regulated in PD and GBM (Figure 2A). Homozygous mutations in PARK7 (DJ-1) resulting in loss of protein lead to PD (88). DJ-1 expression was shown to be increased in GBM (38,89,90). Wang et al. found that high DJ-1 and high β-catenin expression in GBM were significantly associated with high grade and poor prognosis in glioma patients, suggesting DJ-1 levels in GBM as a strong independent prognostic factor (89). DJ-1 also accelerates the transformation of tumor cells by c-Myc via activating the Erk pathway (91). Hinkle et al. found that GBM tumor tissue expressed DJ-1 protein at significant levels, and typically in a cytoplasmic, non-nuclear manner. They found that the immunostaining intensity of DJ-1 varied directly with strong nuclear p53 expression and inversely with EGFR amplification (38). Given that DJ-1 negatively regulates pro-apoptotic p53 (Figure 2A) (92) and that EGFR signaling is crucial for gliomagenesis (72), these observations suggest that DJ-1 might be involved in the tumorigenesis of GBM (38). Toda et al. found that, in a serial transplantation study, DJ-1 knockdown resulted in a prolonged survival of mice in secondary transplantation (39).
DJ-1 is known to counteract ROS, among others via Nrf2 stabilization, leading to the expression of enzymes for endogenous antioxidant synthesis and ROS elimination, such as those producing glutathione (Figure 2A) (93,94). It was shown that a reduction in DJ-1 protein is associated with reduced Nrf2 transcriptional activity and that, in PD patients, Nrf2 activation is associated with dysregulated downstream gene expression (93,95). In contrast, it was found that Nrf2 overexpression accelerates the proliferation and oncogenic transformation of glioma cells and that GBM patients have reduced overall survival when Nrf2 levels are upregulated (Figure 2A) (96).

Immune Signaling

The innate immune system serves various functions in health and disease. It represents the first line of defense against infection and is involved in many different processes like tissue repair, wound healing, and the clearance of apoptotic cells and cellular debris. An excessive or non-resolving activation of the innate immune system can result in systemic or local inflammatory complications and cause or contribute to the development of neurodegeneration and cancer. In the brain, the innate immune cells are represented by microglia, which regulate brain development, brain maturation, and homeostasis. An impairment of functional microglia through abnormal activation or decreased functionality can occur during aging and during neurodegeneration, and the resulting inflammation was shown to be involved in neurodegenerative diseases and cancer (97).

Hypoxia and HIF-1α in PD and GBM

It is well known that hypoxia-inducible factor-1α (HIF-1α) plays an important role in gliomagenesis due to its angiogenesis-promoting effects (98). While HIF-1α is upregulated in GBM, it was shown that HIF-1α is impaired in PD (Figure 2B) (99,100). Treatment with MPTP, a prodrug to the neurotoxin MPP+, which causes parkinsonism symptoms by destroying dopaminergic neurons, was shown to inhibit HIF-1α accumulation in mice and in dopaminergic cell lines (99). Moreover, Milosevic et al. found that a conditional knockdown of HIF-1α in mice resulted in a 40% decrease in the expression of tyrosine hydroxylase, a known marker for dopaminergic neurons, in the substantia nigra (101). In healthy individuals, HIF-1α mediates protection of dopaminergic neurons by regulating iron homeostasis and by improving the defense against oxidative stress through upregulation in response to reactive oxygen species (ROS) (Figure 2B) and mitochondrial dysfunction (100). PD is characterized by an accumulation of iron in dopaminergic neurons of the substantia nigra (102). Free cytosolic iron can lead to oxidative stress and trigger α-synuclein aggregation (102). HIF-1α influences iron homeostasis through the expression of its target genes ferroportin and heme oxygenase in the substantia nigra, which are known to be involved in the attenuation of iron accumulation (100). This way, HIF-1α can counteract iron accumulation (Figure 2B). However, in PD, downregulation of HIF-1α can lead to a dysregulation of iron homeostasis, eventually leading to iron accumulation (Figure 2B). In turn, iron accumulation decreases HIF-1α activity, because iron is a necessary cofactor for the prolyl hydroxylases that inactivate HIF-1α via subsequent ubiquitination through the von Hippel-Lindau factor (VHL) (Figure 2B) (102,103). The HIF-1α target genes erythropoietin (EPO) and vascular endothelial growth factor (VEGF) (Figure 2B) have been shown to contribute to the protection of neurons from PD pathogenesis (100).
EPO was shown to be neuroprotective against dopaminergic neurotoxins (104). In rat explants of the ventral mesencephalon, VEGF treatment was shown to be mitogenic for endothelial cells and astrocytes, and could promote the growth and survival of neurons, specifically dopaminergic neurons (105). There are accumulating data suggesting that the activation of HIF-1α can exert neuroprotective effects through the induction of intrinsic adaptive mechanisms in neuronal and non-neuronal cells (106). Lee et al. showed that stabilization of HIF-1α leads to the upregulation of several proteins involved in iron efflux and in mitochondrial integrity and bioenergetics, cell components that are compromised in PD. These data emphasize the concept that the pharmacological induction of HIF-1α could have neuroprotective effects in PD cell and mouse models, with a beneficial impact on dopamine synthesis, iron homeostasis, antioxidant defenses, and mitochondrial dysfunction (107). In contrast to these observations in PD, in GBM, HIF-1α levels are increased (Figure 2B) (108). Liu et al. found that HIF-1α expression was associated with high-grade glioma and with the overall survival of glioma patients, which indicates that HIF-1α could predict prognosis and provide clinical insights into the therapeutic strategy for GBM patients (109). The lack of oxygen in the GBM microenvironment results from inappropriate neovascularization, irregular blood flow, and excessive consumption of oxygen by the uncontrollably proliferating GBM cells (110). The hypoxia in the GBM tumor induces the expression of genes involved in tumor cell growth and angiogenesis like the signal transducer and activator of transcription 3 (STAT3), which triggers the synthesis of HIF-1α that subsequently induces the activation of T-regulatory cells (Tregs) and the production of VEGF (111). Tregs are important modulators of the immune response, and VEGF has known immunosuppressive effects. Moreover, the hypoxic microenvironment causes the transformation of CNS macrophages into tumor-associated macrophages (TAMs), which are capable of adopting immunosuppressive and tumor-supportive phenotypes. Via the STAT3 pathway, this transformation triggers TAMs to enhance angiogenesis and tumor cell invasion (26,112). Furthermore, HIFs are critical for the upregulation of glycolysis (Figure 2B) (113). Hypoxia is also a known regulator of many other innate immunological functions like cell migration, apoptosis, phagocytosis of pathogens, antigen presentation, and the production of cytokines, chemokines, and angiogenic and antimicrobial factors (113). In summary, HIF is an important factor in the regulation of the tumor microenvironment due to its central role in promoting proangiogenic and invasive properties. Since HIF activation results in angiogenesis and the emerging vasculature is often abnormal, this leads to a vicious cycle that causes further hypoxia and HIF upregulation in GBM (98).

Interleukins and Immune Escape

In PD, increased cytokine levels in response to cellular stress can lead to neuronal cell death, whereas in GBM, cytokines like the interleukins IL-1β, IL-6, and IL-8, released by the tumor cells, inhibit the immune response and allow the tumor cells to escape eradication by the immune system (Figure 2B). IL-6 was found to be increased in the nigrostriatal region and in the cerebrospinal fluid of patients with PD (114). Furthermore, Hofmann et al.
found that patients with more severe PD had higher IL-6 levels compared to patients with a milder phenotype (114). In addition, a study by Chen et al. found that patients with PD had elevated levels of transforming growth factor-beta 1 (TGF-β1), IL-6, and IL-1β in cerebrospinal fluid compared to controls (115). In line with this, it is described that, in autopsy brains of PD patients, the number of activated microglia, which were, among others, TNF-α- and IL-6-positive, increases in the substantia nigra and putamen during the progression of PD (116). Activated microglia in PD were observed in various brain regions like the nigrostriatal region, the hippocampus, and the cerebral cortex. The levels of IL-6 and TNF-α mRNAs increase in the hippocampus of PD patients (116). It is postulated that cytokines (IL-1β, TNF-α, IL-6) from activated microglia (117) in the substantia nigra and putamen may initially be neuroprotective, but may later turn neurotoxic during PD pathogenesis (116). In contrast to PD, in GBM, the cells can profit from the cytoprotective effects of specific cytokines like IL-1β, IL-6, and IL-8, leading to increased robustness against cellular stress (118). As already mentioned, GBM arises from glial cells with surrounding brain parenchyma that contains CNS cells like astrocytes, neurons, and microglia, as well as a distinctive extracellular matrix composition. GBM induces a tumor microenvironment characterized by immunosuppressive cytokines secreted by tumor cells, microglia, and tumor macrophages. IL-6, IL-10, TGF-β, and prostaglandin-E collectively inhibit both the innate and adaptive immune systems, leading, among others, to the suppression of natural killer cell activity and of T-cell activation and proliferation, and to the induction of T-cell apoptosis (119). IL-1β is a known master pro-inflammatory cytokine that triggers various malignant processes driving oncogenic events such as proliferation and invasiveness (118,120). Elevated levels of IL-1β were observed in many different GBM cell lines (121) and in human GBM tumor specimens (122). IL-6 was shown to be overexpressed in GBM clinical samples and cell lines, and IL-6 gene expression seems to correlate with the aggressiveness of the tumor (123). It was shown that IL-6 is secreted by GBM cells and sustains cell proliferation by activation of the STAT3 pro-survival pathway (124). IL-6 is produced by GBM cells in response to external stimuli or intrinsic factors, for example oncogenic mutations (118). IL-1β and TNF-α induce stabilization of IL-6 mRNA and increase IL-6 biosynthesis (125). Like IL-6, IL-8 is highly expressed and secreted by GBM cell lines, tumor stem cells, and human specimens (118). It was shown that the expression of the constitutively active mutant EGFRvIII is associated with significantly higher expression of IL-8 induced by nuclear factor kappa B (NF-κB) (Figure 2B) in human GBM specimens and GBM cell lines (126). In a similar manner to the regulation of IL-6, IL-8 expression can be enhanced by TNF-α, IL-1β, or macrophage infiltration (127). Thus, elevated levels of one cytokine, for example TNF-α, can lead to an increase in other cytokines. These findings of elevated cytokines and their associated roles in GBM underline the importance of specific cytokines for the immune escape mechanisms and the tumor proliferation and invasiveness observed in GBM pathogenesis.
Toll-Like Receptors in PD and GBM

Toll-like receptors (TLRs) are receptors that recognize distinct molecular patterns like lipopolysaccharides, single- and double-stranded RNAs, hemagglutinin, viral proteins, etc. (128), and allow an appropriate immune response to be initiated. The TLR family consists of 10 members (TLR1-10) in humans with different expression profiles and ligands (129). TLR2 is essential for the recognition of peptidoglycans and lipoproteins, whereas TLR4 recognizes bacterial lipopolysaccharide (LPS) (130). TLR2 and TLR4 are the most important TLRs with regard to the innate immune response, as both are implicated in the recognition of endogenous ligands involved in the inflammatory response regardless of the source of infection (131). This is why the involvement of TLR2 and TLR4 in PD and GBM is discussed in the following. TLR2 and TLR4 are frequently upregulated in PD and downregulated in GBM, allowing the tumor cells to escape clearance by the innate immune system. TLR2 and TLR4 were shown to be upregulated in many α-synuclein-overexpressing or toxin-induced animal models (132)(133)(134)(135), and accumulating evidence from human studies further implicates these receptors in the pathogenesis of PD (136). Clinical studies revealed that TLR2 expression is increased in PD (137). It was shown that microglial TLR2 is increased in the substantia nigra and the hippocampus in the early stages of PD, but not during the late stages (138), while another study found that TLR2 is increased in the striatum of advanced PD patients (135). In contrast, GBM cancer stem cells downregulate TLR4 to evade immune suppression (139). Alvarado et al. showed that, in GBM, cancer stem cells have low TLR4 expression, which enables cell survival by avoiding inhibitory innate immune signaling (e.g., clearance by dendritic cells, cytotoxic T cells, and natural killer cells) that aims to suppress the self-renewal of GBM stem cells (140). This is why TLR agonists that trigger antitumoral immune signaling are being discussed as a therapy for GBM (141).

Mitochondria and Metabolism

Mitochondria and cellular metabolism are closely linked. Mitochondria host many enzymatic reactions of cellular metabolism like the tricarboxylic acid (TCA) cycle and oxidative phosphorylation (OXPHOS), which generate ATP from pyruvate in the presence of oxygen (Figure 2C). In age-related diseases like PD and GBM, damaged mitochondria lead to impaired cellular metabolism (142).

Cellular Metabolism in PD and GBM

The human brain, even though constituting only 2% of the total body weight, uses approximately 20% of the body's total oxygen consumption and 60% of our daily glucose intake (143). Furthermore, the brain needs a constant supply of glucose since it lacks fuel stores and cannot store glycogen. This is why cellular changes in glucose metabolism can have a high impact on brain cell homeostasis, proliferation, and viability. It was shown that glycolysis and mitochondrial functions like respiration are decreased in individuals with PD (Figure 2C) (144)(145)(146). In GBM, increased glycolytic activity results from certain oncogenic alterations like c-Myc amplification, PTEN deletion, or mutations in p53 (Figure 2C) (147,148). While mitochondrial dysfunction in PD can cause increased generation of ROS and subsequent oxidative damage (Figure 2C), it can also result in failing neuronal compensation of insufficient ATP generation (149).
Activation of glycolysis in neurons leads to excessive oxidative stress and apoptosis, suggesting that neurons are predominantly restricted to OXPHOS (150). In line with this, Hall et al. showed that the majority of ATP used by neurons is produced by OXPHOS (151). Powers et al. found that overexpression of α-synuclein in N27 dopaminergic cells resulted in an impairment of glycolysis and a reduction in glycolytic capacity and mitochondrial respiration (152). This is why an increase in glycolysis, as a compensatory mechanism for the neuronal energy failure induced by mitochondrial dysfunction in PD, eventually leads to neuronal cell death (153)(154)(155). Neurons also metabolize glucose via the pentose phosphate pathway (PPP) to maintain their antioxidant status (156). It was shown that inhibition of the PPP in neuronal cell models causes cell death (157). In rodents, PPP inhibition caused dopaminergic cell death and motor deficits that resemble parkinsonism (158). Using postmortem human brain tissue, Dunn et al. characterized glucose metabolism via the PPP in early sporadic PD and controls and observed a down-regulation of PPP enzymes in patients compared to controls (156). This observation suggests that the impairment of the PPP is an early event in sporadic PD (156). In the absence of oxygen, pyruvate can be metabolized into lactate, a process known as glucose fermentation or anaerobic glycolysis. Rapidly proliferating cells, such as cancer cells, also have the ability to ferment glucose into lactate even in the presence of abundant oxygen; this process is called aerobic glycolysis. It was observed decades ago that cancer cells, even in aerobic conditions, tend to favor metabolism via glycolysis rather than OXPHOS, which is preferred by most other cells. This phenomenon is called the Warburg effect (56,159). This is why, in contrast to PD neurons, GBM cells ferment glucose into lactate even in the presence of abundant oxygen (Figure 2C). Even though ATP production is less efficient in aerobic glycolysis when compared to ATP production via the complete oxidative metabolism of glucose, it is hypothesized that GBM cells use aerobic glycolysis to generate precursors for the anabolism they need to grow, while still generating enough ATP to sustain their cellular function (160). By modulating glycolysis and altering mitochondrial metabolism, GBM cells generate biomass, namely nucleotides, lipids, proteins, and NADPH, by using glycolytic/TCA intermediates (160). Knockdown of glycolytic genes strongly inhibits GBM growth, further emphasizing that glycolytic enzymes are essential for GBM growth (148). GBM cells also generate large amounts of lactate for several pro-tumor growth functions (161). Li et al. found that EGFR activation in GBM cells promotes the translocation of phosphoglycerate kinase 1 (PGK1) into mitochondria (162,163). In the mitochondria, PGK1 phosphorylates and activates pyruvate dehydrogenase kinase, which phosphorylates and thereby inhibits pyruvate dehydrogenase and thus mitochondrial pyruvate consumption, eventually leading to enhanced lactate production (162,163). In addition to aerobic glycolysis, GBM cells also utilize the TCA cycle and OXPHOS (160).
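As a rough orientation for the efficiency gap mentioned above (textbook approximations, not numbers from the works cited here): glycolysis alone yields about 2 ATP per glucose, while complete oxidation yields roughly 30-32 ATP per glucose, so a purely glycolytic cell must take up on the order of
$$\frac{\approx 30\ \text{ATP per glucose (oxidative)}}{\approx 2\ \text{ATP per glucose (glycolytic)}} \approx 15\times$$
more glucose to sustain the same ATP flux, which is consistent with the elevated glucose uptake and lactate output of GBM cells.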
The differential expression of metabolic genes in neurons and astrocytes might explain the differences in glycolysis and OXPHOS rates. For example, neurons lack 6-phosphofructo-2-kinase/fructose-2,6-bisphosphatase-3 (PFKFB3), since it is continuously degraded by the ubiquitin-proteasome pathway. PFKFB3 regulates the biogenesis and degradation of fructose-2,6-bisphosphate, a known glycolytic activator. In contrast, in astrocytes, PFKFB3 is activated by adenosine monophosphate-activated protein kinase (AMPK) and promotes glycolysis (149). In line with this, it was shown that the expression of PFKFB3 is higher in mouse astrocytes than in murine neurons due to proteasomal degradation in the neurons (164). In neurons, the activation of PFKFB3 results in enhanced glycolysis but eventually leads to cell death, since the neurons lose their ability to generate glutathione, an essential antioxidant involved in the management of oxidative stress. This means that, unlike astrocytes, neurons use glucose to maintain their antioxidant status and not for bioenergetic purposes (164). These findings might help to explain why PD neurons fail to increase their glycolysis rates and why increased glycolysis leads to sustained cell proliferation in astrocyte-originating GBM cells.

EPIDEMIOLOGY OF PD AND CANCER

Epidemiological evidence suggests that patients with PD have a reduced incidence of primary CNS tumors (165,166). In contrast, there are a few epidemiological studies that show a positive association of PD with benign and malignant brain tumors, but not specifically with GBM (167)(168)(169). However, the problem with these studies is that they do not distinguish between the types of brain cancer, e.g., meningioma or astrocytoma. The described increased risk of all types of brain cancers in PD might be caused by diagnostic misclassification and detection bias. The increased incidence of meningioma in PD patients, for example, might result from the fact that the symptoms can be wrongly diagnosed as a sign of PD if the intracranial tumor leads, for example, to a compression of the basal ganglia resulting in PD symptoms (170)(171)(172)(173). Moreover, a positive association of brain tumors and PD can be caused by detection bias, as brain tumors can be diagnosed during the clinical work-up for PD (174). Since patients diagnosed with parkinsonism are more likely to undergo magnetic resonance imaging at the time of diagnosis, this may explain a higher risk of detecting silent brain tumors (173,175). The close temporal association between the diagnosis of PD and the incidence of brain tumors further suggests that brain tumors might be misdiagnosed as PD or vice versa (176). Specifically for GBM, as it is lethal, it is difficult to study PD in individuals who survived GBM. This is why future studies should focus on evaluating the risk of GBM in PD patients. Interestingly, there is an increased risk of melanoma in PD patients compared to controls (177)(178)(179). In 1985, Dr. Rampen reported a 55-year-old male with PD who developed a local recurrence of a primary melanoma and multiple primary melanomas 4 years after primary excision and 4 months after starting levodopa (180). An increased risk of malignant melanoma in PD patients has since been confirmed in many studies (8,176,181,182). Several hypotheses could account for this association. Since levodopa is a metabolite in the biosynthesis of dopamine and melanin, which involves the enzyme tyrosinase, and increased tyrosinase activity is found in melanoma, it was initially hypothesized that levodopa could enhance and stimulate the growth of any residual melanoma tissue (183). However, recent studies have refuted a causal association for several reasons (178,184).
In particular, the observation that the risk of melanoma is increased in PD patients before diagnosis argues against an effect of levodopa. Additional explanations may be the existence of shared genetic or environmental factors, or the common embryonic origin of melanocytes and neurons from neural crest cells (178,185). In addition, mechanistic links caused by common mutations or other alterations in a number of genes or proteins in PD and melanoma could explain the co-occurrence of PD and melanoma (184). Common mechanisms that are dysregulated in PD and melanoma are, for example, cellular detoxification, melanin biosynthesis, and the oxidative stress response (184). Future studies should investigate the underlying mechanisms of the decreased risk of some cancers and the increased risk of other cancers like melanoma in PD patients.

CONCLUSION

PD and GBM are two highly complex disease entities characterized by multiple cellular changes. Similar mutations within the same gene, for example Parkin (25), can have inverse effects, depending on whether they are germline or somatic mutations and depending on the type of cell in which they occur: a dividing cell in GBM or a post-mitotic neuron in PD. One could hypothesize that neurons are primarily unaffected in GBM due to their post-mitotic state. In contrast, somatic mutations causing tumorigenesis can spread through proliferating astrocytes. Another inverse association of PD and GBM that requires future causal investigation is the time frame of the pathophysiology of both diseases. While PD is a chronic, generally slowly progressing neurodegenerative disease characterized by gradual neuronal loss, GBM is a rapidly progressing disease with rapid proliferation of glial cells in a much shorter time frame. A possible explanation for these observations is that in PD the neuronal loss can be compensated for a long time, whereas the aggressiveness of GBM, due to highly infiltrative, growing and metastasizing cells that also display a vast cell heterogeneity, leads to a rapid disease progression. In this review, we showed that there are common pathogenic mechanisms involved in PD and GBM, including inversely deregulated pro-survival and immune signaling, mitochondrial dysfunction, and metabolic alterations. There is an inverse regulation of p53, EGF(R), PTEN/PI3K/Akt, DJ-1, and HIF-1α in PD and GBM. Due to the complexity of both PD and GBM etiology and pathogenesis, future studies need to unveil so far unknown mechanisms of both diseases that will help to better understand and compare both diseases and to explain why common, inversely dysregulated cellular pathways can lead to two such different diseases. Eventually, a deeper understanding of the pathological mechanisms underlying PD and GBM will guide the identification of possibly shared drug targets that need to be modulated inversely for causative treatment of both diseases.

AUTHOR CONTRIBUTIONS

PM wrote the review. ZH, IB, AE, P-ES, and RK advised, structured, and reviewed. All authors contributed to the article and approved the submitted version.
8,626.8
2020-08-20T00:00:00.000
[ "Biology" ]
Generation of highly pure Schrödinger's cat states and real-time quadrature measurements via optical filtering

Until now, Schrödinger's cat states have been generated by subtracting single photons from the whole bandwidth of squeezed vacua. However, it was pointed out recently that the achievable purities are limited in such a method (J. Yoshikawa, W. Asavanant, and A. Furusawa, arXiv:1707.08146 [quant-ph] (2017)). In this paper, we used our new photon subtraction method with a narrowband filtering cavity and generated a highly pure Schrödinger's cat state with the value of $-0.184$ at the origin of the Wigner function. To our knowledge, this is the highest value ever reported without any loss corrections. The temporal mode also becomes exponentially rising in our method, which allows us to make a real-time quadrature measurement on Schrödinger's cat states, and we obtained the value of $-0.162$ at the origin of the Wigner function.

Introduction

Generation of highly nonclassical states with high purity and fidelity is challenging but necessary for quantum information processing. One of the states with such high nonclassicality is a quantum superposition of macroscopically distinguishable states, the so-called Schrödinger's cat state, with the name taken from the famous Schrödinger's cat paradox. The optical Schrödinger's cat states refer to superpositions of coherent states with a phase difference of π and can be written as $|\Psi_{\mathrm{cat},\pm}\rangle \propto |\alpha\rangle \pm |-\alpha\rangle$, where $|\Psi_{\mathrm{cat},\pm}\rangle$ corresponds to the plus and minus Schrödinger's cat states, respectively. Cat states are not only of interest in fundamental quantum physics, but also possess many potential applications, such as quantum computation [1][2][3][4], entanglement distribution and quantum key distribution [5][6][7], and quantum metrology [8]. In optical continuous-variable (CV) quantum information processing, measurement-based quantum computation (MBQC) is currently the most promising method in terms of operation implementation and scalability [9,10]. Therefore, the generation of high-fidelity, large-amplitude cat states and the combination of such states with MBQC will take us a step closer to the realization of quantum information processing. The first proposition for the generation of cat states was via the Kerr effect in a nonlinear medium [11]. However, the Kerr effect is usually weak. Instead, generation by photon subtraction [12,13] is widely used due to its simplicity of implementation. In photon subtraction, a single photon is tapped from a squeezed vacuum, and the generated state is heralded by the detection of the subtracted single photon (Fig. 1(a)). One can generate states with high fidelity to single photon states, cat states, and squeezed single photon states, depending on the initial squeezing level. The cat states generated by this method are usually limited to the case where the amplitude is small, i.e., $|\alpha|^2 \approx 1$. The generation of cat states using photon subtraction has already been demonstrated in both the pulsed regime [14] and the continuous-wave (CW) regime [15,16]. Two-photon subtraction [17] and three-photon subtraction [18] have also been experimentally demonstrated. Moreover, more complex quantum states, such as parity qubits based on cat states [4] and entanglement between single-rail qubits and cat states [19], have also been generated by methods which involve photon subtraction. However, in the recent work by Yoshikawa et al.
[20], it was shown that the frequency bandwidth of the subtracted photons, which had not been given much attention, affects the purity of the generated cat states. In the previous demonstrations of the generation of cat states [14][15][16], where the whole frequency bandwidth of the squeezed vacua was used for photon subtraction (Fig. 1(b)), the achievable purities are limited by the inherent impurities of the initial squeezed vacua. This is due to the fact that when the cat states are heralded by the detection of subtracted single photons, the heralded states exist in a wave packet. In the frequency domain, the squeezed vacuum is pure at every frequency. However, when we consider a squeezed state in a wave packet in the time domain, the squeezed vacuum becomes impure, and this impurity affects the subsequently generated cat states. Also, the inherent impurity in the squeezing level due to the shape of the wave packet limits the squeezing level before the photon subtraction. In typical photon subtraction, where the temporal mode is a double-sided exponential, such impurity is calculated to be equivalent to 10% loss [20]. Therefore, roughly speaking, conventional photon subtraction can be performed on squeezed vacua with a squeezing level of at most about 10 dB. On the other hand, our method does not have such a limitation. As mentioned before, MBQC is currently the most realistic method for the realization of practical quantum information processing. In MBQC, the input states are entangled with the resource state called a cluster state, and the operations are carried out by measurements on each mode of the cluster state. When the modes are measured, the measurement results need to be fed forward to the next step of the operations. Linear feed-forward operations for each measurement can be postponed and performed after all measurements have been finished [21]. On the other hand, nonlinear feed-forward operations cannot be postponed and have to be done before the next operation step. This means that the acquisition of the quadrature values must be performed in real time in order to realize efficient nonlinear feed-forward operations. Therefore, real-time quadrature measurement is an important piece of universal MBQC, since such nonlinear feed-forward operations are necessary for non-Gaussian operations [22], which will allow us to perform quantum information processing that surpasses classical computation [23,24]. Real-time quadrature measurement of single photon states has already been demonstrated by generating single photon states with an exponentially rising temporal mode [25]. However, in that method, a non-degenerate and asymmetric optical parametric oscillator (OPO) is used for the temporal mode shaping. Therefore, the method is not applicable to quantum states that need a degenerate OPO in the generation, such as cat states. Another method of temporal mode shaping using optical cavities has already succeeded in the case of single photon states [26,27]. However, there has not been any demonstration of the temporal mode shaping of quantum states with multiple photons or with phase information. In this paper, we use a narrowband filtering cavity in the subtracted photon path to limit the frequencies of the subtracted photons to a range where the squeezing level is almost constant (Fig. 1(c)). By doing so, we can generate pure cat states [20].
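The 10% figure can be made plausible with a short numerical sketch (ours, with illustrative parameters; this is a simplified model, not the calculation of [20]): the wave-packet mode samples the Lorentzian squeezing spectrum away from its center, and the resulting variance can be recast as an equivalent loss.

```python
import numpy as np

f_hwhm = 10e6   # OPO half-width at half-maximum [Hz] (illustrative)
xi = 0.5        # normalized pump amplitude (illustrative)
f = np.linspace(-500e6, 500e6, 2_000_001)   # frequency grid [Hz]

# Lossless squeezed-quadrature spectrum of an OPO.
S = 1 - 4 * xi / ((1 + xi) ** 2 + (f / f_hwhm) ** 2)

# Spectral power weight of a double-sided exponential mode e^{-gamma|t|}:
# its Fourier transform is a Lorentzian, so the weight is a squared Lorentzian.
gamma = 2 * np.pi * f_hwhm
w = 1.0 / (gamma ** 2 + (2 * np.pi * f) ** 2) ** 2
w /= np.trapz(w, f)

S_eff = np.trapz(w * S, f)        # variance seen by the wave-packet mode
S0 = S[len(f) // 2]               # best-case variance at the spectrum center
eta = (S_eff - S0) / (1 - S0)     # equivalent loss: S_eff = (1-eta)*S0 + eta*1
print(f"S(0) = {S0:.3f}, S_eff = {S_eff:.3f}, equivalent loss ~ {eta:.1%}")
```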
In the previous demonstrations of cat states [14][15][16], filtering cavities were already used for filtering out non-degenerate photon pairs. However, the frequency bandwidth of the filtering cavity is much wider than that of the OPO, so that the raw photon correlation of the OPO can be utilized. In the time domain, spectral filtering of the subtracted photons is equivalent to temporal mode shaping of the cat states. In previous demonstrations, the temporal mode of the cat states becomes a double-sided exponential due to the photon correlation of the OPO. On the other hand, in our method, the temporal mode becomes exponentially rising due to the frequency response of the narrowband filtering cavity. By utilizing the exponentially rising temporal mode, we can realize the real-time measurement of cat states. The realization of real-time measurement of cat states will facilitate the usage of cat states in MBQC, thus expanding the potential of MBQC.

Experimental setup

We used a broadband OPO and a narrowband filtering cavity to demonstrate the generation of highly pure cat states by optical filtering (Fig. 1(c)). In [28], a cat state was generated with an OPO with a bandwidth of approximately 10 MHz. Therefore, in order to demonstrate photon subtraction with a narrow bandwidth using an OPO with the same specification, we would have to use a filtering cavity with a much narrower bandwidth (for example, 1 MHz). In that case, the generation rate would drop drastically, which is not favorable experimentally, and the effect of low-frequency noises such as laser noise might also become more apparent. To avoid these problems, we made a broadband OPO and a filtering cavity whose bandwidth is narrow compared to this OPO (Fig. 1(c)). The schematic diagram of the experimental setup is shown in Fig. 2. A CW Ti:Sapphire laser (M-squared, Sols:TiS) whose wavelength is 860 nm is used as the light source for this experiment. A 430 nm CW pump beam for the OPO is produced by a bow-tie shaped second harmonic generator. A rectangular-shaped reference cavity (not shown in Fig. 2) between the second harmonic generator and the OPO is provided for matching the transversal modes of the pump beam and the OPO. The OPO used in this experiment is a triangle-shaped cavity. A periodically poled KTiOPO4 (PPKTP) crystal is placed inside the OPO and used as a nonlinear medium. A beamsplitter with reflectivity of R = 0.97 is used for tapping a single photon from the squeezed vacuum. There are three filtering cavities (FC-1, FC-2, FC-3) on the subtracted photon path. FC-1 and FC-3 are Fabry-Pérot cavities with large free spectral ranges (FSR) and act as frequency filters that filter out unwanted non-degenerate photon pairs. On the other hand, FC-2 is a triangle-shaped narrowband cavity with a full-width half-maximum (FWHM) much narrower than that of the OPO. This cavity is used to limit the frequency bandwidth of the subtracted photons. The detailed parameters of all the filtering cavities are shown in Table 1. We put an isolator between FC-1 and FC-2 to prevent coupling between FC-1, FC-2, and FC-3. We also put another isolator between the OPO and FC-1 to prevent the lock beams of the filtering cavities from reaching the OPO. To characterize the generated states, we performed homodyne measurements. The homodyne detector used here has a wide bandwidth of approximately 100 MHz, with photodiodes that have a quantum efficiency higher than 98% [29]. The characteristics of the homodyne detector used in this experiment are shown in Fig. 3. Matching the transversal modes between a local oscillator (LO) and the measured beams is important in homodyne detection.
To do so, we put the LO beam through a mode cleaning cavity, which filters the transversal mode of the LO into TEM00, resulting in an interferometric visibility of 99% in the homodyne measurement. A phase reference beam is introduced into the OPO for locking the phase between the generated states and the LO at the homodyne detector. We used two acousto-optic modulators (AOMs) to shift the frequency of the phase reference beam by 100 kHz for phase locking. When the phase reference beam goes through the OPO, due to parametric amplification, electric fields at frequencies of ±100 kHz relative to the carrier frequency are generated, and the resultant electric field vector elliptically oscillates in the complex plane of the frame rotating with the carrier frequency, whereas the electric field would rotate circularly in the case of detuning without parametric amplification. If we look at the intensity of the amplified probe beam, the intensity oscillates at 200 kHz. By demodulating this intensity, we can lock the relative phase between the phase reference beam and the pump beam. Moreover, when the parametrically amplified, detuned phase reference beam and the LO interfere, the resultant intensity oscillates at a frequency of 100 kHz. We can lock the phase between the probe beam and the LO by properly demodulating the interference signal. This demodulation is not trivial because the complex amplitude of the phase reference beam is elliptic due to parametric amplification. By properly locking the relative phase between the phase reference beam and the LO and between the phase reference beam and the pump beam, we indirectly lock the relative phase between a cat state and the LO. Since we are locking relative phases by demodulating the beat signals, we can lock at any arbitrary relative phase by changing the phase of the demodulation signal. This locking method using a parametrically amplified, detuned probe beam gives an error signal with a better S/N ratio than the usual method using phase modulation, because the size of the error signal is proportional to the strength of the carrier, not the strength of the phase modulation. We used the Pound-Drever-Hall technique [30] for locking the second harmonic generator. An electro-optic modulator (EOM) is put in front of the second harmonic generator for the generation of the error signal. For the OPO and the filtering cavities, we used the tilt locking technique [31]. We also shifted the frequency of the cavity lock beams by 200 kHz to prevent unwanted interference with other beams. In this experiment, the phase reference beam and the cavity lock beams (collectively called control beams) are turned on and off periodically by AOMs. Besides the two AOMs in the paths of the phase reference beam and the cavity lock beams, there are two more AOMs common to both beams in order to increase the extinction ratio of the control beams. All the aforementioned cavity locking and phase locking are performed by electrical feedback using servo amplifiers when the control beams are turned on. When the control beams are turned off, the optical system is held in the same state as before the control beams were turned off. Using this periodic switching, we prevent the control beams from reaching the avalanche photodiode (APD), which would result in fake clicks that degrade the purity of the cat states.
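The demodulation used for these phase locks is a standard lock-in scheme; the following is a minimal sketch (ours; sampling rate, duration, and phase value are made up) of how mixing the 100 kHz beat with a reference and low-pass filtering yields a phase error signal, and how shifting the reference phase selects the lock point.

```python
import numpy as np

fs = 10e6                       # sampling rate [Hz] (illustrative)
t = np.arange(0, 10e-3, 1 / fs)
f_beat = 100e3                  # beat between phase reference beam and LO

phase_error = 0.3               # relative phase to be detected [rad]
beat = np.cos(2 * np.pi * f_beat * t + phase_error)   # detected beat signal

# Lock-in demodulation: multiply by a reference at the beat frequency and
# low-pass filter (here: a simple average over many full beat periods).
ref = np.sin(2 * np.pi * f_beat * t)
mixed = beat * ref              # contains a DC term -sin(phase_error)/2
error = mixed.mean() * 2        # crude LPF -> error signal for the servo

print(f"error signal ~ {error:.3f}, -sin(phase) = {-np.sin(phase_error):.3f}")
```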
Experimental results

For the rest of the paper, we refer to the measurement where we digitally integrate the electric signal with the temporal mode as post processing, as opposed to the real-time measurement where the electric signal is continuously integrated with the temporal mode by a low-pass filter (LPF) [25]. In this paper, we performed photon subtraction from squeezed vacua with three different squeezing levels, and generated three types of quantum states: a single photon state, a cat state, and a squeezed single photon state. To characterize each state, we performed homodyne measurements at 37 phases from 0 degrees to 180 degrees in 5-degree steps, and we collected data of 10,000 events for each phase. The electric signal from the homodyne detector is split into two paths; we put an LPF for the real-time measurement in one of the paths and simultaneously record the unfiltered and filtered electric signals using an oscilloscope. To extract the quadrature values from the CW homodyne measurement, we need to integrate the electric signals from the homodyne detector with the temporal mode [32]. Also, in order to design and make an LPF for the real-time measurement, we need to know the shape of the temporal mode of the cat states beforehand. This means that it is important that we estimate the temporal mode correctly. Using the obtained quadrature values, we estimate the density matrices and Wigner functions for both measurements using the maximum likelihood method [33,34]. In this section, we show the experimental results in the following order. First, we show the estimation of the temporal mode of the generated states. Then, we show the Wigner functions estimated from the quadrature values which are obtained by using the estimated temporal mode. We can evaluate the quality of the states generated via photon subtraction by the value at the origin of the Wigner functions, which we call the Wigner negativity. In the ideal case, the Wigner negativity is $W(0, 0) = -1/\pi \approx -0.318$, which is due to the fact that the states generated by subtracting a single photon from a squeezed vacuum have only odd photon numbers [12]. We also evaluate whether the effects of the experimental imperfections are consistent with the degradation of the Wigner negativities of the estimated Wigner functions. Finally, to verify the success of the real-time measurement, we show the correlation plot between the quadrature values of the post processing and the real-time measurement. As a qualitative indicator, we also show a screen capture of an oscilloscope recording the electric signals from the homodyne detector. First, let us look at the temporal mode of the generated states. The temporal mode of the states generated by photon subtraction depends on the response functions of the OPO and the filtering cavities. For the OPO, the temporal mode localized around time $t_0$ is derived from the time correlation function and has the following form [35]:

$$f_{\mathrm{OPO}}(t; t_0) = \sqrt{\gamma_{\mathrm{OPO}}}\, e^{-\gamma_{\mathrm{OPO}} |t - t_0|}, \tag{1}$$

where $\gamma_{\mathrm{OPO}}$ corresponds to the bandwidth of the OPO and can be written as

$$\gamma_{\mathrm{OPO}} = 2\pi f_{\mathrm{HWHM}}, \tag{2}$$

where $f_{\mathrm{HWHM}}$ is the half-width at half-maximum of the OPO. For the filtering cavities, the response function is equivalent to that of an ideal Lorentzian filter and can be expressed in the time domain as [35]

$$f_{\mathrm{filter}}(t) = \Theta(t)\, e^{-\gamma_{\mathrm{filter}} t}, \tag{3}$$

where $\Theta(t)$ is the Heaviside step function and $\gamma_{\mathrm{filter}}$ corresponds to the bandwidth of the filtering cavities. In our case, we have three filtering cavities. Therefore, the temporal mode of the generated states becomes a time convolution between the response of the OPO expressed by Eq. (1) and those of the three filtering cavities expressed by Eq.
(3), with $\gamma_{\mathrm{filter}}$ taking the value corresponding to each cavity. After performing the time convolution of the response of the OPO and the three filtering cavities, the ideal temporal mode $f_{\mathrm{ideal}}(t; t_0)$ can be expressed as

$$f_{\mathrm{ideal}}(t; t_0) = N \left[ \sum_{i=1}^{3} c_i\, \Theta(t_0 - t)\, e^{-\gamma_i (t_0 - t)} + c_4\, e^{-\gamma_4 |t - t_0|} \right], \tag{4}$$

where $N$ is a normalization constant, $i = 1, 2, 3$ corresponds to the filtering cavities, $i = 4$ corresponds to the OPO, and $\gamma_i$ corresponds to the bandwidth of each cavity. Here $c_1 = 2\gamma_4(\gamma_3 - \gamma_2)/(\gamma_4^2 - \gamma_1^2)$, and $c_2$ and $c_3$ are given by cyclic permutation; the expressions for $c_4$ and $N$ follow from the same convolution. From Eq. (4) it is clear that $f_{\mathrm{ideal}}(t; t_0)$ is not a perfect exponentially rising temporal mode. However, when one of the $\gamma_i$ of the filtering cavities is much smaller than that of the OPO, which corresponds to the case where there is a narrowband filtering cavity, that term becomes dominant in Eq. (4) and the temporal mode becomes close to an exponentially rising one. We used independent component analysis (ICA) [36], where the non-Gaussianity induced via photon subtraction is utilized to find a set of independent modes and estimate the temporal modes of the generated states. We measured the quadrature at the phase that corresponds to the largest variance and used the quadrature data of 10,000 events at that phase in the temporal mode estimation. Another method similar to ICA is estimation via principal component analysis (PCA) [37,38], which makes use of the fact that the variances of the heralded states are larger than those of the initial squeezed vacua. The temporal mode of the cat state is shown in Fig. 4. The solid curve is the estimated temporal mode and the dashed curve is the theoretical prediction from the experimental parameters using Eq. (4). The inner product between these two curves is 0.996. The dotted curve is the temporal response of the 3rd-order LPF that is designed for the real-time measurement. The inner product of this with the estimated temporal mode is approximately 0.988. From these results, it is clear that the temporal mode of the generated states is consistent with the theoretical prediction and that the designed LPF is also consistent with the temporal mode of the generated states. We integrated the electric signal from the homodyne detector with this estimated temporal mode to obtain the quadrature values for post processing. For the real-time measurements, we used the LPF which is designed to have the response shown in Fig. 4. Figure 5 shows the squeezing spectra of the initial squeezed vacua used for the generation of the states. The squeezing spectra are described by the following equation [35]:

$$S_{\pm}(f) = 1 \pm (1 - L)\, \frac{4\xi}{(1 \mp \xi)^2 + (f / f_{\mathrm{HWHM}})^2},$$

where $\xi$ is the normalized pump amplitude, $L$ is the total external loss, and $f_{\mathrm{HWHM}}$ is the bandwidth of the OPO. In this experiment, we used squeezed vacua with $\xi = 0.11, 0.25, 0.39$ for the generation of quantum states with high fidelity to a single photon state, a cat state, and a squeezed single photon state, respectively. The solid lines in Fig. 5 show the theoretical plots of the squeezing spectra, and the losses in this experiment are as follows: 1.2% propagation loss, 2% loss due to the quantum efficiency of the photodiodes, 1% loss due to the circuit noise of the homodyne detector, 2% loss due to the visibility at the homodyne measurement, and 2.1% loss due to the escape efficiency of the OPO. Therefore, the total experimental loss in this experiment is 8.3%. In the measurement of the squeezing spectra there is an extra 3% loss due to the R = 0.97 beamsplitter used in photon subtraction.
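The structure of Eq. (4) can be checked numerically; below is a short sketch (ours, with illustrative bandwidths rather than the values of Table 1) that convolves the double-sided exponential of Eq. (1) with three one-sided filter responses of the form of Eq. (3) and recovers a nearly exponentially rising mode when one filter is much narrower than the OPO.

```python
import numpy as np

dt = 1e-9                                  # time step [s]
t = np.arange(-5e-6, 5e-6, dt)

def opo_mode(t, f_hwhm):
    """Double-sided exponential of Eq. (1), gamma = 2*pi*f_HWHM."""
    g = 2 * np.pi * f_hwhm
    return np.exp(-g * np.abs(t))

def filter_response(t, f_hwhm):
    """One-sided exponential impulse response of Eq. (3)."""
    g = 2 * np.pi * f_hwhm
    return np.where(t >= 0, np.exp(-g * t), 0.0)

mode = opo_mode(t, 60e6)                   # broadband OPO (illustrative)
for f_c in (150e6, 1.5e6, 150e6):          # two wide filters, one narrow FC-2
    mode = np.convolve(mode, filter_response(t, f_c), mode="same") * dt

mode /= np.sqrt(np.sum(mode ** 2) * dt)    # normalize the temporal mode
# Time-reversing maps the cavity response to the heralded mode, which then
# rises exponentially toward the heralding click at t = 0.
mode = mode[::-1]
print("mode peaks at t =", t[np.argmax(mode)])
```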
Figure 6 shows the Wigner functions and photon-number probability distributions for post-processing and the corresponding real-time measurements. Figure 6(a) shows the post-processing measurement results of photon subtraction from squeezed vacua with ξ = 0.11, which corresponds to a pump power of 5 mW and an initial squeezing level of 1.0 dB near the DC component. The generation rate of this state is about 1,200 counts per second (cps). Because the squeezing level is low in this case, the generated state possesses high fidelity to a single-photon state. The single-photon fraction of the generated state is 0.74, and the Wigner negativity of the Wigner function is −0.176. However, we can see from the Wigner function that this state is slightly squeezed. This means that although the generated state consists mostly of the single-photon component, it is not perfectly a single photon, and this slight squeezing is due to the initial squeezing level. Therefore, we would need to lower the squeezing level even further to generate a pure single photon with this method, which would result in a lower generation rate and make the effect of fake counts more apparent.

Figure 6(b) shows the post-processing measurement results of photon subtraction from squeezed vacua with ξ = 0.25, which corresponds to a pump power of 25 mW and an initial squeezing level of 3.0 dB near the DC component, for the generation of a quantum state with high fidelity to the minus Schrödinger's cat state |Ψ_cat,−(α)⟩ ∝ |α⟩ − |−α⟩ with |α|² = 1 after photon subtraction [13]. The generation rate of this state is about 4,800 cps. The Wigner function of our cat state possesses a Wigner negativity of −0.184, which is the highest value ever observed without loss correction. The fidelity F to the cat state is calculated by F = ⟨Ψ_cat,−(α)|ρ̂|Ψ_cat,−(α)⟩, where ρ̂ is the density matrix of our state. The resulting fidelity is F = 0.782 to the Schrödinger's cat state with |α|² = 1.02, which is also the highest value ever observed without loss correction. Since a Schrödinger's cat state |Ψ_cat,−⟩ possesses only odd photon numbers, the highest possible fidelity equals the sum of the odd-photon-number probabilities, which in our case is 78.8%. This means that the odd-photon subspace of this state has very high fidelity to a Schrödinger's cat state.

Figure 6(c) shows the post-processing measurement results of photon subtraction from squeezed vacua with ξ = 0.39, which corresponds to a pump power of 60 mW and an initial squeezing level of 5.0 dB near the DC component. The generation rate is about 16,000 cps. The Wigner function of the squeezed single-photon state has a Wigner negativity of −0.121, which means that this state also possesses high nonclassicality. The degradation of the Wigner negativity of the state with ξ = 0.39 is larger than that of the states with ξ = 0.11 and 0.25. This is due to the fact that the state with ξ = 0.39 has a larger mean photon number than the other two states, as we will discuss in more detail below. In [16], photon subtraction was carried out with an initial squeezing level of 3.7 dB as a demonstration of photon subtraction from highly squeezed vacua. In that experiment, the four-photon probability is larger than the five-photon probability. On the other hand, even with large squeezing, the odd-photon probabilities are larger than the adjacent even-photon probabilities in our experiment, and we can clearly observe large photon-number components. We believe that this is the effect of narrowband filtering [20], which becomes more apparent as the initial squeezing level increases.
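The Wigner negativity, the odd/even photon-number split, and the cat-state fidelity used above can all be read off from a Fock-basis density matrix, since W(0,0) = (1/π) Σ_n (−1)^n ρ_nn. A minimal sketch; the diagonal density matrix below is a made-up stand-in, not the experimental data:

```python
import numpy as np
from math import factorial

def wigner_origin(rho):
    """W(0,0) = (1/pi) * sum_n (-1)^n rho_nn."""
    n = np.arange(rho.shape[0])
    return np.sum((-1.0) ** n * np.real(np.diag(rho))) / np.pi

def odd_even_probabilities(rho):
    p = np.real(np.diag(rho))
    return p[1::2].sum(), p[0::2].sum()   # (odd, even)

def odd_cat_fock(alpha, dim):
    """Fock coefficients of the odd cat |alpha> - |-alpha> (real alpha), truncated."""
    c = np.array([alpha**n / np.sqrt(factorial(n)) if n % 2 else 0.0 for n in range(dim)])
    return c / np.linalg.norm(c)

# Stand-in diagonal state: 74% single photon plus vacuum and higher admixtures.
p = np.array([0.18, 0.74, 0.05, 0.02, 0.01])
rho = np.diag(p / p.sum())

print("W(0,0) =", wigner_origin(rho))           # negative when odd terms dominate
print("(P_odd, P_even) =", odd_even_probabilities(rho))
cat = odd_cat_fock(1.0, 5)                       # |alpha|^2 = 1, truncated basis
print("F_cat =", float(cat @ rho @ cat))
```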
Next, we check whether the experimental imperfections and the Wigner negativities of the estimated Wigner functions are consistent with each other. From the estimated states, we calculate the sums of the even-photon-number probabilities to be 22.4%, 21.2%, and 31.0% for ξ = 0.11, 0.25, and 0.39, respectively. For the ideal states generated by photon subtraction, these values would be 0; there are two reasons for the deviation from the ideal value: first, photon losses and detection losses; second, imperfections of the photon subtraction. The former corresponds to contamination of the generated states by vacuum, while the latter corresponds to mixing between the generated states and the initial squeezed vacua. We refer to the former as loss and to the latter as mixedness to differentiate between these two types of degradation of the Wigner negativity. In the case of losses, the sum of the even-photon-number probabilities increases as the mean photon number of the generated states becomes higher. This is because photon losses from Fock states scale with the number of photons, i.e. â|n⟩ = √n |n−1⟩, where |n⟩ is a photon number state and â is an annihilation operator. On the other hand, the mixedness measures how much of the squeezed vacuum is mixed into the generated states; thus the increase of the sum of the even-photon-number probabilities due to mixedness does not depend on the mean photon number.

As mentioned before, the total loss in this experiment is 8.3%. However, in addition to losses, there are mixednesses arising from the imperfections of the photon subtraction. The main sources of mixedness are fake counts and two-photon subtraction. Fake counts can be measured directly, and the probability of two-photon subtraction can be calculated using the normalized pump amplitude ξ and the reflectivity R of the beamsplitter used in photon subtraction [12]. Here, we ignore the probability of subtracting more than two photons. The mixednesses are shown in Table 2. These imperfections mix the generated states with even-photon-number states, which degrades the Wigner negativities.

From both losses and mixednesses, let us approximate the ratio of even photon numbers from the experimental parameters. Using the squeezing level (equivalently, the normalized pump amplitude ξ) and the reflectivity R of the beamsplitter, we can calculate the ideal states generated by photon subtraction [12]. Then, by applying photon loss and mixing with squeezed vacua due to the mixednesses to the ideal states, we can estimate the theoretical ratio of even photons in each state from the experimental parameters. For simplicity of the estimation, we assumed that one-photon losses are dominant and ignored multiple-photon losses. Calculating this for each state, the estimated sums of the even-photon-number probabilities are approximately 14%, 16%, and 23%, while the experimental results are 22.4%, 21.2%, and 31.0%, respectively.
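A toy sketch of the kind of estimate just described: a loss channel applied to a photon-number distribution, followed by incoherent admixture of a squeezed-vacuum-like distribution. The input distributions are placeholders, and the exact binomial loss channel is used here instead of the one-photon-loss approximation made in the text:

```python
import numpy as np
from math import comb

def apply_loss(p, L):
    """Exact binomial photon-loss channel on a photon-number distribution p."""
    q = np.zeros_like(p)
    for m, pm in enumerate(p):
        for n in range(m + 1):
            q[n] += comb(m, n) * (1 - L) ** n * L ** (m - n) * pm
    return q

def mix(p_state, p_sqz, w):
    """Mixedness: incoherent admixture of the squeezed-vacuum distribution with weight w."""
    return (1 - w) * p_state + w * p_sqz

# Placeholder distributions (not the experimental data): an odd-photon state
# and a squeezed-vacuum-like even-photon distribution.
p_ideal = np.array([0.0, 0.80, 0.0, 0.15, 0.0, 0.05])
p_sqz   = np.array([0.85, 0.0, 0.12, 0.0, 0.03, 0.0])

p = mix(apply_loss(p_ideal, 0.083), p_sqz, w=0.05)
print("even-photon probability:", p[0::2].sum())
```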
The estimated values and the experimental results are in rough agreement with each other; the difference might come from parameters that we cannot measure directly or accurately, such as the quantum efficiency of the photodiodes, or from factors we cannot take into account, such as the accuracy of the state tomography. Also, the theoretical squeezing spectra in Fig. 5, though in good agreement with the measurement results, show less loss than the experimentally measured spectra. This indicates that the estimated losses are too small, which explains part of the difference in the estimated even-photon-number probabilities.

Figures 6(d), 6(e), and 6(f) show the results of the real-time measurement for ξ = 0.11, 0.25, and 0.39, respectively. Qualitatively, the Wigner functions are in good agreement with the results of post-processing. The Wigner negativities of the three states are −0.154, −0.162, and −0.102. Although the Wigner negativities of the real-time measurement are still high, they are smaller in magnitude than the corresponding post-processing results. However, the difference in the Wigner negativity, which is approximately 0.02, is the same for all states. This indicates that the degradation of the Wigner negativities is caused by a common source. When the electric signal from the homodyne detector is integrated with a mode that is not perfectly matched to the temporal mode of the generated states, the unmatched portion picks up the squeezed vacuum instead of the state generated by photon subtraction. In this experiment, the mode mismatch between the LPF and the temporal mode corresponds to 1 − 0.988² ≈ 2.4% extra mixedness in the real-time measurement, which raises the value at the origin by about 0.015 relative to the post-processing results. Thus, the mode mismatch between the LPF and the temporal mode adds additional mixedness in the real-time measurement. As mentioned before, we can qualitatively verify the success of the real-time measurement from the correlation between the quadrature values of the two measurements and from the oscilloscope traces of the homodyne signals.

Conclusion

We generated highly pure Schrödinger's cat states with a new method based on optical filtering. It has been shown that this method has no theoretical limit on the purity of the generated states, as opposed to previous photon subtraction. We succeeded in the generation of a Schrödinger's cat state with a record Wigner negativity of −0.184 without any loss corrections. We also succeeded in the generation of a single-photon-like state and a squeezed single-photon state with this method. From another perspective, optical filtering can be considered as temporal mode shaping; with our method the temporal mode becomes exponentially rising, which allows us to demonstrate a real-time quadrature measurement. In the real-time measurement, the states we generated showed high nonclassicality, and the quadrature values are highly correlated with those of the post-processing. This indicates that we succeeded in the real-time measurement of cat states, which is the first real-time quadrature measurement of quantum states with multiple photons and phase sensitivity. This method can easily be extended to any quantum state that can be generated by photon subtraction or heralding, and it allows us to perform photon subtraction from highly squeezed vacua. Therefore, this method is the most promising and realistic approach to the generation of highly pure quantum states and, at the same time, to temporal mode shaping for applications in MBQC.
We expect that photon subtraction with narrowband optical filtering will become a new basis for the generation of even more complex, highly pure nonclassical states, which will accelerate the development of quantum information processing.

Funding

This work was partly supported by CREST (JPMJCR15N5) of the Japan Science and Technology Agency (JST), KAKENHI of the Japan Society for the Promotion of Science (JSPS), and APSA of the Ministry of Education, Culture, Sports, Science and Technology of Japan (MEXT).
Factors of HOMFLY polynomials

We study factorizations of HOMFLY polynomials of certain knots and oriented links. We begin with a computer analysis of knots with at most 12 crossings, finding 17 non-trivial factorizations. Next, we give an irreducibility criterion for HOMFLY polynomials of oriented links associated to 2-connected plane graphs.

Introduction

Several properties of knots and links are encoded using polynomial invariants. Many of the properties of these polynomials are of a combinatorial nature, such as the degree, or coordinate dependent, such as special evaluations. For a few examples, see the Morton-Franks-Williams inequality [FW87, Mor86, Mor88], the slope conjecture [Gar11], some evaluations of link polynomials [LM86], and degree computations [vdV].

In this paper, we propose to study a geometric property: irreducibility of the HOMFLY polynomial. Thus, we view the HOMFLY polynomial of an oriented link as a plane algebraic curve and we ask if the curve is irreducible. Since the HOMFLY polynomial is really a Laurent polynomial, we disregard the coordinate axes in our analysis.

First, we perform a computer analysis of the HOMFLY polynomials of the 2977 knots with at most 12 crossings: we find 17 non-trivial factorizations (Table 1). To obtain the polynomials, we consulted the databases KnotInfo [LM20] and KnotAtlas [BNM]. To factor them, we used the computer algebra program Magma [BCP97].

Second, we give a sufficient criterion for irreducibility of the HOMFLY polynomials of the oriented links associated to plane graphs by Jaeger in [Jae88]. A standard construction of Jaeger ([Jae88, page 649]) associates to each connected plane graph G an oriented link diagram D(G). Jaeger shows that the HOMFLY polynomial P(D(G), x, y, z) can be computed from the Tutte polynomial T_G(x, y) of G using the formula (1). Thus, ignoring powers of x, y, z, the irreducibility of T_G(x, y) is a necessary condition for the irreducibility of P(D(G), x, y, z). The Tutte polynomial of a 2-connected graph is irreducible by a result of Merino, de Mier and Noy ([MdMN01, Theorem 1]). Hence, we reduce the study of the irreducibility of the HOMFLY polynomial P(D(G), x, y, z) to understanding how the substitution in (1) interacts with the Tutte polynomial.

To achieve our goal, Proposition 1 simplifies formula (1). The verification of the identity is entirely mechanical, but our arguments hinge on the existence of such a simple final result. Next, Lemma 2 gives a sufficient criterion for irreducibility of polynomials, adapted to our needs. We combine these statements in Theorem 5, the main irreducibility criterion for HOMFLY polynomials of this paper.

Notation. Let K be a knot. To simplify our formulas, we denote by (K) the HOMFLY polynomial of the knot K, defined using the convention of [FYH+85, Main Theorem]. Thus, (K) is a homogeneous Laurent polynomial of degree 0 in Z[x^±1, y^±1, z^±1]: the numerator of (K) is a homogeneous polynomial in x, y, z and the denominator of (K) is a monomial of the same degree as the numerator. We denote by K̄ the mirror image of the knot. Recall that the HOMFLY polynomial of the mirror image of a knot K satisfies the identity (K̄)(x, y, z) = (K)(y, x, z).
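Both conventions above (degree-0 homogeneity and the mirror-image swap of x and y) are easy to test mechanically. A quick sympy sketch on a made-up Laurent polynomial, not the HOMFLY polynomial of any particular knot:

```python
import sympy as sp

x, y, z, t = sp.symbols("x y z t", positive=True)

# A made-up degree-0 homogeneous Laurent polynomial in the convention above.
Q = (x**2 * y**2 + z**4 + x * y * z**2) / (x * y * z**2)

# Homogeneity of degree 0: Q(t*x, t*y, t*z) == Q(x, y, z).
assert sp.simplify(Q.subs({x: t*x, y: t*y, z: t*z}, simultaneous=True) - Q) == 0

# The mirror-image rule swaps x and y; this particular Q is symmetric in x, y,
# so the swap leaves it unchanged.
Q_mirror = Q.subs({x: y, y: x}, simultaneous=True)
print(sp.simplify(Q_mirror - Q))  # 0
```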
To identify knots, we follow the notation of KnotInfo [LM20]. For convenience, we reproduce the part of the convention that is relevant for us: "For knots with 10 or fewer crossings, we use the classical names, as tabulated for instance by Rolfsen, eliminating the duplicate 10_162 from the count. For 11 crossing knots, we use the Dowker-Thistlethwaite name convention, based on the lexicographical ordering of the minimal Dowker notation for each knot." For instance, 4_1 is the Figure-eight knot, while (z² − x² − 2xy)/y² is the HOMFLY polynomial of the left-handed Trefoil knot.

Caution. The convention for the HOMFLY polynomial used in [LM20] differs from the one that we use. We obtain the HOMFLY polynomial (K)_KI of the knot K tabulated in [LM20] from (K) by a substitution of variables. This happens in the background and plays almost no role in the arguments.

Knots with up to 12 crossings

We started this project wondering about the irreducibility of HOMFLY polynomials. A quick calculation with a computer shows that the HOMFLY polynomial of the knot 9_12 is the product of the HOMFLY polynomials of the knots 4_1 (Figure-eight knot) and 5_2 (3-twist knot): (9_12) = (4_1)(5_2). Similarly, the identity (11a_175) = (3_1)(8_16) holds. Systematizing these results, we analyzed the knots with up to 12 crossings, using the database [LM20]. Out of these 2977 HOMFLY polynomials, 17 are reducible. Each one of these 17 reducible polynomials is a product of previous members of the database. When checking for divisibility, we work in the Laurent polynomial ring Z[x^±1, y^±1, z^±1]; that is, we disregard powers, positive or negative, of x, y, z. Still, the factorizations that we find are correct as stated: there is no need to adjust by multiplying by a unit. We collect this data in Table 1.

Table 1. Factorizations of HOMFLY polynomials

In particular, the HOMFLY polynomial (12n_462) is the only one having a repeated irreducible factor. We observe also that the Kauffman polynomials of the 2977 knots with at most 12 crossings are all irreducible.
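Divisibility checks of this kind are easy to reproduce in any computer algebra system: clear denominators, factor, and discard monomial units. A minimal sketch with sympy instead of Magma, applied to a made-up Laurent polynomial (not an actual HOMFLY polynomial):

```python
import sympy as sp

x, y, z = sp.symbols("x y z")

def laurent_factors(expr):
    """Factor a Laurent polynomial, discarding monomial units in x, y, z."""
    num, den = sp.fraction(sp.together(expr))   # clear denominators
    coeff, factors = sp.factor_list(sp.expand(num))
    # Monomials (single-term factors) and den are units in Z[x^±1, y^±1, z^±1].
    return [(f, e) for f, e in factors if len(f.as_ordered_terms()) > 1]

# Made-up example: a product of two irreducible polynomials divided by a monomial.
p = (x**2 - y*z + z**2) * (x + y + z) / (x * y**2)
print(laurent_factors(p))   # recovers the two non-monomial factors
```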
Graphs and oriented link diagrams

In this section, we prove a criterion for the irreducibility of the HOMFLY polynomials of certain oriented links associated to plane graphs.

To argue irreducibility, we exploit the morphism J_0 : (C*)² → (C*)² appearing in [Jae88, Proposition 1]. We simplify the expression of J_0 by changing coordinates on the domain and codomain of J_0: we denote by Ξ the change of coordinates on the domain and by Σ the change of coordinates on the codomain, with birational inverse Σ^(−1), and we define the morphism J accordingly.

Proposition 1. The rational maps J and Σ ∘ J_0 ∘ Ξ coincide.

Proof. This is a matter of a simple substitution, using the definition of the involved maps.

The morphism J is finite of degree 2 and it is branched over the divisor R ⊂ P¹ × P¹ with equation y_0 y_1 = 0. In our argument for irreducibility, we exploit the following easy algebraic lemma.

Lemma 2. Let C ⊂ P¹ × P¹ be an irreducible curve, defined by the equation F(x_0, x_1, y_0, y_1) = 0. Assume that the polynomial F is bihomogeneous of degree a in x_0, x_1 and of degree b in y_0, y_1. If the curve with equation F(x_0, x_1, y_0², y_1²) = 0 is reducible, then the two polynomials F(x_0, x_1, 1, 0) and F(x_0, x_1, 0, 1) are squares. In particular, the degree a is even.

Proof. We cover P¹ × P¹ by 4 standard affine charts isomorphic to A², by setting one among x_0 or x_1 to 1 and also one among y_0 or y_1 to 1. Fix one of these charts. The bihomogeneous polynomial F becomes an irreducible polynomial f(x, y) ∈ C[x, y]. To prove the result, we show that if the polynomial f(x, y²) is reducible, then f(x, 0) ∈ C[x] is a square.

Let g(x, y) ∈ C[x, y] be an irreducible factor of f(x, y²). Since f(x, y²) is not irreducible, we deduce the inequality g(x, y) ≠ f(x, y²). Separating the terms of g(x, y) with an even and an odd exponent of y, we find polynomials g_0(x, y²) and g_1(x, y²) such that the identity g(x, y) = g_0(x, y²) + y g_1(x, y²) holds. If g_0(x, y²) vanishes, then we are done: in that case y divides g, hence y divides f(x, y²), and setting y to 0 gives f(x, 0) = 0, a square. Suppose therefore that g_0(x, y²) is not the zero polynomial. If g_1(x, y²) vanishes, then g_0(x, y²) is a proper factor of f(x, y²); as a consequence, g_0(x, y) is a proper factor of f(x, y), contradicting the irreducibility of f(x, y). It follows that g(x, −y) is a polynomial that is not proportional to g(x, y) and that also divides f(x, y²). By irreducibility of g(x, y), we deduce that the product g(x, y) g(x, −y) divides f(x, y²). By irreducibility of f(x, y), we deduce that the product g(x, y) g(x, −y) actually equals f(x, y²). We therefore find f(x, y²) = g_0(x, y²)² − y² g_1(x, y²)². Setting y to 0, we conclude that the identity f(x, 0) = g_0(x, 0)² holds, as needed.

Remark 3. The statement above still holds if we replace the complex numbers by any field k. If the characteristic of k is different from 2, then the given proof goes through essentially unchanged. If the characteristic of k is 2, then the statement follows from [Sta20, Tag 0BRA]: in this case, the morphism J is purely inseparable and hence a homeomorphism.
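A quick sanity check of Lemma 2 in one affine chart, using sympy; the polynomial f below is a made-up illustration:

```python
import sympy as sp

x, y = sp.symbols("x y")

# f is irreducible, but f(x, y^2) factors, so Lemma 2 predicts f(x, 0) is a square.
f = x**2 - y
print(sp.factor_list(f.subs(y, y**2)))   # (1, [(x - y, 1), (x + y, 1)])  -- reducible
print(sp.factor(f.subs(y, 0)))           # x**2  -- indeed a square
```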
Suppose that G is a connected plane graph and denote by T_G(x, y) the Tutte polynomial of G. Denote by D(G) the associated link diagram constructed by Jaeger [Jae88]. We do not reproduce here the construction of D(G): we refer the interested reader to [Jae88, Section 2]. All that we need is the identity (1) expressing the HOMFLY polynomial of D(G) in terms of T_G. We are interested in the irreducibility of the HOMFLY polynomial of the link diagram D(G).

We view HOMFLY polynomials as elements of the Laurent polynomial ring L = C[x^±1, y^±1, z^±1]. Thus, irreducibility of a non-zero element f ∈ L means that any factorisation f = gh, with g, h ∈ L, implies that either g or h has the form α x^a y^b z^c, with α ∈ C and a, b, c ∈ Z.

For a polynomial t(x_g, y_g) in the coordinates x_g, y_g of (C*)², we want to read off the information about the ramification of the morphism J. Thus, we take the strict transform under Σ^(−1) of the vanishing set of t(x_g, y_g) and intersect the closure of this locus with y_0 = 0 and y_1 = 0. We summarize the outcome of this easy computation in the following lemma for future reference.

Lemma 4. Let t(x_g, y_g) = Σ_{i,j} t_{ij} x_g^i y_g^j be a polynomial in C[x_g, y_g] and let T ⊂ (C*)² be the curve defined by the equation t(x_g, y_g) = 0. Let d ∈ N be the largest exponent of y_g among the monomials appearing in t(x_g, y_g) with non-zero coefficient. Then the intersections of the closure of Σ(T) with {y_0 = 0} and {y_1 = 0} are cut out by explicit equations determined by t and d.

Proof. We obtain an equation for the curve Σ(T) by setting to 0 the numerator of the evaluation t((1 − x_0)/x_1, (1 − x_1 y_0)/(x_0 y_1)). It is now a matter of a straightforward computation to check that the stated identities hold.

Let G be a connected plane graph. We define two curves, P_G ⊂ (C*)² and T_G ⊂ P¹ × P¹. Essentially, P_G and T_G are the vanishing loci of the HOMFLY polynomial of the oriented link D(G) and of the Tutte polynomial of G, respectively. As a consequence of the definitions and of [Jae88, Proposition 1], these curves fit into a commutative diagram involving the morphism J.

Theorem 5. Let G be a 2-connected plane graph. If the HOMFLY polynomial (D(G)) is reducible, then
• the number of edges of G is even;
• the number of vertices of G is even;
• the polynomial T_G(x, 1) is a square.

Proof. Let T_G(x_0, x_1, y_0, y_1) ∈ Z[x_0, x_1, y_0, y_1] be the numerator of the Laurent polynomial T_G ∘ Σ^(−1). By construction, the polynomial T_G is bihomogeneous of degree h_1(G) in y_0, y_1 and of degree #E(G) in x_0, x_1. Because the graph G is 2-connected, the Tutte polynomial T_G(x, y) is irreducible by [MdMN01, Theorem 1]: even though the cited paper states irreducibility over Z, the authors mention, and their argument shows, that T_G(x, y) is also irreducible in C[x, y]. Since Σ is a birational map, the polynomial T_G is also irreducible, and hence so is the curve T_G. An equation of the curve P_G is T_G(x_0, x_1, y_0², y_1²) = 0. If P_G is reducible, then we are in a position to apply Lemma 2. We deduce that the number of edges of G, the degree of T_G with respect to x_0, x_1, is even. Using Lemma 4, we evaluate T_G(x, 1, 0, 1) and find that the evaluation T_G(1 − x, 1), or, equivalently, T_G(x, 1), is a square. Finally, since the degree of T_G(x, 1) is #E(G) − h_1(G) − 1, and we already argued that #E(G) is even, we deduce that h_1(G) − 1 is even. Since G is connected, the identity h_1(G) − 1 = #E(G) − #V(G) holds. As we already showed that #E(G) is even, we conclude that #V(G) is even and the proof is complete.

Remark 6. Let G be a finite graph. Define a simplicial complex F(G) on the edges of G by letting σ ⊂ E(G) be a face of F(G) if and only if σ contains no cycle. The evaluation T_G(1 − x, 1) is the face polynomial of the simplicial complex F(G).

Further directions

We found that every reducible HOMFLY polynomial of a knot with at most 12 crossings is itself a product of irreducible HOMFLY polynomials of knots. We would find it surprising if this were always the case. Nevertheless, it would be interesting to study further the divisibility properties of HOMFLY polynomials of knots (or even of links). At an experimental level, extensive tables of HOMFLY polynomials of knots and links are available, so gathering further evidence is easily within reach. At a conceptual level, we would find it very interesting to predict factorizations of HOMFLY polynomials, without having to look them up in tables.

We could not find a 2-connected plane graph G with an even number of vertices and of edges and such that the evaluation T_G(x, 1) is a square, nor could we prove that such graphs do not exist. Our expectation is that they do not exist. If this were the case, then it would follow from Theorem 5 that the HOMFLY polynomials of the oriented links associated to 2-connected plane graphs are all irreducible. Using Remark 6, we can reformulate one of the conditions on the graph as saying that the face polynomial of a simplicial complex is a square. We have never come across a similar condition.
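The conditions of Theorem 5 are straightforward to test by machine. A small deletion-contraction sketch (sympy only; graphs are edge lists, multi-edges allowed) that checks the parity and square conditions, here on K4:

```python
import sympy as sp

x, y = sp.symbols("x y")

def contract(edges, u, v):
    """Merge vertex v into u (the edge being contracted has already been removed)."""
    return [(u if a == v else a, u if b == v else b) for a, b in edges]

def is_bridge(edges, e):
    u, v = e
    rest = list(edges); rest.remove(e)
    seen, stack = {u}, [u]          # BFS from u over the remaining edges
    while stack:
        w = stack.pop()
        for a, b in rest:
            for s, t in ((a, b), (b, a)):
                if s == w and t not in seen:
                    seen.add(t); stack.append(t)
    return v not in seen

def tutte(edges):
    """Tutte polynomial by deletion-contraction: loop -> y*T(G-e),
    bridge -> x*T(G/e), otherwise T(G-e) + T(G/e)."""
    if not edges:
        return sp.Integer(1)
    (u, v), rest = edges[0], edges[1:]
    if u == v:
        return y * tutte(rest)
    if is_bridge(edges, (u, v)):
        return x * tutte(contract(rest, u, v))
    return tutte(rest) + tutte(contract(rest, u, v))

def is_square_poly(p, var):
    """Squareness up to the constant: all irreducible factors with even exponent."""
    coeff, factors = sp.factor_list(sp.expand(p))
    return bool(sp.sqrt(coeff).is_rational) and all(e % 2 == 0 for _, e in factors)

K4 = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]
T = sp.expand(tutte(K4))
Tx1 = sp.expand(T.subs(y, 1))
print("T_K4(x, y) =", T)
print("#E even:", len(K4) % 2 == 0, " #V even:", 4 % 2 == 0)
print("T(x, 1) =", Tx1, " square:", is_square_poly(Tx1, x))
```

For K4 the polynomial T(x, 1) has odd degree and is not a square, so Theorem 5 guarantees that the HOMFLY polynomial of the associated oriented link is irreducible.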
Special Geometry of Euclidean Supersymmetry IV: the local c-map

We consider timelike and spacelike reductions of 4D, N = 2 Minkowskian and Euclidean vector multiplets coupled to supergravity and the maps induced on the scalar geometry. In particular, we investigate (i) the (standard) spatial c-map, (ii) the temporal c-map, which corresponds to the reduction of the Minkowskian theory over time, and (iii) the Euclidean c-map, which corresponds to the reduction of the Euclidean theory over space. In the last two cases we prove that the target manifold is para-quaternionic Kähler. In cases (i) and (ii) we construct two integrable complex structures on the target manifold, one of which belongs to the quaternionic and para-quaternionic structure, respectively. In case (iii) we construct two integrable para-complex structures, one of which belongs to the para-quaternionic structure. In addition we provide a new global construction of the spatial, temporal and Euclidean c-maps, and separately consider a description of the target manifold as a fibre bundle over a projective special Kähler or para-Kähler base.

1 Introduction and summary of results

1.1 Background and motivation

This paper completes the programme started in [1] and continued in [2,3], the purpose of which is to describe the scalar geometries of Euclidean N = 2 vector and hypermultiplets both without and with coupling to supergravity. Recall that with the standard (Minkowskian) spacetime signature the scalar manifolds of four-dimensional vector multiplets are affine special Kähler in the absence of supergravity and projective special Kähler in the presence of supergravity [4-13]. The scalar manifolds of hypermultiplets in d ≤ 6 space-time dimensions are hyper-Kähler in the absence of supergravity and quaternionic Kähler in the presence of it [14-16]. Together with the affine and projective special real target manifolds of five-dimensional vector multiplets [17,18], they form a family of related geometries which we refer to as special geometries. In each case the corresponding special geometry exists in a 'rigid' or 'affine' version, which is realised in supersymmetric field theories not coupled to supergravity, and a 'local' or 'projective' version, which occurs when the respective matter supermultiplet is coupled to supergravity.

When constructing supergravity theories using the so-called conformal calculus, see [21] for a review, it is manifest that the 'local' versions of the special geometries are related to special cases of their 'global' counterparts. In the field-theoretic framework, one starts with a field theory invariant under rigid superconformal transformations, and then gauges the superconformal symmetry to obtain a theory which is 'gauge-equivalent' to a Poincaré supergravity theory. The scalar geometries of the superconformal and of the Poincaré supergravity theory are related by a so-called superconformal quotient. Geometrically, the target manifolds of superconformal field theories admit a certain homothetic action of the group ℝ_{>0}, ℂ*, or ℍ*/ℤ₂ for five-dimensional vector multiplets, four-dimensional vector multiplets, and hypermultiplets, respectively. We refer to such affine special manifolds as conical, since their metrics have the form of a metric cone, at least locally. The corresponding 'local' special geometry is then obtained by dividing out this group action.
This motivates the terminology of 'conic (affine)' and 'projective' special geometry, which was introduced in [13] and [12], respectively, and which we will use in the following. Another link between the special geometries is provided by dimensional reduction. Reducing five-dimensional vector multiplets to four-dimensional vector multiplets, and four-dimensional vector multiplets to three-dimensional hypermultiplets, induces maps between their scalar manifolds. These come in both a rigid and a local (or supergravity) version, depending on whether the theory is coupled to supergravity. The (rigid/supergravity) r-map relates (affine/projective) special real to (affine/projective) special Kähler geometry [1,22,23], while the (rigid/supergravity) c-map relates special Kähler geometry to hyper-Kähler or quaternionic Kähler geometry [24,25].

Throughout the programme [1-3] we have taken the approach of obtaining the scalar geometries of the Euclidean theories by dimensional reduction of Minkowskian theories over time, since this automatically ensures that the reduced theory is invariant under the Euclidean supersymmetry algebra. Thus our programme amounts to constructing and studying new versions of the r-map and c-map. It is well known that the spatial and temporal reductions of a given theory differ by relative signs in their Lagrangians, and in particular that temporal reduction can lead to scalar target spaces with indefinite Riemannian metrics. The central observation of [1] was that the scalar geometries of Minkowskian and Euclidean vector multiplets of the same dimension are related systematically by replacing complex structures by para-complex structures. This is in contrast with four-dimensional Minkowskian and Euclidean hypermultiplets, which have the same target manifolds, at least in the local case [27]. The scalar geometries of four-dimensional Euclidean vector multiplets are affine special para-Kähler in the rigid case and projective special para-Kähler in the local case, as shown in [1] and [3]. While para-Kähler manifolds had been defined previously in the mathematical literature [28,29] (we refer to [30] for a review of the history of para-complex geometry and further references), the two types of special para-Kähler geometry were described for the first time in these references.

As explained in [1], the natural expectation is that after the dimensional reduction of four-dimensional vector multiplets over time, the geometry of the resulting three-dimensional Euclidean hypermultiplets is para-hyper-Kähler in the rigid case and para-quaternionic Kähler in the local case. While rigid hypermultiplets were dealt with in [2], it remains to consider local hypermultiplets in order to complete the programme. As in the corresponding rigid case [2], we will obtain in this paper two new supergravity c-maps, since we can either reduce the Minkowskian theory over time, or the Euclidean theory (which was constructed in [3]) over space. We will refer to these constructions as the temporal c-map and the Euclidean c-map, respectively. Moreover, we will also revisit the standard, 'spatial', c-map and thus consider all possible spacelike and timelike reductions of both Minkowskian and Euclidean four-dimensional vector multiplets coupled to supergravity. The reason is that, as a further main result, we obtain a new global construction of the supergravity c-map, which we present in a uniform way for all three cases.
The c-map was first described in the context of the T-duality between compactifications of type-IIA and type-IIB string theories with N = 2 supersymmetry [24]. Upon reduction to three dimensions as an intermediate step, four-dimensional vector multiplets become three-dimensional hypermultiplets, so that the three-dimensional theories have two hypermultiplet sectors which only couple gravitationally. As a result there are two different decompactification limits, which can be used to relate the four-dimensional IIA and IIB theories to one another. The hypermultiplet metrics resulting from dimensional reduction were described explicitly in [25], and it was shown that they are quaternionic Kähler, as predicted by supersymmetry. In the construction of [25] it is assumed that the underlying projective special Kähler manifold M̄ is a projective special Kähler domain, that is, defined by a single holomorphic prepotential, which is sufficient to obtain a local description of the resulting quaternionic Kähler manifold N̄. This leaves open the question of how to describe the c-map globally if M̄ is not a domain, and how to characterise the resulting quaternionic Kähler metric globally in terms of the geometric data of M̄. A global description is not only preferable mathematically but also needed for physical questions. In particular, in order to understand the full non-perturbative dynamics of N = 2 string compactifications, one would like to know under which conditions the resulting hypermultiplet manifolds are complete. Some results on these global questions will be discussed below.

For the rigid r-map and c-map the global geometrical description is known. It was already observed in [24] that the image of an affine special Kähler domain M under the rigid c-map can be interpreted as its cotangent bundle T*M. More generally, affine special real and affine special Kähler manifolds are by definition equipped with a flat connection, which allows their tangent bundle to be decomposed into a horizontal and a vertical distribution. This can be used to show that the tangent bundle (equivalently, the cotangent bundle) of an affine special real or affine special Kähler manifold naturally carries the structure of an affine special (para-)Kähler or of a (para-)hyper-Kähler manifold, respectively [3,13,23].

Given that the affine and projective special geometries are related by superconformal quotients, one may ask whether it is possible to express the supergravity c-map in terms of the rigid c-map, applied to the associated conical affine special Kähler manifold. In physical terms this amounts to 'lifting the supergravity c-map to the superconformal level', which was investigated in [31] and [32]. Both constructions give rise to an off-shell realisation of the c-map in terms of tensor multiplets. Being off-shell means that supersymmetry is realised independently of the equations of motion by the inclusion of auxiliary fields. This has in particular the advantage that the problem of adding higher-derivative terms is tractable. Tensor multiplets are related to hypermultiplets by a duality transformation. The corresponding relation between the Kähler and quaternionic Kähler metrics is as follows: the potential for the tensor multiplet metric is related to the prepotential of the special Kähler metric by a contour integral.
Performing a Legendre transform on the tensor multiplet potential, one obtains a hyper-Kähler potential for the hyper-Kähler cone (or Swann bundle) over the quaternionic Kähler manifold, which encodes the quaternionic Kähler metric [31].

Another approach to relating the supergravity c-map to the rigid c-map, and similarly the supergravity r-map to the rigid r-map, was described in [33]. Here the idea is to find a construction, dubbed 'conification', which allows one to obtain the image of the supergravity c-map (supergravity r-map) by conification of the image of the rigid c-map (or r-map), followed by a superconformal quotient. A general construction for the conification of Kähler manifolds and hyper-Kähler manifolds (satisfying certain technical conditions) was given. While the conification of (pseudo-)Kähler manifolds leads to a new Kähler/Kähler ('K/K') correspondence, the conification of (pseudo-)hyper-Kähler manifolds leads to a general (indefinite) version of the hyper-Kähler/quaternionic Kähler ('HK/QK') correspondence of [34], which was also discussed by [35] and [36-39]. Moreover, one obtains a new explicit expression for the quaternionic Kähler metric, which allows one to recover the explicit form of the c-map metric of [25] and its one-loop deformation [40] as a special case; see [33,41] for details. This method provides a direct proof that these metrics are quaternionic Kähler, which is independent of supersymmetry or the proofs in the undeformed case given in [25,42]. As a consequence, one recovers the earlier result of [35], obtained using twistor methods, that applying the QK/HK correspondence (inverse to the HK/QK correspondence) to the Ferrara-Sabharwal metric yields the rigid c-map metric. We remark that since every (para-)quaternionic Kähler manifold has an associated twistor or para-twistor space [43,44], one can also approach the geometry of the c-map through the corresponding twistor spaces. For this approach we refer to the literature, see in particular [43].

Another approach to the global description of the c-map is to cover the initial projective special Kähler manifold by projective special Kähler domains, to which one applies the supergravity c-map as formulated in [25], and then to check that the resulting quaternionic Kähler domains can be consistently glued to a quaternionic Kähler manifold. It was shown in [45] that the quaternionic Kähler domains take the form N̄ = M̄ × G, where G is a solvable Lie group, and that the quaternionic Kähler metric is a bundle metric g_N̄ = g_M̄ + g_G(p), where g_G(p) is a family of left-invariant metrics on G parametrised by p ∈ M̄. This was used to prove that the quaternionic Kähler domains obtained by applying the supergravity c-map domain-wise can be glued together such that the resulting manifold has a well-defined quaternionic Kähler structure. Moreover, it was proved in [45] that both the supergravity r-map and the supergravity c-map preserve completeness of the Riemannian metrics. While complete projective special real curves and surfaces were classified in [45] and [46], respectively, a necessary and sufficient condition for the completeness of a projective special real manifold was obtained more recently in [47]. In fact, it was shown that a projective special real manifold H ⊂ ℝ^{n+1} is complete if and only if it is closed as a subset of ℝ^{n+1}, a condition which can be easily checked in many examples.
Moreover, it was shown that any projective special real manifold respecting a generic regularity condition on its boundary is complete. Therefore the composed r- and c-map can be used to construct many new examples of non-homogeneous complete quaternionic Kähler manifolds.

Yet another description of the spatial and temporal c-map was obtained in [48], where the objective was to find a formulation of the temporal c-map adapted to lifting three-dimensional Euclidean supergravity solutions ('instantons') to four-dimensional stationary supergravity solutions (black holes and other 'solitons') [48-50]. To maintain the symplectic covariance of the four-dimensional theory, dimensional reduction was performed without taking the superconformal quotient in the four-dimensional theory, which resulted in the description of the (para-)quaternionic Kähler manifold N̄ in terms of a U(1) principal bundle P → N̄. In this paper we will extend the local description given in [48] to the Euclidean c-map. Moreover, we will give a global construction of the bundle P and show that it is obtained, with all data needed to define the (para-)quaternionic structure of N̄, in a natural way from the underlying projective special (para-)Kähler manifold M̄ as a one-dimensional extension of the tangent bundle TM of the associated conical affine special (para-)Kähler manifold M. Another approach, left to the future, would be to adapt the HK/QK correspondence to encompass para-geometries.

One ingredient of [48] which will be useful in the present paper is the use of special real coordinates on the conical special (para-)Kähler manifold M. Special real coordinates make explicit the flat symplectic (rather than holomorphic) aspects of special Kähler manifolds [12,13,24,51]. From the affine point of view, the existence of special real coordinates is related to the fact that any simply connected affine special Kähler manifold can be realised as a parabolic affine hypersphere [52], while the natural S¹ bundle over the associated projective special Kähler manifold carries the structure of a proper affine hypersphere endowed with a Sasakian structure [53]. Analogously, affine special para-Kähler manifolds are intrinsically improper affine hyperspheres [54]. Real coordinates play a central role in the analysis of black hole partition functions and their relation to the topological string [55-58]. The formalism of [48,56] uses special real coordinates on a conical affine special Kähler manifold to describe the underlying projective special Kähler manifold. A different approach, where special real coordinates are introduced directly on the projective special Kähler manifold, was described in [59] (see also [60] for a review of special real coordinates in the affine case).

One aim of our programme is to make explicit the fact that Minkowskian and Euclidean theories can be presented in a uniform way. In [1] it was noted that in suitable coordinates the Lagrangian and supersymmetry transformations of vector multiplets take exactly the same form in either signature, and are only distinguished by interpreting the involution z → z̄ as complex conjugation in the Minkowskian case and as para-complex conjugation in the Euclidean case. Starting from [3], a unified ε-complex notation was used, where ε = −1 corresponds to the complex and ε = 1 to the para-complex case. This notation will also be used in the present paper.
Since, apart from choosing to reduce a Minkowskian theory over space or over time, we can choose to start with a Euclidean theory in four dimensions, we will need a further refinement of our notation. Our convention is that whenever we talk about complex/para-complex manifolds or structures in a generic way, we will use the symbol ε = ±1, whereas ϵ₁ = ±1 refers to the geometry of the four-dimensional theory we start with, while ϵ₂ = ±1 distinguishes between reduction over time and reduction over space. We will explain more about this notation in the next subsection.

The temporal c-map has been studied before in various publications, mostly in relation to constructing stationary solutions by lifting Euclidean solutions over time. In [61] a list of the symmetric spaces resulting from applying the temporal c-map to symmetric projective special Kähler manifolds was given. As mentioned in [61], these symmetric spaces are indeed para-quaternionic Kähler. This can be proved either by analysing the holonomy representation, or by comparing with the classification of pseudo-Riemannian symmetric para-quaternionic Kähler manifolds of [62].

1.2 Main results

Recall that given a projective special Kähler domain M̄ of dimension 2n, defined by a holomorphic prepotential F that is homogeneous of degree two, the supergravity c-map assigns a quaternionic Kähler metric g_N̄ to a manifold N̄ of dimension 4n + 4. The target metric is induced by the dimensional reduction of 4D, N = 2 supergravity coupled to n vector multiplets over a spacelike dimension, and was first computed explicitly by Ferrara and Sabharwal in [25]. Henceforth we shall refer to this construction specifically as the spatial c-map. It turns out that this metric can be defined even if the projective special Kähler manifold is not defined by a single holomorphic prepotential, but is rather covered by domains on which such prepotentials exist [45]. The total space N̄ is then interpreted as a bundle over M̄, the fibres of which are solvable Lie groups isomorphic to the Iwasawa subgroup of SU(1, n + 2).

The main purpose of this paper is to generalise the spatial c-map construction. We will give a different description of the total space N̄ as an S¹-quotient N̄ = P/S¹, where P = TM × ℝ is the product of the tangent bundle of the (2n + 2)-dimensional conical affine special Kähler manifold (M, J, g, ∇, ξ) underlying M̄ with the real line. We will assume that M is simply connected, in which case one may identify TM = M × ℝ^{2n+2} using the flat connection, and P = M × ℝ^{2n+3}. The principal S¹-action on P corresponds to the U(1) subgroup of the natural ℂ*-action on the first factor. It is locally generated by the trivial extension Z_P to P of the Killing vector field Jξ on M. An advantage of this construction is that it does not place any restrictions on the projective special Kähler manifold M̄, only that the underlying conic affine special Kähler manifold M is simply connected. It can also be adapted to the following two new cases, which is the main goal of this paper:

(i) The temporal c-map. This assigns to every projective special Kähler manifold of dimension 2n a para-quaternionic Kähler manifold of dimension 4n + 4. It is induced by the reduction of 4D, N = 2 supergravity coupled to vector multiplets over a timelike dimension.

(ii) The Euclidean c-map. This assigns to every projective special para-Kähler manifold of dimension 2n a para-quaternionic Kähler manifold of dimension 4n + 4.
It is induced by the reduction of 4D, N = 2 Euclidean supergravity coupled to vector multiplets over a spacelike dimension. This information is summarised in Table 1.

                   Base                             Target                      Spacetime signature
  spatial c-map    projective special Kähler        quaternionic Kähler         (3+1) → (2+1)
  temporal c-map   projective special Kähler        para-quaternionic Kähler    (3+1) → (3+0)
  Euclidean c-map  projective special para-Kähler   para-quaternionic Kähler    (4+0) → (3+0)

Table 1: Summary of the spatial, temporal and Euclidean c-maps. For a base manifold of dimension 2n the target manifold has dimension 4n + 4.

While the explicit form of the target metric of the temporal and Euclidean c-maps can easily be adapted from the case of the spatial c-map, it is not obvious that the metrics are para-quaternionic Kähler. In order to prove this we will explicitly compute the Levi-Civita connection and show that it is compatible with an Sp(2) · Sp(2n, ℝ)-structure. We will see that the reduced scalar curvature of all c-map target manifolds is equal to −2.

It was observed in [64] that the target manifold of the spatial c-map admits a complex structure which is part of the quaternionic Kähler structure. We will show that it also admits a second complex structure which is not part of the quaternionic Kähler structure. Similarly, the target manifold of the temporal c-map admits two complex structures, one of which is part of the para-quaternionic Kähler structure, and that of the Euclidean c-map admits two para-complex structures, one of which is part of the para-quaternionic Kähler structure.

Let us give a brief summary of our construction for the spatial c-map. In order to define the quaternionic Kähler metric we must first recall some facts concerning conical affine special Kähler manifolds, which can be found in [13,48]. Let (M, J, g, ∇, ξ) be a conic affine special Kähler manifold of complex dimension n + 1. We will assume that M is simply connected; therefore there exists a conic holomorphic nondegenerate Lagrangian immersion φ : M → T*ℂ^{n+1}, which is unique up to symplectic transformations. On M there exist 2n + 2 globally defined real functions (x⁰ = Re Z⁰, ..., xⁿ = Re Zⁿ, y₀ = Re W₀, ..., yₙ = Re Wₙ), where (Z^I, W_I), I = 0, ..., n, are complex linear coordinates on T*ℂ^{n+1}, that satisfy ω = g(·, J·) = 2 dx^I ∧ dy_I and locally form a ∇-affine coordinate system about any point of M [13, Thm 9]. Since the functions (q^a), a = 0, ..., 2n+1, defined by (q^a) := (x^I, y_I), are unique up to linear symplectic transformations, one may define global one-forms on TM in terms of them; here the matrix (Ω_ab) is two times the Gram matrix ω(∂/∂q^a, ∂/∂q^b) of ω, i.e. ω = Ω_ab dq^a ∧ dq^b, and (q^a, q̂_a) are global functions on TM associated with the functions (q^a) on M. The special Kähler metric g on M is given by the Hessian of the function H = (1/2) g(ξ, ξ) < 0. The function H is homogeneous of degree two with respect to the functions (q^a). It is called the Hesse potential and, in the real formulation of special Kähler geometry, plays a role analogous to the holomorphic prepotential. The projective special Kähler metric ḡ is related to H as in [48]. Here we denote by π : M → M̄ = M/ℂ* the canonical projection of the ℂ*-action on M, which is locally generated by the vector fields ξ and Jξ.

We will now construct the quaternionic Kähler metric on N̄ = P/S¹. We first remark that the symmetric (0,2)-tensor field h on M has one-dimensional kernel ℝJξ. Using the canonical projection TM → M, we may consider any covariant tensor field (such as h, H, ...) on M as a tensor field on TM.
Similarly, any covariant tensor field on TM can be considered as a tensor field on P by means of the canonical projection P = TM × ℝ → TM. In particular, we will consider certain one-forms on P, where φ̃ denotes the coordinate on the second factor of P = TM × ℝ. Let us define on P the symmetric (0,2)-tensor field g′, which has kernel ℝZ_P and is invariant under the circle group S¹ generated by Z_P. It induces a pseudo-Riemannian metric g_N̄ on N̄ = P/S¹, which is positive definite if the projective special Kähler metric ḡ is positive definite. We will verify later that this metric can be brought to the standard form of the Ferrara-Sabharwal metric, and is therefore quaternionic Kähler.

When we consider the cases of the temporal and Euclidean c-maps, we will find that the tensor field g′ on P differs from that of the spatial c-map only by certain sign flips. It is convenient to introduce the parameters ϵ₁, ϵ₂ ∈ {+1, −1}, which encode the spacetime signature of the four-dimensional theory and the signature of the reduced dimension. When we are not specifically discussing the c-map we will use the symbol ε, a 'generic' epsilon, which can be either ±1. One may interpret the parameters ϵ₁, ϵ₂ physically as follows: the choice ϵ₁ = −1 corresponds to starting with a theory of 4D, N = 2 supergravity coupled to vector multiplets with Lorentzian spacetime signature, and ϵ₁ = 1 to the same theory with Euclidean spacetime signature. If ϵ₁ = −1, then ϵ₂ = −1 corresponds to the dimensional reduction of this theory over a spacelike dimension, and ϵ₂ = 1 to dimensional reduction over a timelike dimension. If ϵ₁ = 1, then one must necessarily reduce over a spacelike dimension, which corresponds to ϵ₂ = −1. However, as we will explain later, if one chooses instead ϵ₂ = 1, then the resulting target manifold is globally isometric to the case ϵ₂ = −1, and so both choices are mathematically equivalent. Using this notation one may write various expressions, including the expression for g′, in a unified way for all three c-maps. Note that when ϵ₁ = 1 the tensor h on M is of split signature on any subspace complementary to its kernel. It is therefore clear that g′ induces a positive definite metric on N̄ only when the metric ḡ on M̄ is positive definite and ϵ₁ = ϵ₂ = −1. For all other choices of ϵ₁ and ϵ₂ it induces a metric of split signature.

We will also discuss a complementary approach, describing c-map target spaces locally as the product N̄ = M̄ × G, where M̄ is (a domain in) the original projective special ϵ₁-Kähler base manifold and G is the Iwasawa subgroup of SU(1, n + 2). With respect to this decomposition the metric on N̄ can be written as g_N̄ = ḡ + g_G(p), where ḡ is the metric on M̄ and g_G(p) is a family of left-invariant metrics on G depending on the point p ∈ M̄. We will show explicitly that for fixed p the metric g_G is a symmetric ϵ₁-Kähler metric of constant ϵ₁-holomorphic sectional curvature.

This paper is organised as follows. We begin in Section 2 with a review of background material. In Section 2.1 we discuss ε-complex vector spaces, spaces of ε-complex lines and how these can be represented as symmetric spaces and realised as solvable Lie groups, and special ε-Kähler manifolds. In Section 2.3 we discuss ε-quaternionic Kähler structures on vector spaces and ε-quaternionic Kähler manifolds. The physical aspects of the c-map construction are dealt with in Section 3.
We discuss theories of 4D, N = 2 supergravity coupled to vector multiplets with either Lorentzian or Euclidean spacetime signature, and the reduction of such theories to three dimensions over a spacelike or timelike circle. This provides the motivation for the choice of metric on the c-map target manifold. In Section 4 we present our construction of the c-map. We provide a detailed description of the target space topology, metric and ε-quaternionic structure. The explicit calculation of the Levi-Civita connection is postponed until Section 5, where we discuss each c-map on a case-by-case basis. In this section we also prove the existence of two integrable ε-complex structures on the c-map target manifold. Finally, in Section 6 we discuss the complementary approach of describing c-map target manifolds locally as group bundles. Throughout this paper we use standard index conventions.

2 ε-Kähler and ε-quaternionic Kähler geometry

2.1 ε-Kähler manifolds

In this section we review ε-complex and ε-Kähler manifolds, and provide some examples which we will use later in the paper. The concepts of ε-complex geometry allow us to treat complex and para-complex geometry in parallel. Intuitively, para-complex geometry differs from complex geometry by replacing the field of complex numbers by the ring of para-complex numbers C = ℝ ⊕ eℝ, where e is the para-complex imaginary unit, e² = 1. We assume that the reader is familiar with the definitions and the relevant properties of para-complex, para-Hermitian and para-Kähler manifolds, which can be found, for instance, in [1]. As in [3] we will use a unified ε-complex notation and terminology, where ε = −1 refers to the complex case and ε = 1 to the para-complex case. Thus, for example, we will use the symbol i_ε to denote the complex imaginary unit i in the case ε = −1 and the para-complex imaginary unit e in the case ε = 1, such that i_ε² = ε. We denote by ℂ_ε = ℝ[i_ε] the ring of ε-complex numbers. Similarly, an almost ε-complex structure J on a real differentiable manifold M is a field of endomorphisms of the tangent bundle TM such that J² = ε·𝟙 and such that the eigendistributions of J have equal rank. Our convention for the relation between the ε-complex structure J, the pseudo-Riemannian metric g and the ε-Kähler form ω on an ε-Kähler manifold (M, J, g) is ω = g(·, J·).

Among the simplest examples of ε-Kähler manifolds are spaces of constant ε-holomorphic sectional curvature, which are always (pseudo-)Riemannian locally symmetric spaces. As we will see later, c-map spaces are fibre bundles over special ε-Kähler manifolds with fibres of constant ε-holomorphic sectional curvature. Therefore we will now discuss these spaces in some detail.
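Before turning to the vector-space construction, here is a toy illustration of arithmetic in the ring ℂ_ε defined above (the class and its names are ours, not the paper's): multiplication with i_ε² = ε reproduces ℂ for ε = −1, while for ε = 1 it exhibits the zero divisors on the light cone x² − y² = 0 discussed in Remark 1 below.

```python
from dataclasses import dataclass

EPS = +1  # +1: para-complex numbers, -1: ordinary complex numbers

@dataclass(frozen=True)
class EComplex:
    """z = x + i_eps * y with i_eps**2 = EPS."""
    x: float
    y: float

    def __mul__(self, other):
        # (x1 + i y1)(x2 + i y2) = x1 x2 + EPS y1 y2 + i (x1 y2 + y1 x2)
        return EComplex(self.x * other.x + EPS * self.y * other.y,
                        self.x * other.y + self.y * other.x)

    def norm(self):
        # z * conj(z) = x^2 - EPS * y^2, a real number
        return self.x ** 2 - EPS * self.y ** 2

e = EComplex(0.0, 1.0)
print(e * e)                       # EComplex(x=1.0, y=0.0): e^2 = +1 for EPS = +1
p, q = EComplex(1.0, 1.0), EComplex(1.0, -1.0)
print(p * q, p.norm())             # (1+e)(1-e) = 0 with norm 0: light-cone zero divisors
```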
ε-complex vector spaces

The construction of ε-Kähler metrics that we are going to present is a generalisation of the well-known Fubini-Study metric on the complex projective spaces ℂPⁿ. Consider the vector space ℝ^{n+1} ⊕ ℝ^{n+1} = ℝ^{2n+2} with coordinates (x, y) = (x^I, y^J), I, J = 0, ..., n. Next, we define an ε-complex structure J on ℝ^{2n+2}. Note that J is skew with respect to the scalar product ⟨·,·⟩, so that (ℝ^{2n+2}, J, ⟨·,·⟩) is an ε-Hermitian vector space, that is, a (pseudo-)Hermitian vector space if ε = −1 and a para-Hermitian vector space if ε = 1. We identify (ℝ^{2n+2}, J) with the standard ε-complex vector space ℂ_ε^{n+1}. This identifies J with the standard ε-complex structure on ℂ_ε^{n+1}, that is, Jz = i_ε z. On ℂ_ε^{n+1} we consider the ε-Hermitian form γ, which is of complex signature (k, ℓ) if ε = −1. Using the isomorphism ℂ_ε^{n+1} ≅ ℝ^{2n+2}, we can write it in real coordinates.

Consider the open subset D ⊂ ℂ_ε^{n+1} of non-null vectors v with γ(v, v) > 0 and define the space of ε-complex lines P(D) = D/ℂ_ε*. This can be viewed as taking a quotient with respect to the natural action v → zv of the group of units ℂ_ε* of the ring ℂ_ε. Since this group will play some role in the following, let us make some remarks.

Remark 1. In the complex case, ℂ_ε* is the multiplicative group ℂ* of non-zero complex numbers, which is connected. In contrast, the para-complex numbers z = x + ey that are not invertible are precisely those located on the light cone x² − y² = 0, and the group of para-complex units C* has four connected components; we denote by C*₀ the connected component of unity.

Remark 2. Note that when defining D we have excluded not only the zero vector but all null vectors. This is done for two reasons. In fact, in the case ε = 1 there exist non-zero singular vectors, that is, vectors v such that the orbit C*v is of dimension lower than 2. In order to obtain a free action of C* we therefore need to exclude such vectors. This is ensured by excluding null vectors. Another reason is that, as we will see below, in order to define the induced metric on P(D) we have to divide by ⟨v, v⟩. Finally, in the case ε = −1, to avoid jumping of the signature of the metric on the quotient, we restrict to spacelike complex lines. The restriction to spacelike lines is no loss of generality, since we can always multiply the metric by −1. Notice that in the case ε = 1 multiplication by e maps spacelike to timelike vectors and vice versa, and therefore there is no notion of spacelike (or timelike) para-complex lines.

The group ℂ_ε* acts freely and properly on D by ε-holomorphic transformations. Therefore, P(D) is a smooth ε-complex manifold and π : D → P(D) is an ε-holomorphic ℂ_ε*-principal bundle. Using the ε-Hermitian form γ on ℂ_ε^{n+1}, we define an ε-Hermitian form γ̄ on P(D) by normalizing with γ(p, p), where p ∈ D and u, v ∈ T_p D ≅ ℝ^{2n+2}. In terms of ε-complex coordinates, this sesquilinear form corresponds to a tensor field γ′ on D. To see that this defines an ε-Hermitian metric on P(D), we first note that γ′ is manifestly invariant under ℂ_ε*. Moreover, it is easy to see that γ′(ξ, ·) = 0 = γ′(Jξ, ·), where ξ = z^I ∂/∂z^I + z̄^I ∂/∂z̄^I is the position vector field on D. Thus γ′ is also horizontal with respect to the ℂ_ε*-action and hence can be pushed down to P(D). Since the kernel of γ′ is spanned by ξ and Jξ, it defines a non-degenerate ε-Hermitian metric on P(D). Consequently, the real part of γ′ defines a metric ḡ on P(D) such that g′ = π*ḡ. The degenerate tensor field g′ on D, when expressed in ε-complex coordinates, can be locally written using a potential K, which is given by the logarithm of the squared length of the position vector field, K = log |γ(z, z)|.

We can also describe the metric ḡ using inhomogeneous coordinates on P(D), instead of using the symmetric tensor field g′ on D. If we identify P(D) locally with the hypersurface z⁰ = 1, the associated inhomogeneous coordinates are z^A, A = 1, ..., n. For later convenience we take η₀₀ = 1, which covers all cases relevant to us, and define η̃_AB = −η_AB. Thus η̃_AB has signature (ℓ, k − 1).
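In these inhomogeneous coordinates, the curvature of ḡ can be computed directly from the potential. A small sympy sketch for n = 1, ε = −1 and η the identity (treating z and z̄ as independent symbols; the normalization conventions assumed here affect only the value of the constant):

```python
import sympy as sp

z, zb = sp.symbols("z zbar")

# Kaehler potential of the n = 1 Fubini-Study-type metric in the
# inhomogeneous coordinate (definite case, eta = identity).
K = sp.log(1 + z * zb)

g = sp.diff(K, z, zb)  # metric coefficient g_{z zbar}
# Gaussian curvature of the metric g |dz|^2:  kappa = -(2/g) d_z d_zbar log(g)
kappa = sp.simplify(-2 / g * sp.diff(sp.log(g), z, zb))
print(kappa)  # a constant (4 with these conventions), independent of z
```

That kappa is independent of z is precisely the constancy of the holomorphic sectional curvature asserted in the next paragraph.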
The tensor ḡ is an ε-Kähler metric on P(D) with ε-Kähler potential

K̄ = log|1 − η̄_AB z^A z̄^B|.   (6)

It is straightforward to check that the ε-Kähler metric ḡ has constant ε-holomorphic sectional curvature, that is, the sectional curvature of a J-invariant plane does not depend on the chosen plane. We will recover this later using an alternative description of these spaces (with the exception of P^n) in terms of open orbits of solvable Lie groups. It is known that ε-Kähler spaces with constant ε-holomorphic sectional curvature c are locally symmetric and locally uniquely determined by the value of the constant c. Next we discuss in more detail the spaces P(D) as globally symmetric ε-Kähler spaces, which we represent as coset spaces.

2.1.3 Representation as symmetric spaces

The space P(D) is the space of ε-complex lines in an open subset D of the ε-complex vector space ℂ_ε^{n+1}. We will now describe it as a symmetric space. Let G denote the group of ε-unitary transformations of (ℂ_ε^{n+1}, γ); for ε = 1, G is the para-unitary group, which is isomorphic to GL(n+1, ℝ) [1]. More precisely, the representation of the para-unitary group on ℝ^{2n+2} is equivalent to the sum of the standard (n+1)-dimensional representation of GL(n+1, ℝ) and its dual. Since the group G acts transitively on P(D) we can identify P(D) with the corresponding homogeneous space G/H, where H is the stabiliser of an ε-complex line. We notice that already the special (pseudo-)unitary group SU(k, ℓ), respectively the special para-unitary group SL(n+1, ℝ), acts transitively on P(D). For notational convenience we prefer to work with the full ε-unitary group. Let us consider the possible cases in turn.

1. For ε = −1 and η_IJ = δ_IJ the Hermitian form is invariant under U(n+1) and η̄_AB = −δ_AB. The stabiliser of a complex line in ℂ^{n+1} is U(1) × U(n). The resulting complex projective space is the symmetric space P^n = U(n+1)/(U(1) × U(n)) and the corresponding Kähler metric is the Fubini-Study metric.

2. For ε = −1 and (η_IJ) of signature (1, n), the resulting symmetric space is the complex hyperbolic space H^n = U(1, n)/(U(1) × U(n)), which is the dual, in the sense of Riemannian symmetric spaces, of P^n. We remark that both spaces are real forms of GL(1+n, ℂ)/(GL(1, ℂ) × GL(n, ℂ)). The Kähler metric ḡ defined in the previous section is negative definite and coincides with the complex hyperbolic metric up to sign.

3. For ε = −1 and (η_IJ) of general signature (k, ℓ), the resulting symmetric spaces are indefinite-signature versions of the Hermitian symmetric spaces P^n and H^n. We remark that they are again real forms of GL(1+n, ℂ)/(GL(1, ℂ) × GL(n, ℂ)). The resulting pseudo-Kähler metric has complex signature (k − 1, ℓ).

4. We finally consider the para-complex case, ε = 1. The stabiliser of a point of P(D) under the para-unitary group GL(n+1, ℝ) is GL(1, ℝ) × GL(n, ℝ). The resulting space is the para-complex analogue of any of the above spaces, which for convenience is referred to as para-complex hyperbolic space. The corresponding symmetric space is yet another real form of GL(n+1, ℂ)/(GL(1, ℂ) × GL(n, ℂ)). The resulting para-Kähler metric has real signature (n, n) irrespective of the signature of (η_IJ) = diag(1, −(η̄_AB)); in its explicit form it involves the lowered coordinates z_A = η̄_AB z^B.

2.1.4 Realisation as a solvable Lie group

Recall that the Iwasawa subgroup L of a non-compact semi-simple group G is the maximal triangular (and hence solvable) Lie subgroup of G, which is unique up to conjugation. As a consequence of the Iwasawa decomposition it acts simply transitively on the corresponding Riemannian symmetric space of the non-compact type G/H, which is defined as the quotient of G by its maximal compact subgroup H, which is unique up to conjugation.
Standard examples include hyperbolic spaces such as G/H = SU(1, ñ+2)/S(U(1) × U(ñ+2)). This allows us to identify G/H with L and to compute geometric quantities on G/H, such as the Levi-Civita connection and curvature, purely algebraically on the Lie algebra l of L. On pseudo-Riemannian symmetric spaces G/H the group L in general no longer acts transitively, but it may still act at least with an open orbit, such that we can still identify the symmetric space with L locally and perform computations on l. This is indeed the case for all non-compact symmetric spaces of constant ε-holomorphic sectional curvature considered in the previous subsection. In fact, in this section we show explicitly that the Iwasawa subalgebra l ⊂ su(1, ñ+2) can be equipped with a scalar product ⟨·, ·⟩ and an ε-complex structure J such that the metric on L obtained by left-invariant extension of the scalar product is ε-Kähler and has constant ε-holomorphic sectional curvature. Depending on our choice of scalar product, this provides a local description of H^{ñ+2}_{k̃,ℓ̃}, where k̃ + ℓ̃ = ñ + 3, or of CH^{ñ+2}, in terms of a solvable Lie group equipped with a left-invariant ε-Kähler metric.

We start by reviewing the standard realisation of the Lie algebra l of the Iwasawa subgroup L ⊂ SU(1, ñ+2). The (2ñ+4)-dimensional Lie algebra l admits the decomposition l = span{D, Z₀} ⊕ V, with [X, Y] = ω(X, Y) Z₀ for X, Y ∈ V, where ω is a non-degenerate symplectic form on V. Thus Z₀ extends V into the standard Heisenberg Lie algebra of dimension 2ñ+3, on which D acts as a derivation. We choose an ω-skew-symmetric ε-complex structure J on V, which is extended to l by a choice of J on span{Z₀, D}. On V this determines the (possibly indefinite) scalar product ⟨·, ·⟩ = ω(J·, ·), which we extend orthogonally to span{Z₀, D}. This also determines the extension of the symplectic form to l by ω(D, Z₀) = 1. Since the ε-complex structure is skew-symmetric with respect to the scalar product, J ∈ so(l, ⟨·, ·⟩), we can express it in terms of bivectors; here pr_V denotes the projection onto V. By identification of l with the Lie algebra of left-invariant vector fields on the associated Lie group L, we obtain a left-invariant metric g_L, ε-complex structure J, and symplectic form ω on L. To compute the Levi-Civita connection ∇ on L we use the Koszul formula,

2 g(∇_X Y, Z) = X g(Y, Z) + Y g(X, Z) − Z g(X, Y) + g([X, Y], Z) − g([X, Z], Y) − g([Y, Z], X),

where X, Y, Z are vector fields on L. It is sufficient to evaluate it for left-invariant vector fields, in which case the first three terms on the right-hand side are zero. The remaining terms can be evaluated using the scalar product and commutator relations of l; here ∇_X, with X ∈ l, is interpreted as an endomorphism of l. It is straightforward to verify that ∇_X ∘ J = J ∘ ∇_X for all X ∈ l. Thus the ε-complex structure is parallel, ∇J = 0, and in particular integrable. We conclude that the metric g_L on L is a left-invariant ε-Kähler metric. The curvature of the connection ∇ is computed by the formula

R(X, Y) = [∇_X, ∇_Y] − ∇_{[X,Y]}.

R(X, Y) can be interpreted as a skew endomorphism of l, and thus be computed on l; when evaluating commutators of skew endomorphisms a standard bivector formula is useful. It is straightforward to show that R(X, Y) takes the canonical form of a space of constant ε-holomorphic sectional curvature, and one easily verifies that the ε-holomorphic sectional curvature is −1. For later applications it is useful to introduce a Darboux basis of (V, ω). The Gram matrix of the scalar product ⟨·, ·⟩ on V with respect to this basis is determined by a symmetric matrix (η̄_ij), which in turn determines the expression of J on V in this basis. We can choose the Darboux basis such that (η̄_ij) is diagonal with entries ±1.
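The step from the symplectic form ω and an ω-skew-symmetric J to the scalar product ⟨·, ·⟩ = ω(J·, ·) can be checked numerically on the smallest example V = ℝ²; the matrices below are our illustrative choices, one for each value of ε.

```python
import numpy as np

Omega = np.array([[0., 1.], [-1., 0.]])        # symplectic form on V = R^2

def check(J, eps):
    assert np.allclose(J @ J, eps * np.eye(2))          # J^2 = eps * Id
    assert np.allclose(J.T @ Omega + Omega @ J, 0)      # J is omega-skew
    g = J.T @ Omega                                     # <X,Y> = omega(JX, Y)
    assert np.allclose(g, g.T)                          # symmetric, as claimed
    return np.linalg.eigvalsh(g)                        # signature of <.,.>

print(check(np.array([[0., 1.], [-1., 0.]]), -1))   # eps = -1: definite
print(check(np.array([[1., 0.], [0., -1.]]),  1))   # eps = +1: split signature
```

The split signature in the para-complex case is exactly what one expects for a para-Kähler metric.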
Now, to finish this subsection, we would like to indicate which scalar products on V correspond to which of the symmetric spaces of constant ε-holomorphic sectional curvature discussed in the previous subsection. At this point we anticipate some results which will be proven in Section 6. We will see in Section 6 that c-map spaces are fibre bundles over projective special ε-Kähler manifolds, where the fibre is precisely the solvable Iwasawa group L of SU(1, ñ+2), equipped with a left-invariant ε-Kähler metric which depends on the base point. (Here ñ is the complex dimension of the base manifold.) The c-map will provide coordinates on L which easily allow one to find the associated ε-Kähler potential. By comparing to the ε-Kähler potentials listed in Section 2.1.3 we will then be able to identify the symmetric spaces that actually occur in the context of the c-map. For convenience we already summarise the result here:

1. For ε = −1 and η̄_ij = δ_ij, we obtain the complex hyperbolic space H^{ñ+2} equipped with the positive definite metric −ḡ. Note that when choosing ε = −1 and a positive definite scalar product on V, we do not obtain a metric on the compact space P^{ñ+2}, but a positive definite metric on its non-compact dual H^{ñ+2}. We will see in Section 6 why the compact space P^{ñ+2} cannot arise in the context of the c-map.

2.2 Special ε-Kähler manifolds

For later use we now review special ε-Kähler manifolds, following the definitions and theorems stated in [3]. Every simply connected ASεK (affine special ε-Kähler) manifold admits a canonical realisation as an immersed Lagrangian submanifold of the ε-complex symplectic vector space T*ℂ_ε^{n+1} = ℂ_ε^{2n+2}, such that the special geometry of M is induced by the immersion, where n+1 is the ε-complex dimension of M. From this one obtains the local characterisation of an ASεK manifold in terms of an ε-holomorphic prepotential. For any given p ∈ M one can choose linear symplectic coordinates in ℂ_ε^{2n+2} such that the symplectic form is given by dX^I ∧ dW_I and the functions X^I restrict to a system of local ε-holomorphic coordinates near p, which we call special ε-holomorphic coordinates. The Lagrangian submanifold is then defined by equations of the form W_I = F_I(X) := ∂F(X)/∂X^I, where F(X) = F(X⁰, …, Xⁿ) is an ε-holomorphic function, which is called the prepotential. The metric is given by

g = N_IJ dX^I dX̄^J,  N_IJ := −i_ε(F_IJ − F̄_IJ),   (9)

where F_IJ are the second derivatives of the prepotential F, and a Kähler potential is therefore given, up to normalisation, by i_ε(X^I F̄_I − F_I X̄^I). On M the 2n+2 globally-defined real functions (q^a) = (x^I, y_J), a = 0, …, 2n+1, form a local system of ∇-affine coordinates about any point, which we call special real coordinates. Both special ε-holomorphic and special real coordinates are useful when investigating ASεK geometry, although many of the new results in this paper will be presented in terms of the latter. The Kähler form and metric are given by globally-defined expressions in terms of a globally-defined real function H, called the Hesse potential, with g = H_ab dq^a dq^b, where H_ab are the second derivatives of H. The ε-complex structure is determined by the metric and ε-Kähler form according to (3), and J² = ε Id imposes a corresponding constraint. The matrix (H_ab) is related to (F_IJ) by an explicit rational expression. The imaginary part of the ε-holomorphic prepotential is related to the Hesse potential by a Legendre transform,

H(x, y) = −ε(2 Im F(X(x, y)) − 2 y_I u^I(x, y)),

which replaces the u^I with the y_I as independent functions [66]. A conical affine special ε-Kähler (CASεK) manifold is an ASεK manifold (M, J, g, ∇) equipped with a vector field ξ such that Dξ = ∇ξ = Id, where D is the Levi-Civita connection.
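The Legendre-transform relation above can be made concrete in a one-field toy model. The sympy sketch below uses the quadratic prepotential F = (i/2)X² with ε = −1 and η = 1 (our illustrative choices; signs follow the displayed Legendre formula), and also verifies that the resulting Hesse potential is homogeneous of degree two, which, as explained next, is exactly the conical condition.

```python
import sympy as sp

x, u, y = sp.symbols('x u y', real=True)
X = sp.symbols('X')
F = sp.I / 2 * X**2                              # toy quadratic prepotential

FX = sp.diff(F, X).subs(X, x + sp.I * u)         # F_X on X = x + i u
y_of_u = sp.re(FX.expand(complex=True))          # special real coordinate y = Re F_X
assert sp.simplify(y_of_u + u) == 0              # here y = -u

Fval = F.subs(X, x + sp.I * u).expand(complex=True)
H = sp.simplify(2 * sp.im(Fval) - 2 * y_of_u * u)   # Legendre transform, eps = -1
H = H.subs(u, -y)                                # express H through (x, y)
print(H)                                         # x**2 + y**2

# Conical condition: H is homogeneous of degree two under the Euler field.
assert sp.simplify(x * sp.diff(H, x) + y * sp.diff(H, y) - 2 * H) == 0
```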
The definition implies that L_ξ g = 2g and L_{Jξ} g = 0, so that while ξ acts homothetically, Jξ acts isometrically. Moreover, the vector field ξ, and hence Jξ, preserves J, and the two vector fields generate an infinitesimal action of a two-dimensional abelian Lie algebra. The corresponding condition on the Hesse potential for an ASεK manifold to be conical is that it must be homogeneous of degree two, once we have restricted the special real coordinates such that ξ is the corresponding Euler field, ξ = q^a ∂/∂q^a. Such special coordinates are called conical, and it is understood in the following that special coordinates are conical. As in [3] we will always assume that g(ξ, ξ) = 2H does not vanish on M, which will be used in (14). In addition, we will assume for simplicity that M is simply connected, and impose the following regularity assumption on CASεK manifolds in order to discuss projective special ε-Kähler manifolds in a convenient way. We assume that the infinitesimal action generated by ξ and Jξ is induced by a principal ℂ_ε^*-action on M. On a regular CASεK manifold one then considers the (0,2)-tensor field h, which is ℂ_ε^*-invariant and degenerate along the orbits of the ℂ_ε^*-action. In terms of special ε-complex coordinates h is given by (15), where (NX)_I = N_IJ X^J and X̄NX = N_IJ X̄^I X^J, whilst in terms of special real coordinates it is given by (16). The requirement that a CASεK manifold M is regular ensures that the projection π : M → M̄ onto the space of ℂ_ε^*-orbits is the quotient map of a holomorphic principal bundle over an ε-complex manifold, and that the (0,2)-tensor field h on M induces an ε-Kähler metric ḡ on M̄ such that π*ḡ = h. The ε-Kähler manifold (M̄, J̄, ḡ) is called a projective special ε-Kähler (PSεK) manifold. The following remark will be used later.

Remark 3. Note in the case ε = 1 that the action of i_ε = e ∈ ℂ_ε^* = C^* induces an anti-isometry of the CASεK manifold that sends (M, J, g, ∇, ξ) to (M, J, −g, ∇, ξ) but preserves the C^*-invariant tensor h. (In the case ε = 1 one may define instead a projective special para-Kähler manifold as the quotient M̂ of M by the action of the connected group C^*_0, which is related to M̄ by the four-fold covering M̂ → M̄, see [3].)

The relation between a CASεK manifold and the associated PSεK manifold is via an ε-Kähler quotient and generalises the Fubini-Study-type constructions of the previous section. In terms of special coordinates (X^I) on M, the degenerate and ℂ_ε^*-invariant (0,2)-tensor h has a potential of the form (15). One can describe (M̄, ḡ) using homogeneous special ε-holomorphic coordinates (X^I) and the tensor h. Alternatively, one can introduce, for X⁰ ≠ 0, inhomogeneous special ε-holomorphic coordinates z^A = X^A/X⁰, where A = 1, …, n, and define an associated prepotential F(z¹, …, zⁿ) by F(z) = (X⁰)^{−2} F(X). Then the ε-Kähler metric ḡ of M̄ has the ε-Kähler potential given in (18), where F_A = ∂F/∂z^A. We note that we can identify M̄ locally with the submanifold {X⁰ = 1} of M. The simplest class of examples is provided by models with a quadratic prepotential, F(X) = (i_ε/2) η_IJ X^I X^J, where we take η_IJ to be real and non-degenerate. The potential for the tensor h is then a multiple of η_IJ X^I X̄^J. Evaluating this on the hypersurface X⁰ = 1, taking η₀₀ = 1, and setting η̄_AB = −η_AB, we obtain an ε-Kähler potential on M̄ which agrees, up to an overall sign, with the ε-Kähler potentials for the metrics on the spaces P(D) of ε-complex lines given in (6). The Gram matrix of the basis (19) defines a canonical scalar product ⟨·, ·⟩_can on ℝ^{4n} of signature (4k, 4ℓ) if ε = −1, or (2n, 2n) if ε = 1.
We will denote by O_ε(4k, 4ℓ) the pseudo-orthogonal group with respect to ⟨·, ·⟩_can, and by so_ε(4k, 4ℓ) its Lie algebra. Let us denote by J^can_α ∈ so_ε(4k, 4ℓ) the matrix which represents the endomorphism J_α with respect to the basis (19). Then Q^can = span{J^can_1, J^can_2, J^can_3} is a skew-symmetric ε-quaternionic structure on (ℝ^{4n}, ⟨·, ·⟩_can). The triple (ℝ^{4n}, ⟨·, ·⟩_can, Q^can) is our standard model for a pseudo-Euclidean vector space endowed with a skew-symmetric ε-quaternionic structure. We denote by Sp_ε(1) the group generated by the Lie algebra sp_ε(1) := Q^can, and by Sp_ε(k, ℓ) the centraliser of Sp_ε(1) in O_ε(4k, 4ℓ). The Lie algebra of that centraliser is a real form of the complex Lie algebra of type C_n. The inner product and ε-quaternionic structure are preserved by the group Sp_ε(1) · Sp_ε(k, ℓ). Notice that our notation is such that Sp(k, ℓ), k + ℓ = n, and Sp(2n, ℝ) are real forms of the same complex Lie group Sp(2n, ℂ).

The H ⊗ E formalism

Let E = ℂ^{2n} with standard basis B_E = (E_1, …, E_{2n}). On E one may define an anti-linear complex structure j_E and a non-degenerate skew-symmetric bilinear form ω_E satisfying certain compatibility formulae, where B*_E = (E^1, …, E^{2n}) is the basis of (ℂ^{2n})* dual to B_E and (η_AB) = diag(𝟙_k, −𝟙_ℓ), with n = k + ℓ. Complex conjugation is denoted by ρ_E. The group Sp(k, ℓ) preserves both j_E and ω_E, the group Sp(2n, ℝ) preserves both ρ_E and ω_E, and the symplectic form satisfies a reality condition. Let H = ℂ² denote a specific case of the above construction, where the standard basis is denoted by B_H = (h_1, h_2), the anti-linear complex structure by j_H, complex conjugation by ρ_H and the bilinear form by ω_H. Consider the 4n-dimensional complex vector space H ⊗ E with standard basis (h_A ⊗ E_μ), A = 1, 2; μ = 1, …, 2n. On H ⊗ E we may define the following: (i) two real structures j_H ⊗ j_E and ρ_H ⊗ ρ_E; (iii) three skew-symmetric endomorphisms J_1, J_2, J_3 that satisfy the ε-quaternion algebra and act on the H-factor through the Pauli matrices σ_α. One may use the above data to construct an example of an ε-quaternionic Hermitian vector space (V, ⟨·, ·⟩, Q) of real dimension 4n. Since all ε-quaternionic Hermitian vector spaces of a given dimension are isomorphic, we may state this as Proposition 1. Indeed, a standard co-frame of H* ⊗ E* may be matched with an ε-quaternionic co-frame of V* through explicit expressions.

ε-quaternionic structure on the tangent bundle

The above notions can easily be transferred to vector bundles. For instance, a (fibre-wise) ε-quaternionic structure in a vector bundle E → M is a subbundle Q ⊂ End(E) such that Q_p ⊂ End(E_p) is an ε-quaternionic structure on E_p for all p ∈ M. One may introduce pairwise anti-commuting local sections J_1, J_2, J_3 of Q defined over an open subset U ⊂ M satisfying the ε-quaternion algebra, such that Q_p = span{(J_α)_p | α = 1, 2, 3} for all p ∈ U. A fibre-wise ε-quaternionic structure on the vector bundle TM is called an almost ε-quaternionic structure on M. An almost ε-quaternionic structure Q on M is called an ε-quaternionic structure if it is parallel with respect to a torsion-free connection, which can be characterised by the property that the covariant derivative of any section of Q in the direction of any vector field is again a section of Q. If M is endowed with a pseudo-Riemannian metric, then an almost ε-quaternionic structure on M is called Hermitian if it consists of skew-symmetric endomorphisms.
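The ε-quaternion algebra realised on the H-factor can be verified numerically. In the following numpy sketch we take J_α = −iσ_α for ε = −1 and the mixed choice (σ₁, σ₃, σ₁σ₃) for ε = 1; these are our illustrative normalisations, not necessarily those of the paper's expression (53), and on the full 4n-dimensional space the matrices would be tensored with the identity on the E-factor.

```python
import numpy as np

s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
s3 = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2)

def check_algebra(J1, J2, eps):
    J3 = J1 @ J2
    assert np.allclose(J1 @ J1, eps * I2)       # J1^2 = eps
    assert np.allclose(J2 @ J2, eps * I2)       # J2^2 = eps
    assert np.allclose(J3 @ J3, -I2)            # J3^2 = -1 in both cases
    assert np.allclose(J1 @ J2 + J2 @ J1, 0)    # pairwise anti-commuting
    print('epsilon =', eps, ': algebra OK')

check_algebra(-1j * s1, -1j * s2, -1)   # quaternions:      two complex structures
check_algebra(s1, s3, 1)                # para-quaternions: two para, one complex
```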
A pseudo-Riemannian manifold of real dimension 4n > 4 with almost ε-quaternionic Hermitian structure Q is called ε-quaternionic Kähler if Q is parallel with respect to the Levi-Civita connection. On a pseudo-Riemannian manifold (M, g) with almost ε-quaternionic Hermitian structure we may use Proposition 1 to make the local identification TM = H ⊗ E, where H and E are (at least locally defined) complex vector bundles of dimension 2 and 2n respectively, such that the metric and ε-quaternionic structure satisfy (21). We call a local complex co-frame of the form (U^{Aμ}) = (h^A ⊗ E^μ) an ε-quaternionic vielbein. The metric takes the form g = ε_{AB} ρ_{μν} U^{Aμ} ⊗ U^{Bν}, and on T*M an ε-quaternionic vielbein is subject to a reality condition. Recall that (η_AB) = diag(𝟙_k, −𝟙_ℓ), k + ℓ = n. An ε-quaternionic vielbein may be identified with an ε-quaternionic co-frame through the expressions given below Proposition 1. On a manifold with almost ε-quaternionic structure we call an adapted connection a connection on TM for which the almost ε-quaternionic structure is parallel. It is well known from the theory of G-structures that, with respect to a frame of the G-structure, the connection one-form of an adapted connection takes values in the Lie algebra g. An almost ε-quaternionic Hermitian structure corresponds to a G-structure with Lie group G = Sp_ε(1) · Sp_ε(k, ℓ), and therefore the connection one-form of an adapted connection takes values in sp_ε(1) ⊕ sp_ε(k, ℓ). Since this is a subalgebra of so_ε(4k, 4ℓ), an adapted connection is automatically metric-compatible. In an ε-quaternionic vielbein basis the connection one-form Ω of an adapted connection takes the block form (22), where p is a 2 × 2 matrix and q, s, t are n × n matrices subject to (anti-)symmetry and reality constraints, and the coefficients of the torsion tensor follow from the structure equations. Notice that the connection matrix (Ω^{Aμ}_{Bν}) has the following structure, see (22): (p^A_B) ∈ sp_ε(1) and (Ω^μ_ν) ∈ sp_ε(k, ℓ). If the torsion vanishes, then the adapted connection coincides with the Levi-Civita connection and the manifold is ε-quaternionic Kähler. Alternatively, if the Levi-Civita connection one-form takes values in sp_ε(1) ⊕ sp_ε(k, ℓ) when written in an ε-quaternionic vielbein basis, then the manifold is ε-quaternionic Kähler.

3 Dimensional reduction of four-dimensional vector multiplets

3.1 Four-dimensional vector multiplets

Our starting point is the bosonic part of the Lagrangian (23) for n = n_V^{(4)} N = 2 vector multiplets coupled to supergravity, as given by (7.9) of [3]. In the following we will explain each term appearing in this expression. R_4 and e_4 are the four-dimensional Ricci scalar and vielbein, and μ̂, ν̂, … are four-dimensional space-time indices. We employ a notation which applies to standard (Lorentzian) and Euclidean supergravity simultaneously. The main difference between Euclidean and standard vector multiplets is that the complex structure of the scalar manifold M̄ is replaced by a para-complex structure [1, 3], and thus we use the ε-complex notation introduced previously. From now on the parameter ǫ₁ distinguishes between Lorentzian space-time (ǫ₁ = −1) and Euclidean space-time (ǫ₁ = 1). The ǫ₁-complex scalar fields z^A are local coordinates of a PSǫ₁K manifold M̄ with metric ḡ = ḡ_AB dz^A dz̄^B, where ḡ_AB is the ǫ₁-Kähler metric with ǫ₁-Kähler potential K_ḡ given in (18).
For ǫ₁ = −1 this is the well-known projective special Kähler geometry of vector multiplets in the 'new conventions' of [67], while for ǫ₁ = 1 this is the projective special para-Kähler geometry of Euclidean vector multiplets which was defined in [3]. The scalar metric ḡ has positive signature (2n, 0) for ǫ₁ = −1 and split signature (n, n) for ǫ₁ = 1. The original construction of the vector multiplet Lagrangian in Lorentzian signature was performed using the superconformal calculus [6]. This employs an auxiliary theory of n+1 rigid superconformal vector multiplets with complex scalars X^I, I = 0, …, n, which are local coordinates of a CASK manifold M. After gauging the superconformal transformations the theory becomes gauge-equivalent to a theory of n vector multiplets coupled to Poincaré supergravity. This construction is reviewed in [21]. The scalar metric ḡ is obtained from the scalar metric g = N_IJ dX^I dX̄^J of the scalar manifold M of the associated superconformal theory by gauge-fixing the local symmetry group ℂ^* ≃ ℝ_{>0} × U(1), where ℝ_{>0} are dilatations, while the chiral U(1) transformations are part of the R-symmetry group U(1) × SU(2) of the N = 2 supersymmetry algebra. In [3] it was shown how this procedure can be adapted to Euclidean vector multiplets, where the scalar manifold M is a conical affine special para-Kähler manifold, and where the symmetry group is C^* ≃ ℝ_{>0} × O(1, 1). While in this paper we find it convenient to define projective special para-Kähler manifolds by dividing out the full group C^*, only the subgroup SO(1, 1) ⊂ O(1, 1) is part of the R-symmetry group SO(1, 1) × SU(2) of the Euclidean supersymmetry algebra. Consequently only the group ℝ_{>0} × SO(1, 1) is a symmetry of the superconformal Lagrangian. But as explained previously, dividing out the group ℝ_{>0} × SO(1, 1) leads to the same scalar manifold M̄, provided that we restrict to the subset on which the function −i_{ǫ₁}(X^I F̄_I − F_I X̄^I) is positive. The relations between the superconformal theories and the supergravity theories are given by ǫ₁-complex versions of the standard formulae of special Kähler geometry, which were presented in Section 2.2. It is possible to rewrite the scalar term using ǫ₁-complex scalar fields which are local coordinates of the associated CASǫ₁K manifold M, where the D-gauge −i_{ǫ₁}(X^I F̄_I − F_I X̄^I) = 1 has been imposed. Here g_IJ are the coefficients of the pullback h = π*ḡ of the PSǫ₁K metric ḡ to M, which are given by (15). The D-gauge restricts the scalars X^I to a real hypersurface S ⊂ M, and since the right-hand side is in addition invariant under local U_{ǫ₁}(1) transformations, the n+1 ǫ₁-complex fields X^I represent as many physical degrees of freedom as the fields z^A. While it is possible to gauge-fix the residual local U_{ǫ₁}(1) symmetry too, we prefer not to do so at this point, because this allows us to keep all expressions manifestly covariant under symplectic transformations. The field equations of N = 2 supergravity are invariant under electric-magnetic duality transformations, which act by Sp(2n+2, ℝ) transformations. Under these transformations (X^I, F_I)^T transforms as a vector, while the transformation of z^A = X^A/X⁰ is non-linear. The remaining two terms in (23) involve the abelian field strengths F^I_{μ̂ν̂} and their Hodge duals F̃^I_{μ̂ν̂}. As with the scalar term, the couplings I_IJ and R_IJ can be expressed in terms of the prepotential F(X⁰, …, Xⁿ).
The relevant formula is

𝒩_IJ = F̄_IJ + i_{ǫ₁} (Nz)_I (Nz)_J / (zNz),

where we defined z⁰ = 1, and where N_IJ are the coefficients of the metric g on the CASǫ₁K manifold M, which are given by (9). We use a short-hand notation where (Nz)_I := N_IJ z^J and zNz := z^I N_IJ z^J. The negative imaginary part −I_IJ of the vector coupling matrix 𝒩_IJ determines the kinetic terms for the vector fields. Therefore it must be positive definite in Lorentzian space-time signature. It is known that by choosing g = N_IJ dX^I dX̄^J to have signature (2n, 2), the scalar couplings ḡ_AB and vector couplings −I_IJ are positive definite. We remark that both −I_IJ dX^I dX̄^J and g = N_IJ dX^I dX̄^J can be viewed as metrics on the scalar manifold M, and are related to one another by a simple geometric operation which flips the signature along a complex one-dimensional subspace [45]. In the Euclidean case the metric −I_IJ dX^I dX̄^J always has split signature, irrespective of the signature of the real matrix −I_IJ. If the Euclidean theory has been obtained by dimensional reduction of a five-dimensional theory with respect to time, then the matrix −I_IJ has Lorentz signature (n, 1), with the negative-definite direction corresponding to the Kaluza-Klein vector. The metrics g and ḡ are para-Kähler and have split signature (n+1, n+1) and (n, n), respectively. Electric-magnetic duality acts on the gauge fields through the linear action of Sp(2n+2, ℝ) on the vector (F^I_{μ̂ν̂}, 𝒩_IJ F^J_{μ̂ν̂})^T.

3.2 Reduction to three dimensions

We now carry out the reduction of the four-dimensional vector multiplet Lagrangian (23) to three dimensions. This type of calculation is standard, so we will not give many details, though we need to specify our notation and conventions. If we start with Lorentzian signature (ǫ₁ = −1) in four dimensions, we have the option of reducing either over space or over time, which will be distinguished by a new parameter ǫ₂, where ǫ₂ = −1 for spacelike reduction and ǫ₂ = 1 for timelike reduction. If we start with a Euclidean theory (ǫ₁ = 1), then we can only reduce over space, so ǫ₂ = −1. All three cases will be treated simultaneously up to a certain point. The reduction is performed along the lines of [48], with the following modifications: (i) we now include the reduction of four-dimensional Euclidean theories, (ii) some fields have been renamed, (iii) the definition of the Riemann tensor has been changed by an overall sign. For completeness, we briefly review the relation between the four-dimensional and three-dimensional quantities. Four-dimensional indices μ̂, ν̂, … are split into three-dimensional indices μ, ν, … and the index y, which refers to the dimension we reduce over. We decompose the four-dimensional metric into a three-dimensional metric g_{μν}, the Kaluza-Klein vector V_μ and the Kaluza-Klein scalar φ. The four-dimensional vector fields have been decomposed into a scalar part ζ^I = A^I_y and a vector part A^I_μ − ζ^I V_μ, with the second term restoring manifest gauge invariance. The three-dimensional field strengths V_{μν} and F^I_{μν} have then been dualised into scalars φ̃ and ζ̃_I; see [48] for details. Instead of the four-dimensional scalars z^A we use the corresponding superconformal scalars X^I and the degenerate tensor g_IJ. The resulting three-dimensional Lagrangian is given in (25). For ǫ₁ = ǫ₂ = −1 it agrees, up to conventional choices, with [25], and for ǫ₁ = −ǫ₂ = −1 it agrees, up to the above-mentioned changes in conventions, with [48].
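For orientation, a standard Kaluza-Klein ansatz realising this field decomposition can be written as below. The precise powers of e^φ are convention-dependent, so this version is only meant to fix the roles of g_{μν}, V_μ, φ and ζ^I; it is not necessarily the normalisation used in [48].

```latex
ds_4^2 \;=\; \epsilon_2\, e^{\phi}\bigl(dy + V_\mu\, dx^\mu\bigr)^2
\;+\; e^{-\phi}\, g_{\mu\nu}\, dx^\mu dx^\nu ,
\qquad
A^I \;=\; \zeta^I \bigl(dy + V_\mu\, dx^\mu\bigr)
\;+\; \bigl(A^I_\mu - \zeta^I V_\mu\bigr)\, dx^\mu .
```

The sign ǫ₂ makes the reduction circle spacelike or timelike, and contracting A^I with ∂_y reproduces ζ^I = A^I_y.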
As explained in [48], one can absorb the Kaluza-Klein scalar φ into the scalar fields X I by the field redefinition Y I = e φ/2 X I . These fields are now related to the Kaluza-Klein scalar φ by the D-gauge condition So φ will be now considered as a function of the independent variables Y I . Geometrically, we interpret φ as a coordinate along the orbit of the homothetic action of Ê >0 on M . Using homogeneity, we can rewrite the scalar terms as follows: . Note that while both expressions take the same form, on the left hand side the fields X I are subject to the D-gauge, while φ is an independent field. In contrast on the right hand side φ is considered to be a dependent field, which can be expressed in terms of the Y I . Since both sides of the equation are invariant under local U ǫ 1 (1) transformations, both sets of fields represent the same 2n + 1 independent physical real degrees of freedom. Now we interpret the Y I as ǫ 1 -holomorphic special coordinates on M . We can therefore rewrite the theory in terms of the associated special real coordinates q a , defined by decomposing and setting (q a ) = (x I , y I ) T . Note that in this parametrisation the Kaluza-Klein scalar is expressed in terms of q a by where we recall that H denotes the Hesse potential. We also defineq a = 1 2 (ζ I ,ζ I ) T and remark that both q a andq a are symplectic vectors while the dualised Kaluza-Klein vectorφ is a symplectic scalar. As explained in detail in [48], the Lagrangian (25) can be written in terms of the 4n + 5 fields (q a ,q a ,φ) with all couplings expressed using the Hesse potential H, the tensor field and the constant matrix Ω ab representing the symplectic form (10): Since all local degrees of freedom have been converted into scalars, the Lagrangian (27) is a non-linear sigma model coupled to gravity. The 4n+5 real scalar fields (q a ,q a ,φ) are local coordinates of its target space P . Due to the local U ǫ 1 (1) symmetry, there are only 4n + 4 physical degrees of freedom, and the symmetric tensor field on P defined by the Lagrangian is invariant and degenerate along the orbits of the U ǫ 1 (1)-action. By gauge-fixing this symmetry we can obtain a sigma model with a (4n + 4)-dimensional target manifoldN , equipped with a non-degenerate metric. Since U ǫ 1 (1) acts on the symplectic vector q a , whileq a andφ are invariant, such a gauge fixing will break the manifest symplectic invariance of the sigma model with target P . Therefore it is advantageous to describeN in terms of the larger space P . Geometric data on a conic affine special ε-Kähler manifold The starting point for our construction of the c-map will be a regular, simply connected CASεK manifold M , see Section 2.2. The purpose of this section is to introduce a global orthogonal co-frame on M and to express certain geometrical data in terms of this co-frame. We are specifically interested in the cubic tensor C = ∇g, the difference tensor S = D − ∇, and the pullback of the Levi-Civita connection one-form on the corresponding PSεK manifold σ = π * σ . The necessary expressions are given by (38), (39) and (41) respectively. Let (M, J, g, ∇, ξ) be a regular, simply connected CASεK manifold of dimension 2n + 2, which in the case ε = −1 has signature (2k, 2ℓ + 2), k + ℓ = n. 
Recall that M is a principal * ε -bundle over a PSεK manifold (M ,J ,ḡ) of dimension 2n, with fundamental vector fields where Moreover, by choosing the orthonormal frame on U ⊂M adapted to the ε-complex structure we can further assume that J(e A ) = e A+n , J(e A+n ) = εe A , A = 1, . . . , n . In such a frame the ε-complex structure J(e p ) = J m p e m is represented by the constant matrix Such a choice of frame is not unique, with any two choices differing by a gauge transformation with values in U ε (k, ℓ) ⊂ SO ε (2k, 2ℓ) := SO(2k, 2ℓ) , ε = −1 SO(n, n) , ε = +1 , n = k + ℓ . It is useful to consider also the inclusion map ι : H → T M , which is characterised by We have the equation and the matrix P t has the following properties: where (J m p ) is the constant matrix (29) representing the tensor J| H : H → H in the frame (e m ). Proof: Part (i) follows immediately from the fact that ker ϕ = span{ξ, Jξ}. For part (ii) one may use the fact that and therefore η mp (P m a P p b ) = h ab , cf. (16). For part (iii) we note that − 1 2H g and h coincide when restricted to H Since this is non-degenerate on H we can invert this formula Plugging in (e m , e p ) gives expression (iii). Using (iii) one can easily check that T satisfies the equation Using (i) one can also check that the vectors T a m ∂ ∂q a are perpendicular to ξ and Jξ. In view of the characterisation (32), this proves that the matrix T represents the inclusion map ι : H → T M . The latter property implies (34). For part (iv) we note that ϕ * • J * = J * • ϕ * : H * → T * M . Acting on e m this gives J * m p e p = P m a J * a b dq b . Plugging e n into this expression and using (34), (33) and (11) gives the desired result. Let us now turn our attention to the cubic tensor C = ∇g = H abc dq a ⊗ dq b ⊗ dq c , where H abc are the triple derivatives of the Hesse potential. The cubic tensor is related to the difference tensor which is immediately obtained from g = ω(J·, ·). Differentiating the equation g(JX, JY ) = εg(X, Y ) with respect to ∇ and using the equation (36) one can prove that C(·, J·, J·) = εC(·, ·, ·) . The cubic tensor is degenerate with kernel containing span{ξ, Jξ} but not * ε -invariant, and therefore not well-defined onM . In the above local frame of H * ⊂ T * M we may write C = C mpq e m ⊗ e p ⊗ e q , where the components are symmetric and satisfy Due to (36) the components of the difference tensor S = S m pq e m ⊗ e p ⊗ e q are given by and the one-forms X → S m p (X) = e m (S X e p ) by It follows from (37) that We end this section by computing the pull-back to M of the Levi-Civita connection one-form on the corresponding PSεK manifoldM . Lemma 2. Let σ ∈ Ω 1 (M, so ε (k, ℓ)) denote the pull-back of the Levi-Civita connection one-formσ onM . The components of σ in the above local frame of H * ⊂ T * M are given by where the one-form v was defined in (30). Proof. The Levi-Civita connection one-formσ onM is uniquely determined by the requirement that it is metric compatible and torsion-free. In terms of the pull-back σ the metric compatibility condition implies that which is easily seen to be satisfied by (41). The torsion-free condition implies that which we will now show is satisfied by (41). Using Lemma 1 (iii) and (iv) we have . To calculate the last term we have used (39) and (33). It vanishes in virtue of the symmetry of S. Calculating ( * ) individually using (12), (31) and Lemma 1 (i-ii) we find and therefore expression (43) is satisfied. 
By Proposition 3 of Section 5, the solution to (42) and (43) is unique and, moreover, is precisely the pull-back of the Levi-Civita connection one-form on M̄.

4 The c-map for various spacetime signatures

In this section we will construct the c-map target manifold (N̄, g_N̄, Q_N̄). We will present this construction for the spatial, temporal and Euclidean c-maps in a unified way, using the (ǫ₁, ǫ₂)-notation introduced previously. We will begin with the topological data on N̄, before moving on to the metric g_N̄ and the ε-quaternionic structure Q_N̄. Consider a regular, simply connected CASǫ₁K manifold M of dimension 2n+2. Given M one may construct the (4n+5)-dimensional manifold P = TM × ℝ, that is, the product of the tangent bundle of M with the real line. On P we have 4n+5 global real functions (q^a, q̃_b, φ̃), which are defined as follows. We start with the globally-defined functions (q^a) on M, introduced before, which restrict to special real coordinates in a neighbourhood U of any point of M. The function q̃_b on TM is defined by the property that it takes the value v_b on the vector v^a ∂/∂q^a. We also have the natural projections P → TM → M. On P one may consider the principal action of the subgroup U_{ǫ₁}(1) ⊂ ℂ_{ǫ₁}^*. In this way, one may interpret P as a principal U_{ǫ₁}(1)-bundle over a manifold N̄. Let Z_P be the vector field generating the infinitesimal U_{ǫ₁}(1)-action on P. This is precisely the horizontal lift of the vector field Jξ on M, and is given by (45). We define the c-map target manifold as the orbit space N̄ = P/U_{ǫ₁}(1), which by construction has dimension 4n+4. This information is summarised in Figure 1. Notice that in the case ǫ₁ = 1 the manifold N̄ has at least two connected components, distinguished by the sign of the Hesse potential H. The non-linear sigma model of the dimensionally reduced Lagrangian (27) defines on P the symmetric bilinear form g′, see (49), where ǫ₁ and ǫ₂ are determined by the different c-maps according to the rule (2), and where H ≠ 0 is now allowed to change sign and the PSǫ₁K metric is allowed to be indefinite. The bilinear form g′ has a one-dimensional kernel ℝZ_P and is invariant under the U_{ǫ₁}(1)-action on P. It may therefore be pushed down to give a well-defined metric g_N̄ on N̄. This procedure makes sense even in the case ǫ₁ = ǫ₂ = 1, which we have not given a physical interpretation so far. As the next proposition shows, this case gives a metric equivalent to the one in the case ǫ₁ = −ǫ₂ = 1.

Proposition 2. For the case ǫ₁ = 1, the pull-back of g′ under the action of e ∈ C^* is the bilinear form obtained from g′ by replacing ǫ₂ with −ǫ₂.

Proof: The pull-backs of the functions (q^a, q̃_b, φ̃) can be computed directly, cf. (45). In fact, the first term is computed using (17), where J^a_b are the components of the para-complex structure on M, pulled back to P. Notice that from this calculation we also obtain (ϕ^P_e)* H_a = −2ǫ₁ Ω_ab q^b. Using these formulae together with the identities (ϕ^P_e)* H = −H and (ϕ^P_e)* H_ab = H_ab, which follow from the fact that e acts anti-isometrically on the metric g of M, the result is easy to check. Recall that the manifold (N̄, g_N̄) is obtained by taking the quotient of P with respect to the action of U_{ǫ₁}(1) ⊂ ℂ_{ǫ₁}^*. In the case ǫ₁ = 1 the action of e ∉ U_{ǫ₁}(1) on P induces an involution on N̄ which interchanges the connected components of N̄. This ℤ₂-action does not preserve the metric g_N̄, but maps (N̄, g_N̄)|_{(ǫ₁,ǫ₂)=(1,−1)} to (N̄, g_N̄)|_{(ǫ₁,ǫ₂)=(1,1)}, and therefore both manifolds are globally isometric.
For this reason one may take either ǫ 2 = −1 or ǫ 2 = +1 for the Euclidean c-map at the expense of working with a manifoldN which is not connected but naturally contains both possible choices. A co-frame of P adapted to the pull back of the c-map metric On a local patch of P it is convenient to introduce the following 4n + 4 linearly independent one-forms: We will refer to the collection L * = (e m , u 1 , u 2 ,ê n ,û 1 ,û 2 ) m,n=1,...,2n as a local partial co-frame on P . Note that Z 0 P = span L * . The globally-defined one-forms u 1 , u 2 ,û 1 ,û 2 ∈ L * are independent of the choice of the functions q a and therefore uniquely defined. The one-forms e m ,ê n ∈ L * are unique up to U ǫ 1 (k, ℓ) ⊂ SO ǫ 1 (2k, 2ℓ) gauge transformations, which act according to The bilinear form is written in terms of the partial co-frame L * as where (η mp ) is given by (28). Consider the globally-defined one-form This satisfies v(Z P ) = 1 and is invariant with respect to the U ǫ 1 (1)-action on P . Therefore it may be interpreted as a connection on the principal U ǫ 1 (1)-bundle P →N . We extend the partial local co-frame L * to a local co-frame (L * , v) on P . It is important to note that although g ′ is invariant under the U ǫ 1 (1)action this is not necessarily true for the individual one-forms in L * . In fact, only e m , u 1 , u 2 are invariant under the action of U ǫ 1 (1), with the remaining one-forms transforming according to The following lemma can be directly calculated using the results of Section 4.1. It will be used later to extract the Levi-Civita connection one-form on (N , gN ). The exterior derivatives of the one-forms in the co-frame (L * , v) are given by Be careful to note the index convention A = 1, . . . , n and m, p = 1, . . . , 2n. The matrix-valued one-forms S and σ were defined on M in the previous subsection and are pulled-back to P in the above expressions. The constant matrices J and d were also defined in the previous subsection. The appearance of v in the expressions for dê m , dû 1 , dû 2 is due to the fact that they are not invariant under the flow of Z P . The ε-quaternionic structure We now turn our attention to the ε-quaternionic structure. Using the connection v we may decompose the tangent space into T P = ÊZ P + ker v and the dual tangent space into T * P = Êv + Z 0 P . The vector space ker v is dual to Z 0 P , which we recall is spanned by L * . There exists a unique basis L = (e m , u 1 , u 2 ,ê p ,û 1 ,û 2 ) m,p=1,...,2n of ker v dual to L * . In local coordinates this is given by which is the ε-quaternion algebra up to relabelling. Since the expressions for J 1 , J 2 , J 3 are invariant under transformations of the form (48) they are independent of the choice of frame L of ker v in the class of frames considered above and are therefore globally-defined on P . In Table 2 we summarise which of these endomorphisms are almost complex and which are almost para-complex when restricted to ker v ⊂ T P . It is interesting to define two additional endomorphism fields (which are also independent of the choice of L as above) which satisfy J ′ 3 2 ker v =J 2 ker v = ǫ 1 and are skew-symmetric with respect to g ′ . The previously defined endomorphism J 3 differs from J ′ 3 by sign on the two-dimensional subspace spanned by (u 1 , u 2 ) and fromJ by sign on the (2n + 2)-dimensional subspace spanned by (e m ,û 1 ,û 2 ). Neither J ′ 3 nor J form part of the ε-quaternion algebra. 
Using Lemma 1, J̃ can be written in terms of U_{ǫ₁}(1)-invariant vectors as in (57). Here we have used the splitting of the manifold M, parametrised by the coordinates q^a, into the level sets of the function φ defined in (26). In particular, we can include φ in a new local coordinate system on M consisting of φ together with a choice of local coordinates on a level set of φ. The coordinates chosen on one level set are extended to the other level sets by imposing that the coordinates are invariant under the flow of u_1. In the resulting new coordinate system one computes u_1 = 2∂/∂φ. It was shown in [64] that the almost complex structure J_3 on the target manifold of the spatial c-map is integrable. The other parts of this theorem will be proved on a case-by-case basis in Sections 5.3–5.5. For easier reference we have stated the result as three cases (a)-(c). The endomorphisms J_1, J_2, J_3 define a (fibre-wise) ε-quaternionic structure Q_P = span{J_1, J_2, J_3} on ker v ⊂ TP, which is skew-symmetric with respect to the metric g′|_{ker v}. Due to the transformation properties under U_{ǫ₁}(1), this induces an almost ε-quaternionic Hermitian structure Q_N̄ on (N̄, g_N̄). In the next section we will show, by explicit calculation, that Q_N̄ is parallel with respect to the Levi-Civita connection, which proves Theorem 2. In all three cases the reduced scalar curvature ν = scal/(4(n+1)(n+3)) is equal to −2. This can be seen by comparing the Sp_ε(1)-curvature of the c-map target manifold, R^H = dp + p ∧ p, with the Sp_ε(1)-curvature R^H_0 of the ε-quaternionic projective space ℍ_ε P^{n+1}. Here the matrix J^H_α is given in expression (73) and ω_α(·, ·) := ǫ_α g′(J_α·, ·) is the fundamental two-form associated with the almost ǫ_α-complex structure J_α. The matrix p is given for each c-map separately in Sections 5.3–5.5. For an ε-quaternionic Kähler manifold the above Sp_ε(1)-curvature tensors are related by R^H = ν R^H_0 [62]. Computing both sides one finds that ν = −2 in all cases.

5 Levi-Civita connection and integrable ε-complex structures

In this section we will calculate the Levi-Civita connection on the target manifold (N̄, g_N̄) of the c-map for various spacetime signatures. We will also show that the two skew-symmetric almost ε-complex structures J_3 and J′_3 introduced in the previous section are integrable. In order to compute the Levi-Civita connection and to check the integrability of the structures J_3 and J′_3, one needs to calculate exterior derivatives of an appropriate local co-frame on N̄. To do this we will make use of the partial co-frame (47) on the U_{ǫ₁}(1)-principal bundle P → N̄. There are two complementary approaches one may take when performing these calculations:

1. Use a local section of P → N̄ to pull back the partial co-frame from P to a co-frame of N̄ and then perform calculations.

2. Perform calculations directly on P using the partial co-frame and then use a local section to pull back the results to N̄.

We will adopt approach 2, since one only needs to make a choice of local section after all calculations have been performed. There is a slight complication due to the fact that the partial co-frame (47) is not invariant under the flow of the fundamental vector field Z_P and therefore not projectable to N̄; we address this in Section 5.1. The relation between the two approaches is discussed in Section 5.2.
The explicit calculation of the Levi-Civita connection and integrability of the ε-complex structures for various spacetime signatures are presented case-by-case in the remaining sections. Calculating on P using a non-invariant partial co-frame In this section we want to discuss the following problem. Let (M, g) be a pseudo-Riemannian manifold with almost ε-quaternionic Hermitian structure Q and π : P → M be a principal bundle with structure group G. Suppose we are given pointwise linearly independent one-forms θ i , i = 1, . . . , n = dim(M ) on some open subset U ⊂ P , which are horizontal in the sense that they vanish on any vertical vector, such that where (η ij ) = diag(½ k , −½ ℓ ) is the Gram matrix of an orthonormal frame in standard ordering. We will assume that π * T M is trivial on U . Systems (θ i ) as above will be called partial co-frames of P over U . Notice that given a principal connection on P and a basis of g = Lie G, any partial co-frame of P over U is canonically extended to a co-frame of P over U . The problem is to show that Q is parallel with respect to the Levi-Civita connection, and, hence, the manifold (M, g) is ε-quaternionic Kähler. This involves computing the Levi-Civita connection of g in terms of (θ i ), without assuming that the forms θ i are G-invariant and, hence, projectable to M . Proposition 3. Under the above assumptions, the system of equations has a unique solution σ = (σ i j ) ∈ Ω 1 (U, so(k, ℓ)). Given a second system of n linearly independent horizontal one-forms (θ i ) on U ⊂ P , the solutioñ σ = (σ i j ) of the system dθ i +σ i j ∧θ j = 0 , is related to σ byσ Proof: We first prove the uniqueness. Suppose that σ ′ is a second solution of (58). Then the difference ∆ = (∆ i j ) = σ ′ − σ ∈ Ω 1 (U, so(k, ℓ)) satisfies the equations ∆ i j ∧ θ j = 0. For the coefficients ∆ i jk in the expansion ∆ i k = ∆ i jk θ j this implies ∆ i jk = ∆ i kj . Therefore ∆ jik := η il ∆ l jk is antisymmetric in (i, k) and symmetric in (j, k), which implies ∆ = 0. One can easily check that given a solution σ of (58) and a gauge transformation A ∈ C ∞ (U, O(k, ℓ)),σ = −(dA)A −1 + AσA −1 is a solution of (59), if we defineθ i = A i j θ j . Now we prove the existence. Given the above hypothesis on U , we can assume without restriction of generality that U = π −1 (U 0 ) is the preimage of an open subset U 0 ⊂ M on which an orthonormal co-frame (θ i 0 ) exists. It is sufficient to remark that the pullback of the connection one-form σ 0 of the Levi-Civita connection of (M, g) with respect to the co-frame (θ i 0 ) gives a solution of (58), where (θ i ) = (π * θ i 0 ). The equation (58) is in fact obtained as the pullback of the equation dθ 0 + σ 0 ∧ θ 0 = 0, which expresses the vanishing of the torsion of the Levi-Civita connection of (M, g). Here θ 0 is the column vector with entries θ i 0 . The almost ε-quaternionic Hermitian structure Q on M induces a (fibre-wise) ε-quaternionic Hermitian structure Q P in the normal bundle N = T P/T v P to the fibres of P → M , where T v P ⊂ T P denotes the vertical distribution. The ε-quaternionic structure Q P is Hermitian in the sense that it consists of endomorphisms which are skew-symmetric with respect to the (fibre-wise) metric π * g in N. By construction Q P is invariant under the G-action on N induced by the principal G-action on P . 
Conversely, a fibre-wise skew-symmetric ε-quaternionic structure Q P on (N, π * g), which is invariant under the G-action on N, induces an almost ε-quaternionic Hermitian structure Q on M , which may be parallel or not. Proposition 4. Given a G-invariant skew-symmetric fibre-wise εquaternionic structure Q P on N the induced almost ε-quaternionic Hermitian structure Q on (M, g) is parallel with respect to the Levi-Civita connection if the solution of (58) takes values in the Lie algebra sp ε (1) ⊕ sp ε (k, ℓ), provided the partial co-frame (θ i ) is ε-quaternionic. Proof: Consider an open subset U ⊂ P on which an ε-quaternionic partial co-frame (θ i ) is defined. We may assume without restriction of generality that U = π −1 (U 0 ) is the preimage of an open subset U 0 ⊂ M on which an ε-quaternionic co-frame (θ i 0 ) exists. This may be pulled back to give another ε-quaternionic partial co-frame (θ i ) = (π * θ i 0 ). Since both (θ i ) and (θ i ) are ε-quaternionic partial co-frames they are related to one-another by a gauge transformation of the form A = (A i j ) ∈ C ∞ (U, Sp ε (1) · Sp ε (k, ℓ)). Let us denote by σ the solution of (58) in the basis (θ i ) and byσ the solution in the basis (θ i ). Proposition 3 shows that in order to compute the Levi-Civita connection of a manifold (N , gN ) in the image of the c-map it is sufficient to solve the equation (58) locally on P without having to assume that the partial coframe (θ i ) is projectable. If the solution to (58) takes values in sp ε (1) ⊕ sp ε (k, ℓ) in an ε-quaternionic partial co-frame then the manifold (N , gN ) is ε-quaternionic Kähler by Proposition 4. Alternative approach: calculating onN using a co-frame Let us now briefly discuss an alternative way of calculating exterior derivatives and the Levi-Civita connection directly on the target manifold (N , gN ) in the image of the c-map. Let U ⊂ P be an open set on which the partial co-frame (47) is defined. Consider any local section s : U 0 → P with values in U , for example the local section defined by the equation x 0 = 0. (Recall that x 0 = q 0 is one of the functions (q a ,q b ,φ) on P introduced in Section 4.2.) We may use the section s to define a co-frame on U 0 ⊂N given by It is then possible to calculate the exterior derivatives and the Levi-Civita connection in this local co-frame onN . One may relate this approach to that of Section 5.1 as follows. Since the exterior derivative commutes with the pull-back of a differentiable map we have where the exact expressions on the RHS can be read off from (51). Moreover, from Proposition 3 it follows that the Levi-Civita connection σ 0 onN in the basis (61) is given by the pull-back of the unique solution σ of equation (58) in the basis (47), which is calculated in the following sections. The spatial c-map In this section we consider the reduction over space from 3 + 1 to 2 + 1 dimensions. This means that one must set ǫ 1 = −1 and ǫ 2 = −1 in the expressions in Section 4.2. Recall from Section 3.2 that the Hesse potential H is assumed to be negative. In order to expose the quaternionic geometry we define the complex partial co-frame on P Recall that X I = e −φ/2 Y I , and, due to homogeneity, N IJ (X,X) = N IJ (Y,Ȳ ) and N IJ (X,X) = N IJ (Y,Ȳ ). We have locally defined the matrix (P A I ) with entries where Π a I represents the holomorphic projection from the special holomorphic coordinates Y I to special real coordinates q a : Notice that P A I Y I = 0, and, hence, P A I dX I = e −φ/2 P A I dY I . 
Using the local section s = {Im(X 0 ) = 0} (= {x 0 = 0}) of P →N discussed in Section 5.2 one can pull-back (62) to the complex orthonormal co-frame on (N , gN ) presented in [25] 10 . Proposition 5. The complex partial co-frame (62) is related to the real partial co-frame (47) introduced in Section 4.2 by The one-form v may be written as Proof: Using e φ = −2H and Y I = x I + iu I , F I = y I + iv I , H a = (2v I , −2u I ) T , the first two expressions are calculated to be Comparing with the explicit expressions in (47) gives the desired result. Next, we observe that Using the fact P A I Y I = 0 along with (64) and (63) one may write Making use of the identity 4Π a I N IJΠ b J = H ab + i 2 Ω ab , which can be easily verified using (13) and (64), along with the expression for J * a b given by the components of (11), we can write From (35) it follows that P m b J * b a = J * m p P p a , and, hence, E A =ê A +iê A+n . Lastly, we calculate where in the second line we used (9) and in last line (12). The exterior derivatives of the one-forms in the complex co-frame may be written as [25] (see also [69] for the indefinite case) where σ A B := σ A B + iσ A+n B and S A BC := S A BC + iS A+n BC . These expressions may be checked using (51) and the identities (40) and (44). Proof of Theorem 1 (a): The following proof that J 3 is integrable was provided in [64]: a basis of the +i eigendistribution of J * 3 is given by B (1,0) = ū, v,ē A , E A . Each term in the exterior derivative of any element in B (1,0) contains a one-form in the set B (1,0) . Therefore the distribution is integrable by the Newlander-Nirenberg theorem, hence the almost-complex structure J 3 is integrable. We now consider the integrability of J ′ 3 andJ. A basis of the +i eigendistribution of J ′ 3 * is given by B ′(1,0) = ū,v,ē A , E A , and by the same argument as above J ′ 3 is integrable. A basis of the +i eigendistribution of J is given by u, v, e A , E A , and thereforeJ is integrable if and only if S A BC = S A+n BC = 0, which is the case if and only if the cubic tensor C, defined in (36), vanishes. This is true if and only if the holomorphic prepotential F , or, equivalently, the Hesse potential H on the corresponding CASK manifold M is a quadratic polynomial. Proof of Theorem 2 (a): The complex one-forms, along with their conjugates, may be gathered together into the quaternionic vielbein In this co-frame the Levi-Civita connection one-form decomposes according to (22), where p, q, t are given by [25] p = The quaternionic structure is therefore parallel with respect to the Levi-Civita connection. Let us briefly explain how one may check that the above expression for the Levi-Civita connection is correct. It is obvious from the formalism that the above expression defines a metric connection so it suffices to check that its torsion is zero. In terms of an ε-quaternionic vielbein the latter condition is given by dU Aµ + Ω Aµ Bν ∧ U Bν = 0 . This can be naturally split into two separate sets of equations where we have defined In the quaternionic case s =t and f AI = ǫ ABēBJ , and therefore the second set of equations follows from the first set by complex conjugation. However in the para-quaternionic case, which we will deal with in the following sections, the second set of equations are not implied by the first, and must be checked independently. 
Let us end by explicitly checking, for instance, that the formula for dE^A obtained from (68) coincides with the exterior derivative of E^A as given in (66); in this computation we have omitted writing the symbol for the wedge product.

5.4 The temporal c-map

We now consider the reduction over time from 3+1 to 3+0 dimensions. In this case we must set ǫ₁ = −1 and ǫ₂ = 1. Recall that in our construction of the c-map the spatial and temporal c-maps have the same target manifold but different metrics. In particular, we may use the same partial co-frame L* on P defined by (47) in both cases. The almost complex structure J̃ in the case (ǫ₁, ǫ₂) = (−1, 1) coincides with −J̃ in the case (ǫ₁, ǫ₂) = (−1, −1), except for its action on the two-dimensional subspace spanned by (u_1, u_2), where it acts with opposite sign. Taking this into account, one may use the same argument as in the proof of Theorem 1 (a) to show that J̃ is integrable if and only if C = 0. Let us define the real partial co-frame (69), which we gather together into the para-quaternionic vielbein. One may use Proposition 5 to write the vielbein in terms of the real and imaginary parts of the complex co-frame (62). Notice that the above expression for the para-quaternionic vielbein is not related to the expression for the spatial c-map quaternionic vielbein by replacing complex coordinates with para-complex coordinates. However, as we will explain in the next section, such a relationship does exist for the vielbeins of the spatial and Euclidean c-maps.

Proof of Theorem 2 (b): In the frame (69) the Levi-Civita connection one-form decomposes according to (22), where the blocks p, q, t, s can be computed explicitly. The para-quaternionic structure is therefore parallel with respect to the Levi-Civita connection.

5.5 The Euclidean c-map

We now consider the reduction from 4+0 to 3+0 dimensions. In this case we make the choice ǫ₁ = 1, but ǫ₂ may be left arbitrary. Let us define the real partial co-frame on P given in (70).

Proposition 6. The following para-complex partial co-frame (and its para-complex conjugate) is related to the above real partial co-frame by replacing the para-complex unit i_{ǫ₁} with 1: (71), where dq^a =: Π^a_I dY^I + Π̄^a_I dȲ^I and P^A_I := e^{φ/2}(P^A_a + i_{ǫ₁} P^{A+n}_a) Π^a_I. The one-form v may be written as in Proposition 5.

Proof. The proof is analogous to the proof of Proposition 5. In the para-complex case one must use the corresponding identities for e^A. The exterior derivatives of the one-forms in the real partial co-frame can be computed from Lemma 3.

Proof of Theorem 1 (c): We first consider J_3. A basis of the +1 eigendistribution of J*_3 is given by B^+ = (ũ, v, ẽ^A, E^A). Each term in the exterior derivative of any element in B^+ contains a one-form in the set B^+. Therefore the distribution is integrable by Frobenius' theorem. A basis of the −1 eigendistribution of J*_3 is given by B^− = (u, ṽ, e^A, Ẽ^A), and by the same argument it is also an integrable distribution. Therefore the almost para-complex structure J_3 is integrable. Let us now consider J′_3 and J̃. A basis of the +1 eigendistribution of J′_3* is given by B′^+ = (ũ, ṽ, ẽ^A, E^A) and a basis of the −1 eigendistribution by B′^− = (u, v, e^A, Ẽ^A). By the same argument as above, J′_3 is integrable. A basis of the +1 eigendistribution of J̃ is given by (u, v, e^A, E^A) and of the −1 eigendistribution by (ũ, ṽ, ẽ^A, Ẽ^A). Therefore J̃ is integrable if and only if the cubic tensor C vanishes.
One may gather together the elements of the real partial co-frame (70) into the para-quaternionic vielbein. Proposition 6 shows that one may replace the complex unit i and holomorphic coordinates in the formal expression for the spatial c-map quaternionic vielbein (67) with the para-complex unit i_{ǫ₁} and para-holomorphic coordinates in order to obtain the above expression for the para-quaternionic vielbein in the Euclidean c-map with (ǫ₁, ǫ₂) = (1, −1). The three endomorphisms J₁, J₂, J₃ defined in (53) correspond to three 2-by-2 matrices J^H_1, J^H_2, J^H_3. Notice that in the last three cases we could have used the same basis (τ_α). The reason not to do so was to allow for the unified expression (53) for (J_α) in terms of the orthonormal basis.

Proof of Theorem 2 (c): In the basis (72) the Levi-Civita connection one-form decomposes according to (22), with p, q, t, s given by the analogous expressions. The Levi-Civita connection is therefore compatible with the para-quaternionic structure.

6 c-map spaces as fibre bundles with bundle metrics

In Section 4.2 we have described c-map spaces in terms of the U_{ǫ₁}(1)-principal bundle P = TM × ℝ → N̄ equipped with the degenerate symmetric tensor field g′, see (49), which pushes down to the ε-quaternionic Kähler metric g_N̄. We now turn to a complementary point of view, where c-map spaces are locally described as product manifolds N̄ = M̄ × G, where M̄ is the original PSǫ₁K manifold, which is locally a PSǫ₁K domain, and where G is the Iwasawa subgroup of SU(1, n + 2). The ε-quaternionic Kähler metric can then be written in the form of a 'bundle metric' g_N̄ = ḡ + g_G(p), where ḡ is the PSǫ₁K metric, and where g_G(p) is a family of left-invariant metrics on G parametrised by p ∈ M̄. We will show that for fixed p ∈ M̄ the metrics g_G(p) are among the symmetric ǫ₁-Kähler metrics of constant ǫ₁-holomorphic sectional curvature that were discussed in Sections 2.1.3 and 2.1.4, and give explicit expressions for the metric, ǫ₁-complex structure and ǫ₁-Kähler potential.

The bundle metric

We start from (25), where we re-write the expression g_{IJ} ∂_m X^I ∂^m X̄^J_D in terms of the physical four-dimensional scalars z^A. Explicitly, the metric g_N̄ now takes the form (74), where ḡ = ḡ_{AB} dz^A dz̄^B, see (24), and where the fibre part, as indicated, depends on p ∈ M̄. Taking M̄ to be a PSǫ₁K domain, we find that N̄ is a product N̄ = M̄ × L → M̄ with fibre L = ℝ^{2n+4}. The fields z^A provide holomorphic coordinates on M̄ and (ζ^I, ζ̃_I, φ̃, φ) are real coordinates on L. Following [45] we define the one-forms (θ^a) = (η^I, ξ_I, η^{n+1}, ξ_{n+1}), where I = 0, . . . , n. In this co-frame the fibre metric takes a standard quadratic form, where ǫ := −ǫ₁ǫ₂. Since I_{IJ} is symmetric and invertible, by a linear change of coordinates we may assume it takes the form (η_{IJ}) = diag(−ǫ₁, 1, . . . , 1). Here we used the information about the signature of the matrix (I_{IJ}) provided in Section 3.1. Thus pointwise with respect to p ∈ M̄ we can bring the fibre metric to the standard form. The one-forms are invariant under a group of affine transformations depending on 2n + 4 real parameters (v_I, ṽ^I, α, λ). The Lie group structure underlying these affine transformations is (v + e^{λ/2} v′, ṽ + e^{λ/2} ṽ′, α + e^λ α′ + ½(⋯), λ + λ′). Thus ℝ^{2n+4}, considered as a Lie group G with the above multiplication, acts on L = ℝ^{2n+4} by the affine transformations (78). Using this group action we can identify the Lie group G with the G-orbit of the point (0, 0, 0, 0), which is all of L. The affine transformation (78) is then given by the left action of G on itself.
The differentials of the one-forms (θ^a) = (η^I, ξ_I, η^{n+1}, ξ_{n+1}) are linear combinations of wedge products of the θ^a with constant coefficients. These coefficients are, in fact, the structure constants of the Lie algebra g of the group G. This is clear since the forms (θ^a) can be considered as a basis of the space of left-invariant forms on the group G. The left-invariant vector fields (V_a) = (X_I, Y^I, Z_0, D) dual to the one-forms (θ^a) = (η^I, ξ_I, η^{n+1}, ξ_{n+1}) can be written down explicitly, and the non-trivial commutators between these vector fields are given in (81). This is a solvable Lie algebra, and looking back at Section 2.1.4 we recognise it as the Iwasawa Lie algebra g of SU(1, n + 2). Therefore (79) is the group multiplication of the Iwasawa group G. Thus g_G(p) is a family of left-invariant metrics on the fibres L ≃ G of the product N̄ = M̄ × G → M̄.

We saw in Section 2.1.4 that the natural ε-complex structure J_G (setting ε = ǫ₁) on g is given by its action on the basis of vector fields Z_0, D, X_I, Y^I, where η̃_{IJ} = ⟨Y^I, Y^J⟩. Identifying 4g_G as given in (77) with the scalar product considered in Section 2.1.4 we get η̃_{IJ} = ǫ₁ǫ₂ η_{IJ}. Comparing this with (57) we see that the almost ǫ₁-complex structure J̃ on N̄, obtained by projecting the tensor field J̃ from P to N̄, can be written as J̃ = −ǫ₂ J_M̄ − J_G. In particular, this shows that J_G is different from the restriction of the structures ±J_3 and ±J′_3 to the fibres of the projection N̄ = M̄ × G → M̄, with the exception of the case when M̄ is a point and therefore G is 4-dimensional. In the latter case J′_3 coincides with −J_G; see the end of the next section for a discussion of this special case.

Kähler potentials for the fibre metrics

We will now identify the ε-Kähler potentials for the metrics on the fibres G ≃ L of c-map spaces, and thus show that they are among the ε-Kähler metrics described in Section 2.1.3, where now ε = ǫ₁. We treat all three cases of the c-map simultaneously. Along the fibre the matrix N_{IJ} = R_{IJ} + i_{ǫ₁} I_{IJ} is constant. Let us introduce the ǫ₁-complex coordinates (C_I, S) via

C_I := ζ̃_I − N_{IJ} ζ^J ,   S := e^φ − 2ǫ i_{ǫ₁} φ̃ − ½ C_I I^{IJ} C̄_J .

One can show that

√2 e^{−φ/2} dC_I = ξ_I − i_{ǫ₁} I_{IJ} η^J ,   e^{−φ} (dS − ǫ dC_I I^{IJ} C̄_J) = ξ_{n+1} − ǫ i_{ǫ₁} η^{n+1} ,

where the differentials are restricted to the fibre, whilst the Kaluza–Klein scalar φ can be expressed in terms of the fields (C_I, S) as 2e^φ = S + S̄ − ǫ C_I I^{IJ} C̄_J. Hence, the metric on G is given by

g_G = |dS − ǫ dC_I I^{IJ} C̄_J|² / (S + S̄ − ǫ C_I I^{IJ} C̄_J)² + ǫ dC_I I^{IJ} dC̄_J / (S + S̄ − ǫ C_I I^{IJ} C̄_J) .

In order to compare to the parametrisation used in Section 2.1.3, we introduce the ǫ₁-complex variables (u, u_I), in terms of which the metric on G becomes

g_G = |ū du + ǫ₂ ū_I I^{IJ} du_J|² / (1 − |u|² − ǫ₂ u_I I^{IJ} ū_J)² + (|du|² + ǫ₂ du_I I^{IJ} dū_J) / (1 − |u|² − ǫ₂ u_I I^{IJ} ū_J) ,

as can be checked by a straightforward but long calculation. A simple calculation shows that the metric g_G is ǫ₁-Kähler with potential

K = − log (1 − |u|² − ǫ₂ u_I I^{IJ} ū_J) .
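As a quick consistency check of the displayed potential and metric, one can verify the one-modulus truncation symbolically (setting all u_I to zero; the truncation and names below are illustrative assumptions, not from the paper):

```python
import sympy as sp

# One-modulus truncation (u_I = 0): the displayed fibre metric reduces to
# |u|^2 |du|^2/(1-|u|^2)^2 + |du|^2/(1-|u|^2), and K = -log(1-|u|^2).
# Treat u and its conjugate as independent symbols, as usual in Kahler geometry.
u, ub = sp.symbols('u ubar')

K = -sp.log(1 - u*ub)
g_from_potential = sp.simplify(sp.diff(K, u, ub))              # component g_{u ubar}
g_displayed = sp.simplify(u*ub/(1 - u*ub)**2 + 1/(1 - u*ub))   # coefficient of du dubar

print(sp.simplify(g_from_potential - g_displayed))             # prints 0
```

Both expressions collapse to 1/(1 − u ū)², the complex hyperbolic metric, consistent with the constant-holomorphic-sectional-curvature claim.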
These metrics g_L with ǫ₁-Kähler potential K_L are, up to an overall sign, among the ε-Kähler metrics ḡ with ε-Kähler potential K̄ introduced in Section 2.1.3, where now ε = ǫ₁. Since the choice of the initial special ε-Kähler metric on M̄ determines the signature of the metric on G, only a subset of the metrics considered in Section 2.1.3 can be realised by the c-map. In particular, the Fubini–Study metric on P^{n+2} ≃ U(n + 3)/(U(1) × U(n + 2)) cannot be realised. To obtain the negative of the Fubini–Study metric, we would need to take ǫ₁ = −1 and η̃_{ab} = −δ_{ab}, which gives

K_L = − log(1 + δ_{ab} z^a z̄^b) ,   g_L = − [(1 + δ_{cd} z^c z̄^d) δ_{ab} − z̄^a z^b] / (1 + δ_{cd} z^c z̄^d)² dz^a dz̄^b .

However, since diag(1, ǫ₂ I_{IJ}) ≠ −δ_{ab}, it is not possible to obtain this geometry using the c-map, even if we were to allow for four-dimensional vector fields with negative kinetic energy.

We now discuss the geometries realised by the three c-maps. In order to interpret the resulting signatures in terms of dimensional reduction, we recall that the coordinates z^a encode the following fields: the Kaluza–Klein scalar φ, the dualised Kaluza–Klein vector φ̃, the components of the four-dimensional vector fields along the direction we reduce over, ζ^I, and the scalars dual to the three-dimensional vector fields, ζ̃_I. The signs in front of the kinetic terms of these fields can be read off from the three-dimensional Lagrangian (25). Equivalently, they are determined in terms of the signs in the four-dimensional Lagrangian (23) through the following general properties of dimensional reduction and Hodge dualisation: (i) spacelike reduction preserves all signs, while timelike reduction reverses the sign for the Kaluza–Klein vector and for the scalars obtained by reducing vector fields; (ii) dualisation of three-dimensional vector fields preserves the sign in Lorentzian signature and reverses it in Euclidean signature.

Now we list the cases which can be realised by the different versions of the c-map. Here we assume that we start with a four-dimensional theory of vector multiplets with positive definite kinetic terms. This implies that I_{IJ} is negative definite. Dimensional reduction over a spacelike direction then results in a three-dimensional theory with positive definite kinetic terms. The (real) signature is (n + 2, n + 2) irrespective of the signature of η_{ab}. This geometry is realised as a fibre geometry for the Euclidean c-map, ǫ₁ = 1, ǫ₂ = ±1. The result is independent of the signature of I_{IJ}, and hence of η̃_{ab}, since the metric is para-Hermitian and has split signature. In terms of dimensional reduction, φ and φ̃, and ζ^I and ζ̃_I, have opposite signs irrespective of the signs in the four-dimensional Lagrangian. From [3] we know that if we obtain the Euclidean theory by reduction of five-dimensional supergravity with vector multiplets over time, then I_{IJ} has signature (1, n), which reflects the fact that the Kaluza–Klein vector of the 5d/4d reduction has a negative kinetic term.

We remark that by matching the ε-Kähler potentials obtained by the c-map to those found in Section 2.1.3 we have now proved that the solvable Lie groups presented in Section 2.1.4 do indeed provide local realisations of the symmetric spaces discussed in Section 2.1.3. We further remark that for the non-compact symmetric spaces of indefinite signature, that is for H^{(l,k−1)} and CH^{n+2}, the Iwasawa subgroup does not act transitively, though one can find an Iwasawa subgroup which acts with an open orbit. In these cases the fibre cannot be identified globally with the corresponding symmetric space, since the fibre has trivial topology, while the symmetric space has non-trivial topology. This is different for H^{n+2}, where the Iwasawa group acts transitively, so that the fibre is globally isometric to U(1, n + 2)/(U(1) × U(n + 2)). The simplest examples of c-map spaces are obtained by taking the initial special ε-Kähler manifold to be trivial, M̄ = {pt}.
This corresponds to starting with pure supergravity, and gives rise to a single hypermultiplet, often referred to as the universal hypermultiplet. The corresponding real four-dimensional ε-quaternionic Kähler manifolds are rather special, as they consist only of the fibre, and are therefore locally symmetric spaces which are simultaneously ε-Kähler and ε-quaternionic Kähler. Here the ε-complex structure J_G compatible with the ε-Kähler metric coincides with the additional integrable ε-complex structure −J′_3. In the spatial case the resulting space, locally isometric to U(1, 2)/(U(1) × U(2)), is simultaneously Kähler and quaternionic Kähler. This space is the simplest hypermultiplet geometry occurring in supergravity and appears naturally in various constructions. In particular, the classical moduli spaces of M-theory and type-II superstrings on Calabi–Yau threefolds contain this space as a subspace, with the scalar φ being related to the Calabi–Yau volume and the type-II dilaton, respectively. Its para-complex counterpart, a symmetric space of the same type, is simultaneously para-Kähler and para-quaternionic Kähler. In [70] it was observed that this geometry is realised by reduction of pure Euclidean supergravity [71], and by dualising the double-tensor multiplet in Euclidean signature.
On the origin of the DIII-D L-H power threshold isotope effect

The increased low-to-high confinement mode (L- to H-mode) power threshold P_LH in DIII-D low-collisionality hydrogen plasmas (compared to deuterium) is shown to result from lower impurity (carbon) content, consistent with reduced (mass-dependent) physical and chemical sputtering of graphite. Trapped gyro-Landau fluid (TGLF) quasilinear calculations and local nonlinear gyrokinetic CGYRO simulations confirm stabilization of ion temperature gradient (ITG) driven turbulence by increased carbon ion dilution as the most important isotope effect. In the plasma edge, electron non-adiabaticity is also predicted to contribute to the isotope dependence of thermal transport and P_LH; however, its effect is subdominant compared to changes from impurity isotopic behavior. This L-H power threshold reduction with increasing carbon content at low collisionality is in stark contrast to high-collisionality results, where additional impurity content appears to increase the power necessary for H-mode access.

Heat and particle transport in magnetically confined plasmas has been found to depend on hydrogenic isotope mass, with the vast majority of experiments (fusion devices with both carbon and metallic wall materials) demonstrating larger transport in hydrogen compared to deuterium and tritium plasmas [1-7]. This is contradictory to 'naive' gyro-Bohm transport theory, which predicts more heat transport (higher radial heat flux, Q) with increasing main ion mass, schematically:

Q = c_0 (m_i/m_D)^{1/2} Q_GB,D ,   Q_GB,D = n_e T_e c_s (ρ*)² .

Above, c_0 is a mass-independent constant, m_i and m_D are the main ion and deuterium ion mass, Q_GB,D is the normalized deuterium ion gyro-Bohm heat flux, n_e and T_e are the electron density and temperature, c_s = √(T_e/m_D) is the deuterium ion sound speed, and ρ* = √(T_e m_D)/(a e B) is the normalized deuterium ion-sound gyro-radius (a, B, and e are the plasma minor radius, magnetic field, and elementary charge, respectively). The gyro-Bohm scaling above predicts ion thermal flux Q_i ∝ √m_i, opposite to experimental observations. Gradient-driven turbulent transport physics, which can break this gyro-Bohm scaling, is a leading framework for explaining this isotope effect. This effect is particularly important for efficiently achieving high-confinement (H-mode) plasmas, where fusion-reactor-relevant conditions are most easily met. This H-mode state, which can only be accessed by exceeding a minimum threshold input power P_LH, exhibits roughly double the energy confinement time compared to low-confinement (L-mode) plasmas [8]. Modern tokamak experiments routinely observe an L-H threshold isotope effect, with reduced P_LH in plasmas with higher main ion mass [4-6, 9-13]. This mass-dependent threshold power is important for projecting auxiliary heating power requirements from existing tokamaks, operated mostly in deuterium, to future reactors, which will operate with a 50:50 deuterium-tritium mixture. For example, the International Thermonuclear Experimental Reactor (ITER) [14] is designed to achieve a fusion gain of 10 using the H-mode operating scenario in a mixed deuterium-tritium plasma. ITER is especially vulnerable to the isotope effect during its initial non-nuclear Pre-Fusion Power Operation phases 1 and 2 (PFPO-1/2) due to the use of hydrogen main ions, and may have insufficient heating power (20-30 MW) for reliable H-mode access. Developing schemes for reducing this power requirement is therefore crucial to ITER's success, and motivates improved understanding of this effect. In this paper, we elucidate the possible origin of the L-H power threshold isotope effect in the DIII-D tokamak [15]. The DIII-D device is a conventional medium-sized, graphite-armored, diverted tokamak with flexible shaping capabilities and a large array of available plasma diagnostics, in operation in San Diego, USA since 1986. By comparison, ITER is approximately nine times larger by plasma surface area than DIII-D, and is expected to employ a tungsten divertor and main chamber, with a lower anticipated (metallic) impurity content of Z_eff = 1.1. In the following sections, we will demonstrate that the L-H transition power threshold in DIII-D depends strongly on main ion dilution (the effective charge state Z_eff) at low collisionality, which is shown to be significantly larger in deuterium than in hydrogen plasmas. Based on recent research in the Joint European Torus (JET) with ITER-like metallic walls, the isotope dependence of main ion dilution does not appear to be unique to carbon machines, and may contribute to the isotopic dependence of P_LH in other devices [16-18].
From DIII-D's suite of diagnostics, key parameters relevant to the L-H transition were documented for approximately 500 recent (>2010) hydrogen and deuterium discharges. Parameters at transition, such as plasma current (I_p), impurity content (Z_eff), and ion temperature (T_i), were recorded in addition to other quantities relevant to the L-H power threshold (power sources and sinks). The threshold power is calculated in the usual way as

P_LH = P_NB + P_ECH + P_OH − ∂W_dia/∂t − P_rad,core ,

where the injected neutral beam, electron cyclotron, and ohmic powers are P_NB, P_ECH, and P_OH respectively, and the power consumed by changing the diamagnetic stored energy and lost by core plasma radiation are ∂W_dia/∂t and P_rad,core respectively. All quantities were time-averaged over the 30 ms window preceding the L-H transition. Identification of the transition time was performed by inspecting discharges for a simultaneous drop in Balmer-α recycling light emission and rise in line-averaged density (n̄). An example of such a transition can be found in figure 1. Historically, P_LH is often observed to have a minimum vs. line-averaged density n̄ [19]. From the DIII-D database, in ITER-similar shape with n̄ = 1-2.5 × 10¹⁹ m⁻³, below the density where P_LH exhibits a minimum at DIII-D (n̄_min = 3-4 × 10¹⁹ m⁻³), the L-H power threshold was found to decrease strongly with increasing Z_eff. Z_eff is measured by Charge Exchange Recombination Spectroscopy (CER) [20] and Thomson Scattering [21] using the fully ionized carbon and electron densities n_C6+ and n_e at a normalized minor radius ρ = √ψ_t = 0.7, as shown in figure 2(a). Noteworthy is that DIII-D hydrogen plasmas almost never reach the impurity levels of their deuterium counterparts. This difference is attributed to the larger (mass-dependent) physical and chemical sputtering yield of the graphite (carbon) divertor and main chamber tiles by deuterium, compared to hydrogen [22]. To determine how changes in Z_eff may be altering the threshold power, profile-fitting analysis of an ensemble of nearly 80 L-H transitions, with a range of line-averaged density n̄ = 1-5.5 × 10¹⁹ m⁻³, edge safety factor q_95 = 3-7, and neutral-beam-injected torque T_inj = 0-5 N·m, was undertaken. During fitting, impurity ion temperature (T_i) measurements were corrected for Zeeman and fine-structure effects [23], and Z_eff measurements were checked against visible bremsstrahlung continuum emission [24, 25] to ensure high data quality. TRANSP power balance analysis was performed using these profiles, taking care to match transport metrics (neutron rate, plasma stored energy, loop voltage, inductance, n̄) to minimize errors in power accounting [26].
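A minimal sketch of this bookkeeping, assuming hypothetical time-trace arrays (none of the names below come from DIII-D tooling):

```python
import numpy as np

# Net heating power P_LH = P_NB + P_ECH + P_OH - dW_dia/dt - P_rad,core,
# time-averaged over the 30 ms window preceding the L-H transition time t_lh.
def threshold_power(t, p_nb, p_ech, p_oh, w_dia, p_rad_core, t_lh, window=0.030):
    """t in seconds; powers in watts; w_dia (diamagnetic stored energy) in joules."""
    dwdt = np.gradient(w_dia, t)                  # d(stored energy)/dt
    p_net = p_nb + p_ech + p_oh - dwdt - p_rad_core
    mask = (t >= t_lh - window) & (t < t_lh)      # 30 ms pre-transition window
    return p_net[mask].mean()
```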
Detailed power balance analysis of 10 of these hydrogen and deuterium L-H transitions, shown in figure 2(b), indicates that both electron and ion heat fluxes (Q_e, Q_i) in L-mode just before the transition contribute to P_LH. Figure 2(b) illustrates this by showing the pre-transition separatrix (ρ = 1) ion and electron loss power, overlaid with P_LH. On average, Q_e + Q_i ≤ P_LH due to TRANSP accounting for additional loss channels, such as charge exchange for ions and neutral ionization work for electrons. Both ion and electron loss powers are observed to decrease with increasing Z_eff. Discharges in this limit were found to have low collisionality (ν*_i, ν*_e ≲ 1 at ρ = 0.95). At higher collisionality (above the P_LH density minimum), the hydrogen and deuterium power thresholds increase with increasing Z_eff (figure 2(c)), consistent with past findings at high L-mode density [27]. Experimentally, T_e ≈ T_i for almost all transitions in figure 2(c). Assuming T_e = T_i, power balance analysis indicates Q_i ≲ Q_e, with both heat channels contributing to increasing P_LH. Collisionally, these transitions are in the Pfirsch-Schlüter (PS) regime, and converge to similar P_LH independent of isotope and Z_eff, as previously observed by Yan et al [28]. This convergence is not observed on several other tokamaks, however (such as ASDEX Upgrade [9]), and is currently not well understood. Distinguishing all 80 analyzed transitions by neoclassical transport regime using edge ion collisionality (banana, plateau, and PS regimes), one finds that the isotope effect is strongest at low collisionality, and suppressed approaching the PS limit (figure 3). Ion collisionality is approximated following [29], in which R, log(Λ_i), and ϵ represent the major radius, Coulomb logarithm, and inverse aspect ratio respectively.
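The explicit expression from [29] did not survive reproduction here; a standard normalized ion collisionality consistent with the quoted symbols is, as an assumption:

```latex
% Assumed standard form (not necessarily the exact expression of [29]):
\nu^{*}_{i} \;=\; \frac{q_{95}\, R}{\epsilon^{3/2}}\,
                  \frac{\nu_{ii}}{v_{\mathrm{th},i}} ,
\qquad
\nu_{ii} \;\propto\; \frac{n_i\, Z_{\mathrm{eff}}\, \log(\Lambda_i)}{\sqrt{m_i}\; T_i^{3/2}} ,
\qquad
v_{\mathrm{th},i} \;=\; \sqrt{T_i/m_i} .
```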
The observed P_LH vs. Z_eff trend reversal from low to high collisionality is consistent with a transition from trapped electron mode/ion temperature gradient (TEM/ITG) turbulence to resistive ballooning mode (RBM) dominated turbulence previously observed in simulations [30]. Analogous to density-scan simulations including ITG/TEM and RBM modes by Bourdelle, the collisionality at the minimum of P_LH is observed to increase with lower Z_eff (figure 3). L-H transitions at similar Z_eff and collisionality are found to have nearly identical power thresholds, independent of m_i (hydrogen as triangles and deuterium as circles), in different collisionality regimes. Two banana-regime plasmas with a large difference in L-H threshold were selected for detailed turbulence and gyrokinetic analysis, from the ensemble of eighty transitions for which power balance analysis had been performed. These two plasmas were chosen because their shapes, L-mode radial kinetic profiles (n_e, T_e, T_i), injected NBI torque, and q_95 were almost identical. In addition, these plasmas had ITER PFPO-1 relevant density n̄ = 1.6 × 10¹⁹ m⁻³, safety factor q_95 = 3.6, ITER-similar shaping, and low neutral beam torque [31]. Figure 4 illustrates each plasma's L-mode profiles approximately 10 ms before the L-H transition. The deuterium plasma's normalized carbon density gradient was used to infer the core electron density due to the lack of reliable inner-core Thomson scattering and reflectometry data. The hydrogen plasma's inner-core toroidal rotation measurements (Ω_C) were not available. The most substantial difference between these two plasmas, aside from main ion species, was carbon impurity content (Z_eff). Despite such similar profiles, heat fluxes calculated from TRANSP power balance analysis were nearly two times larger in H compared to D.
Heat flux uncertainty bands represent the time-averaged variation in the 50 ms interval preceding the L-H transition. Main-ion charge exchange analysis indicated T_C6+ = T_H/D after correcting impurity measurements for Zeeman and fine-structure effects [32]. Review of many L-mode pre-transition profiles similar to figure 4 suggested that nearly identical kinetic profiles (n_e, T_e, T_i) are a commonality among L-H transitions in deuterium and hydrogen, as observed in previous experiments [4, 6, 10, 33-35]. This is believed to be due to the required edge ion pressure profile providing sufficient E × B shear to trigger a positive-feedback turbulence suppression loop [36]. As a result, it is hypothesized that the heat flux needed to sustain the pre-transition L-mode radial gradients sets P_LH, and causes the isotope effect. Stability analysis using quasilinear thermal fluxes from the gyro-fluid stability code trapped gyro-Landau fluid (TGLF) [37] was undertaken to identify the origin of the large isotopic difference in thermal fluxes. TGYRO simulations, which adjust radial temperature and density gradients to match power balance and TGLF-predicted heat fluxes, were run until convergence to match the experimentally observed heat fluxes by adjusting the T_e and T_i profiles, holding the n_e profile fixed. Converged solutions were obtained after 20 iterations, using an extended perpendicular wavenumber (k_θ ρ_s) grid model to capture long-wavelength modes. All three TGLF quasilinear saturation rules [38-40], with and without electromagnetic effects, were tested, with saturation rule 2 most closely matching both experimental temperature gradients and profiles (figure 4, in orange (H) and light blue (D)). Saturation rule 2 builds on previous models by including realistic geometry effects and species-dependent Landau averaging. Electromagnetic (EM) corrections, although included in the analysis, were found to contribute negligibly to thermal fluxes, consistent with electrostatic ITG/TEM turbulence. TGLF results predicted nearly identical H and D kinetic profiles, consistent with experimental observations. Gradient scans in the outer core plasma (ρ = 0.7) were performed to identify the origin of the heat flux difference required to maintain the same profiles in H and D, as shown in figure 5. Scans identified characteristics of ITG turbulence (a spectrum dominated by low k_θ ρ_s, mode propagation in the ion diamagnetic direction, a critical T_i gradient), with a shift in the T_i critical gradient identified as the dominant unstable-mode feature driving the thermal flux differences. TGLF and CGYRO calculations demonstrate that the difference in carbon content is responsible for the shift in critical ITG gradient, in agreement with database results and predictions of ITG turbulence behavior with impurities [41]. Such results are consistent with observations of turbulence suppression with impurity seeding, dubbed the RI-mode, documented extensively at TEXTOR and DIII-D [42-44]. Changing species from D to H with fixed main-ion dilution fraction had no effect on the heat flux levels (yellow line in figure 5), whereas reducing the main-ion dilution of the D plasma led to a critical gradient that matches the H-plasma critical gradient (purple vs. red), indicating main-ion dilution as the precise origin of this effect. First-principles nonlinear (k_θ ρ_s < 1.1) CGYRO [45] simulations (dots) around the TGLF-optimized gradients confirmed the reduced-model predictions.
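The qualitative picture — flux rising stiffly above a critical gradient that moves with dilution — can be illustrated with a toy critical-gradient model (purely illustrative; the functional form and numbers below are assumptions, not TGLF output):

```python
import numpy as np

# Toy stiff-transport model: Q_i ~ chi * max(0, R/L_Ti - R/L_Ti_crit)^alpha.
# Impurity dilution is represented only through a shifted critical gradient.
def ion_heat_flux(r_lti, r_lti_crit, chi=1.0, alpha=1.5):
    return chi * np.maximum(0.0, r_lti - r_lti_crit) ** alpha

r_lti = np.linspace(0.0, 8.0, 201)
q_high_zeff = ion_heat_flux(r_lti, r_lti_crit=5.0)  # carbon-diluted (D-like) case
q_low_zeff  = ion_heat_flux(r_lti, r_lti_crit=4.0)  # low-dilution (H-like) case
# At the same experimental gradient, the low-Z_eff curve carries more flux,
# mirroring the ~2x larger hydrogen heat fluxes discussed above.
```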
To identify whether the edge H-mode pedestal-forming region contains a similar isotope effect, nonlinear (k_θ ρ_s < 1.1) CGYRO modeling around the deuterium experimental gradients at ρ = 0.9 was undertaken. CGYRO scans reproduced both D electron and ion heat fluxes, as shown by the blue circles in figure 6 at the (deuterium) experimental conditions. Single changes in CGYRO of m_i and n_C6+ (yellow and purple respectively) indicated the presence of two isotope effects: one from main-ion dilution (similar to the findings at ρ = 0.7 shown in figure 5) and another from intrinsic m_i changes. Changing both m_i and n_C6+ (shown in red) allowed matching of the observed electron thermal flux in hydrogen, and a near match of the ion thermal flux. Electron non-adiabaticity was determined as the likely origin of the m_i mass effect by modifying the parallel electron response time [46, 47]. In simulations imposing hydrogen main-ion mass (red and yellow dots), re-scaling the parallel electron response time to its deuterium value (a factor of √2) nearly accounted for the difference in heat fluxes with changing m_i in gyro-Bohm heat flux units (Q/Q_GBi). This can be seen in figure 6 with the ⋆ simulations (yellow → m_H, red → m_H + n_C6+), which are converted from gyro-Bohm to experimental heat flux units using Q_GBD (not Q_GBH) for direct comparison to their deuterium counterparts (purple and blue).

In summary, from a large survey of DIII-D isotope experiments, reduced-model transport simulations from TGLF saturation rule 2, and first-principles gyrokinetic modeling from CGYRO, differences in carbon content are identified as the likely origin of the observed isotope dependence of the L-H power threshold P_LH at low collisionality in the DIII-D tokamak. Ongoing work at DIII-D seeks to reduce P_LH via impurity seeding as a test for improving H-mode access during ITER PFPO-1/2. The universality of this explanation of the isotope effect is indeterminate at this time; however, isotopic sputtering and impurity dynamics have also been observed to affect P_LH in devices with ITER-like wall materials, such as JET [16-18].

Figure 1. (a) Contributions to the L-H threshold power vs. time, with P_sep in black. The red vertical line and gray shaded region indicate the exact time of the L-H transition and the time-averaging window for calculated quantities. (b) Line-averaged density (n̄) and Balmer-α recycling light emission during a typical L-H transition.

Figure 2. Panels (a) and (c) illustrate the observed P_LH trend with Z_eff at low and high collisionality in hydrogen and deuterium plasmas. Panels (b) and (d) show results of power balance analysis using the TRANSP code. The L-mode separatrix heat fluxes carried by ions (gray) and electrons (violet) for a sample of transitions are shown in panels (b) and (d).

Figure 3. P_LH vs. edge ion collisionality (ρ = 0.95). The heat map shows low (red) and higher (purple) Z_eff, and symbol shapes indicate plasma species (triangles for H and circles for D). Solid lines are parabolic fits to the shown data for a narrow range of Z_eff. Vertical dashed lines divide collisionality into neoclassical transport regimes (banana, plateau, and Pfirsch-Schlüter).
Figure 4. L-mode profiles and gradients approximately 10 ms before the L-H transition for hydrogen (red) and deuterium (blue) plasmas at DIII-D vs. normalized radius ρ. Dots are raw experimental data. Heat flux profiles are from TRANSP power balance analysis, with the time-averaged variation in heat flux as error bands. Orange and light blue (H, D) stars indicate TGLF flux-matching solutions to T_i and T_e using saturation rule 2.

Figure 5. Scan of normalized T_i gradient at ρ = 0.7 vs. ion heat flux. Solid lines (closed circles) indicate the TGLF (CGYRO) calculated thermal fluxes. The vertical dashed line indicates the flux-matching gradient for both D and H experimental heat fluxes (blue and red horizontal lines). Simulations in blue and red are from the deuterium and hydrogen conditions shown in figure 4. Yellow and purple data are based on deuterium, but with reduced m_i and main-ion dilution respectively.

Figure 6. Nonlinear CGYRO simulations at ρ = 0.9 around deuterium experimental gradients (blue vertical lines). Panels (a), (c) show electron heat flux, and (b), (d) ion heat flux. Panels (a), (b) illustrate a density length-scale scan, and (c), (d) a T_i gradient scan. Horizontal lines with shaded regions indicate the power balance heat fluxes and uncertainty (blue: D, red: H). Yellow and red ⋆ illustrate simulations with hydrogen main-ion mass and the parallel electron response time rescaled to deuterium values.
Amyloid Oligomers and Mature Fibrils Prepared from an Innocuous Protein Cause Diverging Cellular Death Mechanisms

Background: Although oligomers are considered more important, mature fibrils also show evidence of acting as cytotoxic agents in neurodegenerative diseases. Results: Oligomers and fibrils both kill PC12 cells, albeit mechanistically differently. In vivo, only oligomers inhibit hippocampal long term potentiation. Conclusion: Protein aggregates, even those irrelevant to disease, are capable of inducing different toxic actions in neuronal cells. Significance: Understanding these toxic mechanisms is vital to improving amyloidosis therapy.

Despite significant advances, the molecular identity of the cytotoxic species populated during in vivo amyloid formation, crucial for the understanding of neurodegenerative disorders, is yet to be revealed. In this study lysozyme prefibrillar oligomers and fibrils in both mature and sonicated states have been isolated through an optimized ultrafiltration/ultracentrifugation method and characterized with various optical spectroscopic techniques, atomic force microscopy, and transmission electron microscopy. We examined their level and mode of toxicity on rat pheochromocytoma (PC12) cells in both differentiated and undifferentiated states. We find that oligomers and fibrils display cytotoxic capabilities toward cultured cells in vitro, with oligomers producing elevated levels of cellular injury toward undifferentiated PC12 cells (PC12undiff). Furthermore, dual flow cytometry staining experiments demonstrate that the oligomers and mature fibrils induce divergent cellular death pathways (apoptosis and secondary necrosis, respectively) in these PC12 cells. We have also shown that oligomers, but not sonicated mature fibrils, inhibit hippocampal long term potentiation, a form of synaptic plasticity implicated in learning and memory, in vivo. We conclude that our in vitro and in vivo findings confer a level of resistance toward amyloid fibrils, and that the PC12-based comparative cytotoxicity assay can provide insights into toxicity differences between differently aggregated protein species.
The failure of specific proteins to correctly fold and adopt their native functional structures has been correlated with a vast range of debilitating diseases, including Alzheimer and Parkinson diseases (1-3). Such proteinaceous fibrillar aggregates, known as amyloid fibrils, are currently implicated in scores of degenerative diseases that affect a variety of peripheral tissues as well as the central nervous system (4, 5). Although the dysfunctional assembly of peptides does not always carry a negative consequence, as seen with the tumoricidal molten globule-oleic acid complex HAMLET (Human Alpha-Lactalbumin Made LEthal to Tumor cells (6)), a group of ~25 unrelated proteins have been shown to be causative agents in the formation of a number of clinically distinct conditions (7). However, recent evidence suggests the ability to aggregate may be a generic property of possibly all polypeptide chains under specific denaturing conditions. Hen egg white lysozyme (HEWL) can be engineered to aggregate at acidic pH and elevated temperatures (8), making it a very useful model to study protein misfolding and disease. Despite the advancements in the field of protein aggregation and disease pathology, very little is known about the exact in vivo mechanisms of formation and cytotoxicity. Various studies involving animal models (9) and cell lines (10, 11) provided early evidence that supported the idea that mature fibrils were solely cytotoxic. However, this hypothesis has been the subject of intense scrutiny in recent years, and a large number of experiments have shown that the prefibrillar aggregates, known as oligomers, are intrinsically involved in, and may even be the sole and direct cause of, the cell damage observed in various disease models (7, 12-15). The growth in literature surrounding the role of oligomers in neurodegenerative disease pathology has coincided with increasing levels of controversy surrounding mature fibrils in these disorders. One argument suggests that fibrils may possess no cytotoxic abilities at all (13, 15, 16), whereas there is a substantial body of experimental evidence that fully demonstrates that amyloid fibrils are capable of causing cellular death in numerous situations (17-19). A related point of interest is whether cellular differentiation confers any significant biological resistance against amyloid fibrils and oligomers. Currently there is a substantial lack of knowledge regarding cell susceptibility and resistance to protein aggregates. Studies have presented evidence suggesting that toxic oligomers and fibrils display differing levels of toxicity toward particular cell lines (20, 21). It has also been shown that differentiated cells exhibit elevated levels of resistance against amyloid injury (22). In this article we investigate the neurotoxic effects of HEWL oligomers, sonicated fibrils, and mature fibrils on the rat neuronal cell line PC12 in both differentiated and undifferentiated states.
Mature amyloid fibrils and prefibrillar aggregates were isolated and characterized using a battery of techniques including atomic force microscopy (AFM), thioflavin T (ThT) fluorescence, and Congo Red (CR) birefringence. Here, we report for the first time that both mature amyloid fibrils and prefibrillar oligomeric species, isolated after 21 days from wild type HEWL, are highly toxic to PC12 cells in vitro, and, in addition, that the separated fractions elicit different cellular death pathways: one apoptosis, the other a form of cell death associated with necrosis or a late apoptotic mechanism. Differentiated PC12 cells (PC12diff) showed an elevated level of resistance toward oligomers only, while no such resistance was found for mature amyloid fibrils. We also investigated the effects of HEWL aggregates on long term potentiation (LTP), a model of the cellular mechanisms underlying learning and memory formation. Interestingly, we found that LTP in vivo was inhibited by lysozyme oligomers but not sonicated fibrils. These results indicate that although produced from the same original protein, amyloid bodies in different aggregated states represent individual cytotoxic entities possessing unique properties, potentially providing insight into the true injurious events observed in neurodegenerative diseases.

Experimental Procedures

Proteins and Reagents—All reagents were of analytical grade or the highest purity available. HEWL, propidium iodide (PI), 1-anilinonaphthalene-8-sulfonic acid (ANS), and Tris were purchased from Sigma. Alamar Blue and all cell culturing equipment were purchased from Invitrogen. RPMI and PBS were purchased from GIBCO. Congo Red and ThT were obtained from Acros. FITC-labeled annexin-V was purchased from Promokine.

HEWL Aggregation—HEWL was prepared in distilled water adjusted to pH 2 with HCl. The final protein concentration was 1 mM (molecular mass of HEWL = 14.3 kDa, extinction coefficient ε(1%) = 26.4 at 280 nm). Aliquots were incubated at 65 °C for 21 days to allow sufficient fibrillization to occur (8). Control HEWL was prepared fresh at room temperature at pH 2.

HEWL Separation—Desired fractions of lysozyme were separated according to particle mass after 21 days of incubation. Samples were ultracentrifuged for 1 h at 100,000 × g using a Beckman Optima TLX ultracentrifuge. The pellet, containing mature amyloid fibrils, was resuspended in pH 2 distilled water to a concentration of 1 mM. At this point sonicated samples were subjected to ultrasound power for 30 s periodically over 5 min. The supernatant was spun down via ultrafiltration, utilizing 100-kDa cutoff filters (Amicon Ultra-4 Centrifugal Filter Devices). The resulting filtrate was further separated by repeating the ultrafiltration process with 30-kDa cutoff filters. The resulting retentate (<100 kDa and >30 kDa) constituted the oligomer solution. Aggregated samples were freshly diluted on the day of use, and the final 1 mM concentration was determined using a NanoDrop ND-1000 spectrophotometer (Thermo Scientific), averaging three readings, for use in cytotoxicity assays.

Thioflavin T Assay—ThT assays were carried out as described previously (23). HEWL samples were measured using a Jasco FP 6200 spectrofluorimeter. Fluorescence intensity was measured by excitation at 440 nm (slit width 5 nm) and emission at 482 nm (slit width 10 nm), averaging over 30 s.
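For the concentration determination above, the ε(1%) convention converts an averaged A280 reading into mass and molar concentrations; a small sketch of that arithmetic (the function name and example readings are hypothetical):

```python
# epsilon(1%) = 26.4 means a 1% w/v (10 mg/mL) HEWL solution reads A = 26.4
# at 280 nm over a 1 cm path, so c [mg/mL] = A280 / 2.64.
MW_HEWL = 14300.0       # g/mol (14.3 kDa)
EPSILON_1PCT = 26.4

def hewl_molar_concentration(a280_readings, dilution_factor=1.0):
    """Average several A280 readings and convert to molar concentration (mol/L)."""
    a280 = sum(a280_readings) / len(a280_readings)
    mg_per_ml = a280 / (EPSILON_1PCT / 10.0) * dilution_factor  # mg/mL = g/L
    return mg_per_ml / MW_HEWL                                  # (g/L)/(g/mol) = mol/L

# Three readings of a 10x-diluted stock should come out near the 1 mM target:
print(hewl_molar_concentration([3.75, 3.78, 3.80], dilution_factor=10.0))  # ~1.0e-3
```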
Congo Red Spectroscopic and Birefringence Assay—Both Congo Red assays were carried out as described (23). UV-visible absorbance was measured using a NanoDrop ND-1000 spectrophotometer between wavelengths of 400 and 700 nm. A maximal spectral difference at 505 nm is evincive of amyloid fibrils. For the birefringence assay, the Congo Red/HEWL solutions were analyzed using a Nikon Eclipse E400 POL with the polarizers crossed at a 90° angle to each other. Amyloid birefringence was identified by the presence of apple-green coloration.

ANS Binding—ANS assays were carried out as described (24), using excitation at 390 nm and emission between 410 and 600 nm.

AFM Imaging—HEWL samples were prepared for AFM analysis by depositing the fibril suspension onto freshly cleaved mica by spin coating. The spin recipe encompassed a dispersion routine (750 rpm for 45 s followed by 1000 rpm for 45 s), allowing the fibril suspension to be spread evenly across the mica surface. Samples were then dried (4000 rpm for 15 s) to remove any excess solvent. AFM measurements were performed in ambient conditions on an Asylum MFP-3D atomic force microscope (Asylum Research, Santa Barbara, CA). The probe normal spring constant was between 1.2 and 2.0 N/m with a tip apex <5 nm. High-resolution acoustically driven cantilevers (Nanosensors, SSS-FM) operating at a resonance frequency of 60-80 kHz in AC mode were used, with a scan resolution of 1024 × 1024 and a scan frequency of 0.6 Hz. Structures were analyzed using MFP-3D IGOR PRO software.

Transmission Electron Microscopy—Protein samples were visualized using a JEOL 2100 transmission electron microscope operating at 200 kV with a lanthanum hexaboride (LaB6) emission source. Prior to microscopy, 10 μl of HEWL sample was deposited onto carbon-coated grids. The solution was displaced with 0.5% glutaraldehyde and subsequently washed with dH2O. The sample was stained using uranyl acetate and allowed to air dry before transferring to the microscope.

Cell Culture and Differentiation—Undifferentiated PC12 (PC12undiff) cells were maintained in RPMI 1640 medium/GlutaMAX-1 supplemented with 10% FCS and 1% penicillin/streptomycin. Culture medium was replaced every 3 days. For cell viability assays, cells were seeded at a density of 10^6 cells/well in 96-well plates. For NGF-induced differentiation, PC12 cells were seeded onto collagen-coated 96-well plates at a density of 1.5 × 10^4 cells/well in RPMI 1640 medium/GlutaMAX-1. Wells were supplemented with 1% FCS, 1% penicillin/streptomycin, and 0.01% NGF every 2 days for a period of 7 days to allow differentiation. Immediately prior to the experiment, the 1 mM stocks of the fibrillar and prefibrillar aggregates were diluted with distilled water to produce concentrations varying from 20 to 300 μM. The cells, lysozyme aggregates, and media (volume 100 μl) were added and incubated overnight prior to treatment with Alamar Blue. All tests were conducted in triplicate. All cells were maintained in a 95% air, 5% CO2 humidified atmosphere at 37 °C.

Alamar Blue Cell Viability Assay—Alamar Blue fluorescence measurements were carried out as described previously (25). Emission levels were measured using a SPECTRAmax Gemini XS microplate spectrofluorometer. Cell viability was expressed as the fluorescence of the aggregate-exposed cells as a percentage of that of untreated cells.

Flow Cytometry—Cells were seeded at 10^6 cells/ml/well the day before analysis in 12-well suspension plates. Wells were treated with 25 μM of the aggregated solution and left for 24 h overnight at 37 °C.
Cells (1 ml) were removed and spun down at 1600 rpm for 5 min prior to washing in 1× annexin binding buffer (10.9 mM Hepes, 140 mM NaCl, and 2.5 mM CaCl2, pH 7.4). Cells were spun down and stained with 10 μl of anti-annexin V antibody and left for 15 min on ice in the dark. Cells were spun down, washed, and resuspended in 500 μl of 1× annexin binding buffer. Immediately prior to FACS analysis, cells were treated with 10 μl of 50 μg/ml PI. Flow cytometry was carried out using a FACSCalibur flow fluorocytometer (BD Biosciences). Compensation was carried out using control samples: untreated, PI only, annexin V only, double stained, and non-stained.

Electrophysiological Techniques—In vivo electrophysiology was performed using techniques described previously (26). Animal experiments were licensed by the Department of Health and Children, Ireland. Adult male Wistar rats were anesthetized with urethane (1.5 g/kg, intraperitoneally). Single-pathway recordings of field excitatory postsynaptic potentials (EPSPs) were made from the stratum radiatum in the CA1 area of the dorsal hippocampus in response to stimulation of the ipsilateral Schaffer collateral/commissural pathways. Test EPSPs were evoked at a frequency of 0.033 Hz, with the stimulation intensity adjusted to give an EPSP amplitude of 50% of maximum. The high-frequency stimulation (HFS) protocol for inducing LTP consisted of 10 trains of 20 stimuli, with an interstimulus interval of 5 ms and inter-train intervals of 2 s. The intensity was increased to give an EPSP of 75% of maximum amplitude during the HFS. LTP is expressed as the mean ± S.E. % of baseline field EPSP amplitude recorded over at least a 30-min baseline period. Similar results were obtained when the EPSP slope rather than amplitude was measured. For statistical analysis, EPSP amplitudes were grouped into 10-min epochs. Standard one-way analysis of variance was used to compare the magnitude of LTP between multiple groups, followed by post hoc Tukey's tests. Unpaired Student's t tests were used for two-group comparisons. A p < 0.05 was considered statistically significant. To inject samples into the rat brain, a cannula was implanted in the lateral cerebral ventricle (coordinates: 1 mm lateral to the midline and 4 mm below the surface of the dura) just prior to electrode implantation. Injections (15 μl over 10 min) were made via a Hamilton syringe connected to the internal cannula.

Isolation and Characterization of Prefibrillar Aggregates and Mature Amyloid Fibrils of Hen Egg White Lysozyme—HEWL samples incubated for 21 days were separated through a series of ultracentrifugation and ultrafiltration steps (see "Experimental Procedures"). To eliminate any potential diffusion-limitation issues that could arise from using mature fibrils, samples were sonicated post-isolation to produce shorter fragments of mature fibrils that still maintain their general fibrillar structure (27). To validate the presence of amyloid structures conclusively and to avoid false negative results, newly isolated samples were subjected to a combination of Congo Red spectroscopic and birefringence assays as well as ThT (23). Both oligomers and sonicated fibrils showed no signs of birefringence; however, mature HEWL fibrils are clearly birefringent at ×4 magnification (Fig. 1A). For Congo Red spectral analysis, all samples showed a shift in the maximum absorbance from 495 to 505 nm (Fig. 1B).
HEWL was also examined daily for 21 days using the cationic benzothiazole dye ThT, which exhibits enhanced fluorescence upon binding to amyloid fibrils. HEWL at 65 °C showed a marked daily increase in fluorescence intensity values when compared with values exhibited by control HEWL at room temperature (Fig. 1C). Furthermore, when treated with ANS, the ultracentrifuged/ultrafiltrated oligomers, sonicated fibrils, and mature fibrils all showed a blue shift in spectral maxima (Fig. 1D).

Microscopy Imaging of Amyloid Aggregates—As a direct visual confirmation of the existence of oligomeric and fibrillar species, oligomers, sonicated fibrils, and mature fibrils were analyzed using AFM and TEM in conjunction with the above spectroscopic measurements (Fig. 2A). In contrast to the mature fibrils, both the oligomeric and sonicated samples exhibited a scattered array of spots, providing direct evidence of successful size-based fractionation through the combined ultracentrifugation/ultrafiltration methodology. For the mature fibril, the α z-height line section of the three-dimensional structure measured 3.26 ± 0.65 nm, whereas the β z-height line section at the crossover was 4.23 ± 0.46 nm, indicative of helical twists in the mature fibril (Fig. 2, C and D), as classically found in typical cases (28). Oligomers, sonicated fibrils, and mature lysozyme fibrils were also visualized using TEM. All samples were negatively stained with the dye uranyl acetate (Fig. 2E). All samples show distinct differences in shape and size as a result of the destructive nature of sonication.

Oligomers and Fibrils Are Cytotoxic to PC12 Cells of Both Undifferentiated and Differentiated Morphologies—The toxicities of oligomeric and fibrillar species, in both sonicated and mature states, were assessed using the PC12 cell line. Cultured PC12 cells have the propensity to divide continuously and provide a model for tumor research. Upon continued treatment with NGF, PC12undiff cells transform from a neoplastic morphology and begin to extend branching varicose processes (Fig. 3A). The different fibrillar and prefibrillar aggregates were added to both cell lines at concentrations in the 20 to 300 μM range and incubated at 37 °C for 24 h. Percentage cell death was calculated using the Alamar Blue viability assay. All lysozyme species show significant efficacy toward PC12diff cells, with oligomers yielding LC50 values of 44 μM and fibrils 91 μM (Table 1). PC12undiff cells were treated with identical concentrations. The LC50 values for mature amyloid fibrils (94 μM) showed no discernible difference in susceptibility. Interestingly, oligomer-treated PC12undiff cells give an LC50 value of 29 μM, considerably lower than that observed with the differentiated cell type. Sonicated fibrils, while not as potent as oligomers on both cell lines, were more potent than mature full-length fibrils. The monomeric state of HEWL was additionally examined and showed no cytotoxicity toward either cell line (Fig. 3, B and C).
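The LC50 values quoted above are typically obtained by fitting a sigmoidal dose-response curve to viability data; a minimal sketch with a four-parameter Hill model and made-up data points (not the paper's raw data):

```python
import numpy as np
from scipy.optimize import curve_fit

# Four-parameter Hill (log-logistic) dose-response: viability falls from
# 'top' to 'bottom' around the midpoint concentration lc50.
def hill(conc, top, bottom, lc50, slope):
    return bottom + (top - bottom) / (1.0 + (conc / lc50) ** slope)

conc = np.array([20.0, 40.0, 80.0, 150.0, 300.0])     # uM, aggregate concentration
viability = np.array([72.0, 53.0, 35.0, 21.0, 9.0])   # % of untreated control (illustrative)

popt, _ = curve_fit(hill, conc, viability, p0=[100.0, 0.0, 60.0, 1.0])
print(f"LC50 ~ {popt[2]:.0f} uM")
```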
HEWL Oligomers and Amyloid Fibrils Induce Cellular Death via Apoptotic and Secondary Necrotic Pathways, Respectively—During apoptosis, cells undergo highly specific morphological changes including plasma membrane blebbing, pyknosis, and exposure of the negatively charged phospholipid phosphatidylserine (PS) on the outer leaflet of the cell membrane (29, 30). Necrotic cells undergo a different form of cell death characterized by swelling and sudden deflation of the dying cell, resulting in leakage of internal cellular contents into the surrounding milieu (31). PC12undiff/diff cell death was analyzed by FACS using a combined stain for FITC-labeled annexin-V (apoptosis) and PI (necrosis or non-apoptotic death). This experiment demonstrates that both fibrils and oligomers induce high levels of cell death in PC12undiff/diff cells. Oligomer-treated PC12undiff cells (Fig. 4B) show elevated annexin V-FITC staining when compared against the control group (Fig. 4A). Fibrils show increased staining for both annexin V-FITC and PI (Fig. 4D). Similarly, PC12diff cells also show differences in cell death mechanisms. Oligomer-treated cells show a propensity for annexin V-FITC staining (Fig. 4F), whereas fibril-treated PC12diff cells showed increased annexin V/PI double-positive staining (Fig. 4H) when compared with control (Fig. 4D). PC12undiff/diff cells were also analyzed using sonicated fibrils. PC12undiff cells showed increased staining for annexin-V (Fig. 4C), whereas PC12diff cells showed approximately equal staining for annexin-V and dual annexin-V/PI (Fig. 4G). In addition, cellular death in PC12undiff/diff cells treated with HEWL samples was examined using Western blot analysis probing for PARP. In both cell lines, native HEWL and the monomeric fraction showed no apoptotic death. Sonicated fibrils and mature fibrils elicited a slight apoptotic response, although oligomers appeared to induce the highest level of apoptotic cell death (Fig. 5). These results suggest that treatment of PC12undiff/diff cells with HEWL-isolated oligomers, sonicated fibrils, and mature fibrils results in the cleavage of the apoptotic marker protein PARP, indicating that apoptosis has been activated in response to these proteinaceous fractions, in agreement with our flow cytometry data.

Inhibition of LTP by Lysozyme Oligomers—Amyloid neurodegenerative diseases commonly lead to cognitive decline and memory deterioration. To determine whether HEWL aggregates alter normal brain function in vivo, we examined their effects on hippocampal LTP, a well-established correlate of learning and memory, which is potently inhibited by amyloid β-protein (Aβ) (26). Because we had previously found that fibril preparations of Aβ are inactive in this model, we compared the actions of oligomers and sonicated fibrils in the present experiments. Fig. 6 shows that in anesthetized control rats, HFS induced robust and long-lasting potentiation (>3 h) of excitatory synaptic transmission in the CA1 area of the hippocampus after intracerebroventricular injection of either 15 μl of distilled water (n = 3) or 1 mM non-aggregated HEWL (15 nmol, n = 4) (combined group, 151 ± 15%, n = 7, p < 0.05, compared with pre-HFS baseline).

Figure 2. AFM images of amyloid aggregates on mica formed from hen egg lysozyme. A, oligomers, sonicated fibrils, and mature amyloid fibrils visualized using AFM. B, a typical z-height AFM image of a fibril using a 1-μm scan range; fibril fine structure is evident from the periodic steps indicated by the white circles. The periodic steps can also be seen in the three-dimensional image in C. D, z-height line sections on areas of the fibril between the steps of the periodic structure (α) and through the step itself (β).
E, transmission electron microscopy of HEWL samples: oligomers, sonicated fibrils, and mature fibrils. Samples were deposited onto carbon-coated grids and stained with 2% uranyl acetate prior to visualization. These images are representative of three separate experiments.

In contrast, injection of the same volume of a 350 μM HEWL oligomer sample completely inhibited LTP at 3 h post-HFS (5.25 nmol, 103 ± 5%, n = 7, p > 0.05, compared with baseline; p < 0.05, compared with control LTP) without affecting baseline synaptic transmission (102 ± 9%, n = 4, p > 0.05, compared with the pre-injection recording; Fig. 7, A and B). We chose this concentration of HEWL oligomers based on the in vitro toxicity data and pilot in vivo experiments. Furthermore, a 10-fold lower dose of HEWL oligomers (35 μM in 15 μl, 0.525 nmol) did not inhibit LTP (145 ± 10%, n = 4, p < 0.05, compared with baseline; p > 0.05, compared with control LTP; Fig. 7C). Interestingly, short fibrillar fragments generated by sonication of mature fibrils failed to inhibit LTP even when a much higher dose was injected (1.5 mM in 15 μl, 22.5 nmol; 137 ± 10%, n = 4, p > 0.05 compared with control LTP; Fig. 6). Of note, prior to HFS there was no significant difference in baseline excitability between any of the groups shown in Fig. 6, as measured by EPSP amplitude (2.7 ± 0.2, 2.5 ± 0.3, and 2.4 ± 0.3 mV for vehicle, HEWL oligomers, and sonicated fibrils, respectively) or stimulation intensity (6.6 ± 0.4, 7.0 ± 0.5, and 7.2 ± 0.6 mA for vehicle, HEWL oligomers, and sonicated fibrils, respectively).
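A sketch of the statistics pipeline described under "Electrophysiological Techniques" — normalizing EPSP amplitudes to the pre-HFS baseline, grouping into 10-min epochs, and comparing groups by one-way ANOVA. Array names and sampling are hypothetical:

```python
import numpy as np
from scipy import stats

def epoch_means(t, epsp, t_hfs, epoch_s=600.0):
    """Return per-epoch means of EPSP amplitude as % of the pre-HFS baseline."""
    baseline = epsp[t < t_hfs].mean()
    pct = 100.0 * epsp / baseline
    post = t >= t_hfs
    bins = ((t[post] - t_hfs) // epoch_s).astype(int)   # 10-min (600 s) epochs
    return np.array([pct[post][bins == b].mean() for b in np.unique(bins)])

# Compare a given epoch (e.g., around 3 h post-HFS) across treatment groups:
# f_stat, p_value = stats.f_oneway(vehicle_epoch_vals, oligomer_epoch_vals,
#                                  sonicated_fibril_epoch_vals)
```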
There is considerable debate regarding the true pathogenic role of oligomers and mature fibrils in neurodegenerative diseases. Early studies suggested that mature amyloid fibrils were the main pathogenic entities involved in amyloidosis (9, 33). In contrast, there is equally strong evidence that oligomeric species may also have a pathogenic role in these disorders (15, 34, 35). The perception of the role of amyloid fibrils has changed over recent years, and it has been suggested that their true function may be that of an evolutionary protection mechanism employed by the body after the fundamental damage has been done at an earlier phase by the soluble oligomers (36). Our study demonstrates that HEWL oligomers and fibrils display toxicity toward PC12 undiff/diff cells and that soluble oligomers may not be the only toxic component involved in neurodegenerative disease. The higher levels of HEWL needed to induce cell death are representative of the normally non-pathogenic character of this protein.
FIGURE 3 (caption fragment; scale bar: 100 µm). B and C, Alamar Blue emission viability results for PC12 cell lines treated with HEWL-separated fractions (log scale). Undifferentiated (B) and differentiated (C) PC12 cells were exposed to varying concentrations of prefibrillar and fibrillar aggregates and then assayed using the fluorescent staining compound Alamar Blue. Monomeric HEWL was also examined and displayed no decrease in viability. Error bars indicate mean ± S.E. of three experiments, calculated using GraphPad software.
This protective role is consistent with earlier work examining amyloid injury in differentiated and undifferentiated human neurotypic SH-SY5Y cells (22). A possible reason for this is that phenotypic alterations that are a consequence of differentiation modify aggregate binding to the cell surface (38). Using dual apoptotic and necrotic staining, we have shown that oligomer-treated PC12 undiff/diff cells display increased annexin V staining indicative of apoptosis, whereas the same cells treated with mature fibrils show annexin V/PI double-positive staining, suggesting a late apoptotic or secondary necrotic death. FACS was also conducted on the sonicated fibrillar samples, using identical settings. Sonicated fibrils used to treat PC12 undiff cells showed increased apoptotic staining, whereas PC12 diff cells showed equal amounts of staining for both apoptosis and late necrosis, suggesting that these cells are entering the later stages of the apoptotic cycle of cell death (Fig. 4, A-H). PC12 undiff/diff cells exhibited cleaved PARP, an apoptotic indicator, when treated with oligomers, sonicated fibrils, and mature fibrils (Fig. 5). Monomer-treated cells show no signs of apoptotic death, which is supported by the evidence that these samples are not cytotoxic to cells. HEWL oligomers produced the highest level of apoptotic death, whereas sonicated fibrils and mature fibrils gave slightly lower levels of cleaved PARP. These data, in combination with the FACS analysis, suggest that cells treated with HEWL fractions, in particular oligomers, are dying through an apoptotic cell death pathway. The lower levels of apoptotic cell death seen in fibril-treated cells suggest that, in addition to apoptosis, non-apoptotic cell death is also occurring. We found that HEWL oligomers inhibited hippocampal LTP in vivo (Fig. 6), a process central to learning and memory.
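The annexin V-FITC/PI quadrant logic used throughout Fig. 4 can be sketched in a few lines. In the snippet below, the event distributions and gate positions are invented for illustration; only the four quadrant definitions follow the convention described in the text:

```python
# Illustrative quadrant gating for annexin V-FITC / PI flow cytometry events.
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical log-scale fluorescence intensities for 10,000 events.
annexin = rng.normal(2.0, 1.0, 10_000)
pi = rng.normal(1.5, 1.0, 10_000)

ANNEXIN_GATE, PI_GATE = 2.5, 2.5  # invented gate positions

quadrants = {
    "viable (A-/PI-)": (annexin < ANNEXIN_GATE) & (pi < PI_GATE),
    "early apoptotic (A+/PI-)": (annexin >= ANNEXIN_GATE) & (pi < PI_GATE),
    "late apoptotic/secondary necrotic (A+/PI+)": (annexin >= ANNEXIN_GATE) & (pi >= PI_GATE),
    "necrotic (A-/PI+)": (annexin < ANNEXIN_GATE) & (pi >= PI_GATE),
}
for name, mask in quadrants.items():
    print(f"{name:45s} {100.0 * mask.mean():5.1f} %")
```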
Despite the fact that these oligomers are not as potent at inhibiting LTP as, for example, Aβ oligomers (39), the data provide new insight into the role of protein misfolding in neurodegenerative diseases, as well as into the potential benefits of immunotherapies using conformation-selective antibodies. We have previously found that insoluble amyloid fibrils formed by Aβ did not affect LTP in vivo (40). One reason for the lack of effect may be that, due to diffusional restrictions, such large aggregates do not reach the hippocampal parenchyma from the injection site in the ventricle. To address this issue, we used a sonication approach to produce short fragments of mature fibrils while maintaining the general fibrillar structure (27). The observed difference in the abilities of oligomers and fibrillar fragments to inhibit LTP in vivo (Fig. 6) is in accord with our cell death findings (Fig. 3, B and C), where sonicated fibrils were less toxic than oligomers in vitro, and probably reflects the different nature of the interaction between neurons and these different HEWL aggregates. Indeed, it has been demonstrated that the pro-apoptotic signaling pathway is involved in the inhibition of LTP by Aβ oligomers (41). Alternatively, intracerebroventricularly injected sonicated fibrils may not reach the hippocampal parenchyma and may rather interact with cells at the ventricular surface. It is important to note that whereas oligomers have been shown to universally display inherent toxicity through a shared membrane permeabilization mechanism of pathogenesis (34, 42), fibrils exhibit much greater diversity with regard to pathogenicity and, unlike oligomers, may employ a number of various mechanisms of cytotoxicity (43, 44). Fibrils formed from differing proteins, and indeed from the same peptides, have shown dramatic differences in terms of conformational variation and cytotoxicity (18, 45-47). This may be responsible for the variations observed in terms of cytotoxic capability and could be a reason for the inconsistency displayed by numerous experiments examining the cytotoxic ability of fibrils. The true toxic component involved in the cell/tissue damage symptomatic of neurodegenerative diseases is still a proverbial gray area in the world of amyloid pathology (7, 48). Our results demonstrate that HEWL aggregates exhibit toxicity toward PC12 cells, and that the exact method of toxicity may be unique for individual fractions as well as dependent on the differentiated state of the cell. Oligomers inhibit LTP in rats in vivo, whereas sonicated fibrils appear to have no effect. Although both oligomers and fibrils have been shown to be damaging in vitro, in vivo experiments have yet to show that fibrils possess the ability to disrupt LTP, further highlighting the complicated matrix that surrounds the efficacy of these protein species. Knowledge of these subtle differences will be very important when designing anti-fibrillogenesis therapeutics. Despite this, our findings confirm that innocuous hen lysozyme can be engineered to produce both cytotoxic soluble prefibrillar aggregates and mature amyloid fibrils, further strengthening the claim that fibrillar conformation, and not the identity of the protein, is key to cellular toxicity and the underlying specific cell death mechanism.
7,207.2
2015-07-28T00:00:00.000
[ "Biology", "Medicine" ]
Carbon Fiber Reinforced Polymer Cables: Why? Why Not? What If? Cables of suspended structures are suffering due to increased corrosion and fatigue loading. Since 1980, EMPA and BBR Ltd. in Switzerland have been developing carbon fiber-reinforced polymer (CFRP) parallel wire bundles as cables for suspended structures. The excellent properties of these bundles include corrosion resistance, very high specific strength and stiffness, superior equivalent moduli and outstanding fatigue behavior. An anchoring scheme produced with gradient materials based upon ceramics and epoxy is described. For the first time, large CFRP cables were applied in 1996 on the vehicular cable-stayed Stork Bridge with a 124 m span in Winterthur, Switzerland. The performance of these cables and of later applications was and still is monitored with sophisticated fiber-optical systems. To date, these results fully match the high expectations. Under the assumptions that (1) the behavior of the pilot applications of CFRP cables described in this paper remains fully satisfactory, (2) active systems for distributed mitigation of wind-induced vibrations are going to be successful and (3) there is a need for extremely long-span bridges to cross straits like those of Bab el Mandeb, Messina, Taiwan or Gibraltar, why should the next generation of structural engineers not use CFRP cables for such extremely long-span bridges? This would open spectacular new opportunities. Introduction Today's state of the art in cables for civil structures is the accumulation of past innovations. Much has been achieved. However, during the past 30 years, the bridge engineering community has experienced more and more damage on stay and hanger cables [1,2]. Cables are suffering due to increased corrosion and fatigue loading. Within the last few years, ever more corrosion damage has been discovered on the large main cables of suspension bridges [3]. Most bridge engineers seem to agree that the corrosion and fatigue resistance of cables for suspended structures has to be enhanced. Already decades ago, researchers proposed modern approaches using non-metallic, which means non-corrosive, materials in civil construction [4][5][6][7]. The introduction of carbon fiber-reinforced polymers (CFRP) in place of steel for cables has been proposed since 1982 [8]. From the lifetime point of view, studies indicated superior results for carbon fiber composites compared to aramid or glass. It was found that the future potential of carbon fibers is highest [9]. The purpose of the following EMPA R&D work was to develop an anchorage system capable of successfully handling the huge potential of CFRP wires, to achieve a high reliability of parallel wire bundles made of such advanced composites and to apply them in pilot projects. Carbon Fibers The ideal construction materials are based on the elements found principally toward the middle of the periodic table. These elements, including carbon, form strong, stable bonds at the atomic level. Materials held together by such bonds are rigid, strong and resistant to many types of chemically aggressive environments up to relatively high temperatures. Furthermore, their density is low and raw materials are available in almost unlimited quantities. Carbon fibers are made by carbonizing an organic polymer yarn, with a fiber diameter in the 5- to 10-µm range.
The fibers mostly used within the projects described in the following sections were the Torayca T700S, having a strength of 4,900 MPa, an elastic modulus of 230 GPa and an elongation at failure of 2.1%. The density is 1.8 g/cm3. The axial thermal expansion coefficient is approximately zero. The carbon fiber dates back to 1879: the inventor, Thomas Edison, used carbon fibers as filaments for early electrical light bulbs. CFRP Wires An advanced composite material built up of parallel fibers and a matrix might seem unnecessarily complicated at first sight. Why not simply take a solid carbon rod for the parallel wire bundle of a cable? Carbon would be, as was pointed out above, a very rugged material having the outstanding properties shared by elements from the middle of the periodic table. Such materials have, however, seen little use as structural materials in the past due to their extremely brittle behavior. A fine notch at the surface or a small flaw within the bulk can lead to a sudden, premature and catastrophic failure of a structural element made of such a material. Considerations of the atomic structure and statistics show that the strength of carbon can be greatly increased and made highly reliable in the form of fibers. Furthermore, a crack in a composite wire does not propagate as suddenly as in a solid body. A flaw in a fiber does not inevitably lead to the failure of a structural element: when a fiber is embedded in a polymer matrix, it can take up the full load again a short distance away from a crack. For these reasons, CFRP wires are very reliable. Carbon fiber-reinforced polymer wires are produced by pultrusion, a process for the continuous extrusion of reinforced polymer profiles. Rovings are drawn (pulled) through an impregnating bath with epoxy resin, the forming die, and finally a curing area. The fibers have a good parallel alignment and are continuous (up to 4 km or even more). The fiber volume content of the wires used for the described projects was in the range of 68-72%. The axial properties of a CFRP wire (modulus, strength) can simply be calculated with the rule of mixture. Measured properties are listed in Table 1. The wires used in this project have a diameter of 5 mm. The cables are built up as parallel wire bundles. The principal objective is minimal strength loss of the wires in a bundle as compared to single wires. Since CFRP wires are corrosion resistant, no corrosion-inhibiting compound or grout is required. However, it is still necessary to protect the wires against wind erosion and ultraviolet radiation attack, because the combination of these two attacks could degrade the wire surfaces. A polyethylene (PE filled with carbon black) pipe was used for adequate shielding. Such pipes have been applied very successfully since 1972 to shield parallel wire bundles made of steel. Their resistance against outdoor weathering, including UV radiation, is excellent, at least in Western Europe.
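As a rough cross-check of the rule-of-mixture estimate just mentioned, the sketch below combines the T700S fiber data quoted above with assumed matrix values (the ~3 GPa epoxy modulus and 70 MPa matrix strength are generic textbook figures, not values from this paper):

```python
# Rule-of-mixture estimate of the axial properties of a pultruded CFRP wire.
def rule_of_mixture(vf: float, fiber: float, matrix: float) -> float:
    """Axial composite property for fiber volume fraction vf."""
    return vf * fiber + (1.0 - vf) * matrix

E_fiber, E_matrix = 230.0, 3.0      # GPa (fiber value from the text; matrix assumed)
s_fiber, s_matrix = 4900.0, 70.0    # MPa (matrix contribution is minor)

for vf in (0.68, 0.72):             # fiber volume content of the wires used
    E_wire = rule_of_mixture(vf, E_fiber, E_matrix)
    s_wire = rule_of_mixture(vf, s_fiber, s_matrix)
    print(f"Vf = {vf:.2f}: E ≈ {E_wire:5.1f} GPa, tensile strength ≈ {s_wire:6.0f} MPa")
# At Vf = 0.70 this gives E ≈ 162 GPa and ≈ 3,450 MPa, of the same order as the
# ~3,300 MPa ultimate strength quoted later for the CFRP cables.
```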
The Anchorage of CFRP Cables The key problem facing the application of CFRP cables, and thus the impediment to their widespread use in the future, is how to anchor them. The outstanding mechanical properties of CFRP wires mentioned above are only valid in the longitudinal direction. The lateral properties, including interlaminar shear, are relatively poor. This makes it very difficult to anchor CFRP wire bundles and obtain the full static and fatigue strength. EMPA has been developing CFRP cables using a conical resin-cast termination. The evaluation of the casting material to fill the space between the steel cone of the termination (in future it will be a filament-wound CFRP cone) and the CFRP wires was the key to the problem. This casting material, also called the load transfer media (LTM), has to satisfy multiple requirements: (1) the load should be transferred without reduction of the high long-term static and fatigue strength of the CFRP wires due to the connection; (2) galvanic corrosion between the CFRP wires and the steel cone of the termination must be avoided, since it would harm the steel cone; therefore, the LTM must be an electrical insulator. The conical shape inside the socket provides the necessary radial pressure to increase the interlaminar shear strength of the CFRP wires. The concept is demonstrated in Fig. 1 (left and right) using a one-wire system as an example. If the LTM over the whole length of the socket is a highly filled epoxy resin, there will be a high shear stress concentration at the beginning of the termination on the surface of the CFRP wire (Fig. 1, left). This shear peak causes pullout or tensile failure far below the strength of the CFRP wire. One could avoid this shear peak by the use of unfilled resin; however, this would cause creep and an early stress rupture. The best design is shown in Fig. 1, right: the LTM is a gradient material. At the load side of the termination, the modulus of elasticity is low and continuously increases until reaching a maximum. The LTM is composed of aluminum oxide ceramic (Al2O3) granules with a typical diameter of 2 mm. All granules have the same size. To get a low modulus of the LTM, the granules are coated with a thick layer of epoxy resin and cured before application (Fig. 2, left, top); hence, shrinkage can be avoided later in the socket. To obtain a medium modulus, the granules are coated with a thin layer. To reach a high modulus, the granules are filled into the socket without any coating (Fig. 2, left, bottom). With this method, the modulus of the LTM can be tailored as required. The voids between the granules are then filled with epoxy resin by vacuum-assisted resin transfer molding. The concept of a termination is shown on the right side of Fig. 2. Many CFRP parallel wire bundles were tested at EMPA under static and fatigue loading. The results prove that the anchorage system described is very reliable. The static load-carrying capacity generally reaches 92% of the sum of the single wires. This result is very close to the theoretically determined capacity of 94% [10]. Fatigue tests performed on cables at EMPA showed the superior performance of CFRP under cyclic loads [9]. The anchorage system is patented [11]. BBR Systems Ltd. in Schwerzenbach-Zurich obtained an international license from EMPA. Stork Bridge in Winterthur: CFRP Stays-A Milestone in Bridge Construction The Stork Bridge, erected in 1996, is situated over the 18 tracks of the railroad station in Winterthur and has a central A-frame tower supporting two approximately equal spans of 63 and 61 m (Fig. 3). The cables that converge at the tower top are rigidly anchored into a box anchorage at the apex of the A-frame. The superstructure has two principal longitudinal girders (HEM 550, Fe E 460) spaced at 8 m and supporting the bridge deck. The CFRP cable type (Fig. 4) used for the Stork Bridge consists of 241 wires, each with a diameter of 5 mm. This cable type was subjected to a load three times greater than the permissible load of the bridge for more than 10 million load cycles.
This corresponds to a load several times greater than that which can be expected during the life cycle of the bridge. The two CFRP cables with their anchorages and the neighboring steel cables have been equipped by EMPA with conventional sensors and also with state-of-the-art glass fiber-optical sensors, which provide permanent monitoring to detect any stress and deformation. This arrangement also permits a comparison between theoretical modeling and the reality of a practical application. This development project was promoted by the CTI, the Commission for Technology and Innovation of the Swiss Federation, to strengthen world-wide confidence among bridge engineers in cables made from carbon fiber-reinforced polymers and thus to create a leading role for the Swiss industry in the field of stay cables. The cable-stayed Stork Bridge will certainly be a milestone in international bridge construction, because CFRP cables do not simply have excellent behavior with regard to corrosion and fatigue, but are also five times lighter than steel cables with even higher strength properties. This high strength with low weight will permit us to build bridges with considerably longer spans than are currently possible, as will be discussed in the last chapter. The use of CFRP in a pilot project requires long-term structural health monitoring in order to finally gain confidence in this modern class of materials. Structural safety and changes in the structural behavior of these materials are of great interest, especially because long-term experience in application is missing. Clear statements about health condition are only possible with reliable sensors and data acquisition. Forces, stresses and deformations are often monitored by strain measurements. In this case, the strain of the CFRP cables was measured using sensing systems based on fiber-optic Bragg gratings (FBGs) and electrical resistance strain gauges (RSGs), due to their high resolution, low drift and high reliability [12]. The redundant use of sensors not only increases the reliability of the measurements, but also allows drawing conclusions about the actual reliability and measurement uncertainty of the sensing systems. In conjunction with the applications, appropriate sensor lifetime testing is also performed. Fiber-optic Bragg gratings were surface-adhered to loaded wires and to dummy wires used for temperature compensation. In contrast to the Stork Bridge, all FBG sensors were embedded in the CFRP wires during the industrial pultrusion process 2 years later at the Kleine Emme Bridge near Lucerne. Some FBGs were pre-strained on dummy wires (not loaded) to a level of 2,500 µm/m to monitor creep due to delamination of the fiber coating or the epoxy adhesive. The load on the CFRP wires is moderate and corresponds to an average strain of about 1,200 µm/m. The sensor system has been operational since April 1996 without any reliability problems. Important information about the reliability of the fiber-optical monitoring data on the Stork Bridge can be derived from the FBGs on the so-called dummy wires: four of the seven FBGs per cable are installed on unloaded wires for both temperature compensation and creep monitoring. The most important measurements are those of the relative displacement between the anchorage cones of the terminations and the load transfer media (LTM). As expected, there is a relative displacement due to creep as a function of time. However, all displacement curves show clear signs of leveling out. These results fully match the earlier high expectations.
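For readers unfamiliar with FBG read-out, the sketch below shows how a temperature-compensated strain is typically recovered from Bragg wavelength shifts. The photo-elastic coefficient and all numerical values are generic assumptions, not measurements from the Stork Bridge:

```python
# Minimal sketch of strain recovery from a fiber Bragg grating (FBG), assuming
# the standard relation for silica fibers:
#   Δλ/λ = (1 - p_e)·ε  (+ a temperature term removed by the dummy grating)
# with photo-elastic coefficient p_e ≈ 0.22, a typical silica value.
P_E = 0.22  # assumed photo-elastic coefficient

def fbg_strain(lambda0_nm: float, shift_loaded_nm: float, shift_dummy_nm: float) -> float:
    """Temperature-compensated strain in µm/m (microstrain)."""
    mechanical_shift = shift_loaded_nm - shift_dummy_nm  # dummy wire removes ΔT
    return mechanical_shift / lambda0_nm / (1.0 - P_E) * 1e6

# Illustrative numbers only: a 1550 nm grating, 1.50 nm shift on a loaded wire,
# 0.05 nm apparent shift on the unloaded dummy wire (temperature drift).
print(f"strain ≈ {fbg_strain(1550.0, 1.50, 0.05):.0f} µm/m")
# ≈ 1200 µm/m, the order of the average service strain quoted above.
```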
The Verdasio Bridge in the South of Switzerland The Verdasio Bridge (Fig. 5) in the south of Switzerland is a two-lane highway bridge and was built in the 1970s. The length of the continuous two-span girder is 69 m. A large internal post-tensioning steel cable positioned in a concrete web was fully corroded as a result of the use of salt for deicing. It had to be replaced in December 1998. The refurbishment of the Verdasio Bridge represented the first attempt to make practical use of the results of extensive experiments performed in the 1990s on continuous two-span beams post-tensioned with CFRP cables. Smooth wires were, however, used in place of the CFRP strands described in [13], with the same anchorage system as described above. Four external CFRP cables arranged in a polygonal layout (Fig. 5) at the inner face of the affected web inside the box replaced the corroded steel cross section. Each cable was made up of 19 pultruded CFRP wires with a diameter of 5 mm. Here too, the cables are equipped with sensors to measure the post-tensioning force. The main problem in this application was to tension the cable around relatively tight bending radii of 4.5 m, as CFRP wires are sensitive to transverse pressure due to their composition. Also in the case of this bridge, the relative displacement within the anchorages and the cable load were measured as functions of time. This project was particularly interesting because the nominal post-tensioning stress on the cable cross sections is as high as 1,610 MPa. The main questions were about stress relaxation (Fig. 6) and relative displacement (Fig. 7) due to this high sustained loading. The long-term results demonstrate that, even under this high creep load, the relative displacement in the anchorage is leveling off, and there is no stress relaxation in the CFRP wires. The "Kleine Emme" Bridge Near Lucerne The bicycle and pedestrian single-span bridge (Fig. 8) crossing the river "Kleine Emme" near Lucerne was built in October 1998. The bridge deck is 3.8 m wide, 47 m long and is designed for the maximum load of emergency vehicles. The superstructure is a space truss of steel pipes in composite action with the steel-rebar-reinforced concrete deck. The bottom flange, a tube of 323-mm diameter, was post-tensioned with two CFRP cables inside the tube. Each cable was built up of 91 pultruded CFRP wires of 5-mm diameter. The post-tensioning force of each cable is 2.4 MN. Therefore, the CFRP wires are loaded with a sustained stress of 1,350 MPa. This project saw the first ever use of CFRP wires with integrated fiber-optic Bragg gratings (FBGs) for this kind of application. The continuous monitoring and optimization of future production processes for high-grade CFRP wire will obviate the need for time-consuming final quality checks. Where the projected application requires incorporation of sensors in the CFRP wires, these may be integrated at the production stage for process monitoring. For the bridge over the Kleine Emme, "endless" optical fiber sensors with Bragg gratings were incorporated in the CFRP wires during the continuous pultrusion process. The strain and temperature signals from the sensors were monitored and analyzed already during production. The parallel wire bundles were produced in Dubendorf by EMPA and BBR, wound onto 2.5-m-diameter reels and transported to the bridge site in Emmen.
Since October 1998, these cables have tensioned the bottom chord of the new bridge over the Kleine Emme River. The sensors used for process monitoring during production now serve to monitor cable strain and thus the post-tensioning force in the bottom chord. Also in the case of this bridge, the relative displacement within the anchorages and the cable load were measured as functions of time. This project was also of particular interest because the nominal post-tensioning stress on the cable cross sections is 1,350 MPa. The main questions here, too, were about stress relaxation and relative displacement due to this high sustained loading. The results give proof that the relative displacement in the anchorages is leveling off in this case as well and that there is no stress relaxation. The Dintelhaven Bridge in Rotterdam The Dintelhaven Bridge consists of two concrete box girders in the harbor of Rotterdam. The three-span continuous girder, in which the CFRP pre-stressing elements have been applied, has a main span of almost 185 m and was erected using the balanced cantilever method. Four CFRP cables with a length of 75 m are pre-stressed at a load of 2.65 MN each. The location is in the negative moment zone above the supports. The cross sections of the cables and the materials are the same as described above for the bridge over the "Kleine Emme". The TNO report [14] concludes: (1) The pre-stressing elements were assembled successfully; besides some difficulties related to the quality and the length of the wires, no irregularities were detected. (2) From the evaluation it was found that most problems that occurred during the installation of the elements in the bridge were attributable to the novelty of the material; it is expected that if CFRP is used more frequently, additional protective measures when using the material will become more common. (3) From the measurements during and after tensioning of the pre-stressing elements in the Dintelhaven Bridge, it was found that the anchorage settlements, as well as the load development in time, were comparable to the long-term experiments performed in the laboratory. (4) Despite the fact that the behavior of the pre-stressed CFRP elements was as expected, it is recommended to continue the measurements in the bridge as long as possible (even after completion of the bridge). Another Advantage of CFRP Cables Carbon fiber-reinforced polymer cables have a very high specific strength and stiffness, do not corrode, show outstanding performance under fatigue loading, do not relax, do not suffer stress corrosion and are very lightweight. Light weight is, on the one hand, a great advantage for the stiffness performance of long stays and, on the other hand, for extremely long-span bridges (as will be discussed later). The stiffness of a cable-stayed bridge depends largely upon the tensile stiffness of the stay cables. The displacement of the end of a free-hanging stay cable under an axial load depends not only on the cross-sectional area and the modulus of elasticity of the cable, but to a certain extent also on the cable sag, as described in [15]. The relative equivalent modulus of elasticity, E_e/E, of a cable is defined as:

E_e/E = 1 / (1 + (ρ·g·l)^2·E / (12·σ^3))    (1)

where E_e is the equivalent modulus, E the modulus of elasticity, l the horizontal span of the cable, ρ the density of the cable material, g the gravitational acceleration and σ the applied cable stress. Figure 9 gives a comparison between steel and CFRP stays. There is no doubt that, from the technical standpoint, CFRP is today the best suited material for such cables.
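To make the comparison of Fig. 9 concrete, the following sketch evaluates Eq. 1 for representative material values; the moduli, densities and working stresses are assumptions chosen here for illustration, not the exact figures behind Fig. 9:

```python
# Relative equivalent modulus E_e/E for free-hanging stays of increasing
# horizontal span. Material values are representative assumptions:
# steel cable: E = 160 GPa, ρ = 7850 kg/m³, σ = 600 MPa
# CFRP cable:  E = 165 GPa, ρ = 1580 kg/m³, σ = 800 MPa
G = 9.81  # gravitational acceleration, m/s²

def relative_equivalent_modulus(E: float, rho: float, sigma: float, span: float) -> float:
    """Eq. 1: E_e/E = 1 / (1 + (ρ·g·l)²·E / (12·σ³))."""
    return 1.0 / (1.0 + (rho * G * span) ** 2 * E / (12.0 * sigma ** 3))

steel = dict(E=160e9, rho=7850.0, sigma=600e6)
cfrp = dict(E=165e9, rho=1580.0, sigma=800e6)

for span in (500.0, 1000.0, 2000.0):
    print(f"l = {span:6.0f} m:  steel {relative_equivalent_modulus(span=span, **steel):.3f}"
          f"   CFRP {relative_equivalent_modulus(span=span, **cfrp):.3f}")
# At l = 1000 m the steel stay retains only ~73% of its modulus, while the
# CFRP stay retains ~99%, illustrating the sag-related advantage of low density.
```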
However, since initial cost is the major and often the only parameter used by bridge owners in decision making, it is very difficult for CFRP to compete against steel. Even if the carbon fiber price were to decrease further within the next few years, it will remain very difficult for CFRP cables to compete unless the entire life cycle of a bridge is considered in the costs. A few clients for bridge cables, such as some departments of transportation, increasingly require life cycle costing to be carried out. This takes into account the predicted inspection and maintenance costs over the lifetime of the bridge, usually taken as 100 years. Costs are evaluated by calculating the net present value of the expenditure stream using a cash discount rate of typically 6% [16]. CFRP cables benefit considerably compared with steel in such comparisons. The most important factor to remember is not the cost per kilogram of materials, but rather the cost effectiveness of the installed cables considering the life expectancy and the cost of the alternatives. This has worked to the advantage of the CFRP strip and sheet bonding technique for the rehabilitation of structures [17,18], and there is a high probability that this will also be the case for CFRP cables in the future.
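As an illustration of such a life-cycle comparison, the sketch below discounts two invented expenditure streams at the 6% rate mentioned above; every cost figure and maintenance schedule in it is hypothetical, and the outcome of the comparison depends entirely on those assumptions:

```python
# Hypothetical life-cycle cost comparison: net present value (NPV) of an
# expenditure stream over a 100-year bridge life at a 6% discount rate [16].
def npv(cashflows: dict[int, float], rate: float = 0.06) -> float:
    """NPV of {year: cost} at the given annual discount rate."""
    return sum(cost / (1.0 + rate) ** year for year, cost in cashflows.items())

# Steel stays: lower assumed initial cost, inspections every 10 years and full
# cable replacement after 35 and 70 years (all figures invented, in M$).
steel = {0: 10.0}
steel.update({y: 0.5 for y in range(10, 101, 10)})  # periodic inspection
steel[35] = steel[70] = 8.0                         # cable replacement

# CFRP stays: assumed 20% premium on installed cost, monitoring only (invented).
cfrp = {0: 12.0}
cfrp.update({y: 0.1 for y in range(10, 101, 10)})

print(f"steel NPV: {npv(steel):5.1f} M$   CFRP NPV: {npv(cfrp):5.1f} M$")
# With these invented streams the two alternatives end up of the same order;
# which one wins depends entirely on the assumed costs and discount rate.
```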
Why Not? As mentioned above, much has been achieved. However, if we do not go forward, we go backward. As engineers, we must be innovative, so that tomorrow's world will be better. Innovation starts with the questions "Why?", as shown in the chapter above, and "Why not?" The question "Why?" gave us the opportunity to challenge the status quo. The question "Why not?" taps into a new emphasis on a rather old-fashioned kind of civil engineering ingenuity. Think, for example, of the Great Pyramid of Giza, the Pantheon in Rome, the Roman aqueducts and the Brooklyn or the George Washington Bridge. Many of our "why-nots" are counterintuitive, or maybe we should say temporarily counterintuitive. Ideas that never before occurred to us often reveal and explain themselves with as little as a single question like "Why not carbon fiber-reinforced polymer (CFRP) cables?" Let us assume that the behavior of the pilot applications of CFRP cables described in the previous chapter remains fully satisfactory and that there is a need for extremely long-span bridges to cross straits like those of Bab el Mandeb, Messina or Gibraltar. Why should the next generation of structural engineers not use CFRP cables? This would open spectacular new opportunities. It has been shown in a feasibility study as early as 1987 [19] that, from a static point of view, i.e., not considering the dynamic wind loading, a crossing of the Strait of Gibraltar at its narrowest site with a suspended bridge with a center span of 8,400 m seems viable. Multiplication of the Limiting Span of Suspended Bridges The dead load of a suspended superstructure increases with the span, and for any type of bridge there is a limiting span beyond which the dead load stresses exceed the assigned limit of allowable stresses. The use of CFRP cables would allow a multiplication of the limiting span of suspended structures in comparison to steel cables. In Fig. 10, the specific design loads versus the center span for the classical form of suspension bridges made of steel are compared with those made of CFRP according to Eq. 2. The specific design load is defined as the dead load g of the superstructure divided by the live load p. The specific design load as a function of the span follows from [20]:

g/p = l / (l_lim - l)    (2)

where g is the dead load of the structure (kN/m), p is the live load (kN/m), l is the center span (m), σ is the allowable stress (N/m²), α is the systems coefficient (m/s²), and ρ is the density (kg/m³). The limiting span l_lim is calculated as:

l_lim = σ / (α·ρ)    (3)

The uniaxially loaded cables and hangers are mainly responsible for determining the allowable stress in Eq. 2. This permits the use of the allowable stresses for steel and CFRP cables (Table 2). The assumptions used for Fig. 10, given in Table 2, can to some extent be questioned; e.g., the allowable stress of 1,000 MPa for CFRP cables might be too low compared with the ultimate strength of 3,300 MPa. However, this would not have any fundamental influence on the qualitative statement of Fig. 10. Table 3 gives examples of the specific design loads of existing and planned bridges; e.g., for a possible steel cable suspension bridge crossing the Strait of Messina, the dead load is twice the live load per unit length. In [19] it has been demonstrated that the "break-even span" for a bridge made of CFRP is approximately 4,000 m. The "break-even span" is the span at which the costs for a bridge made of steel or of CFRP are the same. This implies that only superstructures with main spans in the range of 4,000 m and greater will be the domain of CFRP. Feasibility Considerations In [19] a proposal is made for bridging the Strait of Gibraltar at its narrowest site with a cable-stayed bridge of CFRP. Compression forces in the deck must be avoided. This can be achieved with the cable-net concept, with large CFRP main cables integrated in the deck and anchored in the ground at the abutments. This would be a highly challenging civil engineering task, not only for the superstructure, but also for the towers, the foundations and the anchorages. Employing the building materials currently used for superstructures, this challenge cannot be met. The development step from the previously designed longest center span of 1,991 m (Akashi-Kaikyō Bridge) utilizing steel to the proposed span of 8,400 m (Fig. 11) or greater can only be achieved with advanced composites. Indeed, CFRPs seem predestined, above all for the cables, which would comprise approximately 70% of the weight of the superstructure, since the outstanding properties of strength and stiffness of unidirectional fiber composites can be used to full advantage. The application of CFRP cables is today, as shown in the previous sections, state of the art. The girders and decks could be built out of fiber-reinforced polymers like CFRP and/or GFRP (glass fiber-reinforced polymer), as shown in [21,22]. Lightweight steel or aluminum decks and girders are also viable, since for such extreme spans the limiting span is mainly controlled by the dead load of the cables. Another obstacle would be the financing of an extremely long-span bridge made of CFRP. Surely no contractor is willing to build such an object without being able to estimate the risk. This is only possible after years of practical experience on objects of medium span. Therefore, pilot projects like that of the Stork Bridge are of very high importance.
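A small numerical sketch of Eqs. 2 and 3 follows. The 1,000 MPa allowable stress for CFRP is the Table 2 value quoted above; the densities, the steel allowable stress and the systems coefficient α are assumptions made here for illustration (note that the ratio of the two limiting spans is independent of α):

```python
# Limiting spans from Eq. 3, l_lim = σ / (α·ρ), for steel and CFRP cables.
ALPHA = 20.0  # systems coefficient, m/s² (assumed)

def limiting_span(sigma_pa: float, rho: float, alpha: float = ALPHA) -> float:
    """Limiting center span in m for allowable stress σ (Pa) and density ρ (kg/m³)."""
    return sigma_pa / (alpha * rho)

l_steel = limiting_span(800e6, 7850.0)    # steel: 800 MPa and 7850 kg/m³ assumed
l_cfrp = limiting_span(1000e6, 1580.0)    # CFRP: 1,000 MPa (Table 2), density assumed
print(f"steel: l_lim ≈ {l_steel:,.0f} m   CFRP: l_lim ≈ {l_cfrp:,.0f} m "
      f"(ratio ≈ {l_cfrp / l_steel:.1f})")
# ≈ 5,100 m for steel versus ≈ 31,600 m for CFRP: a roughly sixfold
# multiplication, which is why an 8,400 m center span appears reachable
# with CFRP but not with steel cables.
```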
Open Questions No doubt, there are still many open questions. By far the number one is the question of the dynamic behavior of an extremely long-span bridge, especially under wind load. The increase in span length of bridges results in a remarkable decrease of their stiffness and natural frequencies. Due to low structural damping (the damping performance of CFRP is only slightly better than that of steel) and relatively low mass, extremely long-span bridges become very susceptible to vibrations caused by winds. Menn and Billington [23] proposed a concept for extremely long-span bridges in which the dynamic stability of the bridge is assured by placing on either side of the deck girder a sloping cable-stayed system carried by slender pylons, which are supported by the central pylon. Peroni and Casadei [24] suggested a three-dimensional tensile structure with a hyperboloid shape: this consists of a 3D net with ropes interlaced with each other to form a wicker basket containing the deck of the bridge inside. The principal net rope, beginning from two towers at the extremities of the bridge, is developed around elliptic sections that gradually reduce toward the midspan of the bridge. This particular interlaced cable configuration forms a "closed system" that is extremely stable with respect to the horizontal, vertical and torsional effects of wind loads. There are more "passive systems" for the mitigation of flutter, vibrations and oscillations under discussion. Such "passive systems" will, with high probability, not be sufficient. Advanced "active systems" and control strategies are needed. To fill this gap, the author initiated the program "Adaptive Material Systems" at the EMPA laboratories in the year 2000 [25]. This program includes a project with an innovative wind-induced vibration mitigation strategy based on active control of the bridge's aerodynamic profile. An array of adjustable winglets is installed along both edges of the girder, and their angular position is controlled as a function of the current dynamic state of the structure and the local wind field measurement. This information is shared with other similar units distributed over the whole length of the bridge through wireless networking. The characteristics of the interaction between the wind field and the underlying structure (nonlinear, spatially heterogeneous, time-variant and noisy), together with the additional degrees of freedom introduced by the flap system, result in complex mathematical models and associated control strategies. The need for real-time coordination between the various units, leading to an active control of the global aerodynamic profile of the bridge under the constraints introduced by the mechanical structure, makes the problem of mitigating vibrations at the perturbation source extremely challenging. For the mitigation of cable vibrations, analogous systems have been considered. The destructive effects of lightning strikes are well known. Studies of lightning and of the means of preventing it from striking an object, or of passing the strike harmlessly to ground, have continued since the days when Franklin first established that lightning is electrical in nature. From these studies, two conclusions emerge: firstly, lightning will not strike an object if it is placed in a grounded metal cage; secondly, lightning tends, in general, to strike the highest objects in the area. As composite materials replace more and more metals in aircraft, there has been an increased risk of lightning damage to such composite sections. CFRP is a conductor, but it is relatively resistive to electricity, which causes it to heat up as current passes through it.
A lightning strike has two main effects on unprotected CFRP: firstly, the main body of the CFRP becomes so hot that the epoxy resin component vaporizes; and secondly, the structural integrity of the CFRP will have been affected after the carbon has cooled down: it will probably retain considerable tensile strength, but it will lose interlaminar shear and compressive strength. Therefore, the aircraft industry developed aluminum grids which are used to protect the composite in its outermost layers. In the case of the Stork Bridge, it was decided to insulate the CFRP parallel wire bundles. It was possible to do so at very little additional cost, since the cables were in any case packed into a polyethylene pipe, similar to steel parallel wire bundles. In the past, lightning strikes have hit the towers of the bridge without any damage to the CFRP cables. For the "open questions", too, there will be solutions, as just discussed. The successful development of CFRP cables was hard and took a lot of time. Resolving the open questions will be even harder and will take more patience and creativity. Therefore, "Why not?" is a sustained argument against complacency. What If? However, what would happen if "somebody" were to order an extremely long-span CFRP bridge today? "What if?" keeps us humble and conservative. Today, the time is not yet ripe for very large standalone CFRP bridge projects. However, there is a need for the replacement of main cables on several large suspension bridges, as shown in previous sections. In such cases, a stepwise procedure could be performed to reduce the risk. The "25th of April" Suspension Bridge in Lisbon, inaugurated in 1966, needed in 1999 an additional lower train platform with two train tracks beneath the existing upper platform with six car lanes. To accommodate this, the bridge underwent extensive structural reinforcements, including a second set of main cables, placed above the original set, and the main towers were increased in height. The original builder, the American Bridge Company, was called again for the job, performing the first aerial spinning of additional main cables on a loaded, fully operational suspension bridge. Such an operation would be much easier and faster with CFRP cables. This approach could be a very efficient solution to compensate for the loss of cross section on existing suspension cables due to corrosion. Such an application should be the next step in the development of CFRP cables. Conclusions We explored the basis of innovation, starting with the questions "Why?" and "Why not?" The question "Why?" gave us the opportunity to challenge the status quo, to introduce the idea of CFRP cables and to overcome first restrictions with full-scale CFRP pilot projects. The question "Why not?" allowed us to discuss the future of CFRP cables. The question "What if?" will keep us humble and conservative. That is correct: as civil engineers, we have a high responsibility. However, this should not prevent us from going ahead with new, promising developments in the domain of CFRP cables. Although a great number of problems remain to be solved, the crossing of straits like those of Bab el Mandeb, Messina or Gibraltar with extremely long-span CFRP bridges appears feasible from the technical point of view within the next 30-40 years. Open Access This article is distributed under the terms of the Creative Commons Attribution License which permits any use, distribution, and reproduction in any medium, provided the original author(s) and the source are credited.
7,975.8
2012-02-01T00:00:00.000
[ "Engineering" ]
Cellular targets for neuropeptide Y-mediated control of adult neurogenesis Neuropeptides are emerging as key regulators of stem cell niche activities in health and disease, both inside and outside the central nervous system (CNS). Among them, neuropeptide Y (NPY), one of the most abundant neuropeptides both in the nervous system and in non-neural districts, has become the focus of much attention for its involvement in a wide range of physiological and pathological conditions, including the modulation of different stem cell activities. In particular, a pro-neurogenic role of NPY has been evidenced in the neurogenic niche, where a direct effect on neural progenitors has been demonstrated, while different cellular types, including astrocytes, microglia and endothelial cells, also appear to be responsive to the peptide. The marked modulation of the NPY system during several pathological conditions that affect neurogenesis, including stress, seizures and neurodegeneration, further highlights the relevance of this peptide in the regulation of adult neurogenesis. In view of the considerable interest in understanding the mechanisms controlling neural cell fate, this review aims to summarize and discuss current data on NPY signaling in the different cellular components of the neurogenic niche in order to elucidate the complexity of the mechanisms underlying the modulatory properties of this peptide. Introduction In adult tissues, stem cells reside in a permissive and specialized microenvironment, or niche, in which different molecular signals coming from the external environment, together with feedback signals from progeny to parent cells, tightly regulate self-renewal, multipotency and stem cell fate (for review, see Hsu and Fuchs, 2012). In this regard, many findings underline the key role played by neurotransmitters in stem cell biology in niches located both inside and outside the central nervous system (CNS; for review, see Katayama et al., 2006; Riquelme et al., 2008). Cross-species comparative analysis suggests that this role may be part of a more general and evolutionarily old function of these molecules, going beyond their role in inter-neuronal communication (for review, see Berg et al., 2013). Among them, neuropeptides, molecules released both by neurons, as co-transmitters, and by many additional release sites (for review, see van den Pol, 2012), are emerging as important mediators for signaling in both neurogenic and non-neurogenic stem cell niches (for review, see Oomen et al., 2000; Louridas et al., 2009; Zaben and Gray, 2013), thus representing possible shared signaling molecules in their biological dynamics. One of the most abundant neuropeptides in the CNS is neuropeptide Y (NPY), a 36-amino-acid polypeptide that is highly conserved during phylogenesis (Larhammar et al., 1993). Through its ability to modify its levels and expression pattern following environmental changes in both physiological and pathological conditions (Scharfman and Gray, 2006; Zhang et al., 2014), it is involved in many different functions, both inside and outside the CNS. These functions are performed by binding to different G-protein-coupled NPY receptors distributed in different organs (Pedrazzini et al., 2003). In peripheral organs, NPY can be found in sympathetic nerves, where its release mediates vasoconstrictive effects, in the adrenal medulla and in platelets (for review, see Hirsch and Zukowska, 2012).
NPY takes part in the cardiovascular and metabolic response to stress (for review, see Hirsch and Zukowska, 2012), and in coronary heart disease and hypertension (Zukowska-Grojec et al., 1993). More recently, the NPY-induced modulation of different stem cell niches has been highlighted. A direct role in adipogenesis has been indicated (Kuo et al., 2007; Park et al., 2014; Zhang et al., 2014), as well as its angiogenic properties, which have been widely described in different tissues (Ekstrand et al., 2003; Zukowska et al., 2003). The NPY system is also crucially involved in the regulation of the osteogenic niche, where its presence is due to both local production and release from NPY-immunoreactive fibers, and it plays a pivotal function in the neuro-osteogenic network that regulates bone homeostasis (Franquinho et al., 2010; Lee et al., 2010, 2011). Within the CNS, NPY is a major regulator of food consumption and energy homeostasis (for review, see Lin et al., 2004), acts as one of the crucial players in stress-related mechanisms (for review, see Hirsch and Zukowska, 2012), and participates in anxiety, memory processing and cognition (for review, see Decressac and Barker, 2012). It is also involved in the pathogenesis of several neurologic diseases, including neurodegenerative diseases such as Alzheimer's disease and Huntington's disease (reviewed by Decressac and Barker, 2012) and temporal lobe epilepsy (Marksteiner et al., 1989, 1990; Vezzani and Sperk, 2004), in which anticonvulsant and neuroprotective effects have also been observed (for reviews, see Vezzani et al., 1999; Vezzani and Sperk, 2004; Gray, 2008; Decressac and Barker, 2012; Malva et al., 2012). At the cellular level, it is either co-released locally by GABAergic interneurons (for review, see Sperk et al., 2007; Karagiannis et al., 2009) or comes from the blood by diffusion across the blood-brain barrier (Kastin and Akerstrom, 1999). It modulates excitatory neurotransmission and regulates hyperexcitability, particularly in the hippocampus (Baraban et al., 1997). The Y1, Y2 and Y5 receptors (Y1R, Y2R, Y5R) exhibit specific distribution patterns within the CNS (Parker and Herzog, 1999; Xapelli et al., 2006) and mediate the wide range of NPY physiological functions (Pedrazzini et al., 2003). In this context, a comprehensive analysis of relevant data on the NPY-mediated control of adult neurogenesis, focusing on its effects on the different cellular components of the neurogenic niche, could be particularly helpful to improve our understanding of the complex functions of this neuropeptide. NPY and Neural Stem Cells (NSCs) The direct effects of NPY on neural elements of the different neurogenic niches located outside (olfactory epithelium [OE] and retina) or inside the CNS (subventricular zone [SVZ], subcallosal zone [SCZ], subgranular zone [SGZ]) have been widely studied (Figure 1). The proximity to anatomical elements releasing NPY and the stem cell expression of the Y1R, as also described in the adipogenic and osteogenic niches (Togari, 2002; Lundberg et al., 2007; Lee et al., 2010; Zhang et al., 2014), are common elements. Effects of NPY on the OE Niche The vulnerability of olfactory sensory neurons to different environmental factors and the crucial role of the sense of smell in mammalian daily life account for neurogenesis in the OE; as the OE is accessible in living adult humans, it also offers a source of cells useful for understanding the biology of adult neurogenesis in health and disease (Mackay-Sim, 2010). Hansel et al.
provided the first evidence of a proliferative role of NPY on NSCs (namely basal cells) of the OE (Hansel et al., 2001), where the peptide is locally produced by the ensheathing cells of olfactory axon bundles and by sustentacular non-neuronal cells (Ubink et al., 1994). Experiments performed using transgenic animals and primary olfactory cultures have shown that this effect is mediated by the Y1R (Hansel et al., 2001; Doyle et al., 2008) and involves the Protein Kinase C and ERK1/2 pathways, which are ultimately involved in regulating the expression of genes controlling cell proliferation and differentiation (Hansel et al., 2001). NPY release is regulated by ATP, which is constitutively expressed by the OE and preferentially released on injury, and the consequent activation of P2 purinergic receptors (Kanekar et al., 2009; Jia and Hegg, 2012).
FIGURE 1 (legend fragment): NPY-microglia interactions in the modulation of neurogenesis may be hypothesized (dotted black arrow). In addition, NPY stimulates astrocyte proliferation mainly via the Y1 receptor (Y1R). NPY also acts on the endothelium through the Y2 receptor (Y2R), in cooperation with the Y5 receptor (Y5R); consequently, a direct effect on the endothelial component of the neurogenic niche could be hypothesized (dotted yellow arrow), resulting in increased angiogenesis and possible modulation of endogenous neurogenesis (dotted black arrow).
A role of NPY in the maturation and survival of olfactory receptor neurons has also been proposed (Doyle et al., 2012). Effects of NPY on the Retinal Niche Many findings suggest the presence of a regenerative potential within the mammalian retina, in which Muller astrocytes, which are responsible for the homeostatic and metabolic support of retinal neurons, appear capable of proliferating and giving rise to neuronal cells in response to retinal damage (for review, see Lin et al., 2014). Both NPY and NPY receptors (Y1R, Y2R and Y5R) are expressed by the different retinal cellular subpopulations, namely neurons, astrocytes, microglia and endothelial cells (Alvaro et al., 2007; Santos-Carvalho et al., 2014). Interestingly, in vitro experiments on Muller cell primary cultures pointed out a modulatory role of NPY on cell proliferation: at low doses it negatively affects the proliferation rate of the cells, while at high doses it increases cell proliferation through Y1R stimulation and consequent activation of the p44/p42 MAPKs, p38 MAPK and PI3K (Milenkovic et al., 2004). The NPY-mediated proliferative effect has been confirmed in experiments on retinal primary cultures, which revealed that NPY treatment stimulates retinal neural cell proliferation through nitric oxide (NO)-cyclic GMP and ERK1/2 pathways via the Y1R, Y2R and Y5R (Alvaro et al., 2008). Effects of NPY on the SGZ Within the dentate gyrus (DG), NPY is selectively released by GABAergic interneurons located in the hilus, which innervate the granule cell layer in close proximity to the SGZ (for review, see Sperk et al., 2007); a physiological role for NPY in the regulation of dentate neurogenesis can therefore be hypothesized. The pro-neurogenic role of NPY on hippocampal NSCs has been evidenced both in vitro (Howell et al., 2003, 2005) and in vivo (Decressac et al., 2011).
In vitro evidence suggests a purely proliferative effect (for review, see Gray, 2008), specifically involving the Y1R, which is mediated by the intracellular NO pathway, through NO/cyclic guanosine monophosphate (cGMP)/cGMP-dependent protein kinase (Cheung et al., 2012), ultimately culminating in the activation of ERK1/2 signaling (Howell et al., 2003; Cheung et al., 2012). Interestingly, in line with the results obtained in the retinal niche (Alvaro et al., 2008), a role of NPY emerges in the modulation of another signaling pathway driving a complex regulation of NSC activities. It is well known, in fact, that NO exerts a dual influence on neurogenesis, depending on its source (for review, see Carreira et al., 2012): while intracellular NO is pro-neurogenic, the extracellular form exerts a negative effect (Luo et al., 2010). In this respect, the Y1R has also been proposed as a key target in the selective promotion of the NO-mediated enhancement of dentate neurogenesis (Cheung et al., 2012). Decressac et al. confirmed, by in vivo administration of exogenous NPY in both wild type and Y1R knock-out mice, that the NPY-sensitive cells are the transit-amplifying progenitors expressing nestin and doublecortin (DCX), which selectively express the Y1R (Decressac et al., 2011), as also evidenced in vitro (Howell et al., 2003; Figure 1). A preferential differentiation of newly generated cells towards a neuronal lineage has also been reported (Decressac et al., 2011). In this regard, it is worth emphasizing the role also played by NPY in seizure-induced dentate neurogenesis. Studies on NPY−/− mice show a significant reduction in bromodeoxyuridine incorporation in the DG after kainic acid administration. Interestingly, the DCX-positive cells, besides being selective targets of NPY, are one of the most important neuroblast subpopulations recruited in seizure-induced neurogenesis (Jessberger et al., 2005). These findings are in line with the notion that different neural progenitor subpopulations within the niche show different sensitivities to physiological and/or pathological stimuli (Kempermann et al., 2004; Fabel and Kempermann, 2008), thus representing selective targets for potential drugs aimed at modulating endogenous neurogenesis, of which NPY appears to be a possible candidate. Exogenous NPY has been administered in the trimethyltin (TMT)-induced model of hippocampal neurodegeneration and temporal lobe epilepsy, in which selective pyramidal cell loss in the hippocampal CA1/CA3 subfields (Geloso et al., 1996, 1997), reactive astrogliosis and microglial activation (for review, see Geloso et al., 2011; Corvino et al., 2013; Lattanzi et al., 2013) are associated with injury-induced neurogenesis (Corvino et al., 2005). NPY injection in TMT-treated rats results in long-term effects on the hippocampal neurogenic niche, culminating in the functional integration of newly generated neurons into the local circuit (Corvino et al., 2012, 2014). The early events following NPY administration are characterized by the up-regulation of genes involved in different aspects of NSC dynamics.
In particular, Noggin, which participates in self-renewal processes (Bonaguidi et al., 2008), Sox-2 and Sonic hedgehog, both involved in the establishment and maintenance of the hippocampal niche (Favaro et al., 2009), NeuroD1, which regulates differentiation and maturation processes (Roybon et al., 2009), doublecortin, a driver of neuroblast migration (Nishimura et al., 2014), and brain-derived neurotrophic factor (BDNF), which is involved in different aspects of dentate neurogenesis (Noble et al., 2011), have all been reported to be significantly modulated within the first 24 h following treatment with NPY (Corvino et al., 2012, 2014). These findings suggest that in vivo NPY administration, in association with the peculiar changes in the microenvironment induced by the ongoing neurodegeneration, may trigger a complex mechanism that goes beyond a mere proliferative effect. It can be speculated that this occurs as the result of NPY's effect on both neural and non-neural elements of the niche and/or as a consequence of multiple cell-cell interactions (Figure 2). Effects of NPY on the SVZ In the SVZ, the most abundant reservoir of NSCs in the human brain (Doetsch, 2003b; Lim and Alvarez-Buylla, 2014), NPY comes from the cerebrospinal fluid, together with other nutrients and growth factors (Hou et al., 2006). Dense NPY-positive networks also surround this region (Stanic et al., 2008; Thiriet et al., 2011). NPY is also locally expressed by a subset of subependymal cells (Curtis et al., 2005) and by immature neural progenitors, thus suggesting a role as an autocrine/paracrine factor in the control of SVZ neurogenesis (Thiriet et al., 2011). The effects of the peptide on the SVZ neurogenic niche have been assessed by both in vitro (Agasse et al., 2008; Thiriet et al., 2011) and in vivo studies (Stanic et al., 2008; Decressac et al., 2009). Also in this case, the pro-neurogenic role of NPY is essentially played by the Y1R (Agasse et al., 2008; Stanic et al., 2008; Thiriet et al., 2011), which is mainly expressed by DCX-positive neuroblasts in adult mice (Stanic et al., 2008; Figure 1) and by Sox2- and nestin-positive cells in the developing rat (Thiriet et al., 2011). Consistent with the reported effects on dentate and olfactory NSCs, the Y1R mediates a proliferative effect, via phosphorylation of the ERK MAP kinases p42 and p44 (Thiriet et al., 2011). The involvement of stress-activated protein kinase/JNK pathways, considered to play an important role in neural differentiation and maturation, has also been reported (Agasse et al., 2008). It is well known that, while sharing common regulators, the different neurogenic niches may show some differences in specific aspects, including cellular organization, neuronal subtype differentiation and migration of NSCs (Ming and Song, 2011). In this regard, some discrepancies with the SGZ have emerged: in the SVZ, in fact, NPY appears also to exert a direct role on cell migration (Decressac et al., 2009; Thiriet et al., 2011) and neuronal differentiation (Agasse et al., 2008; Decressac et al., 2009), while a mere proliferative role, without instructive signals toward differentiation processes, emerged from in vitro studies on SGZ NSCs. In particular, in vivo administration of NPY in adult wild type mice showed that the newly generated neurons migrate not only to the olfactory bulb, but also towards the striatum, where they preferentially differentiate into GABAergic neurons (Decressac et al., 2009).
Experiments performed on Y1R knockout mice indicated that they show a disrupted assembly of neuroblasts in the rostral migratory stream, compared with the chain-like organization present in wild-type animals (Stanic et al., 2008), suggesting a role for this receptor also in cell migration. The direct demonstration of a chemokinetic effect of NPY, through Y1R activation and MAPK ERK1/2 pathway recruitment in NSCs, was finally given by Thiriet et al. on rat SVZ neurospheres (Thiriet et al., 2011). The possible involvement of the Y2R has also been suggested, since Y2R null mice express a reduced number of migratory neuroblasts in both the SVZ and the rostral migratory stream, with a consequently reduced number of interneurons in the olfactory bulb (Stanic et al., 2008). It should be noted, however, that the Y2R protein was found only in close proximity to rostral migratory stream-associated neuroblasts, without evidence of positivity in NSCs and/or astroglial cells (Stanic et al., 2008). Many neurodegenerative diseases induce changes in SVZ neurogenesis (Curtis et al., 2007). Alzheimer's disease and Parkinson's disease, for instance, are accompanied by a reduction in NSC proliferation, while stroke and Huntington's disease cause an enhancement of SVZ neurogenesis, resulting in an increased number of new neurons, which also migrate into damaged areas (Curtis et al., 2007). Consequently, NPY administration may be of potential interest in cell replacement-based strategies for neurodegenerative diseases affecting SVZ neurogenesis. Decressac et al. demonstrated that NPY administration in the R6/2 model of Huntington's disease is able to attenuate striatal atrophy and to induce a proliferative effect on SVZ NSCs (Decressac et al., 2010). However, it did not result in an increased number of newly generated neurons migrating within the striatum. NPY administration was also ineffective in modulating dentate neurogenesis in R6/2 mice. Interestingly, a reduced expression of NPY in the hilus of R6/2 mice was observed, accompanied by a reduction in the number of Y1R-positive cells in the DG, thus suggesting that alterations in the NPY system might contribute to the impairment of neurogenesis in this model of Huntington's disease (Decressac et al., 2010).

Effects of NPY on SCZ

NPY also exerts its proliferative role in the SCZ, a caudal extension of the SVZ lying between the hippocampus and the corpus callosum that, in basal conditions, essentially generates oligodendrocytes migrating into the corpus callosum (Seri et al., 2006). Acting through the Y1R on nestin-positive cells, NPY is involved in basal and seizure-induced SCZ progenitor cell proliferation (Laskowski et al., 2007). Interestingly, SCZ activity appears to be modulated by seizures, resulting in the production of glial progenitors that migrate to the injured hippocampus (Parent et al., 2006), thus raising the intriguing possibility that NPY modulates SCZ oligodendrogliogenesis as well as neurogenesis (Gray, 2008).

NPY and Microglia

Increasing evidence suggests that microglia play a relevant role in the neurogenic niche: unchallenged microglia contribute, through their phagocytic activity, to the maintenance of homeostasis of the neurogenic processes (Sierra et al., 2010), while the different functional phenotypic profiles that microglial cells adopt in response to microenvironmental changes appear to have a dual role in neurogenesis (Carreira et al., 2012; Kettenmann et al., 2013; Su et al., 2014).
Much evidence indicates that the pro-inflammatory cytokines released by activated microglia, such as interleukin (IL)-1beta, tumor necrosis factor (TNF)-alpha and IL-6, detrimentally affect neurogenesis (Ekdahl et al., 2003; Ekdahl, 2012; Su et al., 2014). On the other hand, in an enriched environment, activated microglia show pro-neurogenic properties via increased expression of insulin-like growth factor-1, while, in the presence of T-helper-dependent cytokines, they reduce the production of TNF-alpha. In other words, the regulatory function of microglia in neurogenesis seems to be essentially dependent on differences in the instructive signals coming from the microenvironment (Ekdahl et al., 2009). Many studies support the modulatory role of NPY in the immune system, with effects ranging from the modulation of cell migration to macrophage and T helper cell differentiation, cytokine release, natural killer cell activity and phagocytosis, most likely through the Y1R (for review see Hirsch and Zukowska, 2012; Dimitrijević and Stanojević, 2013). Recent findings also indicate direct interactions between NPY and microglia, the innate defensive system of the CNS (Kettenmann et al., 2013). Ferreira et al. observed that NPY, acting via the Y1R, inhibits lipopolysaccharide-induced microglial activation and reduces the associated release of IL-1beta (Ferreira et al., 2010). This effect is mediated by NPY-induced impairment of NO synthesis and reduced expression of the inducible form of nitric oxide synthase (Ferreira et al., 2010). In addition, NPY also induces impairment of the phagocytic properties of activated microglia (Ferreira et al., 2011) and of IL-1beta-induced microglial motility. Taken together, these observations point to the key role played by the peptide in modulating the functional activities of microglia, and the consequent release of mediators during inflammation (Figure 1). Although most of these findings were obtained in in vitro systems, so that further research is needed in order to elucidate whether these interactions produce the same regulatory responses in vivo, a relevant influence of NPY-microglia interactions on the homeostasis of the neurogenic niche may be inferred. Because of the influence exerted by neuroinflammation on neurogenesis (Carreira et al., 2012), NPY-microglia signaling could be particularly relevant in the modulation of injury-induced neurogenesis. Studies exploring the interaction between neuroinflammation and neurogenesis lead to the hypothesis that the early detrimental action of microglia after acute neuronal damage can, in some situations, be modified into a supportive state during the chronic phase (Ekdahl et al., 2009), and NPY could be involved in the modulation of these transient properties of activated microglia. Many findings emphasize the ability of NSCs to modulate their own environment through the release of signaling factors (Klassen et al., 2003; Butti et al., 2014), and mutual interactions between NSCs and microglia have been shown by recent research (Mosher et al., 2012). In this regard, we may speculate that NPY, released by NSCs or coming from the surrounding environment, could be critically involved in this process, acting as a paracrine/autocrine factor which modulates both the state of activation of microglial cells and their interactions with NSCs (Figure 2).
NPY and Astrocytes

Astrocytes are complex cells whose supporting roles in the healthy CNS include the regulation of blood flow, the modulation of synaptic function and plasticity and the maintenance of the extracellular balance of ions and transmitters (Sofroniew, 2009). They also act as important regulators of the niche environment, through the secretion of diffusible factors (Lie et al., 2005; Barkho et al., 2006; Lu and Kipnis, 2010; Barkho and Zhao, 2011; Wilhelmsson et al., 2012) or through membrane-associated molecules (Barkho and Zhao, 2011). Thanks to their peculiar position between endothelial cells and neurons, astrocytes can mediate the exchange of molecules between the vascular and neural compartments (Parpura et al., 2012). In addition, a specific subpopulation of astrocytes, the radial astrocytes, directly generates migrating neuroblasts, via rapidly dividing transit-amplifying cells (Seri et al., 2001; Doetsch, 2003a). Several studies indicate that the expression of NPY and NPY receptors (namely the Y1R) also extends to some astrocyte subpopulations (Barnea et al., 1998, 2001; St-Pierre et al., 2000), including retinal astrocytes (Alvaro et al., 2007). It has been shown that astrocytes, like neurons, are able to synthesize NPY and show a regulated secretory pathway that is responsible for the release of multiple classes of transmitter molecules: in this regard, the activation of metabotropic glutamate receptors results in a calcium-dependent fusion of NPY-containing dense-core granules with the cell membrane and consequent peptide secretion (Ramamoorthy and Whim, 2008). It has been suggested that this process may be controlled by the RE-1-silencing transcription factor, the same factor that regulates neurosecretion in neurons (Prada et al., 2011). The expression of NPY in astrocytes is controlled by several factors: a post-natal down-regulation of glial peptide transcripts has been reported, as well as its upregulation in adult astrocytes after brain injury (Ubink et al., 2003). Interestingly, the in vivo intracerebroventricular administration of NPY significantly increases the proliferation not only of neuroblasts but also of astrocytes within the SVZ, mainly via the Y1R (Decressac et al., 2009; Figure 1). These findings delineate a complex scenario in which the peptide could exert its influence and, although direct evidence is still lacking, a role for NPY gliotransmission in the modulation of critical steps of adult neurogenesis may be hypothesized, in both physiological and pathological conditions. In particular, it has been reported that the expression of astrocytic NPY also appears to be modulated in a cytokine-specific manner: in this regard, a relevant role for fibroblast growth factor (Barnea et al., 1998) and IL-1beta (Barnea et al., 2001) in astrocytic NPY upregulation has emerged in in vitro studies. Both these factors can be released by astrocytes as well as by microglia: since, as previously reported, NPY inhibits microglial production of IL-1beta and IL-1beta-induced phagocytosis (Ferreira et al., 2011), a role of the peptide in the astroglial/microglial interplay may be speculated. It is conceivable that it may be involved in the astrocytic regulation of microglial differentiation and activation, which, in turn, differently affect neurogenesis.
In addition, it has been reported that NPY increases the proliferative effect of the astrocyte-derived growth factor fibroblast growth factor-2 on NSCs, through the increased expression of fibroblast growth factor receptor 1 on granule cell precursors (Rodrigo et al., 2010). This observation indicates the involvement of NPY also in neuron-glial crosstalk and further reinforces the hypothesis that it could be one of the molecules significantly involved in the mutual interactions among the different components of the niche (Figure 2).

NPY and the Endothelium

The vasculature is a critical component of the neurogenic niche, and endothelial cells closely interact with NSCs to form "neurovascular niches", contributing to the regulation and maintenance of the niche (Palmer et al., 2000; Shen et al., 2004, 2008; Tavazoie et al., 2008; Goldberg and Hirschi, 2009; for review see Goldman and Chen, 2011). The molecular cross-talk between NSCs and endothelial cells is mediated by diffusible factors secreted by endothelial cells, such as BDNF and vascular endothelial growth factor (VEGF), as well as by cell-cell contact (Leventhal et al., 1999; Jin et al., 2002; Shen et al., 2004, 2008; Snapyan et al., 2009; Sun et al., 2010; for review see Goldman and Chen, 2011; Vissapragada et al., 2014). Although the characterization of NPY receptors in the cerebral endothelium has not been fully clarified (Abounader et al., 1999; You et al., 2001), much evidence suggests that the endothelium could represent one of the sources, as well as one of the targets, of this peptide (Silva et al., 2005). The angiogenic action of NPY has been confirmed in several in vitro and in vivo models: using specific receptor antagonists or transgenic Y2R knockout mice, these studies reinforced the primary role of the Y2R in mediating NPY's angiogenic response (Zukowska-Grojec et al., 1998; Ghersi et al., 2001; Ekstrand et al., 2003; Lee et al., 2003a,b; Movafagh et al., 2006; Figure 1). NPY also appears to exert a relevant role in the regulation and stimulation of angiogenesis in pathological processes and tissue repair, as evidenced in in vivo models of peripheral limb ischemia (Grant and Zukowska, 2000; Lee et al., 2003b; Tilan et al., 2013), skin wound repair (Ekstrand et al., 2003) and oxygen-induced retinopathy (Yoon et al., 2002), in which both exogenous and/or endogenous (released from neural and non-neural stores) NPY significantly contributes to tissue revascularization. Angiogenesis and neurogenesis are related processes, as evidenced by data showing that cerebral endothelial cells activated by ischemia promote the proliferation and differentiation of NSCs, while neural progenitor cells isolated from the ischemic SVZ promote angiogenesis (Teng et al., 2008). In this regard, it has also been shown that both angiogenesis and the expression of pro-angiogenic factors exert important functions in different stages of neurogenesis, such as proliferation, migration and survival (Jin et al., 2002; Louissaint et al., 2002). Interestingly, among these molecules, a relevant role is played by NO signaling, which regulates both angiogenesis and neurogenesis (Carreira et al., 2013), and whose activity is modulated by NPY not only in endothelial cells (You et al., 2001; Chen et al., 2002; Lee et al., 2003b), but also in NSCs (Cheung et al., 2012) and microglia. It may be speculated that NPY, possibly released from the endothelium, acts as a diffusible factor that could influence and modulate elements of the neurovascular niche (Figure 2).
Concluding Remarks and Future Perspectives

In summary, existing data provide evidence that NPY modulates the neurogenic niche, performing a pro-neurogenic role directly on the NSCs, while a concomitant modulatory action on astrocyte, microglia and endothelium activities within the niche is also plausible. The involvement of NPY as a key player in the complex process of communication among the different components of the niche may be speculated, and, in this regard, there is an evident need for further research to definitively elucidate the mechanisms of NPY-modulated cell/cell interactions. This could yield a deeper understanding of some critical steps of the complex mechanisms that regulate adult neurogenesis, thus possibly providing knowledge useful for identifying selective targets for potential drugs aimed at modulating NSC fate. Moreover, due to the significant involvement of the NPY system also in non-neural stem cell niches, this information could help clarify the systemic role of the peptide, which appears to be involved in a set of basic homeostatic body functions, ranging from food consumption and energy homeostasis to the regulation of stem cell biology in adult tissues.

Authors and Contributors

MCG: She gave substantial contributions to both the conception and design of the work; she contributed to the acquisition, analysis, and interpretation of data. She drafted the work and revised it critically. She gave the final approval of the version to be published. She agrees to be accountable for all aspects of the work in ensuring that questions related to the accuracy or integrity of any part of the work are appropriately investigated and resolved. VC: She gave substantial contributions to the design of the work; she contributed to the acquisition, analysis, and interpretation of data for the work. She drafted the work and revised it critically. She gave the final approval of the version to be published. She agrees to be accountable for all aspects of the work in ensuring that questions related to the accuracy or integrity of any part of the work are appropriately investigated and resolved. VDM: She contributed to the acquisition of data for the work. She drafted the work. She gave the final approval of the version to be published. She agrees to be accountable for all aspects of the work in ensuring that questions related to the accuracy or integrity of any part of the work are appropriately investigated and resolved. EM: She contributed to the acquisition of data for the work. She drafted the work. She gave the final approval of the version to be published. She agrees to be accountable for all aspects of the work in ensuring that questions related to the accuracy or integrity of any part of the work are appropriately investigated and resolved. FM: He provided substantial contributions to the design of the work; he contributed to the interpretation of data for the work. He critically revised the work. He gave the final approval of the version to be published. He agrees to be accountable for all aspects of the work in ensuring that questions related to the accuracy or integrity of any part of the work are appropriately investigated and resolved.
The effects of Leishmania RNA virus 2 (LRV2) on the virulence factors of L. major and pro-inflammatory biomarkers: an in vitro study on a human monocyte cell line (THP-1)

Background Cutaneous leishmaniasis (CL) is a parasitic disease with diverse outcomes. Clinical diversity is influenced by various factors such as the Leishmania species and the host genetic background. The Leishmania RNA virus (LRV), as an endosymbiont, is suggested not only to affect the pathogenesis of Leishmania, but also to impact host immune responses. This study aimed to investigate the influence of LRV2 on the expression of a number of virulence factors (VFs) of Leishmania and of pro-inflammatory biomarkers. Materials and methods Samples were obtained from CL patients from Golestan province. Leishmania species were identified by PCR (LIN 4, 17), and the presence of LRV2 was checked using semi-nested PCR (RdRp gene). The human monocyte cell line (THP-1) was treated with three isolates of L. major with LRV2 and one isolate of L. major without LRV2. Treatments with the four isolates were assessed at time points of zero, 12, 24, 36, and 48 h after co-infection. The expression levels of Leishmania VF genes, including GP63, HSP83, and MPI, as well as pro-inflammatory biomarker genes, including NLRP3, IL18, and IL1β, were measured using quantitative real-time PCR. Results The expression of GP63, HSP83, and MPI revealed up-regulation in LRV2+ isolates compared to LRV2− isolates. The expression of the pro-inflammatory biomarkers, including the NLRP3, IL1β, and IL18 genes, was higher in LRV2− than in LRV2+ isolates. Conclusion These findings suggest that LRV2 may have a probable effect on Leishmania VFs and pro-inflammatory biomarkers in the human macrophage model. Supplementary Information The online version contains supplementary material available at 10.1186/s12866-023-03140-0.

CL is endemic in tropical and subtropical countries, with a global incidence ranging from 0.7 to 1 million new cases per year [7]. Iran is one of the endemic areas for CL, and L. major and L. tropica are responsible for approximately 80% and 20% of cases, respectively [4,6,8]. Clinical manifestations of CL are mostly limited to skin ulcers; however, atypical forms including disseminated, mucosal, and visceral involvement are also reported [9-11]. The severity of the disease seems to be multifactorial, depending on host immune responses, the Leishmania species, and sandfly factors [12,13].

Recent evidence has highlighted the role of viruses as endosymbionts in the pathogenicity of certain protozoa [14-17]. The Leishmania RNA virus (LRV) was first identified in L. guyanensis by Tarr et al. [18]. Based on the complete nucleotide sequence, LRVs are classified into two types: LRV1 (New World) and LRV2 (Old World), with less than 40% similarity between their genomes [19,20]. The presence of LRV2 in Iran has mostly been confirmed in L. major, and rarely in L. infantum and L. tropica [21-23]. The role of LRV in treatment failure, the pathogenesis of Leishmania species, and immune responses has also been investigated [24-26].

Leishmania virulence factors (VFs) play a crucial role in the pathogenesis of the parasite by influencing the host's immune responses [12,27,28]. This study examines the most important of these factors, which contribute to parasite pathogenesis and cytokine regulation.
Heat-shock proteins (HSP), the glycoprotein protease GP63, and mannose phosphate isomerase (MPI) are among the most important pathogenesis factors, playing crucial roles in the maturation of Leishmania spp., macrophage activation, immune modulation and growth of the parasite, respectively [29-31]. HSPs, or stress proteins, are highly evolutionarily conserved proteins that play important roles in vital activities of Leishmania, such as protection against stress and trivalent antimonials (HSP23) and the maintenance of the cell (HSP90) [16]. HSP90 (an HSP83 homolog) is also considered a vital protein in the maturation of the parasite [29].

GP63, a prominent surface protein belonging to the metzincin class, is commonly expressed on the surface of Leishmania parasites and is recognized as the primary membrane surface protein in these parasites. Its proteolytic activity is related to the protection of Leishmania parasites against the phagolysosomes of macrophages in hosts and against digestive enzymes in the vector's midgut [31]. Additionally, GP63 is one of the main factors activated during macrophage infection that modulates immune responses [31,42].

MPI is an enzyme playing a crucial role in the reversible interconversion of fructose-6-phosphate and mannose-6-phosphate, which are essential for the biosynthesis of various glycoconjugates. The absence of MPI has been linked to prolonged growth time in Leishmania spp [14]. Additionally, Leishmania species produce significant amounts of mannose-containing glycolipids and glycoproteins, which contribute to the virulence of Leishmania spp [16].

The interleukins (IL) IL-1β and IL-18 are important pro-inflammatory cytokines during innate immune responses to leishmaniasis, which are mediated by the activation of NOD-like receptors (NLRs) [32,33]. The role of NLRP3 in leishmaniasis seems to be a double-edged sword: although NLRP3 is thought to be protective against leishmaniasis, there is evidence suggesting a synergistic role of this inflammasome in the pathogenesis of the parasite [34].

While several studies have investigated the role of LRV1 in the pathogenesis of Leishmania spp., there are few data on the effects of LRV2 on the pathogenesis of Old World Leishmania species [17,35]. This study aimed to investigate the effects of a number of L. major isolates (three LRV2+ and one LRV2−), collected from CL patients, on the expression of VFs (GP63, HSP83, and MPI) in Leishmania isolates, and of pro-inflammatory biomarkers (NLRP3, IL-18, and IL-1β) in the human monocyte cell line (THP-1).

Sample collection and cultivation

Leishmania isolates were collected from CL patients who were referred to the referral health centers in Golestan province between December 2021 and May 2022. These patients were diagnosed based on clinical characteristics and parasitology methods (including microscopic and culture detection). For parasitological diagnosis, lesions suspected of CL were scraped using a sterile scalpel, and the exudate materials were stained with Giemsa and checked microscopically. The scraped materials were initially cultured on a biphasic medium containing Novy-MacNeal-Nicolle (NNN) medium and RPMI-1640 medium (Gibco, Germany) supplemented with 10% fetal bovine serum (FBS) (Gibco, Germany), with penicillin (100 U/mL) and streptomycin (100 µg/mL) (Sigma-Aldrich, St.
Louis, USA). The culture media were incubated at 25 °C. After 6-8 days, the promastigotes were sub-cultured and incubated at 25 °C in RPMI-1640 medium, supplemented with 10% FBS and 1% penicillin/streptomycin, for 5 days [23].

RNA extraction and cDNA synthesis

Total RNA was extracted from 1 × 10⁶ promastigotes according to the manufacturer's protocol (YTZ, Favorgen, Taiwan). The purity of the extracted RNA was evaluated through agarose gel electrophoresis, based on the appearance of the specific bands. Additionally, the concentration of RNA was determined using a NanoDrop spectrophotometer at 260 nm (Thermo Scientific™ NanoDrop™ One Microvolume UV-Vis) (Suppl Fig. 1). The complementary DNA (cDNA) was synthesized from 100 ng of total RNA using a YTA kit (Favorgen, Taiwan) following the manufacturer's protocol [17]. The amplified cDNA was stored at -20 °C until used for semi-nested PCR.

Semi-nested PCR

The initial PCR, using an outer forward primer LRV F1 (5' TGTAACCCACATAAACAGTGTGC 3') and reverse primer LRV R (5' ATTTCATCCAGCTTGACTGGG 3'), was performed to amplify a 526-bp external partial sequence of the RdRp gene. The semi-nested PCR was performed on the primary PCR products. A pair of primers, forward primer LRV F2 (5' AGGACAATCCAATAGGTCGTGT 3') and reverse primer LRV R (5' ATTTCATCCAGCTTGACTGGG 3'), was used to amplify a 315-bp product of the RdRp gene of LRV2. The PCR program for both steps consisted of 35 cycles of 94 °C for 35 s, 60 °C for 35 s, and 72 °C for 1 min. The final extension consisted of 72 °C for 4 min. The PCR products were analyzed by electrophoresis on a 1.5% agarose gel stained with SYBR safe gel stain (Thermo Fisher Scientific, USA) next to a 100 bp DNA marker (Fermentas, Life Sciences) [17,37].

Macrophage differentiation

THP-1 cells were cultured in 25 cm² culture flasks (SPL Life Science Co, Korea) in a complete medium containing RPMI 1640 with 25 mM HEPES, supplemented with 10% FBS, 1% penicillin (100 U/mL) and streptomycin (100 µg/mL) (Sigma-Aldrich, St. Louis, USA). The cells were incubated at 37 °C with 5% CO₂. The culture medium within the flasks was changed every 2-3 days. To differentiate THP-1 monocytes to macrophages, 5 × 10⁵ cells/mL were transferred to a 6-well cell culture plate (SPL Life Sciences, Korea) containing RPMI-1640 supplemented with 50 ng/mL phorbol myristate acetate (PMA) (Santa Cruz Biotechnology). The cells were incubated at 37 °C and 5% CO₂ for 48 h. Differentiated cells were identified by the presence of pseudopodia and adherence to the bottom of the wells, while non-adherent undifferentiated monocytes were washed away with RPMI 1640 medium [38].

Macrophage infection

Prior to co-incubation, promastigotes of each Leishmania isolate were centrifuged at 2500 rpm for 7 min and the cell pellet was re-suspended in fresh RPMI 1640 medium with 10% FBS. THP-1 macrophages were infected with promastigotes of each Leishmania isolate at a multiplicity of infection (MOI) of 3 and were incubated at 37 °C with 5% CO₂ (a worked example of the MOI arithmetic is given below). The expression analyses of the target genes were performed at zero (6 h after initial infection), 12, 24, 36, and 48 h after co-infection. All experiments were performed in duplicate.
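The number of promastigotes required for the infection step follows directly from the MOI definition (parasites per host cell). The following is a minimal sketch of that arithmetic; the cell density and MOI are taken from the text, while the 2 mL working volume per well of a 6-well plate is an illustrative assumption, not a value from the study.

```python
# Number of promastigotes needed for a given MOI (parasites per macrophage).
# Cell density and MOI are from the text; the 2 mL working volume per well
# of a 6-well plate is an illustrative assumption.
cells_per_ml = 5e5    # THP-1 seeding density (cells/mL), from the text
volume_ml    = 2.0    # assumed working volume per well (mL)
moi          = 3      # multiplicity of infection used in the study

macrophages   = cells_per_ml * volume_ml
promastigotes = moi * macrophages
print(f"{promastigotes:.1e} promastigotes per well")  # 3.0e+06
```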
RNA extraction and cDNA synthesis

Total RNA was extracted as described by the manufacturer (YTA, Favorgen, Taiwan). The zero time-point was defined as 6 h after the initial co-infection. The purity of the extracted RNA was assessed by agarose gel electrophoresis (based on the appearance of the specific bands on the gel), and the concentration of RNA was assessed using a NanoDrop spectrophotometer at 260 nm (Thermo Scientific™ NanoDrop™ One Microvolume UV-Vis). The cDNA was synthesized from 500 ng of total RNA using Superscript II Reverse Transcriptase "cDNA synthesis kit" (SMOBIO) following the manufacturer's instructions. The amplified cDNA was stored at -20 °C until used for real-time quantitative PCR.

A real-time PCR was performed in a 15-µl reaction containing 0.5 µl forward primer, 0.5 µl reverse primer, 7.5 µl 2X SYBR green master mix (Ampliqon, Denmark), 5.5 µl distilled water, and 1 µl cDNA from the baseline pure culture or from post-macrophage co-infection at zero, 12, 24, 36 and 48 h. The reaction was programmed as follows: holding stage at 90 °C for 3 min; cycling stage of 45 cycles of 15 s at 95 °C and 35 s at 60 °C; and melt curve stage at 95 °C for 15 s, 60 °C for 60 s and then 95 °C for 15 s. Results were analyzed using the relative expression software tool (REST; https://www.gene-quantification.de/rest.html). The relative expression value of each gene was determined from the threshold cycle (Ct) value of the target genes, calculated by normalization against the Ct values of the ALT and β-ACTIN constitutive genes (a minimal numerical sketch of this normalization is given after the results below). All experiments were duplicated and data are reported as the mean ± SD (standard deviation). Statistical significance was accepted at the 95% confidence level (P-value < 0.05).

Leishmania characterization and LRV2 detection

In this study, four Leishmania isolates were selected from human CL patients based on the study's objectives. All four isolates were identified as L. major (Suppl Fig. 2). Among these, one isolate (S1−) was LRV2 negative, while three isolates (S2+, S3+, and S4+) were positive for LRV2 using semi-nested PCR (Table 2). These isolates were further utilized for the analysis of VF genes and pro-inflammatory biomarker expression using RT-qPCR methods.

MPI

The expression of the MPI gene was upregulated at the zero time-point, with the highest expression in the S4+ isolate (4; P-value = 0.0001). At the 12 h time-point, significant upregulation was observed in the S3+ (9.4; P-value = 0.0001) and S4+ (3.8; P-value = 0.0001) isolates. During the 24 h period after co-infection, an increase was observed in the S2+, S3+, and S4+ isolates. By the 36 h time-point, the expression of the MPI gene was significantly increased in all isolates. At 48 h, significant upregulation was observed in the S4+ isolate (2.1; P-value = 0.0176) (Fig. 3).

IL-18

The results showed significant changes in IL-18 gene expression at the zero time-point compared to the control. S1− and S2+ were upregulated, but S3+ and S4+ were downregulated. An upregulation was observed at the 12 h time-point in S2+, S3+, and S4+ (2.51; P-value = 0.01), (3.06; P-value = 0.001) and (2.22; P-value = 0.05), respectively. The expression of the IL-18 gene at the 24 and 36 h time-points was upregulated, with the highest expression of the IL-18 gene at 36 h in S4+ (2.61; P-value = 0.01). At the 48 h time-point, all isolates were significantly downregulated compared to the control (Fig. 5).
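As referenced in the methods above, REST computes efficiency-corrected relative expression ratios from Ct values. The sketch below shows the simplest variant of this normalization, the Livak 2^-ΔΔCt method, which assumes ~100% amplification efficiency and normalizes the target gene against the mean Ct of the reference genes; the Ct values used are hypothetical placeholders, not data from this study.

```python
from statistics import mean

def relative_expression(ct_target_treated, ct_target_control,
                        ct_refs_treated, ct_refs_control):
    """Livak 2^-ddCt ratio: target gene normalized against the mean Ct of
    the reference genes; assumes ~100% PCR efficiency for all assays."""
    d_ct_treated = ct_target_treated - mean(ct_refs_treated)
    d_ct_control = ct_target_control - mean(ct_refs_control)
    dd_ct = d_ct_treated - d_ct_control
    return 2 ** (-dd_ct)

# Hypothetical Ct values for one target gene in one isolate at one time
# point, with two reference genes (e.g., ALT and beta-actin) per condition.
ratio = relative_expression(
    ct_target_treated=24.1, ct_target_control=26.3,
    ct_refs_treated=[18.2, 19.0], ct_refs_control=[18.4, 19.1],
)
print(f"fold change vs control: {ratio:.2f}")  # >1 indicates up-regulation
```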
IL-1β

The results of real-time PCR showed significant changes in the IL-1β gene at the zero time-point compared to the control. All isolates were significantly upregulated compared to the control, but S1− revealed higher upregulation compared to the LRV2+ isolates (3.6; P-value = 0.0001). At the 12 h time-point, an upregulation was still observed in S1−, S2+, S3+ and S4+, but S2+ showed higher gene expression compared to the other isolates (3.32; P-value = 0.0001). The expression of the IL-1β gene at the 24 and 36 h time-points was significantly decreased in all isolates, but at 48 h, all isolates were significantly upregulated compared to the other time points (Fig. 6).

Discussion

The pathogenicity of Leishmania parasites is influenced by multiple factors, most importantly the Leishmania species, VF expression, and the host's immune responses [15,16]. Recent findings have indicated that LRVs may intensify the severity of the disease, boost invasion by the Leishmania parasite, and modulate immune responses [15,17,39,40]. These findings suggest an association between LRVs and the clinical outcomes of leishmaniasis. The role of LRV1 in the pathogenesis of New World Leishmania species has been investigated [15,16,41]. However, there are limited data regarding the correlation between LRV2 and Old World Leishmania species [17]. Therefore, in this study, we evaluated the impact of LRV2 on the expression of VFs and pro-inflammatory biomarkers in response to L. major isolates, both LRV2+ and LRV2−, at different time points after co-infection.

GP63 is known to play a critical role in the attachment and entry of Leishmania promastigotes into macrophages. GP63 also participates in other important processes, such as modulation of the host immune responses and degradation of host cell components, further contributing to the pathogenesis of Leishmania infections [42,43]. An in vivo study demonstrated that GP63-deficient L. major significantly reduces the development of CL lesions in mice, suggesting that GP63 does not significantly influence pathogen-induced inflammatory cell recruitment, but may affect inflammatory cell activation and functions [43]. However, conflicting results have been reported about the role of LRV in the expression of GP63. Kariyawasam et al. [16] reported no significant difference in GP63 gene expression between LRV1+ and LRV1− groups. Our data revealed an increasing trend in the expression of GP63 in LRV2+ isolates compared to LRV2−. It is noteworthy that our findings on the expression of GP63 are closely aligned with the results reported by Rahmanipour et al. [17]. Therefore, it seems that the GP63 gene shows higher expression in LRV2+ isolates compared to LRV2−. Nevertheless, further studies are required to validate and confirm these results.

The expression of the HSP83 gene in Leishmania-infected macrophages is upregulated. This upregulation plays a significant role in both parasite survival and replication [21]. It has been shown that a higher concentration of HSP83 is associated with active mucosal and cutaneous ulcers, suggesting a positive correlation between HSP83 and the pathogenicity of Leishmania species [44,45]. In this study, the expression of the HSP83 gene showed an increase at all time-points in LRV2+ compared to LRV2− isolates, although a downregulation was observed at 12 h in LRV2+ isolates. However, the effect of the presence of LRVs on the expression of HSP-related genes is controversial. For example, Rahmanipour et al.
[17] observed higher levels of HSP70 gene expression in the initial hours for the LRV2+ strain, while it was downregulated at the final time-points. In contrast, Kariyawasam et al. [16] reported higher expression of HSP90 in LRV1− strains compared to LRV1+ strains. Therefore, it seems that the Leishmania species or strain and the presence of LRV may affect the expression of HSPs. Generally, HSP83 is thought to be constitutively expressed, which is consistent with our findings [16].

MPI is involved in the recruitment of other VFs, including lipophosphoglycan (LPG) and GP63, and the lack of this protein has been associated with slow growth in Leishmania parasites [27]. Kariyawasam et al. [16] reported higher expression of the MPI gene in LRV1− strains compared to LRV1+ strains. In contrast, the current findings show that the expression of the MPI gene was increased in all LRV2+ isolates compared to LRV2−. Therefore, it may be concluded that LRV2 plays an important role in the upregulation of VF genes. However, to validate this observation, further investigations are required. For this purpose, monitoring of ulcer progression, response to treatment, and the clinical presentation of CL lesions should be considered.

Different results have been reported regarding the role of cytokines and inflammasomes in the pathogenesis of CL [15,34,41,46]. As an early response to Leishmania, activation of inflammasomes, particularly NLRP3, is a vital part of the immune response to the parasite. Upon stimulation of NLRP3, caspase-1 is activated through autoproteolysis, leading to the processing of pro-IL-18 and pro-IL-1β into their active forms [47,48]. Ives et al. [26] suggested that LRV may directly activate inflammatory signaling in macrophages, which leads to the activation of cytokines and chemokines. Therefore, LRV is a potential ligand for the activation of toll-like receptor (TLR) 3 and subsequent activation of the NLRP3 inflammasome [26]. Our results demonstrated that there was no difference in NLRP3 expression between LRV2+ and LRV2− isolates during the initial hours, but at 48 h, LRV2+ isolates showed significantly increased expression levels of the NLRP3 gene compared to LRV2−. de Carvalho et al. [40] reported an inverse association between inflammasome activation and the severity of leishmaniasis, supporting a protective role of the inflammasome during Leishmania infection. Therefore, it can be concluded that the presence of LRV1 dampens NLRP3 activation to favor infection and pathogenesis of the Leishmania parasite. Nevertheless, the activation of NLRP3 plays a crucial role in determining the outcome of leishmaniasis [34]. Hartley et al. [49] reported no significant difference in NLRP3 expression between LRV1+ and LRV1− in L. guyanensis. They therefore demonstrated that L. guyanensis evades inflammasome activation, regardless of the presence of LRV1. Indeed, there is limited understanding of the signaling pathways that trigger NLRP3 activation in response to Leishmania infection. Further research is needed to elucidate the specific mechanisms through which Leishmania parasites induce NLRP3 activation and the subsequent inflammatory responses [39].
The role of IL-1β and IL-18 in Leishmania infection has been the subject of numerous studies [39,46,48]. It has been reported that IL-1β can modulate the immune responses, while IL-18 shifts the T-cell activation pathway towards Th2; however, both cytokines contribute to the progression of the disease [50,51]. Notably, IL-1β has been identified as a significant signaling factor for host resistance against infection, as this cytokine transmits signals through IL-1R and myeloid differentiation primary response protein 88 (MyD88), leading to the induction of NOS2-mediated nitric oxide (NO) production. In addition, it has been suggested that IL-1β plays a role in increasing NO production, leading to reduced parasite proliferation and enhanced resistance to Leishmania infection. In our study, the expression of the IL-1β gene was higher in the LRV2− isolate than in the LRV2+ isolates at the initial and final hours.

Fig. 4 The expression of the NLRP3 gene in three LRV2+ isolates (S2+, S3+ and S4+) and the LRV2− isolate (S1−) compared to control (uninfected macrophage) at different times after co-infection. Data analysis was done using two-way ANOVA for repeated measurements followed by the Tukey test. Bars represent mean ± SD. The * symbol represents a meaningful difference between groups (*P ≤ 0.05, **P ≤ 0.01, ***P ≤ 0.001, ****P ≤ 0.0001).

In line with our results, Kariyawasam et al. [15] and Carvalho et al. [40] reported similar findings, with higher expression of the IL-1β gene in LRV1− compared to LRV1+. Hence, it appears that the presence of LRV plays a significant role in suppressing the activity of the immune system and the expression of pro-inflammatory cytokines. In [17], the expression of IL-1β in LRV2+ isolates was reported to be higher than in the LRV2− isolate during the early hours but, in contrast to our findings, the expression of the IL-1β gene was lower in the LRV2− isolate compared to the LRV2+ isolate at the final hours.

IL-18 is a pro-inflammatory cytokine that plays a protective role against pathogenesis factors of the Leishmania parasite and contributes to innate and adaptive immunity. Evidence suggests that IL-18 plays a critical role in modulating T cell responses during L. major infection [51-53]. Some studies indicate a positive role for IL-18 in promoting Th1 responses and resistance against infection by Leishmania species, while conflicting results show that IL-18 may enhance Th2-biased responses and cause susceptibility to the parasites [51,52]. It has been suggested that IL-18 may induce the development of Th1 and natural killer (NK) cells and the production of IFNγ via overexpression of IL-18R on Th1 and NK cells [54,55]. In addition, IL-18 induces an IFNγ-independent immunity against Leishmania parasites [56]. In contrast, IL-18 seems to promote the production and release of Th2 cytokines such as IL-4 and IL-13 [57-59], which are protective against L. donovani, while inducing susceptibility to L. major [58]. However, the role of IL-18 in Leishmania infections remains unclear and depends on the Leishmania species and host genetics. In our study, we observed downregulation of the IL-18 gene in the LRV2+ isolates compared to the LRV2− isolate in the early and middle hours; however, there was an upregulation during the final hours. It remains necessary to fully elucidate the mechanisms behind the IL-18-mediated activation of the host's immune responses in leishmaniasis.

Conclusion

Our observations indicate that the presence of LRV2+ in L.
major, in comparison to LRV2−, leads to an increase in the expression of VFs (the GP63, HSP83, and MPI genes), while there is a declining trend in the expression of pro-inflammatory biomarkers (the NLRP3, IL-18, and IL-1β genes). However, it is crucial to take into account the influence of various factors, including the host immune response, different Leishmania strains, the presence of VFs, and the expression of cytokines, in addition to the LRV status.

Collectively, the pathogenesis of Leishmania parasites is highly complex, particularly when attempting to establish a link between pathogenesis and Leishmania viruses. Understanding the interplay between the parasite, the virus, and the host immune responses is a critical challenge, and further investigations and comprehensive studies are required to unravel the intricate mechanisms involved in the pathogenesis of Leishmania parasites and the potential influence of LRVs.

Table 1 Primer sequences for parasite VF genes and pro-inflammatory biomarkers

Table 2 Characteristics of isolates from CL patients
Comparison of Oleo- vs Petro-Sourcing of Fatty Alcohols via Cradle-to-Gate Life Cycle Assessment

Alcohol ethoxylate surfactants are produced via ethoxylation of fatty alcohol (FA) with ethylene oxide. The source of FA can be either palm kernel oil (PKO) or petrochemicals. The study aimed to compare the potential environmental impacts of PKO-derived FA (PKO-FA) and petrochemical-derived FA (petro-FA). A cradle-to-gate life cycle assessment has been performed for this purpose because it enables an understanding of the impacts across the life cycle and across impact categories. The results show that petro-FA has overall lower average greenhouse gas (GHG) emissions (~2.97 kg CO2e) compared to PKO-FA (~5.27 kg CO2e). (1) The practices in land use change for palm plantations, (2) end-of-life treatment of palm oil mill wastewater effluent and (3) end-of-life treatment of empty fruit bunches are the three determining factors for the environmental impacts of PKO-FA. For petro-FA, n-olefin production, ethylene production and thermal energy production are the main factors. We found that judicious decisions on land use change, effluent treatment and solid waste treatment are key to making PKO-FA environmentally sustainable. The sensitivity results show a broad distribution for PKO-FA due to varying practices in palm cultivation. PKO-FA has higher impacts on average for 12 out of the 18 impact categories evaluated. For the base case, when the uncertainty and sensitivity analysis results are accounted for, the study finds that marine eutrophication, agricultural land occupation, natural land occupation, fossil depletion, particulate matter formation, and water depletion are affected by the sourcing decision. From an environmental impact perspective, the sourcing of FA involves trade-offs and depends on the specific practices through the PKO life cycle. Electronic supplementary material The online version of this article (doi:10.1007/s11743-016-1867-y) contains supplementary material, which is available to authorized users.

Introduction

Non-ionic surfactants are used in many products such as "detergents, cleaners, degreasers, dry cleaning aids, petroleum dispersants, emulsifiers, wetting agents, adhesives, agrochemicals, including indoor pesticides, cosmetics, paper and textile processing formulations, prewash spotters, metalworking fluids, oilfield chemicals, paints and coatings, and dust control agents" [1]. Nonylphenol ethoxylates (NPE) are popular non-ionic surfactants "due to their effectiveness, economy and ease of handling and formulating" [2]. However, NPE are highly toxic to aquatic organisms [1, 2] and degrade into nonylphenol (NP), which "is persistent in the aquatic environment, moderately bioaccumulative, and extremely toxic to aquatic organisms" [1]. Due to these concerns, the US Environmental Protection Agency (EPA) and detergent manufacturers cooperated to eliminate their use in household laundry detergents [3]. Also, the EPA has laid out an action plan to address their widespread use in large quantities in industrial laundry detergents under the Toxic Substances Control Act [3]. Due to the higher biodegradability and unobjectionable aquatic toxicity profiles of the degradation products, alcohol ethoxylates (AE) are used to replace NPE [2]. AE are also non-ionic surfactants, produced via ethoxylation of fatty alcohol (FA) with ethylene oxide (EO). This involves condensation of polyethylene glycol ether groups on FA chains.
Depending on the FA structure and the number of polyether units, the physical and chemical properties of AE vary [4]. When the chain length of FA is in the C9-C16 range, the properties are suitable for detergent production [4] for industrial and institutional cleaning products, including hard surface cleaners and laundry detergents. In addition to these product stewardship practices, sustainability-minded companies are also evaluating the environmental impact of their operations, as well as the burdens from the other phases of the product life cycle, including raw material sourcing. With respect to raw material sourcing, a bio-based value chain is often assumed to have less environmental impact, at least from a greenhouse gas (GHG) emissions perspective. For AE producers, the source of FA can be either bio-based oleochemicals (oleo-FA) or petrochemicals (petro-FA). AE with like structures (in linearity and chain length) are readily biodegradable independent of the alcohol feedstock, and their aquatic toxicities are a function of FA chain length, branching and amount of ethoxylation [5]. These similarities in environmental performance at the product's use and end-of-life phases do not capture differences in environmental impacts during raw material production. A detailed understanding of the raw material requirements, energy consumption, waste generation and disposal, and emissions, along with the resulting impacts on the environment, is important for sustainability-minded AE consumers and other supply chain participants. Such an understanding can be gained through a life cycle assessment (LCA) approach, as it allows incorporation of all relevant life cycle stages along with diverse types of environmental impacts. LCA is the comprehensive evaluation of a process in a cradle-to-grave, cradle-to-gate or gate-to-gate fashion to understand the environmental aspects of a product or a service. An LCA study involves understanding the assessment goal and scope; estimating the amount of raw materials and energy input, waste generated, and emissions from the process for all the relevant life cycle stages (Life Cycle Inventory, LCI); translating LCI results to understand and evaluate the potential environmental impacts (Life Cycle Impact Assessment, LCIA); and formulating conclusions and recommendations based on the results. LCA has been used since the 1960s and its application to surfactants started with the development of LCI [6-8]. These early studies compiled data on the natural resources consumed, wastes generated, and emissions for then-current industry practices for AE production from both petrochemical and oleochemical feedstocks. However, the impacts from land transformation for palm plantations were not covered, and the scope was limited to LCI due to a lack of agreed-upon LCIA methods. The results from these LCI studies did not find any scientific basis for any single feedstock source to be environmentally superior [6,8], as "benefits in one direction (e.g., renewability) are offset by liabilities in another (intensive land-use requirements)" [6]. LCA studies for detergents since then have been based on the results of these earlier studies and cover products with AE and FA as ingredients, such as that by Kapur et al. 2012 [9]. In 2007, the 'ecoinvent data v2.0' project [4] updated the LCI results from the earlier studies with land use, transportation and infrastructure information. However, again, the LCIA and conclusions steps were not performed.
LCA results for the production of palm-derived oil, which is used for FA production, have been published [10-13]. The scopes of these studies vary from evaluating the impacts of oil from palm fruits and/or palm kernels [11,12] to evaluating the various practices for palm oil mill operations [10,13]. Overall, there has been no LCA study with LCIA results evaluating the impacts of feedstocks for FA production. This study aims to contribute towards closing this gap and presents findings for understanding the relative environmental performance of sourcing FA from petrochemical and palm kernel oil (PKO) feedstocks. These findings are expected to contribute to the discussion towards such an understanding rather than to be a final conclusion as such.

Experimental Methods

While LCA has been around since the 1960s, it was not widely adopted until the early 1990s. Currently, LCA is guided by international standards (ISO 14040 to ISO 14044), which have proposed the framework for conducting an LCA study [14]. As per this framework, LCA involves four iterative steps: (1) goal and scope definition, (2) life cycle inventory analysis (LCI), (3) life cycle impact assessment (LCIA) and (4) interpretation. The intended and expected applications of the results help define the goal and scope. The results and findings of the LCI are checked against the goal and scope to decide whether the goal and scope should be modified or additional effort should be spent on the LCI step. Similarly, the LCIA results and findings are evaluated against the previous two steps. The results from the LCI and LCIA steps are interpreted with respect to the goal and scope and for robustness. The results of this fourth step are evaluated against the other three steps for any modification or additional effort. This standard methodology was used for this study, and detailed descriptions can be found in ISO 14040 through ISO 14044. The goal of this study was to create an understanding of the relative environmental impacts of selecting between petro-FA and PKO-FA for use in AE production. A comparative LCA study was performed because it allows simplification of the scope to the dissimilar parts of each process. FA are predominantly linear and monohydric aliphatic alcohols with chain lengths between C6 and C22 [4]. Despite the differences in FA sourcing, "the chemical and physical properties of the final product [AE] are similar for all three pathways [petrochemical, PKO, coconut oil], provided their carbon chain length and ethoxylate distribution is similar" [4]. However, depending on the catalyst and olefins used, not all petro-FA produced via hydroformylation technology compete with PKO-FA [15]. The scope of this study has been limited to FA that can be used interchangeably irrespective of feedstock. Once FA is produced and delivered, the environmental impacts are similar irrespective of the FA sourcing decision. Likewise, FA sourcing decisions do not impact AE use and AE end-of-life treatment. Hence, a cradle-to-gate type boundary has been selected for this study (see Fig. 1), and all the results have been converted to one kg of FA delivered to the AE production facility. In LCA terms, the functional unit for this study is one kg of FA delivered to an AE production facility in the Gulf Coast region of the United States (US). The study has been performed through modeling in the SimaPro 8.0 software for LCA studies. The modeling in LCA requires input of the quantities of raw materials and energy required, waste generated and emissions from the FA production process (a minimal numerical sketch of the LCIA characterization step is given below).
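The LCIA step described above reduces, for each impact category, to multiplying every inventory flow by a characterization factor and summing. The sketch below illustrates this for a GHG (GWP100) score per functional unit (one kg FA delivered); the flow amounts are hypothetical placeholders, and the CH4 and N2O factors are IPCC AR5 GWP100 values used purely for illustration, not factors or results from this study.

```python
# Minimal LCIA characterization sketch: inventory flows per functional unit
# (1 kg FA delivered) are multiplied by characterization factors and summed.
# Flow amounts are hypothetical; GWP100 factors are IPCC AR5 values.
inventory = {        # kg emitted per kg FA (illustrative placeholders)
    "CO2": 2.5,
    "CH4": 0.01,
    "N2O": 0.0005,
}
gwp100 = {           # kg CO2e per kg of emission (IPCC AR5 GWP100)
    "CO2": 1.0,
    "CH4": 28.0,
    "N2O": 265.0,
}
ghg_score = sum(amount * gwp100[flow] for flow, amount in inventory.items())
print(f"GWP100 score: {ghg_score:.2f} kg CO2e per kg FA delivered")
```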
Similarly, the production and distribution of these raw materials and their utilization generate environmental impacts. For PKO-FA, impacts are also generated from the land transformation for palm plantations and from the waste generated during palm oil mill operation. For all these processes and impacts, including the production and delivery of FA, the data used for this study are secondary and literature data.

Petro-FA

The petro-FA can be produced either via the Ziegler process, using the catalyst triethylaluminium for alkylation of ethylene, or via the Oxo process, using syngas for hydroformylation of long-chain olefins [4]. The Ziegler process involves hydrogenation, ethylation, growth reaction, oxidation and hydrolysis of ethylene over aluminum powder in the presence of a hydrocarbon solvent. While the solvent is recovered, aluminum exits the system as the co-product alumina hydrate. Alkanes and oxygen-containing compounds are formed as byproducts [16]. The Oxo process involves catalytic hydroformylation of olefins with synthesis gas, catalyst recovery, catalytic hydrogenation of the intermediate aldehydes and alcohol distillation. While the catalyst consumption is minimal here, there are isomerization byproducts formed during hydroformylation, which are taken out during distillation as bottom heavies and overhead lights [16]. The EcoInvent 3.0 (EI3.0) dataset for petro-FA production ("Fatty alcohol {RoW}| production, petrochemical | Alloc Def, U") includes inputs and emissions reflecting a mix of 82 % of fatty alcohols produced with the Oxo process and 18 % produced by the Ziegler process. This dataset has taken the material inputs (ethylene, n-olefin, natural gas and crude oil), energy inputs (heat and electricity), solid waste generation, emissions to air, emissions to water, and impacts from transportation from literature sources, while water consumption and infrastructure were estimated. The disposal of solid waste is included via the process for municipal solid waste incineration, and the effluent is captured through emissions to water. Further, it must be noted that this 'gate-to-gate' process also includes the impacts from some upstream processes (see the Petro-FA Upstream section). Table 1 summarizes the gate-to-gate LCI for petro-FA production. While this EI3.0 petro-FA process is fairly comprehensive, the dataset represents mid-1990s technology as practiced in Europe, applied to the "Rest of World" (RoW) region. The transportation impacts are based on average distances and commodity flow surveys. It is unclear how the various byproducts and waste streams are handled. In order to address these concerns, the original dataset from EI3.0 has been modified as per the following discussions.

Petro-FA Upstream

Since the dataset is for a region other than the US, there could be an effect on the results due to potential differences in the production process, in the electricity grid mix and heat generation mix for FA production, in transportation and so on. The dataset for petro-FA in EI3.0 for the RoW region was generated via modification of that for the Europe region by updating the electricity grid mixes, transportation impacts and heat generation impacts. The dataset description is said to be valid from 1995 until 2013. The approach used by EI3.0 has been adapted to obtain a dataset for the US Gulf Coast region. The electricity grid mix was updated to the Southeastern Electric Reliability Council (SERC).
The heat generation process used in the petro-FA dataset and in the raw material n-olefin production dataset was changed to "Heat, central or small-scale, natural gas {SERC}| heat production, natural gas, at boiler condensing modulating <100 kW | Alloc Def, U". This dataset for heat was derived from that for Switzerland ("Heat, central or small-scale, natural gas {CH}| heat production, natural gas, at boiler condensing modulating <100 kW | Alloc Def, U") provided by SimaPro 8.0 by updating the natural gas source to be from North America, the emissions profile for CO2, CO, CH4, N2O, NOx, SO2, lead, mercury and PM10 as per NREL data [17], and the electricity to the SERC grid.

Fig. 1 Major process steps for the various fatty alcohol production pathways. Adapted from [4]

Based on the AE production facility location, it is expected that the natural gas produced in the US is delivered via pipeline to the FA manufacturing facility in the Gulf Coast region of the US for petro-FA. This petro-FA is expected to be delivered via truck to the AE manufacturing facility. The transportation distance from the FA production facility to the AE production facility is estimated to be ~60 km for the respective plants located in the US Gulf Coast region. The transportation is expected to be entirely via diesel combination trucks. The crude oil and natural gas resources require some land transformation and occupation for the drilling and other auxiliary processes. Further, the chemical plants for the processing of these and the intermediates also require land use. For the latter, the dataset "Chemical factory, organics {GLO}| market for | Alloc Def, U" has been included by the datasets in EI3.0. For the former, the impacts are included in the datasets as well [4]. However, the impacts from the process steps are not split up due to the format of data availability. Hence, the impacts from land use change and the waste from drilling operations are accounted for in this process rather than via a separate upstream process. Overall, the cradle-to-gate impacts are included.

Petro-FA Catalysts

Both the Ziegler and Oxo routes use catalysts. The EI3.0 process for petro-FA does not have aluminum powder and a hydrocarbon solvent as inputs, or alumina hydrate as a co-product, as applicable to the Ziegler process. Alumina hydrate has value in catalytic processes, in ceramics and in other industrial applications. Since the solvent is recovered and recycled, its exclusion is reasonable. For aluminum powder and alumina hydrate, there is no indication that the corresponding impacts are included. Hence, a separate dataset was created and included to account for the upstream (raw material to gate) impacts. SimaPro 8.0 does not have a dataset for the aluminum powder used in the Ziegler process. This dataset, hence, was modeled with the "Aluminium, primary, ingot {GLO}| market for | Alloc Def, U" EI3.0 dataset as a starting point. Aluminum powder is expected to be produced via gas atomization of molten ingot. The energy needed for melting (Hmelt) is the primary consideration here and was estimated in J/g as per the following equation from [18]:

Hmelt = Cs (Tm − T0) + Hf + Cl (Tp − Tm)

where Cs is the weight-specific heat for solid aluminum (0.91 J/g/°C), Tm is the melting temperature of Al (600 °C), T0 is the starting temperature (25 °C assumed), Hf is the heat of fusion for Al (10,580 J/mol [18], converted to a per-gram basis), Cl is the weight-specific heat of molten Al (1.086 J/g/°C), and Tp is the pouring temperature (1700 °C [19]). A 120 % multiplication factor was applied as per [18] to account for energy losses (see the numerical sketch below).
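The melting-energy estimate above can be reproduced numerically as follows. The parameters are those quoted in the text; since the text gives the heat of fusion in J/mol, converting it to J/g via the molar mass of aluminum (26.98 g/mol) is our assumption.

```python
# Melting-energy estimate for aluminum powder production, using the
# parameters quoted in the text. The heat of fusion is given in J/mol;
# conversion to J/g via M_Al = 26.98 g/mol is an assumption.
C_S  = 0.91              # J/(g*degC), specific heat of solid Al
C_L  = 1.086             # J/(g*degC), specific heat of molten Al
T_0  = 25.0              # degC, starting temperature
T_M  = 600.0             # degC, melting temperature quoted in the text
T_P  = 1700.0            # degC, pouring temperature
H_F  = 10_580 / 26.98    # J/g, heat of fusion (converted from J/mol)
LOSS = 1.2               # 120% multiplication factor for energy losses

h_melt = LOSS * (C_S * (T_M - T_0) + H_F + C_L * (T_P - T_M))
print(f"H_melt ~ {h_melt:.0f} J/g of Al")  # ~2530 J/g
```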
This melting energy is estimated to be about 90 % of the total energy need, as additional energy is needed in the holding furnace [49]. Argon gas is expected to be used here. The volume of argon for the atomization of Ti6Al4V from the literature [20] was adjusted for Al atomization [18]. The cooling water consumption was estimated as per the process specification for an "Industrial Metal Powder Aluminum Powder Production Line" [19]. As per the Ziegler reaction stoichiometry, 1 mol of Al yields 3 mol of FA, translating into 0.05 kg Al for 1 kg FA. Similarly, one mole of alumina hydrate is produced per mole of Al, translating into 0.11 kg alumina hydrate per kg FA produced. The credit from the alumina co-product is as per the EI3.0 dataset "Aluminium oxide {GLO}| market for | Alloc Def, U". For the Oxo process, cobalt carbonyl (HCo(CO)4) catalysts are used at 0.1-1.0 wt% concentration. The loss of catalyst is estimated to be <1 % [23]. This translates into 0.343-3.43 mg of Co needed per kg of product. The impacts for the catalyst were accounted for through the "Cobalt {GLO}| market for | Alloc Def, U" EI3.0 dataset. Petro-FA Process Technology The EI3.0 dataset for petro-FA is based on 18 % Ziegler-route production and 82 % Oxo-route production as per mid-1990s data. The current validity of this split was confirmed. In 2000, about 1.68 million metric tonnes of fatty alcohol were produced, with 40 % being petro-FA [24]. The petro-FA production capacities in 2000 were estimated at 0.273 million tonnes for Shell's Geismar, LA plant [24], 0.17 million tonnes for BASF's oxo-alcohol plant in Ludwigshafen [24], 0.10 million tonnes of increased capacity for Sasol's oxo-alcohols [24] and 0.06 million tonnes for BP [25]. These translate into 0.603 million tonnes of oxo-alcohol capacity, which would account for 90 % of the petro-FA produced in 2000. In 2010, 90 % capacity utilization was estimated [26]. Considering the new capacity installed between 2000 and 2005 (see the discussion for 2005 below), this utilization rate should be reasonable, and at such a utilization rate, the accounted oxo-alcohols formed about 81 % of petro-FA in 2000. It must be noted that the base oxo-chemical capacity of Sasol is not accounted for here due to lack of information. So, the split between the Oxo route and the Ziegler route holds till 2000, and any small perturbation in this split does not significantly change the overall environmental impact of the petro route. In 2005, 2.2-2.5 million tonnes of fatty alcohol production capacity was estimated, with 50 % being petro-FA [26]. The petro-FA production capacities in 2005 were estimated at 0.49 million tonnes for Shell [25,27], 0.31 million tonnes for BASF [27], 0.25 million tonnes for Sasol's oxo-alcohols [28,29] and 0.0 million tonnes for BP [25]. These translate into 1.05 million tonnes of oxo-alcohol capacity, which would form 86 % of petro-FA capacity in 2005. Similar to 2000, the split between the Oxo route and the Ziegler route holds till 2005. In 2012, the total fatty alcohol capacity was estimated to be 3.35 million tonnes, with all of the 0.8 million tonnes of capacity increase being for oleo-FA [26]. Again, the split between the Oxo route and the Ziegler route holds till 2012.
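Before moving on to byproducts, the catalyst mass factors quoted earlier in this section can be reproduced by simple stoichiometry. In the sketch below, the identities of the fatty alcohol (dodecanol) and the alumina hydrate (AlOOH) are our assumptions, chosen because they reproduce the quoted 0.05 and 0.11 kg/kg factors; the source does not name them.

```python
# Sketch: reproducing the stoichiometric factors quoted in the text.
# Assumed species (ours): fatty alcohol = dodecanol (C12H26O), hydrate = AlOOH.

M_AL, M_FA, M_ALOOH = 26.98, 186.33, 59.99   # g/mol
M_CO, M_HCO_CO4 = 58.93, 171.98              # g/mol, Co and HCo(CO)4

# Ziegler route: 1 mol Al -> 3 mol FA, 1 mol alumina hydrate per mol Al.
al_per_kg_fa = M_AL / (3 * M_FA)          # ~0.048 kg Al/kg FA (quoted: 0.05)
hydrate_per_kg_fa = M_ALOOH / (3 * M_FA)  # ~0.11 kg hydrate/kg FA (quoted: 0.11)

# Oxo route: Co mass fraction of the HCo(CO)4 catalyst. The quoted
# 0.343-3.43 mg Co/kg range equals this fraction scaled by the 0.1-1.0 wt%
# catalyst concentration and the (assumed) fractional catalyst loss.
co_fraction = M_CO / M_HCO_CO4            # ~0.343

print(f"Al: {al_per_kg_fa:.3f} kg/kg FA, hydrate: {hydrate_per_kg_fa:.3f} kg/kg FA, "
      f"Co mass fraction: {co_fraction:.3f}")
```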
Petro-FA Process Byproducts Both the Ziegler route and the Oxo route generate byproducts. With the Oxo route, ~5 wt% of the olefin feed is converted to byproducts [22], 5-10 wt% of the olefins remains unreacted [30,31] and ~2 mol% of the aldehydes remains unreacted during hydrogenation [32]. These unreacted materials and byproducts are distilled out, with the unreacted olefins recycled to the hydroformylation stage and the unreacted aldehydes to the hydrogenation stage [33]. The light ends are used either as high-grade fuel or as a blend stream for gasoline [33,34]. The heavy ends are used either as fuel or as solvents [31,33]. It is difficult to tell whether the existing EI3.0 dataset for petro-FA has assigned the byproducts as fuel substitutes, co-products, a mixture, or not at all. Considering the small amounts in question, the choice is not expected to affect the final conclusion within the scope of this study. With the Ziegler route, besides the alumina hydrate discussed in the catalysts section, a small percentage of the olefins form alkanes and oxygen-containing compounds as byproducts [16]. During the fractionation of the crude alcohol formed, these byproducts could either be separated as waste or become part of certain blends. Considering the small amount in question, the choice is not expected to affect the final conclusion within the scope of this study. Further, the EI3.0 dataset for petro-FA does account for some wastes that are incinerated. PKO-FA The oleo-FA can be produced either via the fatty acid splitting route ("Lurgi direct hydrogenation" of fatty acids obtained by splitting triglycerides from crude vegetable oil) or the transesterification route (hydrogenation of methyl esters obtained by transesterification of crude or refined vegetable oil) [4]. In this study, the scope for the raw materials is limited to PKO, and the production routes are limited to fatty acid splitting, esterification of refined PKO and esterification of crude PKO. In 2005, ~44 % of global palm fruit was produced in Malaysia (MY) [11]. Hence, PKO is expected to be produced in Malaysia and delivered via truck to the FA manufacturing facility in Malaysia. The resulting PKO-FA is then delivered via a truck-ship-truck combination to the AE manufacturing facility in the US. The EI3.0 dataset for PKO-FA production ("Fatty alcohol {RoW}| production, from palm kernel oil | Alloc Def, U") includes inputs and emissions reflecting a technology mix of 27 % produced from fatty acid splitting, 56 % produced from methyl ester on the basis of crude vegetable oil and 17 % from methyl ester out of refined oil. This dataset includes the material and energy inputs (methanol, palm kernel oil, natural gas and hydrogen), emissions to air and water, transportation and the production of waste. Both processes (fatty acid splitting and transesterification) yield ~40 wt% of PKO as glycerin. Fatty acid splitting also yields some short-chain (C8-C10) fatty alcohols, which can be estimated at ~5 wt% based on the average fatty acid composition of PKO [35]. For the transesterification process, when the PKO is refined first, ~5 wt% of the PKO results in fatty acid distillate [36]. All these byproducts have value. The mass-based allocations made in the EI3.0 datasets for these multi-output processes were kept. Further, it must be noted that this 'gate-to-gate' process also includes the impacts from some upstream processes (see the PKO-FA Upstream section). Table 1 summarizes the gate-to-gate LCI for PKO-FA production. While this EI3.0 PKO-FA process is fairly comprehensive, the dataset is for the "Rest of World" (RoW) region with palm kernel oil sourced globally. For this study, the PKO sourcing region of interest is Malaysia. Similar to the petro-FA dataset in EI3.0, the transportation impacts are based on average distances and commodity flow surveys.
In order to address these concerns, the original dataset from EI3.0 has been modified as per the following discussions. PKO-FA Upstream Datasets The dataset for PKO-FA in EI3.0 for the RoW region was generated via modification of the one for Europe by updating the electricity grid mixes, transportation impacts and heat generation impacts. This dataset is said to be valid from 2011 till 2013 as per the dataset description. The approach used by EI3.0 has been adapted here to obtain a dataset for Malaysia. Since FA is produced at a facility in Malaysia, the electricity grid mix in the EI3.0 dataset for PKO-FA is updated from the global electricity mix to "Electricity, medium voltage {MY}| market for | Alloc Def, U". The heat generation process used in the PKO-FA dataset was changed to "Heat, central or small-scale, natural gas {MY}| heat production, natural gas, at boiler condensing modulating <100 kW | Alloc Def, U". This dataset for heat was derived from that for Switzerland ("Heat, central or small-scale, natural gas {CH}| heat production, natural gas, at boiler condensing modulating <100 kW | Alloc Def, U") provided by SimaPro 8.0 by updating the natural gas source to be from "Rest of World" (due to the lack of a dataset for natural gas from MY) and the electricity to the MY grid. The transportation distance from the FA production facility to the AE production facility is estimated to be ~20,000 km for the transoceanic shipment from Malaysia to the US Gulf Coast via Panama. Also, truck transportation of ~60 km is expected between the ports and the production facilities. Here, the transportation impacts for the various feedstock materials and waste are considered in terms of the distance to be traveled, the amount to be transported, and the mode of transportation. The capital goods and infrastructure needed for production and transportation are only considered when already covered in EI3.0 and the other datasets used in SimaPro 8.0. For methanol production related impacts, the natural gas resources (from which methanol is derived) were used. Such natural gas resources require some land transformation and occupation for the drilling and other auxiliary processes. Further, the chemical plants for the processing of these resources and the intermediates also require land use. For the latter, the dataset "Chemical factory, organics {GLO}| market for | Alloc Def, U" is included by datasets in EI3.0. For the former, the impacts are included in the datasets as well [4]. However, the impacts from the process steps are not split up due to the format of data availability. Hence, the impacts from land use change and the waste from drilling operations are accounted for in this process rather than via a separate upstream process. Overall, the cradle-to-gate impacts are included. In the existing EI3.0 dataset for PKO-FA, the raw material production datasets are for the global region. The PKO production dataset was updated so that 100 % of the PKO was sourced from Malaysia. PKO is a co-product of palm oil production from the palm fruits produced as 10-40 kg Fresh Fruit Bunches (FFB) on the palm trees [11]. The growing of these trees (and, hence, the production of palm fruits) requires the transformation of land for palm plantations initially, and then the occupation of this land [11]. The palm plantations yield on average ~25 tonnes FFB per hectare [11]. FFB consists of ~22 wt% empty fruit bunches (EFB), ~65 wt% fleshy mesocarp (pulp) and ~13 wt% endosperm (seed) in the fruit (the palm kernel).
The mesocarp provides Palm Oil (PO), while the seed provides Palm Kernel Oil (PKO). The yields are ~22 wt% of FFB as PO, ~2.7 wt% as PKO and ~3.3 wt% as Palm Kernel Extract (PKE). The kernel is protected by a wooden endocarp, or Palm Kernel Shell (PKS). The solid waste left after the extraction of the oils, including the fibers in the pulp (~15 wt%), PKS (~7 wt%) and EFB, could be re-used as a fuel substitute in energy generation and as a fertilizer substitute via mulching. There is also liquid waste generated from the wastewater produced during processing in the oil mills. This wastewater effluent, termed Palm Oil Mill Effluent (POME), contains hydrocarbon contents (water and ~28 wt% of FFB) that could be repurposed as a fertilizer substitute or recovered as a fuel substitute. There are also air emissions due to fuel combustion for energy generation. These various aspects of PKO can be seen in Fig. 2. The economic allocation with an allocation factor of 17.3 % to PKO, as used in the EI3.0 dataset, was used to allocate the impacts and credits between PO and PKO. Even though the allocation values are based on 2006 prices, they were found to be valid based on the prices in 2014 [37,38]. The EI3.0 dataset for palm plantations accounts for the benefits/impacts from growing palm trees, such as the uptake of CO2 from air. The EI3.0 dataset for palm kernel oil production accounts for the end-of-life treatments of the EFB, PKS and PKF via their combustion to supply steam and electricity for the oil mills. The literature survey indicates that only PKS and PKF are used as fuel [39] and that they provide more than sufficient energy for the oil mills [39]. EFB has been cited as "a resource which has huge potential to be used for power generation, currently not being utilized" [39]. The treatment of POME in EI3.0 is as standard wastewater. Recent publications [40] cited methane leaks from palm oil wastewater as a climate concern. In order to account for these differences, the existing EI3.0 dataset for palm kernel oil was updated and new datasets were created to capture these differences in waste treatment. The screening-level analysis suggested that the PKO raw material is the single largest GHG contributor for PKO-FA, accounting for the differences in GHG emissions compared to petro-FA. Hence, the PKO production processes (including palm plantations and oil mills) were evaluated in detail as discussed below. POME Treatment Options The end-of-life treatment for the POME could be discharge into a river without any treatment, after anaerobic digestion of the organics with venting of the methane thus produced, after anaerobic digestion of the organics with flaring of the methane produced, or after anaerobic digestion of the organics with recovery of the methane for energy. The end-of-life treatment for the POME is expected to affect the pollution from the discharge of organics, the generation of methane and CO2 from the organics discharged, and the discharge of nitrogen compounds. The organics emissions were estimated as per

E_organics = COD_POME,    (2)

where COD_POME is the chemical oxygen demand generated from the discharge of organics in POME. The methane emissions were estimated as per

E_CH4 = COD_POME × B_0 × CF_CH4,    (3)

where B_0 is the methane-producing capacity of the organics discharged and CF_CH4 is the correction factor to the methane production capacity based on the conditions into which the organics are discharged. The nitrogen emissions were estimated as per

E_N = Ncontent_POME,    (4)

where Ncontent_POME is the nitrogen content discharged to the river, depending on whether the POME is treated or not.
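Eqs. (2)-(4) amount to a small bookkeeping exercise per treatment scenario. The sketch below wires Eq. (3) into the scenario comparison; every parameter value in it is an illustrative placeholder, not a value from Table S1 (referenced in the next paragraph), and the flare/heat-recovery scenarios would additionally carry CO2 emissions and avoided-heat credits that are not modeled here.

```python
# Sketch of the POME methane accounting, Eq. (3), per end-of-life scenario.
# B0, the correction factors, the vented shares and the COD load are all
# hypothetical placeholders; swap in the Table S1 values to match the study.

B0 = 0.25        # kg CH4 per kg COD, methane-producing capacity (assumed)
GWP_CH4 = 25.0   # kg CO2e per kg CH4, as used in this study

scenarios = {  # hypothetical CF_CH4 and share of produced CH4 reaching air
    "untreated discharge":       {"cf": 0.1, "vented_share": 1.00},
    "anaerobic, biogas vented":  {"cf": 0.8, "vented_share": 1.00},
    "anaerobic, biogas flared":  {"cf": 0.8, "vented_share": 0.02},
    "anaerobic, biogas to heat": {"cf": 0.8, "vented_share": 0.02},
}

cod_pome = 0.05  # kg COD per kg FA delivered (placeholder inventory value)

for name, p in scenarios.items():
    ch4 = cod_pome * B0 * p["cf"]      # Eq. (3): CH4 produced
    vented = ch4 * p["vented_share"]   # CH4 actually escaping to air
    print(f"{name:27s} CH4 vented: {vented * 1000:6.2f} g -> "
          f"{vented * GWP_CH4 * 1000:7.1f} g CO2e per kg FA")
```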
The values used for the parameters in Eqs. (2)-(4) for the various end-of-life treatment scenarios, as per Achten et al. 2010 [41], can be found in Table S1. The emissions avoided through the use of captured biogas for heat were estimated via the EI3.0 dataset for cogeneration ("Heat, at cogen 50kWe lean burn, allocation heat/CH U"). The emissions from flaring of the captured biogas were estimated via the EI3.0 dataset for refinery gas flaring ("Refinery gas, burned in flare/GLO U"). The literature survey showed that the lack of demand for thermal energy and limited/missing access to the national electricity grid have resulted in only ~30 % of palm oil mills recycling POME [10,42]; only 5 % of POME gets treated to generate biogas for heat production, with the remaining 95 % being treated such that the generated biogas is simply vented, as shown in Table S2 [43]. Hence, a sensitivity analysis was done with the various disposal options for POME. PKS & PKF Treatment The EI3.0 dataset for palm kernel oil production accounts for the direct emissions from the combustion of PKS and PKF via a modified 'wood chips, burned in a cogen 6400 kWth' process. The modification of the 'wood chip' process accounts for the differences in dry matter, carbon content and energy content. In this original EI3.0 approach, about 12.8 MJ of energy is generated per kg of oil produced. Of this, about 8.2 MJ is obtained from PKS and PKF. An energy requirement of approximately 7.84 MJ for oil mill operation is reported in the literature [10,39,44-46]. This aligns with the observation of Abdullah and Sulaiman 2013 that PKF and PKS are sufficient to meet the oil mill's energy demand [39]. Hence, the combustion impacts from the original EI3.0 dataset were reduced to produce only 8.2 MJ. While this might be slightly in excess, it is expected that the excess PKF and PKS will be treated the same way for convenience. EFB Treatment Options For Malaysia, 75 % of the time EFB is expected to be mulched, and for the remaining 25 % dumped to rot [43]. EFB rotting was based on the modeling done by Stichnothe and Schuchardt (2011) [10], which is based on the IPCC guideline for estimating GHG emissions from park and garden waste. For the rest of the nutrients, 50 % leaching was assumed, except 90 % leaching for potassium based on Rabumi (1998) [47]. The initial nutrient values for EFB are shown in Table S3. For mulching, the dataset in SimaPro 8.0 was used, and the fertilizer value of the mulch was estimated based on literature data [44,47-50] shown in Table S4. The mulching process was captured through the EI3.0 dataset ("Mulching {GLO}| market for | Alloc Def, U"), and about 10 km of trucking was assumed [44]. The recycling of EFB was similar to the POME recycling situation [10,42]. Hence, the sensitivity analysis was done with the various disposal options for EFB to evaluate the impacts from 100 % (ideal) and 0 % (the worst case) mulching. Land Use Change Options As discussed earlier, palm plantations require land. This land could come from secondary forests, existing cropland, primary tropical forest and/or peatland. The transformation of this land from its current primary function to another function constitutes a land use change (LUC). LUC has significant environmental implications due to biodiversity impacts, water flow impacts, soil erosion impacts, GHG emissions and the like. With respect to GHG emissions, the impacts are due to the disruption or destruction of carbon stocks in above-ground biomass (AGB), below-ground biomass (BGB), soil and dead organic matter (DOM), along with the N2O stock for peatland [10].
"The impact of LUC depends on various factors such as cultivation methods, type of soil and climatic conditions" [10]. For this study, land transformation from existing cropland, primary tropical forest, peatland and secondary forest has been evaluated, with the base case being the current practices in Malaysia (Table S5). The literature survey indicated that "peatland makes up 12 % of the SE [South East] Asian land area but accounts for 25 % of current deforestation. Out of 270,000 km2 of peatland, 120,000 km2 (45 %) are currently deforested and mostly drained" [10], presenting a case for sensitivity with respect to LUC. The impacts from indirect LUC have been excluded from this study, similar to earlier studies [41,51], as we did not find any studies with the required data or methodology. Currently, EI3.0 has datasets for existing cropland ("Palm fruit bunch {MY}| production | Alloc Def, U") and primary tropical forest ("Palm fruit bunch {MY}| production, on land recently transformed | Alloc Def, U") in SimaPro 8.0.

[Figure 3: Contributions of various life cycle phases to the life cycle GHG emissions for PKO-FA (fatty alcohol produced from palm kernel oil feedstock) and petro-FA (fatty alcohol produced from petrochemical feedstock), shown in kg CO2e/kg FA delivered. The phases shown are RMProdC2G, TransportC2G and FAProdG2G. RMProdC2G includes raw material production (the impacts from the transformation of inputs from nature, via various intermediate products, into the raw material delivered to the fatty alcohol (FA) production site), including any transportation required until the RM reaches the FA production site. FAProdG2G includes the production of FA from raw materials (e.g., PKO, n-olefins and ethylene). TransportC2G includes the transportation of the FA produced from the FA production site to the Alcohol Ethoxylates (AE) production site. Irrespective of the feedstock, RMProdC2G is the most impactful phase for the boundary covered in this study, accounting for 60+ and 75+ % of the life cycle GHG emissions for PKO-FA and petro-FA, respectively.]

New datasets were created in SimaPro 8.0 for the various types of land transformation by adjusting the value for "Carbon, organic, in soil or biomass stock" in the primary tropical forest dataset. The values for secondary forest were derived by taking the ratio of primary forest to secondary forest in the respective EI3.0 datasets for other regions. For peatland covered with primary forest, the values were assumed to be the same as those for primary forest, with the extra BGB that gets drained. The BGB values for peatlands were updated based on literature surveys [45,51]. These adjustments (see Table S5) for LUC, which are not covered in the datasets in SimaPro 8.0, only capture the GHG-emissions-related differences. Assumptions in relation to the data: 1. The existing EI3.0 dataset for PKO production does not include the negative impacts from EFB rotting, the fertilizer use reduction from EFB mulching (a benefit) or POME's CH4 emissions. 2. No transportation losses. 3. Impacts from LUC are spread over 20 years.
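Assumption 3 (20-year amortization) turns a one-time carbon-stock loss into an annual emission. The sketch below shows that conversion as a stylized shortcut; the carbon stocks are hypothetical, and the study actually routes these flows through the FFB and oil mill datasets with economic allocation, not the direct 17.3 % scaling used here.

```python
# Sketch: annualizing land-use-change GHG emissions per assumption 3.
# Carbon stocks below are placeholders, NOT the Table S5 values.

C_TO_CO2 = 44.0 / 12.0         # kg CO2 per kg C
AMORTIZATION_YEARS = 20.0
FFB_YIELD = 25_000.0           # kg FFB/ha/yr (from the text)
PKO_YIELD = 0.027 * FFB_YIELD  # ~2.7 wt% of FFB becomes PKO
PKO_ALLOC = 0.173              # economic allocation factor to PKO

delta_c = {  # hypothetical one-time carbon stock loss, kg C/ha
    "existing cropland": 0.0, "secondary forest": 90_000.0,
    "primary forest": 160_000.0, "peat forest": 300_000.0,
}

for land, dc in delta_c.items():
    co2_per_ha_yr = dc * C_TO_CO2 / AMORTIZATION_YEARS
    co2_per_kg_pko = co2_per_ha_yr / PKO_YIELD * PKO_ALLOC
    print(f"{land:18s} ~{co2_per_kg_pko:5.2f} kg CO2e/kg PKO from LUC")
```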
The inventory data collected for petro-FA and PKO-FA, along with the assumptions, capture the quantities of inputs and outputs of materials, energy, waste and emissions for the respective processes. This inventory was converted to the functional unit basis (1 kg of FA delivered to the AE production site). Such an inventory (LCI) was modeled in the SimaPro 8.0 software and then subjected to impact assessment, to understand and evaluate the potential environmental impacts by converting the LCI results into impacts and aggregating these impacts within the same impact category to obtain the characterized results. The ReCiPe Midpoint (H) method, as implemented in SimaPro 8.0, was used to obtain the characterized results for 18 impact categories. By default, this method neither credits the CO2 uptake from air for plant growth nor penalizes biogenic CO2 emissions. In biofuel processes, since the CO2 taken up by the plants is ultimately released, with energy, back into the atmosphere within a short timeframe, the credits and emissions balance out to carbon neutrality. However, in this case, the carbon uptake is stored in the chemical products for a long time and may not necessarily be released as CO2 as in combustion processes. Further, since the FA end-of-life is out of scope in this cradle-to-gate study, the CO2 uptake needs to be included. Hence, the method was updated to account for CO2 uptake and biogenic CO2 emissions. Also, the biogenic methane GWP factor was changed from 22 to 25 kg CO2e. Contribution analyses of the characterized results were performed to understand the hotspot areas of impacts and to identify the key factors. For these key factors, sensitivity analyses were performed to evaluate the various scenarios of LUC, POME end-of-life treatment and EFB end-of-life treatment. Uncertainty analyses were performed for both FA sourcing options for the base case via Monte Carlo sampling to understand the distributions. The number of samplings used was 1000 for both options.
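The method update described above is easiest to see as a change of characterization factors. A minimal sketch, with placeholder elementary flows: biogenic CO2 and CO2 uptake get non-zero factors, and biogenic CH4 moves from 22 to 25.

```python
# Sketch: the characterization step (LCI -> climate change score) with the
# biogenic-carbon updates described above. The LCI amounts are placeholders.

cf_climate = {  # kg CO2e per kg of flow
    "CO2, fossil": 1.0,
    "CO2, biogenic": 1.0,          # updated: counted like fossil CO2
    "CO2, uptake from air": -1.0,  # updated: credit for carbon stored in FA
    "CH4, biogenic": 25.0,         # updated from 22
    "N2O": 298.0,
}

lci = {  # placeholder elementary flows, kg per kg FA delivered
    "CO2, fossil": 1.1,
    "CO2, biogenic": 0.9,
    "CO2, uptake from air": 2.4,
    "CH4, biogenic": 0.01,
    "N2O": 0.002,
}

impact = sum(cf_climate[flow] * amount for flow, amount in lci.items())
print(f"Climate change score: {impact:.2f} kg CO2e per kg FA delivered")
```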
Results Both the petrochemical feedstocks and the PKO feedstocks used for FA production are co-products and have other uses. For example, only a fraction of crude oil is used as feedstock for FA production. This crude oil, which is derived as a co-product, could be used for other applications such as energy. Similarly, PKO is a co-product of PO production and could be used for other applications such as biodiesel or cooking oil. In other words, both feedstocks are part of large and complex supply chains. For each kg of FA delivered, on a cradle-to-gate basis, petro-FA has ~2.97 kg CO2e of emissions on average, which is ~55 % of the ~5.27 kg CO2e of average emissions for PKO-FA (see Fig. 3). For petro-FA, the production of the various raw materials contributed ~79 % of the total ~2.97 kg CO2e/kg FA delivered. Another ~21 % is from FA production, and <0.2 % is from the transportation of raw materials for FA production and of FA for AE production. Almost all of the GHG emissions during petro-FA production are from the combustion of natural gas in the US. Of the climate change impacts from raw materials, ~70 % is from n-olefin production and delivery, ~10 % from ethylene production and delivery, ~10 % from upstream fuel production/combustion, ~8 % from catalysts (aluminum powder and cobalt), and the remaining ~2 % from solid waste handling and chemical plant infrastructure. For PKO-FA, the production of the various raw materials contributes ~83 % of the total ~5.27 kg CO2e/kg FA delivered. Another ~12 % is from FA production, and ~5 % is from the transportation of raw materials for FA production and of FA for AE production. Almost all of the GHG emissions during PKO-FA production are from the combustion of natural gas in MY. Due to the lower GHG intensity of natural gas combustion in MY, the production GHG emissions are similar to those of petro-FA despite twice the thermal heat consumption. Of the climate change impacts from raw materials, ~91 % is from PKO production, ~7 % from upstream fuel production/combustion, and the rest is split between hydrogen production and delivery, chemical plant infrastructure and municipal solid waste. The contribution analyses for climate change suggest that land use change, POME treatment and EFB treatment are critical factors for the life cycle GHG emissions from PKO-FA production. The results of the sensitivity analyses for these three key parameters are summarized in Fig. 4.

[Figure 4: Results of the various sensitivity analyses, namely land use change (LUC), POME (wastewater effluent from palm oil mill) treatment, and EFB (empty fruit bunches) treatment, shown in kg CO2e/kg FA delivered. The base case MY mix GHG emissions represent typical practices for palm plantations in Malaysia (MY). For LUC, the base-case practices are 13 % LUC from peat forest, 52 % from secondary forest and the remaining 35 % from existing cropland. Peat forest has the most GHG emissions, while transformation of existing cropland with the carbon debt paid off has the least. For POME treatment, the base-case practices are 5 % of POME being used for generation of biogas for heat production and the remaining 95 % being treated with the resulting biogas vented. Venting of biogas from anaerobic treatment has the most GHG emissions, while anaerobic treatment with the resulting methane recovered and utilized for heat generation has the least. For EFB treatment, the base-case practices are 75 % of EFB mulched and the remaining 25 % dumped to rot. Mulching of EFB as a fertilizer substitute shows the least life cycle GHG emissions, while dumping of EFB to rot has the most.]

EFB can be mulched and used as fertilizer or dumped to rot. In the latter case, methane, carbon dioxide and nitrous oxide could be emitted depending on the anaerobic conditions. This makes mulching of EFB for fertilizer the better option. Among the evaluated POME end-of-life treatment options, anaerobic treatment with the resulting methane recovered and utilized for heat generation has the least life cycle GHG emissions. Venting of the methane from anaerobic treatment has the most GHG emissions, even higher than discharging untreated POME. When the LUC options are considered, GHG emissions are the highest when peat forests are transformed for palm cultivation and the lowest when existing croplands (whose carbon debt has been paid off) are transformed. The sensitivity analyses show that PKO-FA has lower GHG emissions than petro-FA from an environmental perspective if existing cropland is used for the palm plantation instead of land transformation. Further, in such a scenario, CO2 could be sequestered compared to petro-FA. In an ideal situation, when PKO is entirely produced on existing cropland, POME is treated with the methane recovered for thermal energy generation and EFB is used for mulching to replace some fertilizer needs, PKO-FA has GHG emissions of approximately -1.5 kg CO2e/kg FA delivered, thereby outperforming petro-FA. However, if 100 % of the PKO comes from peatland drainage and deforestation, POME is treated with the recovered methane vented, and EFB is dumped to rot under anaerobic conditions, the GHG emissions increase to ~16.7 kg CO2e/kg FA delivered.
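As a quick cross-check, the phase shares quoted above can be turned back into absolute numbers; the petro-FA transport share is taken at its stated <0.2 % bound.

```python
# Sketch: absolute phase contributions (kg CO2e/kg FA delivered) from the
# totals and percentage shares quoted in the text.

totals = {"petro-FA": 2.97, "PKO-FA": 5.27}
shares = {  # life cycle phase shares of the cradle-to-gate total
    "petro-FA": {"raw materials": 0.79, "FA production": 0.21, "transport": 0.002},
    "PKO-FA":   {"raw materials": 0.83, "FA production": 0.12, "transport": 0.05},
}

for option, total in totals.items():
    breakdown = ", ".join(f"{phase}: {total * s:.2f}"
                          for phase, s in shares[option].items())
    print(f"{option}: {breakdown} (kg CO2e/kg FA)")
```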
Among the other impact categories, PKO-FA has, on average, less metal depletion, less fossil depletion, less human toxicity, less ionizing radiation, less ozone depletion and less water depletion (see Table 2). While LUC affects most of the other impact categories (except terrestrial ecotoxicity and agricultural land occupation), natural land transformation, marine eutrophication, particulate matter formation and photochemical oxidant formation see significant effects among them. Urban land occupation and water depletion are also affected. While the GHG emissions from discharging POME without treatment are not significant, the eutrophication impact of this option is ~100 times higher than that of the other options. Besides the impacts on climate change and eutrophication, the POME treatment options also affect terrestrial ecotoxicity, particulate matter formation, photochemical oxidant formation, human toxicity and terrestrial acidification. The treatment of EFB affects all impact categories, as all of them show a positive environmental profile for mulching compared to a burden in all impact categories when EFB is dumped to rot. The uncertainty analyses were performed to obtain the distributions of the environmental impacts for both petro-FA and PKO-FA. The results for all 18 evaluated impact categories are captured in Fig. 5 via density plots. In these density plots, a broader distribution for an impact category represents higher uncertainty. For PKO-FA, the distributions of the impacts are broader for all impact categories compared to the narrow distributions for petro-FA. The higher uncertainty for PKO-FA stems from the variations in the practices at palm plantations and in the oil (palm oil and PKO) production processes. Further, a larger overlap area for an impact category in the density plots represents a smaller difference between the compared options. Marine eutrophication, agricultural land occupation, natural land occupation, fossil depletion, particulate matter formation, water depletion and climate change have the least overlapped area and, hence, the largest differences in impacts between petro-FA and PKO-FA. The extent of overlap in the distributions can also be represented as the percentage of samplings for which a particular option had lower impacts. For example, petro-FA has lower or equal GHG emissions for ~70 % of the samplings, and PKO-FA causes lower or equal water depletion for ~60 % of the samplings. Figure 6 summarizes the results of such a representation, i.e., PKO-FA being better than and/or equal to petro-FA, for all 18 impact categories.

[Figure 6: Results of the uncertainty analyses (1000 Monte Carlo runs using the built-in function in SimaPro 8.0) for the characterized impacts for PKO-FA (fatty alcohol produced from palm kernel oil feedstock) and petro-FA (fatty alcohol produced from petrochemical feedstock), presented for all 18 impact categories as the percentage of samplings for which a particular option had lower impacts. For example, petro-FA has lower or equal GHG emissions for ~70 % of samplings, and PKO-FA causes lower or equal water depletion for ~60 % of samplings.]
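The overlap statistic reported in Fig. 6 can be illustrated with a toy paired Monte Carlo. The distributions below are invented lognormals, chosen only to mimic the "broad PKO-FA, narrow petro-FA" picture, not the SimaPro-propagated uncertainties.

```python
# Sketch: "% of samplings for which an option has lower impacts", Fig. 6 style.

import math
import random

random.seed(0)
N = 1000  # samplings per option, matching the study

def lognormal_sample(median, gsd):
    """One lognormal draw from a median and a geometric standard deviation."""
    return median * math.exp(random.gauss(0.0, math.log(gsd)))

# Hypothetical climate-change results, kg CO2e/kg FA delivered:
petro = [lognormal_sample(2.97, 1.15) for _ in range(N)]  # narrow
pko = [lognormal_sample(5.27, 1.80) for _ in range(N)]    # broad

share = sum(p <= k for p, k in zip(petro, pko)) / N
print(f"petro-FA lower or equal in {share:.0%} of samplings")
```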
Discussion Both the petrochemical and PKO feedstocks being part of large and complex supply chains is expected and documented in the literature [6,8]. Our GHG emissions results are in alignment with the literature evaluating similar claims for palm oil (PO) as a substitute for other fossil resources. While on average PKO-FA performs worse, the life cycle GHG emissions for PKO-FA could be lower than those for petro-FA under limited conditions, as per the sensitivity analyses. Such significant variances in the GHG emissions for PKO-FA (observed in the uncertainty analyses and sensitivity analyses) are in accordance with the results of previous studies [10,11,41,45,51,52], summarized in Fig. 7.

[Figure 7: Literature data on the life cycle GHG (greenhouse gas) emissions for oil produced from palm fruit, in kg CO2e/kg oil produced. Depending on the operating practices, the GHG emissions as per this LCA study vary from -2.7 to 15.4 kg CO2e/kg oil produced. Similarly large variances were observed by Stichnothe and Schuchardt [10] (0.6-22.2 kg CO2e/kg oil produced), Achten et al. (0.4-16.9 kg CO2e/kg oil produced) [17] and Schmidt and Dalgaard [29] (2.2-12.7 kg CO2e/kg oil produced). While the variances observed by Reijnders and Huijbregts [25] (5.2-9.6 kg CO2e/kg oil produced) and Wicke et al. [21] (1.3-3.1 kg CO2e/kg oil produced) were not equally large, their ranges are within those observed. The potential emissions estimated by Jungbluth et al. [11], as part of the EcoInvent 3.0 dataset, also fall within the observed ranges.]

These variances are expected due to the variances in agricultural and forestry practices, such as fertilizer applications, pesticide applications, soil properties, growth rate (and, hence, CO2 absorption) of the plants, and the handling of biomass and co-products. Hence, the environmental friendliness of PKO-FA for GHG emissions reduction varies with the actual practice, which is in consensus with the findings of Reijnders and Huijbregts [45]. Land use change, POME end-of-life treatment and EFB end-of-life treatment are key parameters, as was also observed in previous studies [10,45]. The selection of raw material sourcing for FA production involves trade-offs, as PKO-FA performs better on average in six impact categories while petro-FA performs better on average in the other 12 impact categories. Such trade-offs have been observed by Stalmans et al. [6] and are expected due to inherent differences between the biobased value chain and the fossil-based value chain. Marine eutrophication, agricultural land occupation, natural land occupation, fossil depletion, particulate matter formation, water depletion and climate change are the key impact categories for the considered FA sourcing options, as shown in Table 3. Our findings must be interpreted in accordance with the scope of this study and the limitations due to the use of secondary data and assumptions. Further, this LCA study does not evaluate the implications of shifting to one particular feedstock, which could affect the inefficiencies and efficiencies of the individual systems. The overall larger systems to which each feedstock belongs should also be considered, along with the sustainability values of the specific stakeholders, the socio-economic relevance and other aspects not covered. Besides the feedstocks themselves being derived through multi-output processes, both petro-FA and PKO-FA are multi-output processes. Currently, the environmental impacts are allocated from the processes to the co-products. Changes in the economics of the co-products through supply and demand dynamics will influence how the co-products are handled and, hence, the environmental impacts. Currently, there is increasing demand for bio-derived products due to their perceived environmental benefits.
The results show that the environmental impacts of PKO-FA depend strongly on palm plantation and palm oil mill operation practices. Hence, we recommend being mindful of the upstream practices specific to the suppliers when sourcing bio-derived materials. With the adoption of proper practices, including decisions on land use changes, bio-derived materials such as PKO provide a good, environmentally friendly alternative to non-renewable raw materials. While PKO and similar bio-derived materials provide renewability in terms of carbon recycling and regeneration through cultivation, responsibly produced bio-derived materials are limited by the availability of suitable land. Similar to other renewable resources, there are limits to the environmentally responsible harvesting of PKO. The results of this LCA study show that petro-FA has a better average life cycle environmental performance than PKO-FA for the majority of the environmental impact categories we investigated. This highlights that environmentally responsible sourcing should require rigorous testing of the assumption of "automatic environmental benefits" for bio-derived raw materials. Also, the intrinsic sustainability values of the stakeholders, based on the respective local environmental profiles, would be critical in incorporating the trade-offs into decision making. Compliance with ethical standards Funding This study was funded in its entirety by Air Products and Chemicals, Inc. The third-party critical review by Intertek was funded by Air Products and Chemicals, Inc. Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
CO Diffusion and Bond Weakening on Cu(410) ―Probing Surface Structure― We discuss the possibility of using adsorbed CO for adsorbate-mediated surface analysis. CO is known to exhibit properties that depend strongly on the local environment, e.g., the coordination of surface sites, which manifest through its vibrational frequency. This, in turn, suggests the possibility of probing the surface structure through the changes in the C−O stretching frequency during surface diffusion. Using density functional theory calculations, we demonstrate that the vibrational frequency shifts of CO manifest the corrugation of the surface and, hence, its structure. To show this, we used Cu(410), a wide stepped surface, as our sample. I. INTRODUCTION The interaction of CO with metal surfaces has been extensively studied, as it plays an indispensable role in industrial processes [1−8]. For example, in the Fischer-Tropsch reaction, CO reacts with H2 on metal catalysts to produce hydrocarbons. Similarly, in the water-gas-shift reaction, CO reacts with water to produce hydrogen for energy generation. Hence, investigations of the catalytic reactivity of surfaces are commonly benchmarked through their interaction with the CO molecule. With these, experimental techniques continue to progress to understand more about molecule-surface interactions and how we can observe their manifestations [9]. Surface investigation is mostly limited by the cleanliness of the surface. Contaminants such as CO, H, and other adsorbed species interfere with spectroscopic observations for surface analysis. Recently, we proposed adsorbate-mediated surface analysis, where adsorbed species serve as probes for surface investigations. We have shown that adsorbed hydrogen manifests relative displacements of surface atoms through its vibrational frequency shifts [10]. In the present work, we demonstrate how we can probe surface structure using adsorbed CO. Here we show that the observable vibrational frequency shifts of CO as it diffuses on the surface reflect the surface structure. Surface diffusion of molecular species is directly influenced by the structure of the surface [11]. On flat surfaces, random-walk diffusion can be observed due to the symmetry of the potential energy. On the other hand, breaking this symmetry will induce a preferred path. With the geometry of stepped surfaces, anisotropic diffusion may occur due to the asymmetric potential energy surrounding the adsorbate. Here, we use Cu(410) as the test surface structure to show such an effect on an adsorbed CO molecule. II. COMPUTATIONAL DETAILS Experimental techniques such as scanning tunneling microscopy, helium atom scattering, and others make it possible to observe surface diffusion by obtaining snapshots of the final and initial adsorption states of a molecule [12,13]. However, there are still limitations in visualizing the transition states between inter-site hopping. Thus, a computational approach remains indispensable for determining the potential energy of the molecule−surface interaction during diffusion. Here, we used density functional theory based total-energy calculations to describe the interaction of CO with Cu(410) [14,15]. Specifically, we used the Vienna ab initio Simulation Package (VASP) with the projector augmented wave (PAW) formalism and a plane-wave basis set, with a cutoff energy of 550 eV [16−20]. The exchange correlation was described using the Perdew-Burke-Ernzerhof (PBE) generalized gradient approximation (GGA) functional [21,22]. We adopted the Monkhorst-Pack method to perform the Brillouin zone integrations, with 9 × 9 × 1 special k-points [23]. Ionic relaxations were implemented with an energy convergence of less than 10^−5 eV and with the Hellmann-Feynman forces acting on each atom below 0.01 eV/Å. The calculated optimized bulk lattice parameter for Cu was a_Cu = 3.634 Å. To model the optimized Cu(410) structure, we used a periodic slab 16 Cu atomic layers thick [(2 × 1) surface unit cell (cf. Figure 1), with two Cu atoms per layer, the topmost 8 layers allowed to relax and the last 8 layers held at their bulk-truncated positions], separated by 15 Å of vacuum along the [410] direction. To obtain the vibrational frequencies of CO, we performed finite-difference force calculations, within the harmonic approximation, as implemented in VASP.
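For orientation, the settings listed above could be expressed as follows through ASE's VASP calculator interface. This is a minimal sketch under our own assumptions (slab geometry, POTCAR setup and selective-dynamics flags are omitted), not the authors' actual scripts.

```python
# Minimal sketch (not the authors' scripts): the stated VASP settings
# expressed via the Atomic Simulation Environment's VASP interface.

from ase.calculators.vasp import Vasp

calc = Vasp(
    xc="pbe",        # Perdew-Burke-Ernzerhof GGA exchange correlation
    encut=550,       # plane-wave cutoff energy, eV
    kpts=(9, 9, 1),  # Monkhorst-Pack grid for Brillouin zone integration
    ediff=1e-5,      # electronic convergence criterion, eV
    ediffg=-0.01,    # ionic convergence: forces below 0.01 eV/A
    ibrion=2,        # conjugate-gradient ionic relaxation
)
```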
III. RESULTS AND DISCUSSION A. Adsorption of CO on Cu(410) In Figure 2, we plot the potential energy of an incoming CO molecule approaching Cu(410), obtained from a frozen calculation in which the atoms are fixed in space at every point of the calculation. The total energy calculation was performed with the CO bond axis fixed along the [410] direction in a C-end configuration. The potential energy curve shows the preference of CO to bind atop the step edge atom, with an adsorption energy of −0.86 eV. These results obtained from the constrained calculation are in agreement with similar studies regarding the preferential low-coordination adsorption of CO on the Cu surface [24−26]. We also show that the binding of CO on Cu(410) weakens at the higher-coordinated adsorption sites. Hence, we see a decreasing trend in the adsorption energy from S to T3. However, fixing the CO bond axis does not take bond relaxation and reorientation into consideration. The CO molecular bond may align to a lower-energy configuration after adsorption. In the case of Cu(410), CO on terrace sites may prefer to orient its bond axis along the [100] direction [27]. Thus, we implemented ionic relaxation at all the adsorption sites. Upon optimization of both the surface atoms and the CO molecule, the adsorption energy is enhanced, reducing the energy difference between adsorption sites (cf. Table 1) [27]. Still, the preference of CO to bind more strongly on the step edge than on the terrace of Cu(410) is retained. In Figure 3, we plot the density of states (DOS) of CO/Cu(410) for three cases. For our reference, we show the molecular orbitals of CO when it is relatively far (ca. 7 Å) from the surface. We then show the DOS of the system when CO adsorbs atop the step edge atom, and when it is on top of the bridge site (between the step atom S and the nearest terrace atom T1). Here, we see the known interaction of CO with metal surfaces. For both configurations, there is a donation of the σ electrons from CO to Cu and a back-donation of the d electrons to the lowest unoccupied molecular orbital (LUMO) of CO. This π* antibonding state of CO is partially filled upon adsorption, which in turn weakens the molecular bond. This becomes more pronounced when CO adsorbs on the bridge site of the surface. This results in a more elongated CO bond on the bridge site S−T1 than on the step site S. B. Diffusion of CO on Cu(410) In Figure 4, we show the two-dimensional (2D) potential energy surface (PES) of CO on Cu(410) with coordinates along the [410] direction and diffusion coordinates (0−12) and (0−6) along the (100) terrace and the step edge, respectively. These diffusion coordinates locate the top, bridge and mid (top-bridge) sites of Cu(410).
A step increment of 0.2 Å along the diffusion coordinate was implemented for a fine energy resolution of the potential energy surface. The Cu−Cu distances between adjacent surface atoms along the terrace and the step edge are 2.6 and 3.6 Å, respectively. Hence, the step size allows at least 11 diffusion points between the top sites, with the midpoints lying on the bridge sites. Tracing the least-energy path for diffusion, we optimized the CO bond length and orientation, with C fixed at the diffusion points. From the potential energy curve of the partially constrained CO, we can estimate the diffusion barriers between consecutive top sites. Here, we observe anisotropic potential energy barriers due to the differently coordinated surface atoms. In Figure 4a (right panel), the potential energy curve manifests irregular peaks in between the top sites upon partially constrained relaxation. These arise from the geometry of the surface, resulting in an asymmetric interaction of CO between two top sites of different coordination. In addition, the relative stability of CO at points near the bridge sites contributes to the "spiky" profile along the diffusion path. Nevertheless, in general, we can see a downhill trend of the potential energy from the valley site towards the step edge site. The climbing-image nudged elastic band (cNEB) method was then implemented to obtain a more accurate estimate of the energy barriers [28]. The barriers obtained from the partially constrained PES and cNEB show a similar trend between diffusion paths, with an energy difference of 0.02 to 0.05 eV (cf. Figure 5). From these, we obtained the hopping rates for each diffusion path [29,30]. In Figures 5 and 6, we denote the inter-site hopping configurations as (initial site→final site). So, (T3→T2, T2→T1, T1→S) refers to diffusion towards the step edge, and the reverse (S→T1, T1→T2, T2→T3) refers to diffusion away from the step edge. S−S refers to diffusion between the step atoms. The initial- and final-state configurations were taken from the fully optimized structures of CO/Cu(410) atop the surface atoms. The hopping rates at lower temperatures imply that a CO molecule on the Cu(410) surface has a higher chance of diffusing towards the step edge than in the opposite direction. It can also be noted that step-to-terrace hopping is more likely to occur than inter-step hopping. We can also see that the CO molecule tends to stay within the first 3 atomic layers (S, T1, T2) of Cu(410). Hence, we can observe an anisotropic diffusion of the CO molecule on Cu(410), which would not be possible on a flat surface. On a side note, this partly explains the known reactivity of stepped surfaces, where surface diffusion is induced towards the reactive step edge site.
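The hopping rates referred to above [29,30] follow a transition-state-theory (Arrhenius) form, k = ν exp(−E_a/k_B T). A minimal sketch, with invented barriers and a typical 10^13 s^−1 attempt frequency (the paper's cNEB barriers are not reproduced here):

```python
# Sketch: barriers -> hopping rates via k = nu * exp(-Ea / (kB * T)).
# Barriers and attempt frequency are placeholders, not the paper's values.

import math

KB = 8.617333e-5  # Boltzmann constant, eV/K
NU = 1.0e13       # attempt frequency, s^-1 (typical order of magnitude)

barriers_ev = {   # hypothetical hop barriers, eV
    "T3->T2": 0.10, "T2->T1": 0.09, "T1->S": 0.07,  # towards the step edge
    "S->T1": 0.20, "T1->T2": 0.11, "T2->T3": 0.12,  # away from the step edge
    "S-S": 0.25,                                    # along the step edge
}

for T in (100, 200, 300):  # K
    rates = {hop: NU * math.exp(-ea / (KB * T)) for hop, ea in barriers_ev.items()}
    fastest = max(rates, key=rates.get)
    print(f"T = {T:3d} K: fastest hop {fastest}, k ~ {rates[fastest]:.2e} s^-1")
```

Because the rate depends exponentially on the barrier, even a few tens of meV of anisotropy strongly biases the low-temperature hops towards the step edge, as described above.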
C. Vibrational frequency of CO on Cu(410) So far, we have shown the effect of the surface structure of Cu(410) on the mobility of CO and how the CO bond is perturbed by site coordination. Now, we demonstrate how CO weakens its molecular bond as it diffuses on the Cu(410) surface. In Figure 7, we show the changes in the bond length of CO with the corresponding shifts in its vibrational frequency. In an experiment on CO adsorption on Cu(410), infrared reflection-absorption spectroscopy (IRAS) results show the existence of a high-frequency peak at 2099 cm−1 and the emergence of a lower, broadened peak at about 2048 cm−1 at high coverages [26]. These peaks correspond to CO atop the step edge atom and the terrace atoms, respectively. Using the previously mentioned diffusion coordinates, we calculated the C−O stretching frequency within the harmonic approximation. Starting atop the step atom at 2032 cm−1, the vibrational frequency of the C−O stretch on top of the surface atoms (S, T1, T2, T3) exhibits a decreasing trend towards the valley. The CO molecules on T1 and T2 have lower vibrational frequencies (with shifts of around 33−37 cm−1) with respect to that on S. This is slightly lower than the experimental frequency difference of 51 cm−1 between CO atop S and the terrace atoms. Nevertheless, the decreasing trend from the step to the terrace agrees well with the experiment [26]. This trend can also be seen along the bridge sites (S−T1, T1−T2, T2−T3). Interestingly, an alternating vibrational shift occurs between the consecutive surface atoms: a decrease in vibrational frequency from S to S−T1 and an increase from S−T1 to T1. Note, however, that the increase from S−T1 to T1 is smaller than the drop from S to S−T1. This is due to the lower coordination of S as compared to T1. A comparable decrease and increase in the frequency between T1 and T2 can also be observed due to their nearly identical coordination. This trend continues to T3. In the same way, the step-to-step translation shows a similar alternating trend, with a significant red-shift of the C−O stretch down to 1782 cm−1 relative to the gas-phase vibrational frequency of 2130 cm−1. This is also where the greatest bond elongation was observed. From these results, we can see that the vibrational trend corresponds qualitatively to the corrugation of the stepped surface. These shifts in the vibrational frequency simply reflect the surface structure of Cu(410) probed by a moving CO adsorbate.
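Within the harmonic approximation used above, a finite-difference force constant maps directly to a wavenumber via ν̃ = (1/2πc)·sqrt(k/μ). A minimal sketch with a gas-phase-like placeholder force constant (not a value from the paper):

```python
# Sketch: harmonic C-O stretch wavenumber from a force constant k (the second
# derivative of the energy along the stretch, e.g. from finite differences).

import math

AMU = 1.66053907e-27     # kg
C_LIGHT = 2.99792458e10  # cm/s

m_c, m_o = 12.011, 15.999
mu = (m_c * m_o) / (m_c + m_o) * AMU  # reduced mass of CO, kg

k = 1857.0  # N/m, placeholder force constant for a gas-phase-like C-O bond

wavenumber = math.sqrt(k / mu) / (2.0 * math.pi * C_LIGHT)
print(f"C-O stretch: ~{wavenumber:.0f} cm^-1")  # ~2140 cm^-1 for this k
# A weakened (elongated) bond lowers k and red-shifts the line -- the trend
# tracked above from 2130 cm^-1 (gas phase) down to 1782 cm^-1 (S-S bridge).
```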
IV. CONCLUSION As a common surface contaminant, a CO adsorbate can potentially be used as a surface probe by monitoring its vibrational shifts during surface diffusion. Here, we have shown that the vibrational frequency shifts of CO correspond to the atomic corrugation of the Cu(410) surface and manifest its stepped structure. With the resolution of current spectroscopic techniques, these shifts should be observable. Thus, with this simple demonstration, we propose the possibility of using adsorbed CO for adsorbate-mediated surface analysis towards fine structural investigation.
LncRNA LINC00460 promotes EMT in head and neck squamous cell carcinoma by facilitating peroxiredoxin-1 into the nucleus Background The lncRNA LINC00460 plays crucial roles in several epithelial cancers, although its mechanisms of action differ greatly in different cellular contexts. In this study, we aimed to determine the potential clinical applications of LINC00460 and elucidate the mechanisms by which LINC00460 affects the development and progression of head and neck squamous cell carcinoma (HNSCC). Methods The biological functions of LINC00460 were assessed in several epithelial cancer cell lines. The subcellular localization of LINC00460 was evaluated by cell nuclear/cytoplasmic fractionation and fluorescence in situ hybridization. RNA pull-down assays, LC-MS/MS analysis, and RNA and chromatin immunoprecipitation assays were performed to identify the molecular mechanism by which LINC00460 promotes HNSCC progression. The clinicopathological features of LINC00460 and PRDX1 were evaluated in HNSCC tissues and paired adjacent normal tissues. Results LINC00460 enhanced HNSCC cell proliferation and metastasis in vitro and in vivo and induced epithelial-mesenchymal transition (EMT). LINC00460 localized primarily within the cytoplasm of HNSCC cells, physically interacted with PRDX1 and facilitated PRDX1 entry into the nucleus. PRDX1 promoted the transcription of LINC00460, forming a positive feedback loop. In addition, PRDX1 also promoted the transcription of EMT-related genes (such as ZEB1, ZEB2 and VIM) through enrichment on their promoters in the nucleus. LINC00460 effectively induced HNSCC cell EMT in a PRDX1-dependent manner, and PRDX1 mainly mediated the EMT-promoting effect of LINC00460. High levels of LINC00460 and PRDX1 expression were positively associated with lymph node metastasis, pathological differentiation and tumor size in HNSCC patients. Conclusions LINC00460 promoted EMT in HNSCC cells by facilitating PRDX1 entry into the nucleus. LINC00460 and PRDX1 are promising candidate prognostic predictors and potential targets for cancer therapy in HNSCC. Electronic supplementary material The online version of this article (10.1186/s13046-019-1364-z) contains supplementary material, which is available to authorized users. Background Head and neck squamous cell carcinoma (HNSCC) is the most common malignancy of the head and neck epithelium, with over 600,000 new cases reported per year [1,2]. HNSCCs are heterogeneous, solid, malignant tumors that are associated with low overall survival rates in patients, primarily due to late diagnoses, low therapeutic response rates, and high rates of recurrence and metastasis [3]. Therefore, elucidating the genetic and epigenetic molecular alterations associated with HNSCC is extremely important for improving the diagnosis, treatment and prognosis of patients with HNSCC. Various studies have demonstrated that the occurrence and development of HNSCC are closely related to long noncoding RNAs (lncRNAs) [4]. LncRNAs (> 200 nt in length, with no protein-coding functions) act as key regulators by participating in gene regulation at the transcriptional, post-transcriptional and post-translational levels [5,6], and they affect many biological processes [7]. Previous reports have shown that lncRNAs have complex and wide-ranging functions in the development of HNSCC, including functions associated with cancer growth, recurrence and metastasis [8], because of their irregular and specific expression patterns in HNSCC [9].
Although the relationship between lncRNAs and HNSCC is unclear, some lncRNAs have been reported to be aberrantly expressed and to contribute to the occurrence and development of HNSCC [10-12]. Using orthogonal partial least squares discriminant analysis (OPLS-DA), which integrates RNA-Seq data from The Cancer Genome Atlas (TCGA) database and matching clinical information from a large cohort of HNSCC patients, we identified LINC00460 as a prognostic lncRNA signature [13]. Analyses of the expression profiles of lncRNAs in HNSCC cells from the Cancer RNA-Seq Nexus (CRN) database have shown that the expression of LINC00460 is upregulated [14,15]. Located on chromosome 13q33.2 and transcribed as a 913-nt transcript, LINC00460 has been reported to play important roles in tumorigenesis and progression in various tumors and is significantly correlated with survival in several cancer types, including lung cancer [16-19], esophageal cancer [20-22], colorectal cancer [23,24], nasopharyngeal carcinoma [25], papillary thyroid carcinoma [26], ovarian cancer [27], gastric cancer [28,29], renal carcinoma [30], meningioma [31], and bladder and urothelial carcinoma [32,33]. According to previous studies, LINC00460 exhibits aberrant expression in HNSCC and may directly participate in its pathogenesis [13,34,35]. The emerging mechanisms of action of LINC00460 differ widely in different cellular contexts; therefore, the key effects and detailed molecular mechanisms of LINC00460 in HNSCC cells remain unclear and urgently require further investigation. To determine whether LINC00460 plays an important role in the occurrence and development of HNSCC and to assess its usefulness as a candidate biomarker for accurate prognostic prediction and as a potential target for cancer therapy, we investigated and identified the functions and mechanisms of action of LINC00460 in HNSCC cells. The SCC-4, SCC-9 and SCC-25 cells (also from the American Type Culture Collection) were cultured in DMEM/F12 (1:1) medium (Gibco-BRL). The media were supplemented with 10% heat-inactivated fetal bovine serum (FBS) (Gibco-BRL), penicillin (100 units/mL), and streptomycin (100 μg/mL). The cells were cultured at 37°C in a humidified 5% CO2 atmosphere. In addition, normal oral epithelial cells were primary-cultured in keratinocyte serum-free medium (KSF; Gibco-BRL) with 0.2 ng/mL recombinant epidermal growth factor (rEGF; Invitrogen, Carlsbad, CA, USA). RNA extraction and qRT-PCR Total RNA was extracted using TRIzol reagent (TaKaRa, Japan) and used to generate cDNA with a PrimeScript RT Reagent Kit (TaKaRa). All qRT-PCR was performed using an ABI StepOne Real-Time PCR System (Life Technologies, USA) with a TB Green Premix Ex Taq reagent kit (TaKaRa) as previously described [36]. The PCR primers were designed and synthesized by Sangon Biotech (Shanghai) Co., Ltd., and are listed in Additional file 2: Table S2. Western blot analysis Western blotting was performed as previously described [37]. In addition, cytoplasmic and nuclear extracts were separated and prepared using NE-PER™ Nuclear and Cytoplasmic Extraction Reagents (Thermo Fisher Scientific, USA) according to the manufacturer's instructions and a previous study [36]. Smart Silencer/siRNA or plasmid transfection The Smart Silencer and siRNA used in our study were designed and synthesized by Guangzhou RiboBio Co., Ltd. (Guangzhou, China), and the sequences are listed in Additional file 3: Table S3.
The plasmids were constructed by HanYin Biotechnology Co., Ltd. (Shanghai, China). Transfection was performed using Lipofectamine 3000 reagent (Invitrogen) following the manufacturer's instructions.

Lentiviral transduction and screening of stable strains
LINC00460 and PRDX1 lentiviral expression vectors (wild-type and mutant) were constructed by HanYin Biotechnology Co., Ltd. The LINC00460 lentiviral expression vector (LINC00460 vector) conferred puromycin resistance, while the PRDX1 lentiviral expression vector (PRDX1 vector) was C-terminally tagged with an HA epitope and conferred blasticidin resistance. Lentiviral transduction was performed following the manufacturer's instructions. At 72 h after transduction, puromycin or blasticidin was added to the culture medium at a final concentration of 3-10 μg/mL. After selection in puromycin/blasticidin for 2-3 passages, stably transduced cells were obtained.

Transwell migration and invasion assays
Cell migration and invasion assays were performed using 24-well Transwell insert chambers with 8-μm-pore polycarbonate filters (Corning, USA), coated with Matrigel (BD Biosciences, USA) for invasion assays or left uncoated for migration assays. A total of 150 μL of cell suspension in serum-free medium was added to each upper chamber, while 600 μL of DMEM supplemented with 10% FBS was added to the lower chamber as a chemoattractant. After incubation for 24-36 h, the migrated or invaded cells were fixed with 4% paraformaldehyde (Sangon Biotech) for 15 min and stained with 1% crystal violet (Beyotime) for 30 min. After the cells on the upper surface of the filter were removed, at least five randomly selected microscopic fields of fixed cells per filter were imaged using an inverted phase-contrast microscope. The cells were counted, and the average was calculated.

Cell Counting Kit-8 (CCK-8) analysis
Cells transfected for 24 h with Smart Silencer/siRNA or stably lentivirus-transduced cells were seeded into 96-well plates at a density of 1000 cells per well in triplicate. For each measurement, 10 μL of CCK-8 reagent (Dojindo, Kumamoto, Japan) was added to the 100 μL of culture medium in each well. The cells were subsequently incubated for 2 h at 37 °C, and the optical density was measured at 450 nm using a microplate reader (SpectraMax i3, Molecular Devices, USA).

Colony formation assay
Cells transfected for 24 h with Smart Silencer/siRNA or lentivirus-transduced stable cells were seeded into 6-well plates at a density of 1000 cells per well and incubated for 10-14 days to form colonies. The colonies were fixed and stained, and those with more than 50 cells were counted under a dissecting microscope.

Fluorescence in situ hybridization (FISH) assay
Fluorescence-labeled probes for LINC00460, 18S rRNA, and U6 RNA were designed and synthesized, and FISH experiments were performed using a Ribo™ Fluorescent In Situ Hybridization kit (RiboBio). Images were acquired on a TCS SP2 laser-scanning confocal microscope (Leica Microsystems, Germany).

Isolation of nuclear and cytoplasmic RNA
Nuclear, cytoplasmic and total RNA was isolated using a PARIS™ kit (Thermo Fisher Scientific) following the manufacturer's instructions. After purification and DNase I treatment, RNA from the isolated nuclear and cytoplasmic fractions was reverse transcribed and used for PCR as described above. MALAT1, NEAT1, TUG1 and U6 were used as endogenous controls for the nucleus, while BIRC5 and GAPDH were used as endogenous controls for the cytoplasm.
The primers used for PCR are listed in Additional file 2: Table S2.

Immunofluorescence
Cells were seeded onto coverslips in 24-well plates for 24 h, fixed with 4% paraformaldehyde for 20 min and permeabilized with 0.1% Triton X-100 for 10 min. After blocking in 3% BSA for 30 min, the cells were incubated with E-cadherin (Cat# ab15148, 1:100) or Vimentin (Cat# D21H3, 1:100) antibodies overnight at 4 °C, washed with PBST and then incubated with an Alexa Fluor 549-conjugated anti-goat IgG F(ab')2 fragment (1:200, Invitrogen) for 1 h at room temperature in the dark. The cells were counterstained with DAPI (Beyotime) for 5 min to visualize nuclei and then observed and photographed under a Leica TCS-SP2 laser-scanning confocal microscope.

RNA pull-down assay and liquid chromatography-tandem mass spectrometry (LC-MS/MS)
A biotinylated RNA pull-down assay was conducted using a Target RNA Purification kit (ZEHENG Biotech, Shanghai, China) following the manufacturer's instructions, as previously described [38]. Briefly, CAL-27 cells stably transduced with LINC00460 vector were crosslinked in 1% formaldehyde for 10 min, quenched in glycine buffer for 5 min, washed with cold PBS three times, scraped into 1 mL of lysis buffer and incubated for 10 min. The cell samples were sonicated and then centrifuged, after which the supernatant was transferred to a 2-mL tube, and 50 μL was saved for input analysis. The lysate supernatant was incubated with LINC00460 probes (RiboBio) or a negative-control probe for 3 h at room temperature with rotation; then, 100 μL of streptavidin magnetic beads was added, and the mixture was incubated for 1 h with gentle mixing. The bead/sample mixture was washed twice, after which 10% of the mixture was subjected to RNA purification, while the remaining 90% was subjected to protein purification. After subsequent washes, the pulled-down complexes were analyzed by LC-MS/MS, performed by Applied Protein Technology (Shanghai, China). Proteins identified in the pull-down were subsequently verified by Western blot analysis. RNA purification was performed as described previously [38], and the efficiency of target purification was assessed by qRT-PCR.

RNA immunoprecipitation (RIP)
RIP was performed using an EZ-Magna RIP™ RNA-Binding Protein Immunoprecipitation Kit (Millipore, Billerica, MA, USA) according to the manufacturer's instructions. After cell lysis with RIP lysis buffer, 100 μL of the lysate was incubated with RIP buffer containing magnetic beads conjugated with human anti-HA antibody (Thermo Fisher Scientific) or normal rabbit IgG (Millipore); IgG served as the negative control (NC). Proteinase K buffer was then added to the samples. Finally, the target RNA was extracted and purified for analysis by qRT-PCR. The primers used for PCR are listed in Additional file 4: Table S4.

Chromatin immunoprecipitation (ChIP)
ChIP assays were performed on CAL-27 cells stably transduced with PRDX1-HA vector using a ChIP assay kit (Millipore) according to the manufacturer's instructions. IgG was used as the NC, and an anti-HA antibody (Thermo Fisher Scientific) was used to pull down the promoter regions of the LINC00460, ZEB1, ZEB2 and VIM genes bound by the PRDX1 regulatory element. The DNA fragments were purified with a DNA Clean kit (Beyotime) and used for qPCR with primers for the promoters of LINC00460, ZEB1, ZEB2 and VIM (Additional file 5: Table S5).
The results are presented as fold changes, calculated by dividing the ChIP signals obtained with the anti-HA antibody by those obtained with the IgG control.

Xenograft formation and in vivo metastasis assay
All animal experiments, performed in BALB/c nude mice (4 weeks old) (Shanghai Laboratory Animal Center, Shanghai, China), were conducted in accordance with the appropriate ethical standards and national guidelines. To assess whether LINC00460 knockdown could inhibit tumorigenic capacity in vivo, we established a xenograft tumor mouse model using cholesterol-conjugated LINC00460 siRNA (si-LINC00460) for in vivo siRNA delivery. A total of 1 × 10^6 CAL-27 cells in 100 μL of serum-free DMEM were subcutaneously injected into the left and right dorsal flanks of six mice. Ten days after tumor inoculation, cholesterol-conjugated si-LINC00460 (sequences shown in Additional file 3: Table S3) from RiboBio was used for in vivo siRNA delivery in three mice, while the other three mice were injected with NC siRNA. When the tumors reached approximately 5 mm × 5 mm, siRNA (10 nmol in 0.1 mL of saline buffer per tumor nodule) was injected into the tumor mass once every 3 days for 3 weeks, according to the methods of a previous study [39]. To determine whether LINC00460 overexpression could enhance tumorigenicity in vivo, 1 × 10^6 CAL-27 cells stably transduced with LINC00460 vector or NC cells were subcutaneously injected into the right and left flanks of six mice, respectively. During the xenograft experiments, tumor sizes were monitored with a caliper every 3 days, and tumor volume was calculated as tumor volume = length × width × width / 2. After the animals were sacrificed, the tumor samples were collected and weighed. The samples were embedded in paraffin for hematoxylin and eosin (H&E) staining and immunohistochemistry (IHC) analysis. Because of the weakly invasive character of HNSCC cell lines, we used A549 cells for the animal metastasis assay to verify the in vivo functions of LINC00460 via mouse tail vein injection. We administered tail vein injections of 2 × 10^6 A549 cells stably transduced with LINC00460 vector or NC cells into two groups of eight mice each. After 8 weeks, the mice were sacrificed, and the lungs were collected. The metastatic nodules formed on the lung surfaces were examined by picric acid and neutral aldehyde staining and further analyzed by H&E staining and IHC.

Statistical analysis
All statistical analyses were performed using Statistical Package for the Social Sciences software version 16.0 (SPSS 16.0) and GraphPad Prism 7.0. The data are presented as the mean ± standard deviation (SD) and are representative of at least three independent experiments. Differences among groups were analyzed by one-way analysis of variance (ANOVA) or, for two groups, t-tests. Associations between LINC00460 or PRDX1 mRNA levels and clinical features were assessed using the Mann-Whitney U-test. The correlation between LINC00460 and PRDX1 was determined by Pearson analysis. p values < 0.05 were considered to indicate statistical significance. A brief computational sketch of the tumor-volume calculation and a two-group comparison is given after the next paragraph.

LINC00460 facilitates HNSCC cell proliferation, migration and invasion in vitro
LINC00460 was upregulated in 7 HNSCC cell lines compared with normal oral epithelial cells (p < 0.05). The expression of LINC00460 was highest in HN30 cells and lowest in SCC-9 cells, with CAL-27 cells exhibiting intermediate expression (Fig. 1a).
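As a brief illustration of the tumor-volume formula and two-group comparison described in the Methods above, the following Python sketch uses invented caliper measurements, with scipy standing in for the SPSS/GraphPad analyses the authors actually used; it is a sketch under those assumptions, not the authors' pipeline.

# Minimal sketch: tumor volume = length x width x width / 2 (mm^3),
# compared between two groups with an unpaired t-test. All measurements
# below are hypothetical values invented for illustration.
import numpy as np
from scipy import stats

def tumor_volume(length_mm, width_mm):
    # Caliper-based volume estimate used in the xenograft assays
    return length_mm * width_mm * width_mm / 2.0

# Hypothetical (length, width) caliper readings in mm.
si_linc00460 = [(6.0, 4.5), (5.5, 4.0), (6.2, 4.8)]
nc_sirna = [(9.0, 7.0), (8.5, 6.5), (9.5, 7.2)]

vol_kd = np.array([tumor_volume(l, w) for l, w in si_linc00460])
vol_nc = np.array([tumor_volume(l, w) for l, w in nc_sirna])

t_stat, p_value = stats.ttest_ind(vol_kd, vol_nc)  # two groups -> t-test
print(f"si-LINC00460: {vol_kd.mean():.1f} +/- {vol_kd.std(ddof=1):.1f} mm^3")
print(f"NC siRNA: {vol_nc.mean():.1f} +/- {vol_nc.std(ddof=1):.1f} mm^3")
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")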
Therefore, we performed loss- and gain-of-function experiments with the HN30, SCC-9 and CAL-27 cell lines to elucidate the biological functions of LINC00460 in HNSCC cells. CAL-27 and HN30 cells were transfected with a Smart Silencer specifically targeting LINC00460 (SS-LINC00460), while CAL-27 and SCC-9 cells were stably transduced with LINC00460 vector (Additional file 6: Figure S1A). In addition, A549 and HeLa cells were used for functional verification to determine whether LINC00460 shows similar oncogenic functions in these cell lines (Additional file 6: Figure S1B). The results showed that LINC00460 enhanced cell proliferation and migration in A549 and HeLa cells, as demonstrated by CCK-8 assays (Additional file 6: Figure S1C), colony formation assays (Additional file 6: Figure S1D) and transwell assays (Additional file 6: Figure S1E). The results obtained using A549 and HeLa cells were consistent with those obtained using HNSCC cells.

LINC00460 induced the EMT phenotype and was primarily localized in the cytoplasm
LINC00460 knockdown significantly increased the levels of E-cadherin but decreased those of N-cadherin, Vimentin, ZEB1 and ZEB2 in CAL-27 and HN30 cells, whereas LINC00460 overexpression significantly decreased the levels of E-cadherin but increased those of N-cadherin, Vimentin, ZEB1 and ZEB2 in CAL-27 and SCC-9 cells, as determined by Western blot (Fig. 2a-c) and qRT-PCR analyses (Fig. 2d and e). Moreover, LINC00460 overexpression in CAL-27 cells was accompanied by decreased E-cadherin and increased Vimentin levels in immunofluorescence assays (Additional file 7: Figure S2A). Consistent with the results in HNSCC cells, LINC00460 overexpression significantly decreased the levels of E-cadherin but increased those of N-cadherin and Vimentin in A549 and HeLa cells, as determined by Western blot assays (Additional file 7: Figure S2B). To determine the subcellular localization of LINC00460 in HNSCC cells, we performed cytoplasmic/nuclear fractionation with HN30 and SCC-9 cells and FISH assays with CAL-27 cells. The amount of LINC00460 in the cytoplasm was higher than that in the nucleus, revealing that LINC00460 is predominantly located in the cytoplasm (Fig. 2f and g).

LINC00460 promoted HNSCC cell growth and metastasis in vivo
To identify an siRNA specifically targeting LINC00460 for the animal experiments, the three siRNAs contained in SS-LINC00460 were synthesized, and their silencing efficiency was assessed by qRT-PCR. The results showed that si-LINC00460-1 was the most effective in knocking down LINC00460 expression (Additional file 8: Figure S3). Therefore, si-LINC00460-1 (si-LINC00460) was used for the siRNA treatment in vivo and the subsequent rescue experiments in vitro. Knockdown of LINC00460 by siRNA treatment significantly decreased tumor growth, as shown by the significantly reduced tumor volumes and weights in the knockdown group compared with the control group (Fig. 3a and b). Furthermore, the expression of LINC00460 in xenograft tumor tissues was examined by qRT-PCR, which showed that LINC00460 expression was significantly decreased in si-LINC00460-treated subcutaneous xenografts compared with control xenografts (Fig. 3c). H&E staining and IHC staining of Ki-67 further confirmed the alterations in tumor formation (Fig. 3d). In contrast, significantly larger tumor volumes and weights were observed in mice subcutaneously injected with CAL-27 cells stably transduced with LINC00460 vector (Fig. 3e and f).
The overexpression of LINC00460 in xenograft tumor tissues was confirmed by qRT-PCR (Fig. 3g), and H&E and Ki-67 staining further confirmed the alterations in tumor formation (Fig. 3h). Eight weeks after tail vein injection, the mice were sacrificed, and the metastatic nodules formed on the lung surfaces were examined. As shown in Fig. 3i, the lungs of mice injected with A549 cells stably transduced with LINC00460 vector exhibited significantly increased volumes, a whiter color, more and larger solid nodules, and a firmer texture than the lungs of the control group. The presence of metastatic nodules in the mouse lungs was confirmed by H&E and Ki-67 staining, and the mice injected with LINC00460-transduced A549 cells formed more nodules on their lung surfaces than the control group (Fig. 3j).

PRDX1 physically interacted with LINC00460 and affected HNSCC cell proliferation and migration
To investigate the mechanism by which LINC00460 affects cell proliferation, migration and EMT, we explored the putative RNA-binding proteins (RBPs) interacting with LINC00460 using RNA pull-down assays (Additional file 9: Figure S4A) followed by mass spectrometry (Additional file 9: Figure S4B and C). Western blot analysis performed after the RNA pull-down assay confirmed that PRDX1 is an RBP that binds LINC00460 (Fig. 4a). A PRDX1 lentiviral expression vector (PRDX1 vector) with an HA tag (PRDX1-HA) was then constructed (Additional file 9: Figure S4D). CAL-27 and HN30 cells stably transduced with PRDX1-HA exhibited dramatically increased expression of PRDX1, as determined by Western blot analysis (Fig. 4b and Additional file 9: Figure S4E) and qRT-PCR (Fig. 4c and Additional file 9: Figure S4F). The interaction between PRDX1 and LINC00460 was confirmed in CAL-27 and HN30 cells transduced with PRDX1-HA by RIP assays, and the fourth RIP primer (P4) produced positive amplification (Fig. 4d and Additional file 9: Figure S4G). The possible binding sites between PRDX1 and LINC00460 were predicted with the Protein-RNA Interaction predictor (PRIdictor, http://bclab.inha.ac.kr/pridictor) [40]. A possible binding site on PRDX1 for LINC00460 was located at the lysine (K) residue at amino acid (aa) position 120 (Additional file 9: Figure S4H). A PRDX1 mutant vector (PRDX1-Mut) was constructed based on the predicted LINC00460-binding site (Fig. 4e). After the K at aa 120 was mutated to arginine (R), the ability of PRDX1 to bind LINC00460 was significantly weakened, suggesting that the K at aa 120 of PRDX1 is important for the interaction with LINC00460 (Fig. 4f and Additional file 9: Figure S4I). In the PRIdictor database, there were four putative PRDX1-binding sites on LINC00460 (Additional file 9: Figure S4J and K). Based on the results obtained with the fourth RIP primer, we speculated that nucleotide 323 of LINC00460 might be responsible for binding with PRDX1. Therefore, a LINC00460 mutant vector (LINC00460-Mut) was constructed based on the predicted PRDX1-binding site (U at nucleotide 323) (Fig. 4g), and RNA pull-down assays were performed on CAL-27 cells transfected with LINC00460-WT or LINC00460-Mut vectors. Minimal PRDX1 could be pulled down by the LINC00460 probe in CAL-27 cells transfected with the LINC00460-Mut vector, suggesting that the nucleotide at position 323 of LINC00460 may be responsible for the ability of LINC00460 to bind PRDX1 (Fig. 4h). PRDX1 was observed to be upregulated in 7 HNSCC cell lines by qRT-PCR (Fig.
4i) and Western blot assays (Fig. 4j). siRNAs specifically targeting PRDX1 (si-PRDX1) were synthesized, and their silencing efficiency was assessed by qRT-PCR in CAL-27 and HN30 cells (Fig. 4k and Additional file 10: Figure S5A) and by Western blotting (Additional file 10: Figure S5B and C); the results showed that si-PRDX1-1 was the most effective in knocking down PRDX1 expression. The results of the CCK-8 (Fig. 4l and Additional file 10: Figure S5D) and colony formation assays (Fig. 4m and Additional file 10: Figure S5E) showed that PRDX1 knockdown suppressed cell proliferation in CAL-27 and HN30 cells (p < 0.05). In addition, the results of the transwell assays demonstrated that PRDX1 knockdown suppressed cell migration in both CAL-27 and HN30 cells (p < 0.05) (Fig. 4n and Additional file 10: Figure S5F).

LINC00460 facilitated PRDX1 entry into the nucleus, and PRDX1 promoted the transcription of LINC00460 and EMT-related genes
PRDX1 knockdown significantly increased the levels of E-cadherin and decreased the levels of N-cadherin, Vimentin, ZEB1 and ZEB2 in CAL-27 and HN30 cells, whereas the levels of E-cadherin were decreased and the levels of N-cadherin, Vimentin, ZEB1 and ZEB2 were increased in CAL-27 and HN30 cells stably transduced with PRDX1, as determined by Western blot analysis (Fig. 5a, b and Additional file 11: Figure S6A, B). PRDX1 knockdown significantly decreased the levels of N-cadherin, Vimentin, ZEB1 and ZEB2 in CAL-27 (Fig. 5c) and HN30 cells (Additional file 11: Figure S6C), whereas PRDX1 overexpression significantly increased the levels of N-cadherin, Vimentin, ZEB1 and ZEB2 in CAL-27 (Fig. 5d) and HN30 cells (Additional file 11: Figure S6D), as determined by qRT-PCR assays. Moreover, the expression of LINC00460 decreased when PRDX1 was silenced and increased when PRDX1 was overexpressed in CAL-27 (Fig. 5e) and HN30 cells (Additional file 11: Figure S6E), as determined by qRT-PCR. However, altered expression of LINC00460 failed to affect the mRNA levels (Additional file 11: Figure S6F and G) or protein levels (Additional file 11: Figure S6H) of PRDX1. Importantly, we observed that PRDX1 was enriched in the nucleus and depleted in the cytoplasm when LINC00460 was overexpressed, suggesting that LINC00460 facilitates PRDX1 entry into the nucleus (Fig. 5f and Additional file 11: Figure S6I). To further clarify how PRDX1 promotes the expression of LINC00460 and EMT-related genes, ChIP assays were performed in CAL-27 cells transduced with PRDX1-HA (Additional file 11: Figure S6J). Specific primers for the 1000 bp regions upstream of the promoters of LINC00460, ZEB1, ZEB2 and VIM were designed (Additional file 11: Figure S6K), and all primers had good specificity (Additional file 11: Figure S6L). The ChIP results showed that PRDX1 was enriched at the LINC00460 promoter fragment (Fig. 5g). Moreover, compared with the NC group, overexpression of LINC00460 effectively promoted the enrichment of PRDX1 at the promoter regions of the ZEB1, ZEB2 and VIM genes (Fig. 5h). These results showed that PRDX1 promoted the transcription of LINC00460 and EMT-related genes (such as ZEB1, ZEB2 and VIM) through enrichment at gene promoters in the nucleus.

PRDX1 mediated the function of LINC00460 to promote the proliferation and metastasis of HNSCC cells
Silencing of PRDX1 using si-PRDX1-1 (si-PRDX1) in CAL-27 cells significantly blocked the ability of LINC00460 to promote cell proliferation (Fig. 6a), colony formation (Fig. 6c) and migration (Fig. 6e).
Knockdown of LINC00460 using si-LINC00460-1 (si-LINC00460) in CAL-27 cells also dramatically suppressed the ability of PRDX1 to promote cell proliferation (Fig. 6b), colony formation (Fig. 6d) and migration (Fig. 6f). These results showed that LINC00460 affected HNSCC cell proliferation and migration in a PRDX1-dependent manner.

Upregulation of LINC00460 and PRDX1 correlated with poor clinicopathologic features in HNSCC patients
The expression of LINC00460 and PRDX1 was measured by qRT-PCR in 123 paired HNSCC tissues and adjacent normal tissues. The results clearly demonstrated that LINC00460 levels in HNSCC tissues were significantly higher than those in adjacent normal tissues (Fig. 7a and b), similar to the findings for PRDX1 (Fig. 7c and d). LINC00460 expression was positively associated with lymph node metastasis (p < 0.05) (Fig. 7e and Table 1), whereas PRDX1 expression was positively associated with tumor size (p < 0.05) (Fig. 7f and Figure S7A and B). The expression of PRDX1 in tumor tissue was significantly higher than that in normal oral mucosa, as determined by IHC (Fig. 7g). Furthermore, LINC00460 expression was positively correlated with PRDX1 expression in HNSCC tissues (Fig. 7h).

Discussion
EMT is usually associated with tumor initiation, malignant progression, cell migration and tumor metastasis, and it is often defined by downregulated expression of epithelial markers (such as E-cadherin) and increased expression of mesenchymal markers (such as N-cadherin and Vimentin) [41]. Moreover, EMT-associated transcription factors (TFs), such as the SNAI (SNAI1/Snail and SNAI2/Slug), ZEB (ZEB1 and ZEB2), and TWIST (TWIST1 and TWIST2) nuclear proteins, can repress E-cadherin expression and regulate the EMT process via different signaling pathways [42]. LncRNAs have been revealed to play essential roles in regulating EMT, so they are often considered promising biomarkers and therapeutic targets for EMT and metastasis [43]. Previous studies have suggested that LINC00460 primarily affects cell invasion and migration in cancers such as esophageal cancer [21], epithelial ovarian cancer [27], colorectal cancer [23,24] and gastric cancer [28], and it has been reported that LINC00460 induces EMT in lung cancer cells [16,17]. Our study demonstrated that LINC00460 significantly enhanced cell proliferation, metastasis and the EMT phenotype in HNSCC cells. Moreover, the functions of LINC00460 in A549 and HeLa cells were consistent with those observed in HNSCC cells in our study and previous studies [18]. Owing to the lack of HNSCC cell lines invasive enough for a small-animal pulmonary metastasis model or an EMT induction model, other highly invasive cell lines, such as MDA-MB-231 cells [44,45] and A549 cells [46], have usually been used in place of HNSCC cell lines, and these can partially reveal the metastatic characteristics of HNSCC cells. Because a pulmonary metastasis model based on tail vein injection of A549 cells has been successfully established [47][48][49], and because previous studies have demonstrated similar biological characteristics between A549 cells and HNSCC cells [50][51][52][53][54], we used A549 cells for the animal pulmonary metastasis assay to verify the in vivo functions of LINC00460. Although A549 cells transduced with LINC00460 cannot completely recapitulate the prometastatic effects of LINC00460 in HNSCC cells, their use in the pulmonary metastasis model remains valuable.
Because LINC00460 is primarily localized in the cytoplasm, studies have shown that it associates with a number of biomolecules, such as TFs, mRNAs, miRNAs and RBPs, to affect cancer development [15]. The previously reported mechanisms of LINC00460 in cancers are highly variable and sometimes contradictory. Some studies have investigated the effects of miRNAs on LINC00460 and its functions, revealing that LINC00460 promotes cell proliferation and migration by upregulating the expression of the miR-149-5p-targeted gene IL6 in nasopharyngeal carcinoma [25] and CUL4A in colorectal cancer [23], by targeting miR-342-3p/KDM2A in gastric cancer [28], by regulating miR-338-3p in epithelial ovarian cancer [27], by targeting miR-302c-5p/FOXA1 in human lung adenocarcinoma [18], by sponging miR-613 in papillary thyroid carcinoma [26], and by targeting miR-539/MMP-9 in meningioma [31]. In HNSCC cells, LINC00460 affects STC2 and promotes autophagy by regulating miRNA-206 [35]. Other studies have confirmed that LINC00460 can regulate gene expression through interactions with RBPs. One study reported that LINC00460 interacts with hnRNPK to promote EMT and cell migration in lung cancer cells [16]. Others demonstrated that LINC00460 exerts its oncogenic effects via the LINC00460/EZH2/KLF2 signaling axis in colorectal cancer cells [23] and that CBP/P300 binds to the LINC00460 promoter to activate LINC00460 transcription via histone acetylation, promoting carcinogenesis in esophageal cancer cells [21]. However, the precise mechanisms by which LINC00460 affects cancer cell development and progression remain to be fully elucidated. In our study, we discovered that PRDX1 is an RBP partner of LINC00460 that directly interacts with LINC00460 to affect proliferation, migration and EMT in HNSCC cells. PRDX1 is a major 2-Cys member of the peroxiredoxin family that plays important roles in cell proliferation, differentiation, and apoptosis under stress conditions and is associated with poor prognosis in cancers [55]. PRDX1 has previously been identified as an RBP [56,57], and predicted binding sites for LINC00460 and PRDX1 are listed in the PRIdictor database.

[Figure 5 legend: a, b Western blot analysis of EMT markers (E-cadherin, N-cadherin, Vimentin, ZEB1 and ZEB2) in CAL-27 cells with PRDX1 knockdown or overexpression. c, d qRT-PCR analysis of EMT-associated genes in CAL-27 cells with PRDX1 knockdown (c) or overexpression (d). e qRT-PCR analysis of LINC00460 expression in CAL-27 cells with PRDX1 knockdown or overexpression. f Western blot analysis of PRDX1 in nuclear and cytoplasmic fractions of CAL-27 cells transfected with SS-LINC00460 or LINC00460 vector. g ChIP-PCR analysis of anti-HA- or IgG-immunoprecipitated LINC00460 promoter fragments from CAL-27 cells stably transduced with PRDX1-HA. h ChIP-PCR analysis of anti-HA- or IgG-immunoprecipitated ZEB1, ZEB2 and VIM promoter fragments from CAL-27 cells stably transduced with PRDX1-HA, with (LINC00460) or without (NC) LINC00460 overexpression. *p < 0.05, **p < 0.01, ***p < 0.001, ****p < 0.0001; ns, no significance.]

When the predicted binding sites (the K at aa 120 of PRDX1 or nucleotide positions 320-326 of LINC00460) were mutated, the binding capacity between PRDX1 and LINC00460 decreased, suggesting that these predicted sites play an important role in maintaining the binding of PRDX1 and LINC00460. In our study, after PRDX1 knockdown in LINC00460-overexpressing cells or LINC00460 knockdown in PRDX1-overexpressing cells, cell proliferation and migration were significantly suppressed. When LINC00460 or PRDX1 was knocked down or overexpressed in HNSCC cells, the expression of EMT-associated genes was significantly altered. Furthermore, the expression of LINC00460 increased when PRDX1 was upregulated and decreased when PRDX1 was knocked down in HNSCC cells. PRDX1 was also enriched in the nucleus when LINC00460 was overexpressed, although total PRDX1 protein levels did not change. These findings suggest that LINC00460 physically interacts with PRDX1 and facilitates PRDX1 entry into the nucleus. The results of ChIP assays showed that PRDX1 was enriched at the LINC00460 promoter fragment spanning nucleotide positions -24 to -284 upstream of the transcription start site (TSS). Moreover, overexpression of LINC00460 enhanced the enrichment of PRDX1 at the ZEB1 promoter fragment spanning positions -283 to -586 upstream of the TSS, the ZEB2 promoter fragment spanning positions -103 to -296 upstream of the TSS, and the VIM promoter fragment spanning positions -551 to -779 upstream of the TSS.

[Figure 6 legend: a, b Growth curves measured by CCK-8 assay after PRDX1 knockdown in CAL-27 cells stably transduced with LINC00460 vector (a) and after LINC00460 knockdown in CAL-27 cells stably transduced with PRDX1 vector (b). c, d Colony formation assays under the same conditions. e, f Migration assessed by transwell assays under the same conditions. *p < 0.05, ***p < 0.001, ****p < 0.0001; ns, no significance.]

Thus, PRDX1 promoted the transcription of LINC00460 and EMT-associated genes through enrichment at gene promoters in the nucleus. PRDX1 is an antioxidant protein that regulates cell growth, differentiation, apoptosis, and other functions and can affect gene regulation by associating with various TFs, including c-Myc, NF-κB, and AR, in the nucleus [55,58,59]. These results suggest that the enhancement of LINC00460 transcription by PRDX1 may occur through its interaction with TFs. As a result, LINC00460 effectively induced EMT in HNSCC cells in a PRDX1-dependent manner, and PRDX1 largely mediated the EMT-promoting effect of LINC00460 (Fig. 7i). In our previous study, LINC00460 was found to be associated with the prognosis of HNSCC [13]. LINC00460 is positively correlated with advanced TNM stage and lymph node metastasis in papillary thyroid carcinoma [26] and with invasion depth and TNM stage in colorectal cancer [24]. Our study showed that LINC00460 expression was associated with lymph node metastasis and pathological differentiation, consistent with findings in esophageal cancer [21]. Aberrant PRDX1 expression occurs in numerous cancers [55].
PRDX1 has been reported to be an independent prognostic factor for disease recurrence and reduced survival in non-small-cell lung cancer [60] and gastric cancer [61]. PRDX1 is overexpressed in oral leukoplakia and oral cancers [62] and is associated with local recurrence, which may be clinically useful in guiding treatment for HNSCC patients [63,64]. Our results also showed that the expression of PRDX1 was associated with tumor size and was positively correlated with the expression of LINC00460. Thus, the association between LINC00460 and PRDX1 may serve as a biomarker for accurate prognosis and a potential target for cancer therapy in the context of HNSCC.

Conclusions
In this study, LINC00460 was shown to promote HNSCC cell proliferation and metastasis in vitro and in vivo. LINC00460 localized primarily within the cytoplasm, physically interacted with PRDX1 and facilitated PRDX1 entry into the nucleus in HNSCC cells. In the nucleus, PRDX1 promoted the transcription of LINC00460 and EMT-related genes through enrichment at gene promoters. LINC00460 effectively induced the EMT phenotype in HNSCC cells in a PRDX1-dependent manner, and PRDX1 largely mediated the EMT-promoting effect of LINC00460. LINC00460 expression in HNSCC was correlated with lymph node metastasis and pathological differentiation, while PRDX1 expression was correlated with tumor size. LINC00460 and PRDX1 may serve as biomarkers for accurate prognostic prediction and as potential targets for cancer therapy in HNSCC patients.
Nutritional Partitioning among Sympatric Ungulates in Eastern Tibet

Simple Summary
Alpine musk deer, red serow, and white-lipped deer coexist in the Nyenchen Tanglha Mountains of Tibet. We aimed to understand the mechanisms of their coexistence by studying their dietary preferences using DNA barcoding. All of the species exhibited broad dietary ranges with distinct food preferences. Furthermore, our findings revealed genus-level dietary specializations and the mechanisms facilitating their coexistence. The results of this study provide valuable insights for the development and implementation of effective conservation strategies and management measures in the local area.

Abstract
Wild ungulates play crucial roles in maintaining the structure and function of local ecosystems. The alpine musk deer (Moschus chrysogaster), white-lipped deer (Przewalskium albirostris), and red serow (Capricornis rubidus) are widely distributed throughout the Nyenchen Tanglha Mountains of Tibet. However, research on the mechanisms underlying their coexistence in the same habitat remains lacking. This study investigated the mechanisms underlying the coexistence of these species based on their dietary preferences, determined through DNA barcoding of fecal samples collected in the study area. These species consume a wide variety of food types: the alpine musk deer, white-lipped deer, and red serow consume plants belonging to 74 families and 144 genera, 62 families and 122 genera, and 63 families and 113 genera, respectively. Furthermore, significant differences were observed in the nutritional ecological niches of these species, manifested primarily in the differentiation of food types and in food selection at the genus level. Owing to differences in social behavior, body size, and habitat selection, these three species further expand their differentiation in resource selection, thereby making more efficient use of environmental resources. Our findings indicate that these factors are the primary reasons for the stable coexistence of these species.

Introduction
Wild ungulates often play an important role in maintaining the structure and function of ecosystems and are commonly regarded as ecological indicators of the health status of forest ecosystems [1]. Furthermore, wild ungulates are often important components of grassland and woodland food webs and frequently exert a degree of direct and indirect influence on the composition, structure, energy flow, and community succession patterns of forest vegetation through feeding, trampling, and excretion [2]. In addition, a widespread decrease in the populations of top predators and the implementation of increasingly stringent animal conservation measures may contribute to an increase in the number of wild ungulates in the same area, potentially leading to intensified intra- and inter-species competition [3]. Moreover, wild ungulates can potentially damage farmland and compete with livestock for resources, which may in some cases escalate human-wildlife conflicts. Notably, intense intra- and inter-species competition, along with potential human-wildlife conflicts, may pose new challenges for conservation and management efforts [4].
The mechanism underlying the coexistence of closely related species in overlapping ecological niches is an important research topic in animal ecology. Studying the occurrence and maintenance mechanisms of species coexistence is important for community ecology research [5]. Various hypotheses and theories have been proposed to elucidate the mechanisms of species coexistence, including the niche differentiation hypothesis [6], neutral theory [7], adaptive boundary theory [8], the environmental heterogeneity hypothesis [9], and the competitive exclusion principle [10]. The competitive exclusion principle has been widely applied in the study of coexistence mechanisms [11]. Competition occurs in environments with limited resources when two or more species share the same resources and have significant ecological niche overlap. The core concept of competitive exclusion is competitive pressure: under intense competition, species that are better adapted to resource utilization have a higher chance of survival and reproductive success, while species that are less well adapted may be excluded from habitats or forced to seek alternative resource-utilization strategies or habitats [12]. Furthermore, competitive exclusion is closely related to niche differentiation. Niche differentiation occurs in environments with intense competition, wherein species avoid direct competition by adopting different resource-utilization strategies or occupying different ecological niches, thereby achieving coexistence [13]. This differentiation can be achieved through adaptive changes in species morphology, behavior, and feeding habits. For example, many studies have shown significant overlap in food resources among coexisting species [14]. Investigating the dietary composition and nutritional niche overlap among small, medium, and large ungulate species within the same habitat can reveal the occurrence and maintenance mechanisms of species coexistence within food webs [15]. This suggests that food, as a crucial resource, can be shared and competed for, and that its availability fluctuates. Understanding the adaptive strategies that allow these ungulate species to coexist despite overlapping resource use is an important part of understanding their community dynamics.

The alpine musk deer (Moschus chrysogaster) [16] and red serow (Capricornis rubidus) [17] are classified as browsers, whereas the white-lipped deer (Przewalskium albirostris) [18] does not exhibit a clear browsing preference. All three species feed primarily on plants, and their nutritional niches are relatively similar. In the wild, these species often exhibit overlapping ranges. Exploring the relationship between feeding habits and nutritional niches is important for the conservation and management of forest ungulates, as well as for revealing the mechanisms of coexistence among ungulate species in the same habitat.
Many methods are used to study wildlife diets. The simplest traditional approach is direct observation of feeding behavior, which provides valuable information about an animal's diet. However, observing the feeding habits of elusive, timid, and nocturnal wildlife species is challenging in the wild and requires substantial effort and resources [19]. Consequently, alternative methods have emerged, such as food residue analysis, stomach content examination, fecal microanalysis, and stable isotope analysis. The fecal microanalysis technique, which offers advantages such as convenient sampling, non-invasiveness, and lower cost, has been widely used to study wildlife dietary habits since the 1970s [20]. Yet this method has limitations: it requires experienced examiners for microscopic observation and is time- and labor-intensive. Additionally, the identification resolution may be low because of morphological similarities among plant cells and variation in how different plant species are digested [21].

With the advancement of molecular biology, researchers have employed molecular techniques to analyze wildlife fecal samples, overcoming the limitations of traditional dietary research methods, which often suffer from low resolution and high cost and effort [21]. The molecular approach allows accurate, rapid, and straightforward acquisition of animal dietary data, to a certain extent compensating for the shortcomings of traditional methods. DNA barcoding [22][23][24][25][26] has already been applied in studies of the dietary composition of wild animals.

Three ungulate species, the alpine musk deer, white-lipped deer, and red serow, coexist in the Nyenchen Tanglha Mountains of Tibet; however, research on the mechanisms underlying their coexistence is lacking. We investigated the dietary compositions of these three ungulate species, determined whether there is significant dietary partitioning or overlap among them, and thereby assessed whether dietary composition is one of the mechanisms underlying their coexistence in this area. We hypothesize that the three ungulate species exhibit distinct nutritional partitioning in their diets and that this nutritional partitioning serves as one of the mechanisms facilitating their coexistence.

Study Area
The study area (93.906983° E-94.812076° E and 31.036659° N-31.462707° N) is located in Eastern Tibet, northwest of Chamdo City, at the northern foot of the Nyenchen Tanglha Mountains. This region is characterized by rugged mountains with an average elevation of over 4000 m and is adjacent to the Nujiang River system. The climate is highland temperate semi-humid. The warm season in the study area lasts from July to October, with average temperatures above 8-15 °C; this is also the period when thunderstorms and hailstorms are more common in the region. The vertical zonation is pronounced in this area. Owing to limited human activities and spontaneous animal conservation efforts by the Tibetan people for religious reasons, this region has abundant flora and fauna. In addition to the alpine musk deer, red serow, and white-lipped deer, this region also harbors other animal species, such as Tibetan brown bears (Ursus arctos pruinosus), snow leopards (Panthera uncia), and wild yaks (Bos mutus). The forest types in the research area are mainly mixed coniferous and broad-leaved forests and pure coniferous forests. The main tree species include Asian white birch (Betula
platyphylla), purple cone spruce (Picea purpurea), Chinese weeping cypress (Cupressus funebris), willow (Salix sp.), and Sikang pine (Pinus densata).

Sample Collection and Preservation
To investigate whether the three ungulate species display significant dietary differences that could alleviate competitive pressure, we collected fecal samples from the alpine musk deer, white-lipped deer, and red serow in the study area. Because of the high elevation of the research area, fecal samples can easily be covered by heavy snow during cold periods, which would make sample collection difficult and dangerous. We therefore conducted fecal sample collection from 20 July 2023 to 3 October 2023. Twelve vertical transect lines, with a total length of 72.35 km, were designed in consultation with local Tibetan guides. These transects spanned an elevation range of 3600 to 4200 m, encompassing the major habitat types present within the study area. The sampling area exhibited distinct vertical zonation, with habitat types arranged from lower to higher elevations as follows: mixed coniferous and broad-leaved forests, coniferous forests, alpine shrub meadows, and alpine meadows. The habitat type of each collected sample was determined by combining this information with photographs of the sampling sites. The collected fecal samples were labeled with the date of collection, freshness, coordinates, and elevation, placed in 25 mL sterile centrifuge tubes with color-changing silica gel, and stored in a -20 °C freezer. In total, 170 suspected ungulate fecal samples were collected. To provide reference material for the subsequent identification of dietary plants, we also collected plant samples from the vicinity of the fecal sampling sites. The plant collection bags consisted of size 10 self-sealing bags, kraft paper envelopes, and color-changing silica gel. The plant samples were stored in well-ventilated, cool, dry places.

Species Identification from Fecal Samples
To identify the species of origin of the collected fecal samples, we carried out the following procedures: three milliliters of each fecal sample was transferred into a 15 mL centrifuge tube with 5 mL of phosphate-buffered saline (PBS), and the tube was shaken vigorously for 2 min to facilitate the detachment of intestinal cells from the fecal surface. Total DNA was extracted from the fecal samples using a TIANamp Genomic DNA Kit (Cat. No. 4992254; Tiangen, Beijing, China). DNA extraction results were examined using 1% agarose gel electrophoresis. The extracted DNA was stored in a freezer at -20 °C.
The universal vertebrate primers 16S-F/R [27] were used to amplify an approximately 550 bp fragment of the ungulate mitochondrial 16S rRNA gene [27]. Each 25 µL reaction contained 3 µL DNA template, 12.5 µL 2× Premix Taq (Tiangen), 1 µL each of forward and reverse primers (10 µM), 3.5 µL ddH2O, and 4 µL bovine serum albumin (20 µg/µL, A8010, Solarbio, Beijing, China). The polymerase chain reaction (PCR) conditions were an initial denaturation at 95 °C for 5 min, followed by 30 cycles of denaturation at 94 °C for 30 s, annealing at 49 °C for 30 s, and extension at 72 °C for 45 s, with a final extension at 72 °C for 5 min. Successfully amplified DNA samples were sequenced (SinoGenoMax Limited Company, Beijing, China) on an ABI 3730XL sequencing instrument (Applied Biosystems Inc., Foster City, CA, USA). The raw sequences were trimmed and assembled using Geneious Prime 2022.0.1 (Biomatters Ltd., Auckland, New Zealand) to obtain aligned sequences approximately 126 bp in length. Using the Basic Local Alignment Search Tool (BLAST) provided by the National Center for Biotechnology Information (NCBI), the alignment-ready sequences were matched against the GenBank database for online species identification. Each fecal sample was assigned to the species of the best-matching sequence, requiring 100% coverage and ≥98% similarity to the query sequence (a small filtering sketch implementing this criterion appears at the end of this section).

Dietary Identification

DNA Extraction
To ensure that the dietary identification results accurately reflected the animals' diets during the study period, we selected fresh fecal samples (those with moist surfaces) from the successfully identified samples for DNA extraction. An E.Z.N.A. Soil DNA Kit (Omega Bio-tek, Inc., Norcross, GA, USA) was used to extract genomic DNA from the fecal samples. The quality and concentration of the DNA were measured using a NanoDrop 2000 spectrophotometer (Thermo Fisher Scientific, Inc., Waltham, MA, USA). The DNA samples were stored at -20 °C.

PCR
The chloroplast rbcl2 region was amplified using the universal primers rbcl2 F/R (5′-CTTACCAGYCTTGATCGTTACAAAGG-3′)/(5′-GTAAAATCAAGTCCACCRCG-3′). To distinguish between samples, an 8 bp barcode sequence was added to the 5′ end of both the forward and reverse primers. Amplification was performed on an ABI 9700 PCR instrument (Applied Biosystems Inc., Foster City, CA, USA). Each 25 µL reaction contained 2 µL DNA template (30 ng total DNA), 1 µL each of forward and reverse primers (5 µM each), 3 µL bovine serum albumin (2 ng/µL), 12.5 µL 2× Taq Plus Master Mix, and 5.5 µL ddH2O. The PCR conditions were as follows: initial denaturation at 94 °C for 5 min, followed by 35 cycles of denaturation at 94 °C for 30 s, annealing at 55 °C for 30 s, and extension at 72 °C for 60 s, with a final extension at 72 °C for 7 min. The amplified PCR products were checked for band size by 1% agarose gel electrophoresis at 170 V for 30 min. The PCR products were then purified using Agencourt AMPure XP beads (Beckman Coulter, Inc., Indianapolis, IN, USA). Sequencing was performed on an Illumina MiSeq/NovaSeq 6000 platform (Illumina, Inc., Hayward, CA, USA) using a paired-end (PE) sequencing strategy with read lengths of 250 (PE250) or 300 bases (PE300).
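As a rough sketch of the species-assignment criterion described above (best BLAST hit with 100% query coverage and at least 98% identity), the following Python fragment filters tabular BLAST output; the file name and the outfmt-6 column layout with a qcovs field are assumptions made for illustration, not part of the paper's actual pipeline.

# Minimal sketch: keep, per query sequence, the best-scoring BLAST hit
# that meets the thresholds above (100% coverage, >= 98% identity).
# Assumes output from:
#   blastn ... -outfmt "6 qseqid sseqid pident qcovs bitscore stitle"
# The file name "blast_hits.tsv" is hypothetical.
import csv

def assign_species(path, min_identity=98.0, min_coverage=100.0):
    best = {}  # query id -> (bitscore, hit title)
    with open(path, newline="") as fh:
        for qseqid, sseqid, pident, qcovs, bitscore, stitle in csv.reader(fh, delimiter="\t"):
            if float(pident) >= min_identity and float(qcovs) >= min_coverage:
                if qseqid not in best or float(bitscore) > best[qseqid][0]:
                    best[qseqid] = (float(bitscore), stitle)
    return {q: title for q, (_, title) in best.items()}

for sample, species in assign_species("blast_hits.tsv").items():
    print(sample, "->", species)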
Dietary Data Analysis
The sequencing runs yielded paired-end (PE) data, and the resulting FASTQ data were subjected to quality control to obtain high-quality reads. The FASTQ data were first split into samples based on the barcode sequences. Pear software (version 1.8.0) was used for quality control by removing sequences with ambiguous bases and primer mismatches, and bases with quality values < Q20 were trimmed. The PE reads were merged based on their overlap, with a minimum overlap of 10 bp and a p-value cutoff of 0.0001. This process generated FASTA sequences, from which chimeric and short sequences were removed using Vsearch software (version 2.15.0). Subsequently, the high-quality sequences were clustered into operational taxonomic units (OTUs) at a sequence similarity threshold of 97%. To assign taxonomic information to each OTU, sequences were aligned against the NCBI database using the BLAST algorithm.

The principles for species identification based on sequence comparison were as follows: (1) When the identity in the comparison results was <95%, the sequence was recorded as unidentified. (2) When the identity was >95% and none of the matched species were recorded in the local species distribution, the species with the highest identity were selected, and the identification was recorded as the lowest taxonomic unit that encompassed all species with the highest identity and was consistent with local species records. (3) When the identity was >95% and <98%, the species with the highest identity that also matched the local species distribution records were selected, and the identification was recorded as the lowest taxonomic unit encompassing all such local species. (4) When the identity was >98%, identification was made at the species level if only one species matched the local distribution records; at the genus level if multiple species matched; and at the genus level if no species matched, indicating that the identified taxon belonged to the same genus as those recorded locally. Based on these principles, each sequence was subjected to species identification; these rules are rendered as a small decision function in the sketch below. Species identification information was obtained from iPlant (https://www.iplant.cn/) (accessed on 28 October 2023), iFlora (http://www.iflora.cn/) (accessed on 28 October 2023), and Species 2000 (http://col.especies.cn/) (accessed on 28 October 2023), as well as from a plant catalog compiled from plant collection and identification within the research area.
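To show how the four rules compose, the sketch below encodes them as a single decision function; representing hits as binomial name strings and supplying the local checklist as a set are simplifying assumptions for illustration, not the authors' actual data structures, and the example species are hypothetical.

# Minimal sketch of the four identification rules described above.
def lowest_common_unit(names):
    # Collapse binomials to a genus when they share one; otherwise
    # report an unresolved higher-level unit.
    genera = sorted({n.split()[0] for n in names})
    return genera[0] if len(genera) == 1 else "higher taxon of " + "/".join(genera)

def identify(identity, top_hits, local_species):
    if identity < 95.0:
        return "unidentified"                           # rule 1
    local_hits = [sp for sp in top_hits if sp in local_species]
    if not local_hits:
        return lowest_common_unit(top_hits)             # rules 2/4, no local match
    if identity < 98.0:
        return lowest_common_unit(local_hits)           # rule 3
    if len(local_hits) == 1:
        return local_hits[0]                            # rule 4, unique local match
    return lowest_common_unit(local_hits)               # rule 4, several local matches

local = {"Salix oritrepha", "Salix sclerophylla", "Picea purpurea"}
print(identify(99.1, ["Salix oritrepha", "Salix sclerophylla"], local))  # Salix
print(identify(96.0, ["Picea purpurea"], local))                         # Picea
print(identify(90.0, ["Pinus densata"], local))                          # unidentified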
Data Statistical Analysis
The spatial overlap among the three species was quantified using the Minimum Convex Polygon (MCP) method, which helped delineate their spatial distributions and assess the potential for coexistence. The Shannon, Simpson, and Pielou's evenness indices were calculated to measure the dietary diversity of the three species. Niche breadth was calculated using Levins' niche breadth index. Relative abundance (the abundance of a species in a sampling unit divided by the total abundance of all species in that sampling unit) was used to measure the dietary habits of the three species. Ecological niche overlap was measured using Pianka's overlap index. Individual specialization was assessed using the individual specialization index (the ratio of average individual niche breadth to total population niche breadth). The individual specialization index is a dimensionless index that ranges from one, when all individuals consume the same prey in the same proportions (no individual specialization), down to zero, when each individual uses a unique type of prey (maximal individual specialization) [28]. AMOVA (analysis of molecular variance) was used to detect significant differences in dietary patterns among the species. ANOVA (analysis of variance) was employed to examine significant variations in biodiversity indices, niche breadths, and elevation ranges. Kruskal-Wallis analysis was utilized to identify significant differences in the food types consumed. ANOSIM (analysis of similarities) was applied to determine significant differences in dietary compositions between and within species. Several of these indices are illustrated in a short computational sketch following the next paragraph.

Spatial Distribution
A total of 35 fecal samples from alpine musk deer were successfully identified. Of these, 6 were collected from alpine meadows, 20 from alpine shrub meadows, 7 from coniferous forests, and 2 from mixed coniferous and broad-leaved forests. Additionally, 22 samples were collected from white-lipped deer, including 2 from alpine meadows, 14 from alpine shrub meadows, 5 from coniferous forests, and 1 from mixed coniferous and broad-leaved forests. For red serow, 15 samples were collected, with 7 from alpine shrub meadows, 4 from coniferous forests, and 4 from mixed coniferous and broad-leaved forests (Figure 1a).
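To make the metrics listed above concrete, the following minimal Python sketch computes the Shannon, Simpson (Gini-Simpson form, assumed here since the paper does not state the exact form), Pielou's evenness, Levins' niche breadth, and Pianka's overlap indices from relative abundance vectors; the three vectors are invented for illustration and do not reproduce the paper's data.

# Minimal sketch of the diversity and niche metrics named in the
# statistical analysis section, computed from genus-level relative
# abundances. The abundance vectors are hypothetical.
import numpy as np

def shannon(p):
    p = p[p > 0]
    return -np.sum(p * np.log(p))

def simpson(p):
    return 1.0 - np.sum(p ** 2)  # Gini-Simpson form (an assumption)

def pielou(p):
    s = np.count_nonzero(p)
    return shannon(p) / np.log(s) if s > 1 else 0.0

def levins(p):
    return 1.0 / np.sum(p ** 2)  # Levins' niche breadth

def pianka(p, q):
    # Pianka's niche overlap between two relative-abundance vectors
    return np.sum(p * q) / np.sqrt(np.sum(p ** 2) * np.sum(q ** 2))

# Hypothetical relative abundances over five food genera.
musk_deer = np.array([0.40, 0.30, 0.20, 0.05, 0.05])
white_lipped = np.array([0.25, 0.25, 0.20, 0.20, 0.10])
red_serow = np.array([0.10, 0.15, 0.25, 0.30, 0.20])

for name, p in [("alpine musk deer", musk_deer),
                ("white-lipped deer", white_lipped),
                ("red serow", red_serow)]:
    print(name, round(shannon(p), 3), round(simpson(p), 3),
          round(pielou(p), 3), round(levins(p), 3))
print("Pianka overlap, musk deer vs red serow:",
      round(pianka(musk_deer, red_serow), 3))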
To investigate the spatial distribution relationships among the alpine musk deer, red serow, and white-lipped deer, a 3D scatter plot (Figure 1b) was generated based on the latitude, longitude, and elevation data of the sampling locations. The spatial overlap among the three species was quantified using the minimum convex polygon (MCP) method. The spatial overlap index between the alpine musk deer and red serow was 0.29, between the red serow and white-lipped deer was 0.53, and between the alpine musk deer and white-lipped deer was 0.48. The spatial overlap data and the 3D scatter plot visually demonstrate substantial spatial overlap among the three species. The high overlap values between the red serow and white-lipped deer (0.53), as well as between the alpine musk deer and white-lipped deer (0.48), indicate that these species occupy the same geographical region and exhibit significant spatial overlap. This suggests the potential for intense interspecific competition and ecological interactions within the community. One way to compute such an MCP overlap index is sketched after the next paragraph.

Food Composition
We utilized DNA barcoding to reveal the dietary habits of the three ungulate species within the study area over a 2.5-month period during the summer of 2023. DNA barcoding revealed that the white-lipped deer consumed plants belonging to 62 families and 122 genera, the red serow consumed plants belonging to 63 families and 113 genera, and the alpine musk deer consumed plants belonging to 74 families and 144 genera. The overlapping portions of the diets included plants belonging to 44 families and 62 genera. Relative abundance stacked bar plots were used to visualize the dietary compositions of the three species.
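One plausible way to compute the MCP overlap index referred to above is sketched below using shapely convex hulls; the sampling coordinates are invented, and dividing the intersection area by the smaller hull's area is an assumed definition, since the paper does not spell out its exact formula.

# Minimal sketch of an MCP-based spatial overlap index. The coordinates
# are hypothetical (lon, lat) sampling points for two species.
from shapely.geometry import MultiPoint

def mcp(points):
    # Minimum convex polygon = convex hull of the sampling points
    return MultiPoint(points).convex_hull

def overlap_index(points_a, points_b):
    a, b = mcp(points_a), mcp(points_b)
    # Assumed definition: shared area relative to the smaller range
    return a.intersection(b).area / min(a.area, b.area)

musk = [(94.10, 31.20), (94.30, 31.25), (94.20, 31.40), (94.05, 31.32)]
serow = [(94.25, 31.15), (94.45, 31.22), (94.35, 31.35), (94.22, 31.28)]

print(round(overlap_index(musk, serow), 2))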
Food Composition

We utilized DNA barcoding to reveal the dietary habits of the three ungulate species within the study area over a 2.5-month period during the summer of 2023. DNA barcoding revealed that the white-lipped deer consumed a diet consisting of plants belonging to 62 families and 122 genera. Red serow consumed a diet consisting of plants belonging to 63 families and 113 genera. Alpine musk deer consumed a diet consisting of plants belonging to 74 families and 144 genera. The overlapping portions of the diets included plants belonging to 44 families and 62 genera. Relative abundance stacked bar plots and standard errors of dietary intake among individual plots were generated based on dietary data at the family and genus levels in the fecal samples (Figure 2a-d).

Food Types and Ecological Niche

The diets of alpine musk deer, white-lipped deer, and red serow mainly consisted of trees, shrubs, herbs, ferns, and mosses, with other food types accounting for <1% (Figure 3a). Significant differences were observed in the tree and moss food types among the three species based on the Kruskal-Wallis analysis (df = 2, p < 0.05). The diet of the alpine musk deer mainly consisted of herbs (50.6%) and shrubs (47.5%). The diet of the white-lipped deer mainly consisted of herbs (48.5%), followed by shrubs (37.2%) and a moderate amount of mosses (7.5%) and trees (6.5%). The diet of red serow mainly consisted of shrubs (35.2%) and trees (32.4%), with herbs (26.1%) as a secondary food source. Analysis of the elevation of the sampling points using ANOVA revealed that the elevations of sample point locations for the white-lipped deer and alpine musk deer were similar and significantly higher than those for the red serow (df = 1, p < 0.05) (Figure 3b). Based on the analysis of food diversity and ecological niche width of the three animal species (Table 1 and Figure 3c,d), the Shannon index, the Simpson index, and Pielou's evenness of alpine musk deer were lower than those of white-lipped deer and red serow, and the ecological niche width of the alpine musk deer was smaller than that of the white-lipped deer and red serow. Individual specialization indices were calculated based on individual and population ecological niche width data. Alpine musk deer had the highest individual specialization index (0.53), followed by red serow (0.41) and white-lipped deer (0.36). Based on genus-level dietary data, the nutritional niche overlap indices among alpine musk deer, white-lipped deer, and red serow were calculated using the Pianka index. The highest overlap index was observed between alpine musk deer and red serow (0.384), followed by red serow and white-lipped deer (0.248) and alpine musk deer and white-lipped deer (0.166).
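The Kruskal-Wallis and ANOVA comparisons reported above can be reproduced in outline with scipy; the study does not name its statistical software, and all per-sample values below are hypothetical stand-ins.

```python
from scipy import stats

# Hypothetical per-sample relative abundances of trees in the diet of each species.
trees_musk_deer = [0.00, 0.01, 0.02, 0.00, 0.01]
trees_white_lipped = [0.05, 0.08, 0.06, 0.07, 0.05]
trees_red_serow = [0.30, 0.35, 0.28, 0.33, 0.36]

# Kruskal-Wallis test across the three species (df = groups - 1 = 2).
h, p_kw = stats.kruskal(trees_musk_deer, trees_white_lipped, trees_red_serow)
print(f"Kruskal-Wallis: H = {h:.2f}, p = {p_kw:.4f}")

# Hypothetical sampling-point elevations (m) for a one-way ANOVA.
elev_musk_deer = [4100, 4150, 4200, 4050, 4180]
elev_white_lipped = [4120, 4160, 4210, 4080, 4190]
elev_red_serow = [3700, 3650, 3800, 3750, 3720]

f, p_anova = stats.f_oneway(elev_musk_deer, elev_white_lipped, elev_red_serow)
print(f"ANOVA: F = {f:.2f}, p = {p_anova:.4f}")
```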
Non-metric multidimensional scaling (NMDS) analysis demonstrated compositional differences in the genus-level dietary composition among alpine musk deer, white-lipped deer, and red serow (Figure 4a). The horizontal and vertical ranges of fecal samples for the alpine musk deer were −2.08 to 0.13 and −0.77 to 1.22, respectively. For the white-lipped deer, the horizontal and vertical ranges of fecal samples were −0.36 to 1.09 and −0.66 to 1.16, respectively. Lastly, for the red serow, the horizontal and vertical ranges of fecal samples were −1.50 to 0.54 and −1.40 to −0.35, respectively. The high non-metric fit (r² = 0.958) and linear fit (r² = 0.817) values indicate that the NMDS analysis has very good quality (Figure 4b). NMDS and ANOSIM analyses revealed a significant difference in the genus-level dietary composition among the three animal species (p < 0.01).
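The paper does not specify its NMDS implementation. A common open-source equivalent runs non-metric MDS on a Bray-Curtis dissimilarity matrix, as in this sketch with a hypothetical genus-abundance table; the ordination coordinates and stress it prints are illustrative only.

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform
from sklearn.manifold import MDS

# Hypothetical genus-level relative abundance matrix: rows = fecal samples.
abund = np.array([
    [0.60, 0.25, 0.10, 0.05],
    [0.55, 0.30, 0.10, 0.05],
    [0.10, 0.15, 0.50, 0.25],
    [0.05, 0.20, 0.45, 0.30],
])

# Bray-Curtis dissimilarities between samples.
dist = squareform(pdist(abund, metric="braycurtis"))

# Two-dimensional non-metric MDS on the precomputed dissimilarity matrix.
nmds = MDS(n_components=2, metric=False, dissimilarity="precomputed", random_state=0)
coords = nmds.fit_transform(dist)
print(coords)        # sample positions on NMDS axes 1 and 2
print(nmds.stress_)  # lower stress indicates a better ordination fit
```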
Discussion

To study the diet and nutritional niche of alpine musk deer, white-lipped deer, and red serow, we employed DNA barcoding technology to determine the dietary compositions of the three animal species. This study aimed to explore the coexistence of alpine musk deer, white-lipped deer, and red serow from the perspective of food resource utilization. The experimental results showed that within the study area, white-lipped deer, red serow, and alpine musk deer consumed plants belonging to 62 families and 122 genera, 63 families and 113 genera, and 74 families and 144 genera, respectively, indicating highly diverse diets of these three ungulate species. Compared with previous studies, a significant increase was observed in the number of plant species consumed at the family and genus levels by these three animal species in this study [16,29,30], which may be attributed to improvements in DNA barcoding technology in terms of the accuracy of identification of dietary species [21]. Furthermore, sampling was conducted between July and October, which corresponds to the warm season in the study area. During this period, a rich variety of plant species exists owing to the favorable water and thermal conditions, and animals have more opportunities to choose different foods and expand their dietary range owing to the abundant food resources in the environment [31].
In an ecosystem, different species or individuals often select different food resources to reduce direct competition and maximize the utilization of available resources. This differentiated resource utilization allows species or individuals to establish their own niches and resource-utilization strategies within an environment, thereby promoting species diversity and ecosystem stability. To investigate the extent of nutritional niche differentiation among alpine musk deer, red serow, and white-lipped deer, the analysis was conducted in terms of four aspects. First, the dietary food type results revealed significant differences in the consumption of trees and moss among the three ungulates, indicating a certain level of dietary partitioning in their utilization of food resources (Kruskal-Wallis analysis, df = 2, p < 0.05). Second, the distances and clustering patterns among the samples were visually evident in the NMDS plot. The three ungulates exhibited significant differences in their dietary composition at the genus level (AMOVA, df = 2, p < 0.01). Furthermore, the dietary differences among species were significantly greater than the differences within species (ANOSIM, p < 0.01). Third, the sum of the relative abundances of the top three genera in the diets of the three ungulate species accounted for >38.5% of the total dietary composition. Among the top three genera in terms of relative dietary abundance, only Rosa was consumed by both the alpine musk deer and red serow. At the genus level, significant differences were observed in the preferences for these plants among the three animal species. Fourth, based on the results of the nutritional niche overlap index, the highest and lowest values were 0.384 and 0.166, respectively, indicating a relatively low degree of overlap [32] and no significant nutritional niche overlap among alpine musk deer, red serow, and white-lipped deer. Summarizing the conclusions from these four aspects, we believe that during the warm season in this area, the differentiation of the nutritional niches among alpine musk deer, red serow, and white-lipped deer is promoted by the selection of different food types and plant genera by these species, which ultimately reduces the overlap of nutritional niches and helps avoid conflicts resulting from feeding competition.
Based on the data for dietary genera with relative abundances >1%, the sum of relative abundances for alpine musk deer (92.0%) was higher than that for white-lipped deer (82.7%) and red serow (86.3%). However, the number of such dietary genera for alpine musk deer (12 genera) was lower than that for white-lipped deer (20 genera) or red serow (14 genera). This indicates that alpine musk deer exhibit a higher selectivity toward certain dietary genera (such as Rosa, Chamerion, and Bistorta). In contrast, white-lipped deer and red serow tended to utilize a more diverse and even range of dietary genera. This is consistent with the results of Pielou's evenness index and also explains the findings of the diversity analysis, where alpine musk deer consumed a higher number of dietary genera than red serow and white-lipped deer but exhibited lower Shannon index, Simpson index, and niche breadth values compared with those of red serow and white-lipped deer. To further explain the food selection strategy of the alpine musk deer, we propose the following possible explanations: (1) Alpine musk deer are browser species [16] and tend to prefer tender and nutrient-rich parts of plants that are easily digestible to meet their nutritional requirements. (2) The alpine musk deer is a solitary animal, facing less intraspecific competition than social animals, and thereby has more opportunities to select preferred food. In contrast, red serow and white-lipped deer are social animals that share limited resources, leading to greater resource competition and pressure. Consequently, individuals in the population need to adapt to and utilize a wider ecological niche and exhibit a higher degree of specialization to access a greater variety of food resources to meet the nutritional requirements of the entire group, which is consistent with the results of the ecological niche width and individual specialization indices [33]. (3) White-lipped deer and red serow have significantly larger body sizes than alpine musk deer. A larger body size means they need to consume a greater quantity of food, and the amount provided by just a few dietary items would be insufficient to meet the feeding requirements of these larger-bodied animals. Therefore, white-lipped deer and red serow tend to select a more diverse array of food sources as their primary dietary components in order to fulfill their nutritional needs [34].
Furthermore, the differences in food types among the three animals may, to some extent, reflect ecological niche differentiation between species. The study area exhibits a clear vertical vegetation distribution. Below an altitude of 3900 m, coniferous forests and mixed coniferous and broad-leaved forests are the dominant vegetation types. From 3900 m to 4200 m, alpine shrub meadows and alpine meadows are the primary vegetation types. Alpine musk deer [35] primarily inhabit coniferous forests and alpine shrubs. White-lipped deer are mainly active in high mountain grasslands during the summer, whereas red serow typically inhabit forested areas. Analysis of food types indicated that red serow primarily consumed trees, shrubs, and herbs, whereas white-lipped deer and alpine musk deer mainly fed on shrubs and herbs. By comparing the altitudes of the fecal sample collection points for the three species, we found that the collection points for alpine musk deer and white-lipped deer were close in altitude and higher than those for red serow. Therefore, we speculate that the red serow has a lower altitude distribution than alpine musk deer and white-lipped deer, indicating a certain degree of spatial ecological niche differentiation among the three species. The study conducted by Shi [36], which placed infrared cameras in the Yarlung Zangbo Grand Canyon, also showed that alpine musk deer were distributed at higher altitudes than red serow. Nevertheless, the spatial distribution relationship between alpine musk deer and white-lipped deer requires further investigation.

Conclusions

This study concluded that, during the warm season, alpine musk deer, white-lipped deer, and red serow in the study area have a wide variety of food choices. Furthermore, our findings revealed a clear differentiation in the nutritional ecological niche among the three species, which was primarily manifested in the differentiation of food type selection and the selection of plant species at the genus level. Owing to differences in social behavior, body size, and habitat selection, the three species further expanded their differentiation in resource selection, thereby utilizing environmental resources more efficiently. We believe that these are the main reasons for the stable coexistence of the three species in the study area. The Nyenchen Tanglha Mountain region in Tibet has abundant ungulate resources for two main reasons. First, the local area benefits from advantageous steep mountainous terrain, rich forest resources, and suitable climatic conditions, which provide an ideal habitat for alpine musk deer, red serow, and white-lipped deer. Second, the local Tibetan people have a unique religious belief system and spontaneously engage in wildlife conservation, thereby reducing human disturbances and threats to wildlife populations. However, the local area has a large population of grazing livestock, consisting mainly of yaks and goats. Excessive grazing may lead to increased inter-species competition, habitat destruction, and the spread of infectious diseases among wild ungulates. To protect the local ungulate species, proper management and control measures must be implemented in the local grazing industry.

Figure 1. (a) Distribution of animals across habitat types by elevation. (b) 3D scatter plot of animal fecal samples.
Figure 2. (a) Stacked bar plot showing the relative abundance of food at the family level for the three animal species. (b) Stacked bar plot showing the relative abundance of food at the genus level for the three animal species. (c) Dietary relative abundance at the family level. (d) Dietary relative abundance at the genus level.

Figure 3. (a) Dietary types of alpine musk deer, white-lipped deer, and red serow. (b) Box plot of the elevation of sampling points. (c) Box plot of the ecological niche width and Shannon index of alpine musk deer, white-lipped deer, and red serow. (d) Box plot of Pielou's evenness and Simpson indices of alpine musk deer, white-lipped deer, and red serow.

Table 1. Comparison of ecological niche width and dietary diversity among alpine musk deer, white-lipped deer, and red serow (mean ± standard error).
8,710.8
2024-07-30T00:00:00.000
[ "Environmental Science", "Biology" ]
Future AI Will Most Likely Predict Antibody-Drug Conjugate Response in Oncology: A Review and Expert Opinion

Simple Summary

This review explores the potential of artificial intelligence (AI) to predict the effectiveness of antibody-drug conjugates (ADCs) in cancer treatment. The problem addressed is the need for more accurate methods to predict how well cancer therapies will work, particularly in personalized medicine. This study's aim is to discuss how AI can enhance the precision of ADC therapy by analyzing data from clinical trials and molecular biomarkers. This review highlights that AI can significantly reduce the time and cost associated with drug discovery and improve the targeting of cancer cells, reducing side effects and increasing treatment efficacy. We conclude that as more data become available from ongoing clinical trials, AI has the potential to become a standard tool in predicting ADC responses, thereby improving patient outcomes and advancing cancer treatment. This research is valuable as it could lead to more effective and personalized cancer therapies, benefiting society by potentially saving lives and reducing healthcare costs.

Abstract

The medical research field has been tremendously galvanized to improve the prediction of therapy efficacy by the revolution in artificial intelligence (AI). An earnest desire to find better ways to predict the effectiveness of therapy with the use of AI has propelled the evolution of new models with which it can become more applicable in clinical settings, such as breast cancer detection. However, in some instances, the U.S. Food and Drug Administration was obliged to withdraw support for some previously approved AI-based prognostic models because they eventually produced inaccurate prognoses for specific patients, such as those at risk of heart failure. In light of instances in which the medical research community has developed unrealistic expectations regarding the advances in AI and its potential use for medical purposes, implementing standard procedures for AI-based cancer models is critical. Specifically, models would have to meet general parameters for standardization, transparency of their logistic modules, and avoidance of algorithm biases. In this review, we summarize the current knowledge about AI-based prognostic methods and describe how they may be used in the future for predicting antibody-drug conjugate efficacy in cancer patients. We also summarize the findings of recent late-phase clinical trials using these conjugates for cancer therapy.

Introduction

Many aspects of society have been influenced by the recent advancements in artificial intelligence (AI). Medicine is one field with the potential for a gradual revolution through the use of AI in the development of drugs and their implementation in clinical trials, the stratification of patients for treatment, and the prediction of response to cancer therapy. Overall, the purpose of AI in medicine is to reduce humans' workload while achieving objectives more effectively. It fits into all aspects of medicine, ranging from communication and managerial organization to aiding the more complex issue of selecting therapies for patients.
AI primarily functions through machine learning (ML). Deep learning (DL) is a subset of ML that employs artificial neural networks. DL involves more sophisticated and interconnected elements than ML, which resemble electrical impulses in the human brain [1]. When artificial neural networks receive an input, they are trained based on it and use single or multiple linked algorithms to solve problems [2]. The three types of artificial neural networks are multilayer perceptron networks, recurrent neural networks, and convolutional neural networks. They use either supervised or unsupervised training procedures [2,3].

Pharmaceutical companies have recently used these new AI technologies for faster testing of new drugs [4]. Worth noting is that newly discovered drugs have been ranked based on efficacy values (IC50 and binding affinity) through molecular simulations and, ultimately, via in vitro validation experiments [5,6]. This could be used to discover new drugs more efficiently; therefore, feeding such AI databases could yield more powerful and targeted pharmaceutical products [5,6].

Historically, the process of drug development has been very slow and expensive. The steps from initiation of a drug discovery program to approval by a national drug regulatory agency take 12-15 years [1]. Also, the average cost to bring a drug to the market is USD 2.5 billion [7]. Demonstration of the effectiveness of AI-based methods in shortening these times and reducing these costs in future clinical trials will prove their validity. Recently, a Boston Consulting Group investigation showed that AI could cut drug discovery costs and time by 25-50% up to the clinical testing stage and that, in a 2022 analysis, 20 AI-intensive companies had developed 158 drug candidates compared with 333 candidates developed by 20 of the world's largest pharmaceutical companies [4]. This provides a glimpse of how fast this field is evolving, and the way it could be used to predict therapy efficacy holds immense implications.

In contrast with conventional chemotherapy, which can damage healthy cells, antibody-drug conjugates (ADCs) deliver chemotherapeutic agents to cancer cells more specifically [8]. ADCs rely on a monoclonal antibody's recognition of a specific receptor target expressed on the surface of cancer cells. After binding, the ADC is internalized by the cell and then releases the cytotoxic drug via a linker attached to the antibody inside the cancer cell, permitting the specific release of the drug to the cancer cells. Fully human monoclonal antibodies are highly targeted, have long circulating half-lives, and have low immunogenicity. The role of the linker in this process is paramount because it must firmly keep the payload bound to the antibody. These drug conjugates should be constructed to be stable enough to prevent cleavage of the linker before they become internalized in cancer cells [8,9]. If the payload is accidentally released before reaching its target, it could cause toxicity. Among the benefits of this type of therapy related to the specificity of antibody-receptor recognition is a reduction in toxicity, because far fewer normal cells are targeted than in conventional chemotherapy. Therefore, dose escalation could be more easily performed using ADCs, enhancing the efficacy of treatment [10]. Currently, 13 ADCs are approved by the U.S. Food and Drug Administration (FDA), and 100 are going through clinical trials [10].
In this review, we summarize the current knowledge about AI-based prognostic methods and describe how they may be used in the future for predicting antibody-drug conjugate efficacy in cancer patients. We also summarize findings of recent late-phase clinical trials using these conjugates for cancer therapy.

Prediction of Cancer Responsiveness and Resistance to ADCs

Various AI methods have been developed for discovering new cancer drugs, generating cancer prognoses, and predicting responses to cancer therapies. These technologies are discussed below to show how they can potentially be employed in the construction of new AI algorithms for the use of ADCs, specifically by identifying potential challenges in the field of oncology and cancer therapy selection and determining how they could be solved based on the knowledge generated in other related fields where AI has produced promising results.

Because drug discovery is beyond the scope of this review, we mention only a few methods to explain how they are being employed in medical research around the world. The mainstream AI methods employed for drug discovery use a wide variety of data resources, such as ChEMBL and DrugBank. After the drugs' potential efficacy is ranked, their toxicity, bioactivity, and physicochemical properties are ranked [11]. Of particular interest to ADC drug discovery, the Response Algorithm for Drug Positioning and Rescue (Lantern Pharma) is an AI platform capable of rapidly developing novel ADCs, including cryptophycin-derived ADCs; AtomNet is another effective technology for predicting the binding activity of novel chemicals to their intended therapeutic targets [12]. Various AI-based tools are capable of identifying the physicochemical properties of drugs. Each pharmaceutical company may have a patent-protected AI drug discovery method, which complicates the comparison of the methods. These technologies integrate data from preclinical and clinical tests, such as data in CellMinerCDB, The Cancer Genome Atlas, the Catalogue of Somatic Mutations in Cancer, and the Gene Expression Omnibus, and identify published articles to generate new insights into drug structures and the targeting of proteins of interest [13][14][15][16][17]. A more comprehensive review of AI drug discovery methods was performed by Paul et al. [1].
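As a small illustration of the kind of physicochemical profiling such ranking pipelines consume, the sketch below uses the open-source RDKit toolkit, which is not named in this review; the candidate molecules and the choice of descriptors are illustrative assumptions, not part of any cited platform.

```python
from rdkit import Chem
from rdkit.Chem import Descriptors, QED

# Hypothetical small-molecule candidates given as SMILES strings.
candidates = {
    "aspirin": "CC(=O)Oc1ccccc1C(=O)O",
    "caffeine": "Cn1cnc2c1c(=O)n(C)c(=O)n2C",
}

for name, smiles in candidates.items():
    mol = Chem.MolFromSmiles(smiles)
    # Basic physicochemical profile of the kind AI ranking pipelines ingest.
    print(name,
          f"MW={Descriptors.MolWt(mol):.1f}",
          f"logP={Descriptors.MolLogP(mol):.2f}",
          f"QED={QED.qed(mol):.2f}")
```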
Conceivably, these algorithms and databases could be adapted to test ADC responsiveness during clinical trials. In this review, we would like to share our expert opinion on how AI technologies could be employed in the near future to generate a self-learning algorithm, based on the information provided by current clinical trials, to best predict outcomes. Once properly tested, validated, and consolidated, having such a technology at hand could be of paramount importance in guiding clinicians to decide which course of therapy would be most effective without incurring errors. Of note, the potential of an AI system depends on the quality of the data used to feed the ML process. Table 1 summarizes the current databases that could be used to create AI models for cancer therapy response prediction and drug design. With the accrual of information from clinical studies on molecular biomarkers in tumor tissue, circulating tumor DNA, or circulating cell-free DNA, more data are generated that could help to predict the responsiveness of cancer to therapy; having AI systems to help process such data more efficiently would be beneficial [18][19][20][21]. This could result in the provision of real-time information to physicians regarding the potential responsiveness of cancer to ADCs and the courses of action that could be planned in case a drug is statistically likely to fail in a specific case.

Currently, AI-aided methods of cancer prognosis have demonstrated notable advances compared with conventional image-based prognosis. For example, the combination of radiomics and AI has successfully extracted and processed multidimensional data from cancer images, such as magnetic resonance imaging, computed tomography, ultrasound (US), digital subtraction angiography, and X-ray images [22]. For hepatocellular carcinoma (HCC) patients, AI coupled with radiomics has shown the potential to improve tumor characterization and offer a better prognosis than conventional radiological methods. This coupling yields insights into the complex relationship between radiomic variables and clinical outcomes [23]. Automatic segmentation in ML pipelines, which delineates the volume of interest, could help predict treatment response [24,25]. Also, DL can bypass the conventional steps of ML radiomic analysis: the output is calculated via DL through filtering and calculations on unprocessed images of HCC lesions serving as inputs, and the outputs can include prediction of response or non-response to treatment. Furthermore, convolutional neural networks are capable of learning, thereby increasing the overall prediction accuracy of ML [26]. Notably, DL can incorporate time as a variable during the evaluation of lesion enhancement patterns in images [27,28]. DL requires more computational power than ML and is more dependent on training with large and varied data sets. DL has greater potential than conventional ML to predict the response of cancers to therapy. In the future, this could be used for ADC-based therapy response prediction as well.
Zhang et al. used a DL system to build an automatic tumor segmentation model capable of integrating clinical variables and preprocedural digital subtraction angiography videos to predict the response of HCC to transarterial chemoembolization (TACE) [27]. The authors observed a marked difference in the 3-year progression-free survival rate between responders and non-responders identified with their fully automated framework (DSA-Net). DSA-Net entails a U-net model employed to automate tumor segmentation (Model 1) and a ResNet model used to predict response to the first TACE (Model 2). Both models were tested in 360 patients; for validation, data from 124 internal patients and 121 external patients were used. Also, Peng et al. [29] developed a PyRadiomics-based method to predict the response to TACE treatment using a conventional ML model capable of predicting the initial response of cancer to transarterial chemoembolization by exploiting pretreatment computed tomography images. They showed that patients predicted to be treatment responders had longer progression-free and overall survival than predicted non-responders. Additionally, Peng and colleagues applied this model to 46 HCC patients with data in The Cancer Genome Atlas (TCGA) to analyze differential gene expression across their cohort and the TCGA-HCC cohort and to explore the potential mechanisms of action of transarterial chemoembolization. They further used ML to incorporate TCGA genetic data into their data, again showing how versatile this ML method can be in processing large data sets.

Researchers have also examined post-ablation prognosis for cancer therapy using AI. For example, Ma et al. compared the performance of a DL model trained using contrast-enhanced US (CEUS) with that of a conventional ML model trained using static US to predict HCC recurrence after ablation. As expected, the DL model outperformed the ML model, possibly because CEUS, besides providing morphological images, can provide real-time dynamic blood perfusion information that correlates well with the success of ablation [28].

In addition, Liu et al. used clinical data as well as features extracted from CEUS images to predict the 2-year progression-free survival rate in early-stage HCC patients who underwent radiofrequency ablation or surgical resection, as well as to determine the optimal treatment for these patients. They found that 17.3% and 27.3% of the patients receiving radiofrequency ablation and surgical resection, respectively, would have had better outcomes if they had received the other treatment instead. A multicenter study with more patients is needed to determine the statistical power of this study. However, this study still demonstrates the potential of AI methods in selecting optimal ADC-based treatments for cancer patients [10].

Despite the encouraging findings, these image-based AI methods require further testing and standardization before they can be effectively integrated into clinical practice. They are operator-dependent and involve different machines, variables, and contrast doses as well as timing [30].
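None of the cited models' code is described in this review. Purely as an illustration of the conventional ML pattern discussed above (pretreatment features in, responder/non-responder label out), the following sketch trains a random forest on synthetic radiomic-style features; the data, feature dimensions, and labels are all fabricated for demonstration and carry no clinical meaning.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# Synthetic stand-in for pretreatment radiomic/clinical features (patients x features).
X = rng.normal(size=(200, 10))
# Synthetic responder labels loosely tied to the first two features.
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=200) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0, stratify=y)

# A conventional ML classifier of the kind used in the radiomics studies above.
clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_train, y_train)

# Discrimination between predicted responders and non-responders on held-out data.
auc = roc_auc_score(y_test, clf.predict_proba(X_test)[:, 1])
print(f"held-out AUC: {auc:.2f}")
```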
These and similar AI models used for cancer prognostication must be improved to ensure safe and effective patient care. They also must be submitted for and receive FDA approval before implementation in clinical settings. Recently, the FDA proposed a pathway that could lead to the use of ML software applications as medical devices [31]. The AI model should include the following: (1) good ML practice, meaning it should be evidence-based for reproducibility purposes, have standardized steps (e.g., the extraction algorithms), use different time points to permit generalizability, and maintain consistent AI analysis to increase operability across clinical institutions around the world; (2) avoidance of algorithm biases, which should be ensured by validating the testing process with external data to confirm the generalizability of the model; and (3) transparency of the AI models' logic, which could be achieved by clearly explaining the mechanisms of the AI decision-making process and familiarizing oncologists with these new models [22,[32][33][34][35][36]].

Standardization of the protocols can be achieved by specifically following commonly approved steps and protocols. One such step is having open databases where previous ADC data could be stored and made available for training purposes.

For decades, prediction tools have been used to support clinical decisions regarding therapy selection, including the ABCD score, the Framingham Risk Score, the Model for End-Stage Liver Disease, and the Nottingham Prognostic Index. In recent years, hundreds more prediction model studies have appeared [37][38][39][40][41][42]. Steps have been taken to prevent the scientific community from becoming mesmerized by the AI revolution and to enable ML prediction models to be appropriately developed, tested, and, if needed, tailored to different contexts before they are employed in daily medical practice. In response, new methods have been deemed necessary to resolve the issue of incomplete reporting in prediction model studies [43,44]. Specifically, the Transparent Reporting of a multivariable prediction model for Individual Prognosis Or Diagnosis (TRIPOD) method was designed to guide the key items to report in new studies or updates of clinical prediction models [45][46][47]. In AI-based medical diagnosis, one must also consider that some FDA-approved, clinician-free, AI-based imaging diagnostic tools used for the identification of wrist fractures and strokes in adults have given false diagnoses [48]. This shows the importance of having methods that facilitate the organic, healthy development of new AI-based prognostic methods. It also shows that today's AI is not unfailing.
Previously, the TRIPOD method was based on the use of regression models. However, a new TRIPOD initiative specific to ML has been developed. This initiative aims to use ML prediction algorithms to establish long-term standardized methodologies for prognostic and diagnostic prediction models. New guidelines for the efficient use of prognostic models should be made available with the TRIPOD-Artificial Intelligence (TRIPOD-AI) tool and the Prediction model Risk of Bias Assessment Tool-Artificial Intelligence (PROBAST-AI) [49]. These guidelines are valuable for many AI-based prognostic models, including future methods to predict ADC efficacy. TRIPOD-AI and PROBAST-AI are being developed following guidance from the EQUATOR Network, which consists of five stages: (1) two systematic reviews to examine the quality of published ML prediction model studies; (2) consultation with key stakeholders using the Delphi method to identify items that should be included in the method; (3) virtual consensus meetings to consolidate and prioritize the key items to be included; (4) development of a TRIPOD-AI checklist and the PROBAST-AI tool; and (5) dissemination of information about the newly written TRIPOD-AI and PROBAST-AI guidelines in journals, at conferences, and on social media [49].

Another field in which AI has recently shown great promise is cancer immunotherapy. Immunotherapy consists of controlling and eliminating tumors by eliciting the body's immune system against cancer, leading to an antitumor immune response. The two main cancer immunotherapy types are immune checkpoint blockade and adoptive cell therapy [50]. AI technology can be used for neoantigen recognition, antibody design, and immunotherapy response prediction [51]. Also, AI can be used to predict new tumor antigens in patients' cancer rapidly and accurately, reducing experimental screening and validation costs. AI-enhanced antibodies that have the potential for greater success than conventional therapies in cancer treatment can be developed. Finally, AI can be used to identify patients whose disease may respond to immunotherapy using multimodal, multiscale biomarkers and immune microenvironment data to feed the prediction algorithms [51].

Anticancer ADCs That Have Entered Clinical Trials

Years of research and refinement, significant technological advancements, and a deeper understanding by the scientific community of ADC mechanisms have culminated in the FDA's approval of 11 ADCs, each offering tangible benefits to cancer patients. Among them, fam-trastuzumab deruxtecan-nxki (Enhertu) stands out, as it is poised to capture a substantial market share within the ADC landscape. Its versatility in treating various breast cancer subsets (HER2+, HR+/HER2−, and triple-negative) and extended treatment duration underscore its potential positive impact on breast cancer therapy.

Despite the inherent risks associated with drug development, the trajectory of novel anticancer therapies suggests an imminent surge in ADC approvals. Whether through the introduction of novel ADCs or chemical modification of previous drugs, the outlook for ADC-based cancer therapy is promising. Since the inception of the first ADC clinical trial in 1997, the field has witnessed remarkable proliferation, with 266 additional ADCs undergoing evaluation in more than 1200 clinical trials. This surge indicates a paradigm shift toward targeted cancer therapy.
Presently, 275 clinical ADC trials are active (Table 2), in which investigators are testing different ADCs for accurate delivery of cytotoxic agents (Figure 1), which in the future could be achieved with the help of AI (Figure 2). Notably, discontinued ADCs also underwent rigorous clinical testing, reflecting the commitment to scientific rigor and patient safety regarding treatment with these agents. Although cancer has served as the proving ground for ADC-based therapies, their applicability across diverse medical domains is increasingly being recognized. With growing interest from major pharmaceutical companies, the ADC market is poised for sustained expansion, fueling optimism for the emergence of blockbuster ADCs in the near future. The use of AI to predict their response poses a hopeful avenue across these different and difficult medical domains.

Discussion

Over the past decade, advances in AI have pushed the boundaries of the medical field [1,22]. Despite the successful development and use of AI-based diagnostic tools for the prediction of cancer treatment response, the response to certain targeted therapies remains unpredictable. In the field of ADCs, in which cancer patients are stratified for treatment based on the expression of a receptor on the cancer cell membrane that can be specifically bound by an antibody carrying the cytotoxic payload, more accurate prognostic methods that can predict whether patients' disease will respond to ADCs are needed. ML has shown great potential in many fields, such as radiology and mammography, for early breast cancer detection. It can also be used to predict the chemistry of novel compounds against cancer. For such reasons, AI models could play an important role in the prediction of ADC response in the future. Data from ADC clinical trials are becoming increasingly available, including biomarkers retrieved from liquid biopsies of circulating tumor DNA, cell-free DNA, tissue samples, or even the tumor microenvironment. Such data could be of paramount importance in feeding new AI models to predict ADC therapy response [18][19][20][21]. This review has limitations, as AI-based prediction of ADC therapy response is still at a conceptual, early stage. However, in this review, we summarize the current knowledge in this complex field, ranging from AI models for chemical structure prediction to ongoing clinical trials testing ADCs that do not yet implement AI. We hope that the knowledge summarized here can serve as a useful tool for generating new AI models in the future based on our hypothesis. Based on our knowledge, using AI models could be paramount for the prediction of ADC efficacy in the near future. As technology advances, it would be impossible to think that such achievements would leave out the field of medicine, in particular oncology, where hope fills so many lives [52].
The implementation of new AI models, similar to the ones currently available for other prognostic models, would require the close collaboration of software engineers, data scientists, and decision-making medical doctors and scientists. The first step would be for software engineers to go through the different AI systems and modify them into systems for ADC therapy prediction. Once the data science has been sorted out, the contribution of the medical doctors would be to provide information on response to therapy and blood-based biomarkers, or even breath-based biomarkers, from ongoing and completed clinical trials involving ADCs in diseases. Second, the prediction system should be tested in a small subset of cancer patients, and the data generated should be used to train the machine learning model for its refinement. Third, the model would be tested in a larger cohort of patients. After all these steps have been completed, the method could be commercialized. The coming of brave new ideas will require a shift in the way we think about medicine.
Conclusions

While AI has been implemented in different fields, ranging from the prediction of chemical structures and diagnostics in radiology to other aspects of society, there is currently a lack of prognostic tests for targeted therapies such as ADCs in oncology. As these technologies become more popular, and as more data from clinical trials, such as the ADC clinical trials summarized here, become more widely accessible, we envision that such methods will become part of the standard of care.

Figure 1. Clinically tested ADCs. This bar graph shows the 277 ADCs that have undergone clinical trials along with their trial status (completed, active/recruiting, not yet recruiting, suspended, and unknown). Additionally, to the right of the main Total bar, the active/recruiting ADCs are broken down into additional columns to highlight their highest developmental stage (phases 1-4 [P1-P4]).

Figure 2. Artificial intelligence assisted antibody-drug conjugate selection for the treatment of cancer.

Table 1. Current database resources that could be used for building AI models for therapy prediction.

Table 2. List of active Phase III clinical trials investigating an antibody-conjugated drug in solid and blood malignancies.
6,031.2
2024-09-01T00:00:00.000
[ "Medicine", "Computer Science" ]
TAM: ACCEPTANCE OF E-LEARNING TECHNOLOGY TO STUDENTS IN MASTERS OF MANAGEMENT LEARNING

Background: The implementation of learning with the e-learning method needs to be studied more deeply by examining students' responses to and acceptance of the e-learning-based learning process, so that the form or method of e-learning-based learning desired by students can be identified.

Aim: This study aims to evaluate the relationship between technology acceptance factors and online learning for Master of Management students at Universitas Muhammadiyah Yogyakarta in terms of perceived usefulness, perceived ease of use, subjective norms, attitudes towards use, and behavioral intentions.

Method: The sample in this study consisted of 140 Master of Management students selected by purposive sampling. The data analysis technique used in this study was Structural Equation Modeling (SEM) with the help of AMOS 23 software. The quality of the data instrument was assessed using validity and reliability tests.

Findings: The results showed that perceived usefulness had a significant effect on usage attitudes, usage attitudes had a significant effect on behavioral intentions, perceived usefulness had a significant effect on perceived ease of use, perceived ease of use had a significant effect on usage attitudes, and perceived usefulness had a significant influence on subjective norms; however, subjective norms did not have a significant effect on attitudes towards use, and subjective norms had no significant effect on behavioral intentions.

INTRODUCTION

Technology can be something useful or something destructive if it is not used wisely; therefore, education about technology must be developed and promoted early. Advances in the internet and wireless technology have provided the basis for the development of electronic learning (e-learning) (Müller & Wulf, 2020). Despite the tremendous development of internet networks and technology, acceptance and use of e-learning in higher education is at an early stage. According to Teo and Van Schaik (2012), understanding users' intentions to use technology has become one of the most challenging problems for information systems researchers today. In the absence of regulations or outlines for the use of information technology in learning, many teachers and students feel confused by information technology-based learning systems; the unpreparedness of the infrastructure and of the teachers in providing material causes education to become chaotic and ineffective. It is therefore undeniable that material delivered by online teaching can become ineffective and may not be accepted by students (Maudiarti, 2018). The use of the internet today has become an inseparable part of the lifestyle of all levels of Indonesian society. Survey data from the Association of Indonesian Internet Network Providers (APJII) in 2016 show that university students are the largest internet users in Indonesia with a percentage of 89.7%, followed by school students with a percentage of 69.8%. However, access to online education pages is still very limited. This is a problem that needs to be addressed by educators and teachers by directing students to use the internet in the realm of online education.
Universitas Muhammadiyah Yogyakarta has long had e-learning, developed as UMY MyKlass, and it has been used as a form of learning recognized by university leaders. Students in the university environment have used the e-learning method, either in the form of uploading documents (RPS, lecture material) or in online learning activities such as discussions, online lectures, and online assignment collection. The implementation of learning with the e-learning method needs to be studied more deeply by examining students' responses to and acceptance of the e-learning-based learning process, so that the form or method of e-learning-based learning desired by students can be identified. This study aims to determine student perceptions of acceptance of e-learning-based education.

Hypothesis

Relationship Between Perceived Usefulness and Attitudes Towards Use

Perceived usefulness, according to Davis, is a person's belief that using a certain technology will improve his performance; this understanding is used in research to predict and explain user intentions to use technology (Teo & Van Schaik, 2012). In other studies, there is a significant correlation in the use of the system which indicates that perceived usefulness has a direct effect on attitudes towards use (Al-Rahmi et al., 2019). The results of another study in Malaysia on teaching with a cloud system showed that teachers can improve OD so that they can increase motivation to use e-learning technology (Yim et al., 2019).

H₁: There is a significant effect between perceived usefulness and attitudes towards use.

Relationship of Attitude towards Use with Behavioral Intention

Attitude toward use is an attitude toward behavior that is defined as belonging to an individual; if the user has a positive attitude, then he will show a strong intention to use it. The results of research conducted by Kuo Huang regarding living assistant technology show that attitudes towards use have an effect on behavioral intentions, in line with respondents having a positive desire to use life support tools (Kuo et al., 2020). In a study conducted by Chaouali and El Hedhli (2019), it was explained that the behavior of using mobile banking had an effect on customers' habits in life.

H₂: There is a significant effect between attitudes towards use and behavioral intentions.

Relationship Between Perceived Usefulness and Perceived Ease of Use

Davis defines perceived ease of use as an individual's perception of the simple and easy operation of a particular technology system. It is an assessment of the effort involved in the use of technology. The results of research conducted by Park et al. (2018) show that perceived usefulness has a positive impact on the perceived ease of use of TPACK in South Korea.
The results of research conducted by Aref and Okasha (2020) in Egypt state that perceived usefulness has a positive impact on perceived ease of use when using online shopping, and these results are in accordance with the hypothesis developed in this study.

H₃: There is a significant effect between perceived usefulness and perceived ease of use.

Relationship Between Perceived Ease of Use and Attitude Towards Use

Several studies relate to this hypothesis. Research conducted by Jiménez-Barreto and Campo-Martínez (2018) on online co-creation indicates that perceived ease of use has a positive impact on attitudes towards use, with respondents voluntarily participating in online co-creation. Other research shows that perceived ease of use has a positive impact on attitudes towards use: people in Jordan have started using e-banking because they are used to it (Anouze and Alamro, 2019). Further research on technology acceptance indicates that perceived ease of use affects the attitude towards use of people who intend to buy technological equipment (Liu and Chou, 2020), and these results are in accordance with the hypothesis developed in this study.

H₄: There is a significant effect between perceived ease of use and attitudes towards use.

Relationship between Perceived Usefulness and Subjective Norms

Subjective norm refers to the degree of influence of important people around the individual during e-learning. Research conducted by Muñoz-Leiva et al. (2018) on the adaptation of the home-sharing platform (HSP), involving people from different cultures, found that perceived usefulness has a positive effect on subjective norms in the use of HSP for all groups. Other research on the acceptance of peer-to-peer payments in Spain found that perceived usefulness shows a positive influence on subjective norms in the acceptance of peer-to-peer payment systems (Kalinić, Liébana-Cabanillas, et al., 2019), and these results are in accordance with the hypothesis in this study.

H₅: There is a significant effect between perceived usefulness and subjective norms.

Relationship of Subjective Norms and Attitudes Towards Use

A person's attitude toward use is determined by his or her salient beliefs about the consequences of performing the behavior, multiplied by the evaluation of those consequences. Research conducted by Yim et al. (2019) on predicting teachers' continuance in a virtual learning environment with psychological ownership found that, with a cloud system, teachers can improve OD and increase motivation to use e-learning, so that subjective norms have a positive effect on attitudes towards use. In another study, on the acceptance of technology in m-learning, Buabeng-Andoh (2018) showed a significant influence between subjective norms and attitudes towards the use of m-learning, and these results are in accordance with the hypothesis developed in this study.
H₆: There is a significant effect of subjective norms on attitudes towards use

Relationship of Subjective Norms with Behavioral Intentions

Subjective norms are defined as the individual's perception that most people who are important to him or her think that he or she should or should not perform the intended behavior (Ajzen, 2011). Several studies have examined the relationship between the two constructs. Marakarkandy et al. (2017), on enabling internet banking adoption through an empirical examination with an augmented model, found that subjective norms have a positive effect on behavioral intentions regarding the intensity of internet banking use. A study on the usage intention of e-learning for police education and training by Rui-Hsin and Lin (2018) showed a significant influence of subjective norms on behavioral intentions in police education and training using e-learning; these results are in accordance with the hypothesis developed in this study.

H₇: There is a significant influence of subjective norms on behavioral intentions

METHOD

This research was conducted in the postgraduate program at Universitas Muhammadiyah Yogyakarta; the subjects were students of that postgraduate program, drawn from a population of 345 people. The sampling technique used was purposive sampling, yielding 140 respondents. This study aims to collect empirical evidence and is a causality study, analyzing the relationship and influence (cause and effect) between two or more phenomena. The instrument items matching the indicators were designed using a Likert scale. The collected data were processed numerically and analyzed quantitatively by hypothesis testing, with Structural Equation Modeling (SEM) in Amos as the analytical model. Questionnaires were distributed from March to July 2021. Before collecting data, the researchers asked whether the respondents knew and used the e-learning system, and then distributed questionnaires to respondents who met the predetermined criteria. Data were collected over a period of five months, and 140 questionnaires were received.

RESULTS AND DISCUSSION

Based on the research carried out on 140 respondents through the distribution of questionnaires, information was obtained about the characteristics of the respondents studied, including gender, age, media used, and length of e-learning use. In the validity and reliability testing, all 15 statements tested were declared valid, because each instrument met the acceptable standard of a factor loading value ≥ 0.50 (Ghozali, 2017). In the reliability test, all five variables tested were declared reliable, meeting the acceptable cut-off value of 0.70 for Construct Reliability (CR), which determines whether the data are reliable (Ghozali, 2017).
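For illustration, the validity and reliability criteria described above can be checked as in the following sketch. The loading values are placeholders, since the paper reports only the thresholds (loading ≥ 0.50, CR ≥ 0.70); the composite-reliability formula from standardized loadings is the conventional one, not taken from the paper.

```python
import numpy as np

# Placeholder standardized factor loadings for one latent variable
loadings = np.array([0.72, 0.68, 0.81])

# Convergent-validity check per item (threshold from Ghozali, 2017)
valid_items = loadings >= 0.50

# Composite (construct) reliability:
# CR = (sum of loadings)^2 / ((sum of loadings)^2 + sum of error variances)
sum_l = loadings.sum()
error_var = (1.0 - loadings**2).sum()
cr = sum_l**2 / (sum_l**2 + error_var)

print(f"items valid: {valid_items.tolist()}")
print(f"CR = {cr:.3f}, reliable: {cr >= 0.70}")
```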
The Influence between Perceived Usefulness and Attitudes towards Use

The first hypothesis, which states that there is a significant effect of perceived usefulness on attitudes towards use, is accepted, with a t-statistic of 5.976 > 1.96 at a significance level (p-value) of 0.000 < 0.05 and a regression coefficient of 0.609 (60.9%).

The Influence between Attitude towards Use and Behavioral Intention

The second hypothesis, which states that there is a significant effect of attitudes towards use on behavioral intentions, is accepted, with a t-statistic of 8.636 > 1.96 at a p-value of 0.000 < 0.05 and a regression coefficient of 0.885 (88.5%).

The Influence between Perceived Usefulness and Perceived Ease of Use

The third hypothesis, which states that there is a significant effect of perceived usefulness on perceived ease of use, is accepted, with a t-statistic of 4.580 > 1.96 at a p-value of 0.000 < 0.05 and a regression coefficient of 0.512 (51.2%).

The Influence between Perceived Ease of Use and Attitude towards Use

The fourth hypothesis, which states that there is a significant effect of perceived ease of use on attitudes towards use, is accepted, with a t-statistic of 2.751 > 1.96 at a p-value of 0.006 < 0.05 and a regression coefficient of 0.165 (16.5%).

The Influence between Perceived Usefulness and Subjective Norms

The fifth hypothesis, which states that there is a significant effect of perceived usefulness on subjective norms, is accepted, with a t-statistic of 5.364 > 1.96 at a p-value of 0.000 < 0.05 and a regression coefficient of 0.558 (55.8%).

The Influence of Subjective Norms on Attitudes towards Use

The sixth hypothesis, which states that there is a significant influence of subjective norms on attitudes towards use, is rejected, with a t-statistic of 1.168 < 1.96 at a p-value of 0.243 > 0.05 and a regression coefficient of 0.095 (9.5%).

The Influence of Subjective Norms on Behavioral Intentions

The seventh hypothesis, which states that there is a significant effect of subjective norms on behavioral intentions, is rejected, with a t-statistic of 1.428 < 1.96 at a p-value of 0.153 > 0.05 and a regression coefficient of 0.092 (9.2%).

In summary, the hypothesis testing shows that: (H1) the significant effect of perceived usefulness on attitudes towards use is accepted (t = 5.976 > 1.96, p = 0.000 < 0.05, coefficient 0.609). (H2) the significant effect of attitudes towards use on behavioral intentions is accepted (t = 8.636 > 1.96, p = 0.000 < 0.05, coefficient 0.885). (H3) the significant effect of perceived usefulness on perceived ease of use is accepted (t = 4.580 > 1.96, p = 0.000 < 0.05, coefficient 0.512). (H4) the significant effect of perceived ease of use on attitudes towards use is accepted (t = 2.751 > 1.96, p = 0.006 < 0.05, coefficient 0.165).
(H5) the significant effect of perceived usefulness on subjective norms is accepted (t = 5.364 > 1.96, p = 0.000 < 0.05, coefficient 0.558). (H6) the significant influence of subjective norms on attitudes towards use is rejected (t = 1.168 < 1.96, p = 0.243 > 0.05, coefficient 0.095). (H7) the significant effect of subjective norms on behavioral intentions is rejected (t = 1.428 < 1.96, p = 0.153 > 0.05, coefficient 0.092).

CONCLUSION

Based on the results of the hypothesis testing and the preceding discussion, the following conclusions can be drawn. There is a significant effect of perceived usefulness on attitudes towards use, producing changes in usage attitudes among students who use e-learning. There is a significant influence of attitudes towards use on behavioral intentions, so students are willing to continue using e-learning. There is a significant effect of perceived usefulness on perceived ease of use, indicating that students feel helped by e-learning. There is a significant effect of perceived ease of use on attitudes towards use; the ease of use changes students' perspective on e-learning. There is a significant effect of perceived usefulness on subjective norms; students can feel the usefulness of e-learning, and their perspective on it can change through the influence of other variables. There is no significant influence of subjective norms on attitudes towards use, so students' subjective norms do not change their attitudes towards using e-learning. There is no significant effect of subjective norms on behavioral intentions; students' views on subjective norms do not change their intentions towards e-learning.
Low-shear QCD plasma from perturbation theory

We argue that the phenomenologically inferred ratio of shear viscosity to entropy density of the quark-gluon plasma, $\eta/s < 0.5$ near the deconfinement temperature $T_c$, can be understood from perturbative QCD. To rebut the widespread, opposite view we first show that, and why, the existing leading order result in (fixed) coupling should not be further expanded in logarithms. Emphasizing then that the resummation mandatory for screening also settles the often neglected question of scale setting for the running coupling, we establish a temperature dependence of $\eta/s$ which agrees well with constraints from hydrodynamics.

RHIC and LHC experiments have provided substantial evidence that the quark-gluon plasma (QGP) behaves as an almost ideal fluid [1], with an upper bound on the ratio of shear viscosity to entropy density, η/s ≲ 0.5. While this remarkably low value clearly indicates a 'strongly coupled' system, it remains a theoretical challenge to understand better why it is so low. One popular approach to this question is via the AdS/CFT correspondence [2], which allows one to explore the strong-coupling behavior of certain conformal field theories. Although the conjectured lower limit η/s ≥ 1/(4π) from supersymmetric Yang-Mills theories does compare favorably with the observations, a rigorous connection to real-world QCD is lacking. First attempts to compute η by lattice QCD corroborate small values [3], but are hampered by the methodological difficulties of applying a static approach to a non-equilibrium phenomenon. On the other hand, there is a widespread belief that QCD perturbation theory, as a weak-coupling method, fails to explain η/s ≲ 0.5. This is the perception we will scrutinize here. It appears to be largely based on the next-to-leading log (NLL) formula

η_NLL = b T³ / [α² ln(c/α)],    (1)

where T is the temperature and α the coupling strength. The coefficients b and c were extracted from the leading order (LO) result η_LO computed numerically in a QCD effective kinetic framework [4]. In the quenched limit (n_f = 0 quark flavors), the case we will consider mostly for argument's sake, b ≈ 0.34 and c ≈ 0.61. On general grounds, the viscosity should decrease for stronger interactions (which equilibrate velocity gradients more rapidly); this is described by (1) only for α < α* = c/√e (e is Euler's number), at which point η_NLL(α) has a minimum. Numerically, Min[η_NLL] = 2be T³/c² turns out to be close to the free entropy s₀ = (16 + (21/2) n_f)(4π²/90) T³, see Fig. 1. Thus, since near the deconfinement temperature T_c the entropy of the interacting QGP is notably smaller than s₀, (1) is indeed incompatible with the quite conservative bound η/s ≲ 0.5. To weigh up this fact, we should see the minimum of η_NLL(α) as a precursor to its singularity at α = c (marking the ultimate breakdown of the NLL approximation), which an elementary consideration will reveal to be unphysical: in kinetic theory we may estimate [5]

η ≈ (1/3) n p̄ λ    (2)

from the density n of particles that can transport a typical momentum p̄ over a distance λ. For binary interactions of relativistic particles λ = (n σ_tr)⁻¹, where σ_tr(s) = ∫_{−s}^{0} dt (½|t|/s) dσ/dt is the transport cross section in terms of Mandelstam variables.
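Before turning to the screening argument, the numbers quoted above can be checked directly from (1). The sketch below uses only the quenched-limit coefficients b ≈ 0.34 and c ≈ 0.61 given in the text.

```python
import math

# Quenched-limit coefficients quoted for eq. (1):
# eta_NLL = b*T^3 / (alpha^2 * ln(c/alpha))
b, c, nf = 0.34, 0.61, 0

alpha_star = c / math.sqrt(math.e)            # location of the minimum of eta_NLL
eta_min = 2 * b * math.e / c**2               # Min[eta_NLL] in units of T^3
s0 = (16 + 10.5 * nf) * 4 * math.pi**2 / 90   # free entropy s0 in units of T^3

print(f"alpha* = {alpha_star:.3f}")
print(f"Min[eta_NLL]/T^3 = {eta_min:.2f} vs s0/T^3 = {s0:.2f}")
print(f"=> eta_NLL/s > {eta_min / s0:.2f} even with the free entropy")
```

Running this gives α* ≈ 0.37 and Min[η_NLL] ≈ 5.0 T³ against s₀ ≈ 7.0 T³, i.e. η_NLL/s ≳ 0.7 even with the (too large) free entropy, which is the incompatibility with η/s ≲ 0.5 stated above.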
Although the 'transport weight' ½|t|/s = 1 − cos θ suppresses the influence of the small-angle scatterings that prevail in gauge theories, σ_tr would still diverge logarithmically at tree level due to the t-channel gluon exchange term in dσ_tree/dt ∝ α² [−us/t² − ts/u² − ut/s² + 3]/s². Since this would imply zero viscosity for, notably, any value of the coupling, it is a necessity to go beyond the tree-level approximation. In a hot QGP, the exchanged gluon acquires a self-energy of the order µ² ∼ αT² and is thus screened, schematically dσ_scr/dt ∼ α²/(t − µ²)² for small t. The typical invariant energy s ∼ T² is much larger than µ² for α ≪ 1, thus screening can be mimicked by a simple cut-off imposed on dσ_tree/dt,

σ_tr^cut ≈ ∫_{−s}^{−µ²} dt (½|t|/s) dσ_tree/dt ∼ (α²/s) ln(s/µ²).    (3)

[FIG. 1: The viscosity, for n_f = 0, to LO and NLL accuracy, and from our estimate (7). To illustrate that η_NLL cannot explain η/s ≲ 0.5 (but η_LO may), we also show the constraint for the entropy, 4T³ ≤ s ≤ s₀ for T > 1.2T_c (see main text).]

This reproduces [with p̄ ∼ T in (2)] the parametric α-dependence of (1), but also shows that the singularity of η_NLL(α) is related to coinciding integration bounds in (3). Thus the reason why η_NLL cannot be extrapolated to larger α has to do with kinematic simplifications that become illegitimate, rather than a 'breakdown' of perturbative QCD per se at α ∼ c. To validate this insight beyond the scope of (2), the viscosity has to be calculated from the energy-momentum tensor of the particle distribution f(p, x, t) governed by the Boltzmann equation, (∂_t + v·∇)f = C[f], when set up for the case of a collective small-gradient flow u that drives f slightly out of local equilibrium. As detailed in Refs. [4,6], η can be obtained by extremizing a functional constructed from the collision term C[f]. The gist of this somewhat technical calculation is, schematically,

η⁻¹ ∝ ∫ ds P(s) σ_tr(s),    (4)

if dσ/dt (as a kernel in C[f]) depends only on the Mandelstam variables, and omitting terms sub-leading to the dominant small-angle binary scattering contributions. As an aside, with σ_tr factorized from a positive weight P(s) (that depends on how the system departs from equilibrium, see later), the convolution (4) specifies more rigorously the 'typical' momentum p̄ in the elementary Ansatz (2). Calculated with a screened cross section dσ/dt, (4) resums powers of both 1/(ln α⁻¹) and α, as does η_LO. The inverse-log expansion of η_LO was shown in [4] to have zero radius of convergence. We show here that the expansion in α is also ill-defined. To that end, we defer QCD particularities and argue on the basis of (4) [17] applied to the simple model dσ_scr/dt which, now with correct kinematic limits, amends (3) to

σ_tr^scr ∝ (α²/s) g(a),    (5)

where

g(a) = ln[(1+a)/a] − 1/(1+a)    (6)

is a monotonically decreasing, positive function of a = µ²/s ∝ α. By contrast, its 'NLL' approximation, g = ln a⁻¹ − 1 + O(a), becomes obviously unphysical for a > 1/e, leading to the same issues as seen in (1) and (3). We note first that this problem cannot be cured by higher order terms in the expansion due to the convergence radius, a = 1, set by the pole at t = µ² (off the physical sheet) in dσ_scr/dt. This feature of a finite radius of convergence will carry over to QCD. What is more, expanding σ_tr^scr in µ²/s ∝ α before convoluting it in (4) with P(s) is forbidden: the coefficients of αⁿ (the negative moments of P) are infrared-divergent, with increasing severity, since P(0) > 0 [7] (the departure from equilibrium cannot be strictly linear in p [8]).
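A quick numerical comparison of g(a) in (6) with its NLL truncation illustrates the sign problem at a > 1/e discussed above; this is a standalone check, not part of the original calculation.

```python
import math

def g(a):
    # Resummed result (6): g(a) = ln((1+a)/a) - 1/(1+a), with a = mu^2/s
    return math.log((1 + a) / a) - 1 / (1 + a)

def g_nll(a):
    # NLL truncation g ~ ln(1/a) - 1, which turns negative (unphysical) for a > 1/e
    return -math.log(a) - 1

for a in (0.01, 0.1, 1 / math.e, 0.5, 1.0):
    print(f"a = {a:5.3f}: g = {g(a):+.3f}, NLL approximation = {g_nll(a):+.3f}")
```

At a = 1/e the truncation vanishes (and is negative beyond), while the full g(a) remains positive (g ≈ +0.58 there and +0.19 at a = 1), exactly the qualitative failure of (1) and (3) described above.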
Note also that by crossing symmetry in dσ/dt, we obtain the same contribution (5) from the dressed u-channel term. Now, since even the model with dσ_scr/dt does not have a weak-coupling expansion that allows for extrapolation to larger α, we cannot expect one when taking into account QCD features more accurately. In other words: unless α ≪ c, estimates of η cannot be based on the NLL formula (1) but require at least the unexpanded (resummed) LO result η_LO. As a function of the coupling parameter, η_LO(α) monotonically approaches zero, which provokes the question of 'the' value of α [18]. Before addressing this question to back up that perturbative QCD can indeed explain η/s ≲ 0.5, let us briefly point out that η_LO(α) is fairly well reproduced by our approximation (4)-(6). Without needing to discuss further details of P(s) we can simply rewrite the convolution in (4) using the mean value theorem,

η ≈ b T³ / [α² g(a)],  a = κ·α.    (7)

Here we could sidestep solving the Boltzmann equation for χ(p) and infer that 1/(2∫ds P(s)) = b (the factor 2 accounts for t ↔ u crossing symmetry), since (7) has to reproduce (1) at LL accuracy. Furthermore, a = µ²/s = κ·α could be determined from a 'log moment' of P(s), but we will rather adjust it to match c in (1), viz. κ → (ce)⁻¹. This effectively re-incorporates sub-dominant contributions of the t- and u-channels, but also s-channel and inelastic scatterings that were omitted in our simple scheme. To quantify the uncertainty of this artifice, we vary κ by factors 2^{±1/2} in Fig. 1, which confirms a good agreement of (7) with η_LO(α) even for α ≳ α*, where the NLL result becomes qualitatively incorrect, as discussed. Figure 1 also depicts the rigorous bound s > 4T³ on the entropy for T > 1.2T_c known from lattice calculations [9], to affirm that η_NLL cannot explain η/s ≲ 0.5. On the other hand, for α large enough η_LO could be compatible with η/s ≲ 0.5, which brings us back to the task of specifying α at a given T. A common prescription in the literature is to take α as the running coupling

α(Q²) = 1/[β₀ ln(Q²/Λ²)]    (8)

(where β₀ = (11 − (2/3)n_f)/(4π) and Λ is the QCD parameter) at a 'typical thermal scale', usually the lowest Matsubara energy modulo a factor ξ of order one, Q_T = 2πT·ξ. To then have η_LO(α)/s_latt ≲ 0.5 at, e.g., T = 1.2T_c would require α ≳ 0.4, see Fig. 1. While the resulting ξ ≲ 0.5Λ/T_c would be ∼ 1, quantifying the coupling (and thus the viscosity) should be based on firmer grounds. This loose end (of having to specify the coupling a posteriori) arises because in Ref. [4] α is treated as if it were constant. Imposing then Q_T as the relevant scale seems counterintuitive given the importance of a whole range of momenta, parametrically [µ, T]. Rather, as put forward early on [10] but rarely taken into account in finite-T QCD phenomenology, the relevant scale of the running coupling in, say, t-channel scattering should be t [19]. This rectifies (3) to

σ_tr ≈ ∫_{−s}^{−µ²} dt (½|t|/s) [dσ_tree/dt]_{α → α(t)} ∼ (1/s) α(µ²) α(s) ln(s/µ²),

hence the overall factor α⁻² in (1) is to be understood as a geometric mean of the running coupling at the thermal scale ∼ Q_T and at the soft screening scale µ. To consolidate this as our second key point: running of the coupling emerges from vacuum fluctuations, which are inseparable from thermal fluctuations. Thus for observables that require thermal screening, like the viscosity, the 'scale setting' for α(Q²) is unambiguous. For this coupling renormalization, several types of radiative corrections are needed, of which, however, only the gluon self-energy Π = Π_vac + Π_T contributes in Coulomb gauge due to its Abelian-like Ward identities [11].
This noteworthy feature simplifies our argument. Although rarely used for vacuum QCD, in Coulomb gauge it is evident that dressing e.g. a t-channel Born amplitude ∼ α/t with Π_vac(Q) = αβ₀[ln(−Q²/L²) − 1] Q² (in dimensional regularization with scale L, and Q² = t) gives the renormalized M_vac ∼ α(t)/t with, indeed, the coupling (8) at the scale t. At T > 0 (where Coulomb gauge is customary for other reasons), the self-energy receives the finite contribution Π_T = α ϑ, where the function ϑ ∼ T² depends on q₀ and q. Then the renormalized amplitude becomes M ∼ α(Q²)/(Q² − α(Q²) ϑ) [12], where we emphasize that Q² also emerges as the scale for the coupling in the thermal self-energy. This dependence of the running coupling on the virtuality carries over to the other scattering channels and then to dσ/dt ∼ |Σ M_i|². Juxtapose this consistent renormalization with the common (fixed-α) procedure: there the vacuum part of the self-energy is dropped, to give M_fix ∼ α/(Q² − α ϑ) with the value of the bare coupling α left unspecified. This analysis allows us to easily re-instate running in the fixed-coupling calculation [4], where the infrared-sensitive terms in dσ_tree/dt were screened with hard thermal loop (HTL) insertions, replacing e.g. the α²(−us/t²) term as in (9). Here Y^µν = (P₁ − ½Q)^µ (P₂ + ½Q)^ν, and D_µν = (D₀⁻¹ − Π_T)⁻¹_µν is the Coulomb HTL propagator. The matrix element αY^µν D_µν, which corresponds to M_fix, separates into transverse and longitudinal contributions (i = {t, ℓ}), with D_i = 1/(Q² − αϑ_i). Promoting now α to be Q²-dependent restores the vacuum contribution and gives the renormalized amplitude (10). The same goes for the α²(−ts)/u² contribution in dσ_tree/dt, with Q² → u in (10). It remains to discuss the terms α²(3 − ut/s²) in dσ_tree/dt and α²/4 in (9), which only give sub-leading (finite) contributions to σ_tr even without thermal screening. Accordingly, the scale for the running coupling in these terms is irrelevant for us; we set it to (stu)^{1/3}. On a par is the effect of inelastic scatterings, which we neglect altogether as they affect η_LO by merely a few percent [4]. We note that although the running coupling (8) becomes unphysical in the far-infrared domain, |Q²| ≲ Λ², this effect is rendered unimportant by thermal screening ∼ ϑ_i [7]. The HTL screening in (9), (10) is justified only for soft momenta |Q²| ≲ T² (which is sufficient for LO accuracy). Adapting the Braaten-Yuan method [13] (as done in [4]), we omit screening for |Q²| > |t*| and then vary |t*| ∈ [½, 2]T² to probe the sensitivity to this class of higher order contributions. Figure 2 shows a factor-of-two uncertainty of η for relevant T, which justifies our simplifying assumptions on the scale setting and omitting inelastic scatterings. Such improved estimates of the viscosity depend only on the QCD scale Λ, which is of the order of T_c. In light of the overbearing sensitivity of η on t*, we set Λ → T_c for the viscosity shown in Fig. 2, normalized by the interacting entropy from lattice QCD calculations [9]. For n_f = 0 our results are compatible with existing lattice calculations of the viscosity [3], which may give some guidance despite their limitations. Interestingly, η(T)/s(T) hardly changes when including quarks; apparently the increased interaction rate is compensated by the density. Our results compare favorably to recent constraints from hydrodynamics [14] testing the average value and the T-dependence of the viscosity.
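For orientation, the one-loop running coupling (8) at the often-imposed thermal scale Q_T = 2πT·ξ, with Λ set to T_c as adopted above, can be evaluated in a few lines; this is an illustration of the scale sensitivity, not the resummed calculation itself.

```python
import math

def alpha_s(Q2, Lambda2=1.0, nf=0):
    # One-loop running coupling, eq. (8): alpha(Q^2) = 1/(beta0 * ln(Q^2/Lambda^2))
    beta0 = (11 - 2 * nf / 3) / (4 * math.pi)
    return 1.0 / (beta0 * math.log(Q2 / Lambda2))

# Work in units of Tc (so Lambda -> Tc means Lambda2 = 1)
T = 1.2
for xi in (0.5, 1.0, 2.0):
    QT2 = (2 * math.pi * T * xi) ** 2
    print(f"xi = {xi}: alpha(Q_T^2) = {alpha_s(QT2):.3f}")
```

Varying ξ between ½ and 2 moves the coupling at T = 1.2T_c roughly between 0.2 and 0.4, which is why an ad hoc choice of ξ translates into a large spread in η, as Fig. 2 illustrates.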
A fairly mild increase in η(T)/s(T) reflects the QCD feature of an effective coupling which weakens logarithmically. Figure 2 also illustrates that the NLL formula (1), supplemented by the running coupling at the scale Q_T = 2πT·ξ with ξ ∈ [½, 2], overestimates η/s by half an order of magnitude. We have demonstrated that this estimate is misleading for two reasons, namely due to compromising the fixed-α LO (resummed) result by another (log) expansion and an ad hoc choice for the value of α. In fact, both issues are closely related: resummation accounts for thermal screening which results from loop corrections to tree-level amplitudes, as does the running coupling.

[FIG. 2: The viscosity in units of the interacting entropy [9]; the full lines show our resummed result with running coupling, the bands give the uncertainty from varying t* ∈ [½, 2]T², see text. The left panel, for the quenched limit, also shows existing lattice results [3], and by the dotted lines the NLL result (1), varying the often imposed scale Q_T = 2πT in the running coupling by a factor of two. Overlaid on the right, for the physical case (n_f = 3), are the two permissible (out of the five tested) scenarios from hydrodynamics [14]. Hatched region: η/s ≤ 1/(4π).]

Let us conclude with two general comments, discussing first the applicability of weak-coupling methods at 'larger coupling', as often relevant for heavy-ion phenomenology. Perturbation theory may at best give asymptotic expansions, hence higher loop corrections are not guaranteed to improve accuracy (due to lack of convergence). Since the optimal order is expected to decrease with the characteristic α [15], low-order approximations can make for useful and in fact more reliable estimates. In the case of η, the first (and only available) candidate is the LO result, for which we have emphasized the importance of the running coupling: after all, α(Q²) varies most where it is large. With our second remark we justify a posteriori the use of kinetic theory, which relies on the mean interparticle distance r̄ ∼ n^{−1/3} being sufficiently smaller than the transport mean free path λ [16]. The latter can be calculated systematically from the gain (or loss) term of our renormalized collision operator C, with the result that λ/r̄ remains larger than one (although only by a small margin) even near T_c [7]. Apparently, the interactions of a few partons are sufficient to maintain local equilibrium. Treating the vacuum and thermal parts of loop corrections on the same footing, we arrive at a consistent position regarding a long-standing question: the reckoned constraint η ≲ 0.5 s for the QGP produced in heavy-ion collisions can be understood on the basis of the LO viscosity, rather than being a genuinely non-perturbative effect.
Autonomous materials discovery driven by Gaussian process regression with inhomogeneous measurement noise and anisotropic kernels

A majority of experimental disciplines face the challenge of exploring large and high-dimensional parameter spaces in search of new scientific discoveries. Materials science is no exception; the wide variety of synthesis, processing, and environmental conditions that influence material properties gives rise to particularly vast parameter spaces. Recent advances have led to an increase in the efficiency of materials discovery by increasingly automating the exploration processes. Methods for autonomous experimentation have become more sophisticated recently, allowing for multi-dimensional parameter spaces to be explored efficiently and with minimal human intervention, thereby liberating the scientists to focus on interpretations and big-picture decisions. Gaussian process regression (GPR) techniques have emerged as the method of choice for steering many classes of experiments. We have recently demonstrated the positive impact of GPR-driven decision-making algorithms on autonomously steered experiments at a synchrotron beamline. However, due to the complexity of the experiments, GPR often cannot be used in its most basic form, but rather has to be tuned to account for the special requirements of the experiments. Two requirements seem to be of particular importance, namely inhomogeneous measurement noise (input-dependent or non-i.i.d.) and anisotropic kernel functions, which are the two concepts that we tackle in this paper. Our synthetic and experimental tests demonstrate the importance of both concepts for experiments in materials science and the benefits that result from including them in the autonomous decision-making process.

Introduction

Artificial intelligence and machine learning are transforming many areas of experimental science. While most techniques focus on analyzing "big data" sets, which are comprised of redundant information, collecting smaller but information-rich data sets has become equally important. Brute-force data collection leads to tremendous inefficiencies in the utilization of experimental facilities and instruments, and in data analysis and data storage; large experimental facilities around the globe are running at 10 to 20 percent utilization and are still spending millions of dollars each year to keep up with the increase in the amount of data storage needed [16,14,1,35]. In addition, conventional experiments require scientists to prepare samples and directly control experiments, which leads to highly trained researchers spending significant effort on micromanaging experimental tasks rather than thinking about scientific meaning. To avoid this problem, autonomously steered experiments are emerging in many disciplines. These techniques place measurements only where they can contribute optimally to the overall knowledge gain, avoiding measurements that collect redundant information. If, for instance, one parameter is measured in mm ∈ [0, 1] and another is a temperature in °C ∈ [5, 500], we should find different length scales for different directions of the parameter space. Also, there might be different differentiability characteristics in different directions. It is therefore vitally important to give the model the flexibility to account for those varying features. This can either be done by using an altered Euclidean norm or by employing different norms that provide more flexibility of distance measures in different directions. The general idea, including the concepts proposed in this paper, is visualized in Figure 1.
This paper is organized as follows: first, we introduce the traditional theory of Gaussian process regression with i.i.d. noise and standard isotropic kernel functions. Second, we make formal changes to the theory to include non-i.i.d. noise and anisotropy. Third, we demonstrate the impact of the two concepts on synthetic experiments. Fourth, we present a synchrotron beamline experiment that exploited both concepts in autonomous control.

2 Gaussian Process Regression with non-i.i.d. Noise and Anisotropic Kernels

Prerequisite

We define the parameter space X ⊂ Rⁿ, which serves as the index set or input space in the scope of Gaussian process regression, with elements x ∈ X. We define four functions over X. First, the latent function f = f(x) can be interpreted as the inaccessible ground truth. Second, the often noisy measurements are described by y = y(x) : X → R^d. To simplify the derivation, we assume d = 1; allowing for d > 1 is a straightforward extension. Third, the surrogate model function is then defined as ρ = ρ(x) : X → R. Fourth, the posterior mean function m(x), which is often assumed to equal the surrogate model, i.e., m(x) = ρ(x), but this is not necessarily the case. We also define a second space, a Hilbert space H ⊂ R^N × R^N × R^J, with elements [f y f₀]ᵀ, where N is the number of data points, J is the number of points at which we want to predict the model function value, y are the measurement values, f is the vector of unknown latent function evaluations, and f₀ is the vector of predicted function values at a set of positions. Note that scalar functions over X, e.g. f(x), are vectors (bold typeface) in the Hilbert space H, e.g. f. We also define a function p over our Hilbert space which is just the function value of the Gaussian probability density functions involved. For more explanation on the distinction between the two spaces and the functions involved, see Figure 2.

Gaussian Process Regression with Isotropic Kernels and i.i.d. Observation Noise

Defining a GP regression model from data D = {(x₁, y₁), ..., (x_N, y_N)}, where y_i = f(x_i) + ε(x_i), is accomplished in a GP regression framework by defining a Gaussian probability density function, called the prior,

p(f) = N(f; µ, K),    (1)

and a likelihood

p(y|f) = N(y; f, σ²I),    (2)

where µ = [µ(x₁), ..., µ(x_N)]ᵀ is the mean of the prior Gaussian probability density function (not to be confused with the posterior mean function m(x)). The prior mean can be understood as the position of the Gaussian. K, with entries K_ij = k(φ, x_i, x_j) for x_i, x_j ∈ X, is the covariance of the Gaussian process, with its covariance function, often referred to as the kernel, k(φ, x_i, x_j), where φ are the hyperparameters; σ² is the variance of the i.i.d. observation noise. The problem here is that, in practice, the i.i.d. noise restriction rarely holds in the experimental sciences, which is one of the issues to be addressed in this paper. The kernel k is a symmetric and positive semi-definite function, such that k : X × X → R. As a reminder, X is our parameter space, often referred to as the index set or input space in the literature.
A well-known choice [37] is the Matérn kernel class, defined by

k(r) = σ_s² (2^{1−ν}/Γ(ν)) (√(2ν) r/l)^ν B_ν(√(2ν) r/l),    (3)

where B_ν is the modified Bessel function of the second kind, Γ is the gamma function, σ_s² is the signal variance, l is the length scale, r = ‖x_i − x_j‖_{l₂} is the Euclidean distance between input points, and ν is a parameter that controls the differentiability characteristics of the kernel and therefore of the final model function. The well-known exponential and squared exponential kernels are special cases of the Matérn kernels. The signal variance σ_s² and the length scale l are hyperparameters (φ) that are found by maximizing the log-likelihood, i.e., solving

φ̂ = argmax_φ ln L(φ),    (4)

where

ln L(φ) = −½ (y − µ)ᵀ (K + σ²I)⁻¹ (y − µ) − ½ ln|K + σ²I| − (N/2) ln 2π,    (5)

and I is the identity matrix. In the isotropic case, we only have to optimize for one signal variance and one length scale (per kernel function). The mean function µ(x) is often assumed to be constant and therefore does not have to be part of the optimization. The mean function assigns the location of the prior in H to any x ∈ X; it can therefore be used to communicate prior knowledge (for instance physics knowledge) to the Gaussian process. Provided some hyperparameters, the joint prior is given as

p([f, f₀]ᵀ) = N([µ, µ(x₀)]ᵀ, [[K, κ], [κᵀ, K̄]]),    (6)

where κ_i = k(φ, x₀, x_i), K̄ = k(φ, x₀, x₀) and, as a reminder, K_ij = k(φ, x_i, x_j). Intuitively speaking, Σ, K and k are all measures of similarity between measurement results y(x) across the input space. While K stores this similarity between all data points, Σ (reducing to the vector κ for a single point x₀) stores the similarity between all data points and all unknown points of interest, and K̄ contains the similarity between the unknown points of interest themselves; k contains the instruction on how to calculate this similarity. The reader might wonder: "how do we find the similarity between unknown points of interest?" The answer lies in the formulation of the kernels, which calculate the similarity just by knowing locations x ∈ X and not the function evaluations y(x). x₀ is the point where we want to estimate the mean and the variance. Note here that, with only slight adaptation of the equations, we are able to compute the mean and variance for several points of interest. The predictive distribution is defined as

p(f₀|y) = N(m(x₀), σ²(x₀)),    (8)

and the predictive mean and the predictive variance are therefore respectively defined as

m(x₀) = µ(x₀) + κᵀ (K + σ²I)⁻¹ (y − µ),    (9)

σ²(x₀) = K̄ − κᵀ (K + σ²I)⁻¹ κ,    (10)

which are the posterior mean and variance at x₀, respectively. N(•, •) stands for the normal (Gaussian) distribution with a given mean and covariance.

[Figure 2: (a) A function over X. This can be the surrogate model ρ(x), the latent function f(x) to be approximated through an experiment, the function describing the measurements y(x), or the predictive mean function m(x); x₁ and x₂ are two experimentally controlled parameters (e.g., synthesis, processing or environmental conditions) that the measurement outcomes potentially depend on. (b) The Gaussian probability density function over H which gives GPR its name. For noise-free measurements, y = f at measurement points, meaning that we can directly observe the model function. Generally this is not the case and the observations y are corrupted by input-dependent (non-i.i.d.) noise.]
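To make the preceding equations concrete, here is a minimal, self-contained sketch of the posterior mean (9) and variance (10) with i.i.d. noise, using the squared-exponential kernel (a special case of the Matérn class). It is an illustration of the equations, not the software used in the paper.

```python
import numpy as np

def sqexp(X1, X2, sig2_s, l):
    # Squared-exponential kernel, the nu -> infinity member of the Matern class
    d = X1[:, None, :] - X2[None, :, :]
    return sig2_s * np.exp(-0.5 * np.sum(d**2, axis=-1) / l**2)

def gp_posterior(X, y, X0, sig2_s=1.0, l=1.0, sig2_n=1e-2, mu=0.0):
    # Posterior mean (9) and variance (10) with i.i.d. noise sigma^2 * I
    K = sqexp(X, X, sig2_s, l) + sig2_n * np.eye(len(X))  # K + sigma^2 I
    kappa = sqexp(X0, X, sig2_s, l)                       # similarity to data
    Kbar = sqexp(X0, X0, sig2_s, l)                       # prior cov at x0
    alpha = np.linalg.solve(K, y - mu)
    mean = mu + kappa @ alpha
    var = np.diag(Kbar - kappa @ np.linalg.solve(K, kappa.T))
    return mean, var

# Toy usage: noisy observations of a 1-D latent function
rng = np.random.default_rng(0)
X = rng.uniform(0, 5, (20, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(20)
m, v = gp_posterior(X, y, np.linspace(0, 5, 7)[:, None])
```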
Gaussian Processes with non-i.i.d. Observation Noise

To incorporate non-i.i.d. observation noise one can redefine the likelihood (2) as

p(y|f) = N(y; f, V),    (11)

where V is a diagonal matrix containing the respective measurement variances. The matrix V can also have non-diagonal entries if the measurement noise happens to be correlated; we will only discuss non-correlated measurement noise.

From equations (6) and (11), we can calculate equation (8), i.e., the predictive probability distribution for a measurement outcome at x₀, given the data set. The mean and variance of this distribution are

m(x₀) = µ(x₀) + κᵀ (K + V)⁻¹ (y − µ),    (12)

σ²(x₀) = K̄ − κᵀ (K + V)⁻¹ κ,    (13)

respectively. Note here that the matrix of the measurement errors V replaces the matrix σ²I in equations (9) and (10). However, this does not follow from a simple substitution, but from a significantly different derivation. The log-likelihood (5) changes accordingly, yielding

ln L(φ) = −½ (y − µ)ᵀ (K + V)⁻¹ (y − µ) − ½ ln|K + V| − (N/2) ln 2π.

This concludes the derivation of GPR with non-i.i.d. observation noise. Figure 3 illustrates the effect of different kinds of noise on a one-dimensional model function. As we can see, while some details of the derivation change when we account for inhomogeneous (also known as input-dependent or non-i.i.d.) noise, the resulting equations are very similar and the computation incurs no extra cost. A sketch implementing these equations follows after the next subsection's opening.

Gaussian Processes with Anisotropy

For parameter spaces X that are anisotropic, i.e., where different directions have different characteristic correlation lengths, we can redefine the kernel function to incorporate different length scales in different directions. One way of doing this for axial anisotropy is by choosing the l₁ norm as the distance measure and redefining the kernel function as a product of one-dimensional kernels,

k(x^m, x^n) = σ_s² ∏_{i=1}^{d} k_i(|x_i^m − x_i^n| / l_i),

where the superscripts m, n are point labels, the subscript i denotes the different directions in X, and d = dim(X). Defining a kernel per direction gives us the flexibility to enforce different orders of differentiability in different directions of X. The main benefit, however, is the possibility to define different length scales in different directions of X (see Figure 4). Unfortunately, the choice of the l₁ norm can lead to a very recognizable checkerboard pattern in the surrogate model, but the predictive power of the associated variance function is significantly improved compared to the isotropic case.
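The following minimal sketch, under the same assumptions as the previous one, implements the non-i.i.d. predictive equations (12) and (13) and the corresponding log-likelihood; the kernel argument can be any covariance function, e.g. the sqexp defined above. (The second, metric-based treatment of anisotropy continues below.)

```python
import numpy as np

def gp_posterior_noniid(X, y, X0, V_diag, kernel, mu=0.0):
    # Eqs. (12)-(13): the diagonal noise matrix V replaces sigma^2 * I
    K = kernel(X, X) + np.diag(V_diag)   # K + V, per-point measurement variances
    kappa = kernel(X0, X)
    mean = mu + kappa @ np.linalg.solve(K, y - mu)
    var = np.diag(kernel(X0, X0) - kappa @ np.linalg.solve(K, kappa.T))
    return mean, var

def log_marginal_likelihood(X, y, V_diag, kernel, mu=0.0):
    # ln L = -1/2 (y-mu)^T (K+V)^(-1) (y-mu) - 1/2 ln|K+V| - N/2 ln(2*pi)
    K = kernel(X, X) + np.diag(V_diag)
    r = y - mu
    _, logdet = np.linalg.slogdet(K)
    return -0.5 * (r @ np.linalg.solve(K, r) + logdet + len(y) * np.log(2 * np.pi))

# Usage with the sqexp kernel from the previous sketch and, e.g., 2% error bars:
# kernel = lambda A, B: sqexp(A, B, sig2_s=1.0, l=1.0)
# m, v = gp_posterior_noniid(X, y, X0, V_diag=(0.02 * y)**2, kernel=kernel)
```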
A second way, which avoids the checkerboard pattern in the model but does not allow different kernels in different directions, is to redefine the distances in X as

r(x_i, x_j) = √((x_i − x_j)ᵀ M (x_i − x_j)),

where M is any symmetric positive semi-definite matrix playing the role of a metric tensor [36]. This is just the Euclidean distance in a transformed metric space. In the actual kernel functions, any r/l can then be replaced by the new equation for the metric. We will here only consider axis-aligned anisotropy, which means the matrix M is a diagonal matrix with the inverse squared length scales on its diagonal. The extension to general forms of anisotropy is straightforward but needs a more costly likelihood optimization since more hyperparameters have to be found. The rest of the theoretical treatment, however, remains unchanged. The mean function µ(x), the hyperparameters φ_i and the signal variance σ_s² are again found by maximizing the marginal log-likelihood (5). The associated optimization tries to find a maximum of a function that is defined over R^{d+1}, if we ignore the mean function, as is commonly done. We therefore have to find d + 1 parameters, which adds a significant computational cost. If M is not diagonal we have to maximize the log-likelihood over R^{d(d+1)/2+1}. However, the optimization can be performed in parallel to computing the posterior variance, which can hide the computational effort. It is important to note that accounting for anisotropy can make the training of the algorithm, i.e. the optimization of the log-likelihood, significantly more costly. The extent of this depends on the kind of anisotropy considered. As we shall see, taking anisotropy into account leads to more efficient steering and a higher-quality final result, and is thus generally worth the additional computational cost. A sketch of this axis-aligned formulation is given after the next paragraphs.

Synthetic Tests

Our synthetic tests are carefully chosen to demonstrate the benefits of the two concepts under discussion, namely non-i.i.d. observation noise and anisotropic kernels. To demonstrate the importance of including non-i.i.d. observation noise in the analysis, we consider a synthetic test based on actual physics which we used in previous work to showcase the functionality of past algorithms [27]. We choose an example given in closed form because it provides a noise-free "ground truth" that we can compare to, whereas experimental data would inevitably include unknown errors. To showcase the importance of anisotropic kernels as part of the analysis, we provide a high-dimensional example based on a simulation of a material that is subject to a varying thermal history.

[Figure 4: In the x₁ direction we have assumed that the model function is not differentiable; therefore we used the exponential kernel. In the x₂ direction, the model can be differentiated an infinite number of times; we therefore chose the squared exponential kernel. For other orders of differentiability, other kernels can be used. Fixing the order of differentiability also gives the user the ability to incorporate domain knowledge into the experiment.]

The synthetic tests shown here explore spaces of very different dimensionality. There is no theoretical limitation on the dimensionality of the parameter space. Indeed, the autonomous methods described herein are most advantageous when operating in high-dimensional spaces, since this is where simpler methods, and human intuition, typically fail to yield meaningful searches.
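As a concrete illustration of the preceding theory, the sketch below implements the axis-aligned anisotropic kernel and the (d+1)-parameter likelihood training described above. It reuses log_marginal_likelihood from the previous sketch; the optimizer choice (L-BFGS-B via SciPy) is an assumption made here, not the paper's implementation.

```python
import numpy as np
from scipy.optimize import minimize

def aniso_sqexp(X1, X2, sig2_s, lengths):
    # Axis-aligned anisotropic squared-exponential kernel:
    # r^2 = (x_i - x_j)^T M (x_i - x_j) with M = diag(1/l_1^2, ..., 1/l_d^2)
    d = (X1[:, None, :] - X2[None, :, :]) / np.asarray(lengths)
    return sig2_s * np.exp(-0.5 * np.sum(d**2, axis=-1))

def train(X, y, V_diag, d):
    # Find the d+1 hyperparameters (one length scale per direction plus the
    # signal variance) by maximizing the marginal log-likelihood (5)
    def neg_ll(theta):
        sig2_s, lengths = np.exp(theta[0]), np.exp(theta[1:])
        kernel = lambda A, B: aniso_sqexp(A, B, sig2_s, lengths)
        return -log_marginal_likelihood(X, y, V_diag, kernel)
    res = minimize(neg_ll, x0=np.zeros(d + 1), method="L-BFGS-B")
    return np.exp(res.x[0]), np.exp(res.x[1:])   # sig2_s, per-axis lengths
```

Optimizing in log-space keeps the signal variance and length scales positive without explicit bound constraints.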
Non-i.i.d. Observation Noise

For this test, we define a physical "ground truth" model f(x), whose correct function value at x is inaccessible due to non-i.i.d. measurement noise, but which can be probed by our simulated experiment through y(x). In this case, we assume that the measurements are subject to Gaussian noise with a standard deviation of 2% of the function value at x. The ground-truth model function is defined to be the diffusion coefficient D = D(r, T, C_m) for the Brownian motion of nanoparticles in a viscous liquid consisting of a binary mixture of water and glycerol:

D = k_B T / (6π µ(T, C_m) r),

where k_B is Boltzmann's constant, r ∈ [1, 100] nm is the nanoparticle radius, T ∈ [0, 100] °C is the temperature, and µ = µ(T, C_m) is the viscosity as given by [8], where C_m ∈ [0.0, 100.0] % is the glycerol mass fraction. This model was used in [27] to show the functionality of Kriging-based autonomous experiments. The experiment device has no direct access to the ground-truth model, but adds an unavoidable noise level, i.e.,

y(x) = f(x) + ε(x),  ε(x) ∼ N(0, (0.02 f(x))²).

To demonstrate the importance of the noise model, we first ignore noise, then approximate it assuming i.i.d. noise, and finally model it allowing for non-i.i.d. noise. Figure 5 shows the results after 500 measurements, and a comparison to the (inaccessible) ground truth. Figure 6 compares, between the three different types of noise, the decrease in the error, in the form of the Euclidean distance between the models and the ground truth, with increasing number of measurements N.

The results show that treating noise as i.i.d. or even non-existent can lead to artifacts in the surrogate model. Additionally, the discrepancy between the ground truth and the surrogate model is reduced far more efficiently if non-i.i.d. noise is accounted for.

Anisotropy

Allowing anisotropy can increase the efficiency of autonomous experiments significantly for any dimensionality of the underlying parameter space. However, as the dimensionality of the parameter space increases, the importance of anisotropy increases substantially, purely due to the number of directions in which anisotropy can occur. To demonstrate this link, we simulated an experiment where a material is subjected to a varying thermal history. That is, the experiment consists of repeatedly changing the temperature, and taking measurements along this time-series of different temperatures. The temperature at each time step can be thought of as one of the dimensions of the parameter space. The full set of possible applied thermal histories thus becomes a set of points in the high-dimensional parameter space of temperatures. In particular, we consider the ordering of a block copolymer, which is a self-assembling material that spontaneously organizes into a well-defined morphology when thermally annealed [10]. The material organizes into a defined unit cell locally, with ordered grains subsequently growing in size as defects annihilate [23]. We use a simple model to describe this grain coarsening process, where the grain size ξ increases with time according to a power law,

ξ(t) = k t^α,

where α is a scaling exponent (set to 0.2 for our simulations) and the prefactor k captures the temperature-dependent kinetics,

k = A exp(−E_a / (R T)).

Here, E_a is an activation energy for coarsening (we select a typical value of E_a = 100 kJ/mol), and the prefactor A sets the overall scale of the kinetics (set to 3 × 10¹¹ nm/s^α). From these equations we construct an instantaneous growth rate of the form

dξ/dt = α k^{1/α} ξ^{1 − 1/α}.

Block copolymers are known to have an order-disorder transition temperature (T_ODT) above which thermal energy overcomes the material's segregation
strength, and thus the nanoscale morphology disappears in favor of a homogeneous disordered phase. Heating beyond T_ODT thus implies driving ξ to zero. We describe this 'grain dissolution' process using an ad hoc form,

dξ/dt = −k_diss (T − T_ODT) for T > T_ODT,

where we set k_diss = 1.0 nm s⁻¹ K⁻¹ and T_ODT = 350 °C. We also apply an ad hoc suppression of kinetics near T_ODT and when grain sizes are very large, to account for experimentally observed effects. Overall, this simple model describes a system wherein grains coarsen with time and temperature, but shrink in size if the temperature is raised too high. The parameter space defined by a sequence of temperatures will thus exhibit regions of high or low grain size depending on the thermal history described by that point; moreover, there is non-trivial coupling between these parameters, since the grain size obtained for a given step of the annealing (i.e. a given direction in the parameter space) sets the starting point for coarsening in the next step (i.e. the next direction of the parameter space). We select thermal histories consisting of 11 temperature selections (the temperature is updated every 6 s), which thus defines an 11-dimensional parameter space for exploration. Each temperature history defines a point (x ∈ X) within the 11-dimensional input space. As can be seen in Figure 7(a), the majority of thermal histories one might select terminate in a relatively small grain size (blue lines in the figure). This can be easily understood, since a randomly selected annealing protocol will use temperatures that are either too low (slow coarsening) or too high (T > T_ODT drives the system into the disordered state). Only a subset of possible histories terminate with a large grain size (dark, less transparent lines in Figure 7), corresponding to the judicious choice of an annealing history that uses large temperatures without crossing T_ODT. While this conclusion is obvious in retrospect, in the exploration of a new material system (e.g. one for which the values of material properties like T_ODT are not known), identifying such trends is non-trivial. Representative slices through the 11-dimensional parameter space (Fig. 7(b) and (c)) further emphasize the complexity of the search problem, especially the anisotropy of the problem. That is, different steps in the annealing protocol have different effects on coarsening; correspondingly, the different directions in the parameter space have different characteristic length scales that must be correctly modeled (even though every direction is conceptually similar in that it describes a 6 s thermal annealing step).

Autonomous exploration of this parameter space enables the construction of a model for this coarsening process. Moreover, the inclusion of anisotropy markedly improves the search efficiency, reducing the model error more rapidly than when using a simpler isotropic kernel (Fig. 7(d)). As the dimensionality of the problem and the complexity of the physical model increase, the utility of including an anisotropic kernel increases further still.
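A minimal sketch of this coarsening model as just described. The ad hoc suppression factors near T_ODT and at large grain size are described only qualitatively in the text and are therefore omitted, and the exact per-step update (rather than explicit Euler integration) is a choice made here for numerical stability.

```python
import numpy as np

# Model constants quoted in the text
ALPHA = 0.2            # coarsening exponent
EA = 100e3             # activation energy, J/mol
A = 3e11               # kinetic prefactor, nm/s^alpha
K_DISS = 1.0           # dissolution rate, nm/(s*K)
T_ODT = 350.0          # order-disorder transition, deg C
R = 8.314              # gas constant, J/(mol*K)

def final_grain_size(temps_C, dt=6.0, xi0=1.0):
    """Integrate the grain size over an 11-step thermal history (one x in X)."""
    xi = xi0
    for T in temps_C:
        if T < T_ODT:
            k = A * np.exp(-EA / (R * (T + 273.15)))
            # xi = k*t^alpha implies xi^(1/alpha) grows linearly in time,
            # giving an exact update over a constant-temperature step
            xi = (xi ** (1 / ALPHA) + dt * k ** (1 / ALPHA)) ** ALPHA
        else:
            # 'grain dissolution' above T_ODT
            xi = max(0.0, xi - dt * K_DISS * (T - T_ODT))
    return xi

history = np.full(11, 300.0)   # a judicious (hot but below T_ODT) protocol
print(f"final grain size: {final_grain_size(history):.0f} nm")
```

Evaluating this function over many randomly drawn histories reproduces the qualitative picture of Figure 7(a): most protocols end in small grains, and only histories that stay hot while avoiding T_ODT terminate with large ones.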
Autonomous SAXS Exploration of Nanoscale Ordering in a Flow-Coated Polymer-Grafted Nanorod Film

The proposed GP-driven decision-making algorithm that takes into account non-i.i.d. observation noise and anisotropy has been used successfully in autonomous synchrotron experiments. Here we present, as an illustrative example, the results of an autonomous x-ray scattering experiment on a polymer-grafted gold nanorod thin film, where a combinatorial sample library was used to explore the effects of film fabrication parameters on self-assembled nanoscale structure. Unlike traditional short-ligand-coated particles, polymer-grafted nanoparticles (PGNs) are stabilized by high-molecular-weight polymers at relatively low grafting densities. As a result, PGNs behave as soft colloids, possessing the favorable processing behavior of polymer systems while still retaining the ability to pack into ordered assemblies [7]. Although this makes PGNs well suited to traditional approaches for thin-film fabrication, the nanoscale assembly of these materials is inherently complex, depending on a number of variables including, but not limited to, particle-particle interactions, particle-substrate interactions, and process methodology.

The combinatorial PGN film sample was fabricated at the Air Force Research Laboratory. A flow-coating method [7] was used to deposit a thin PGN film on a surface-treated substrate, where gradients in coating velocity and substrate surface energy were imposed along two orthogonal directions over the film surface. A 250 nM toluene solution of 53 kDa polystyrene-grafted gold nanorods (94% polystyrene by volume), with nanorod dimensions of 70 ± 6 nm in length and 11.0 ± 0.9 nm in diameter (based on TEM analysis), was cast onto a functionalized glass coverslip using a motorized coating blade. The resulting film covered a rectangular area of dimensions 50 mm × 60 mm. The surface energy gradient on the glass coverslip was generated through the vapor deposition of phenylsilane [13]. The substrate surface energy varied linearly along the x direction from 30.5 mN/m (hydrophobic) at one edge of the film (x = 0) to 70.2 mN/m (hydrophilic) at the other edge (x = 50 mm). Along the y direction, the film-casting speed increased from 0 mm/s (at y = 0) to 0.5 mm/s (y = 60 mm) at a constant acceleration of 0.002 mm/s². The film-casting condition corresponds to the evaporative regime, where solvent evaporation occurs on timescales similar to those of solid film formation [4]. In this regime, solvent evaporation at the meniscus induces a convective flow, driving the PGNs to concentrate and assemble at the contact line. The film thickness decreased with increasing coating speed, resulting in transitions from multilayers through a monolayer to a sub-monolayer with increasing y. This was verified by optical microscopy observations of the boundaries between the multilayer, bilayer, monolayer and sub-monolayer regions, the last of which were identified by the presence of holes in the film, typically 1 µm or greater as seen in the optical images.
The autonomous small-angle x-ray scattering (SAXS) experiment was performed at the Complex Materials Scattering (11-BM, CMS) beamline at the National Synchrotron Light Source II (NSLS-II), Brookhaven National Laboratory. As described previously [27,28], experimental control was coordinated by combining three Python software processes: bluesky [22] for automated sample translations and data collection, SciAnalysis [21] for real-time analysis of newly collected SAXS images, and the above GPR-based optimization algorithms for decision-making. The incident x-ray beam was set to a wavelength of 0.918 Å (13.5 keV x-ray energy) and a size of 0.2 mm × 0.2 mm. The PGN film-coated substrate was mounted normal to the incident x-ray beam, on a set of motorized xy translation stages. Transmission SAXS patterns were collected on an area detector (DECTRIS Pilatus 2M) located at a distance of 5.1 m downstream of the sample, with an exposure time of 10 s/image. The SAXS results indicate that the polymer-grafted nanorods tend to form ordered domains in which the nanorods lie flat and parallel to the surface and align with their neighbors. The fitting of SAXS intensity profiles via real-time analysis allowed for the extraction of quantities such as: the scattering-vector position q of the diffraction peak, corresponding to the in-plane inter-nanorod spacing d = 2π/q; the degree of anisotropy η ∈ [0, 1] of the in-plane inter-nanorod alignment, where η = 0 for random orientations and η = 1 for perfect alignment [30]; the azimuthal angle χ, or the factor cos(2χ), for the in-plane orientation of the inter-nanorod alignment; and the grain size ξ of the nanoscale ordered domains, which is inversely proportional to the diffraction peak width and provides a measure of the extent of in-plane positional correlations between aligned nanorods. The analysis-derived best-fit values and associated variances for these parameters were passed to the GPR decision algorithms.

Three analysis-derived quantities, ξ, η, and cos(2χ), were used as signals to steer the SAXS measurements as a function of the surface coordinates (x, y). For the initial part of the experiment, N < 464 (the first 4 h), where N is the number of measurements completed up to a given point in the experiment, the autonomous steering utilized the exploration mode based on model-uncertainty maxima [28] for ξ, η, and cos(2χ). For the latter part of the experiment (464 ≤ N ≤ 1520, the next 11 h), the feature-maximization mode [28] was used for η, while keeping ξ and cos(2χ) in the exploration mode. We found that the nanorods in the ordered domains tended to orient such that their long axes were aligned along the x direction [cos(2χ) ≈ 1], i.e., perpendicular to the coating direction, and that ξ and η are strongly coupled. Figure 8A (top panels) shows the N-dependent evolution of the model for the grain size distribution ξ over the film surface. It should be noted that the entire experiment took 15 h, and that the GPR-based autonomous algorithms identified the highly ordered regions in the band 5 < y < 15 mm (between the red lines in Fig. 8A), corresponding to the uniform monolayer region, within the first few hours. By contrast, grid-based scanning-probe transmission SAXS measurements would not be able to identify large regions of interest at these resolutions in such a short amount of time.
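For reference, the conversion from a fitted peak to the derived real-space quantities described above can be sketched as follows. The numerical peak values are hypothetical, and the proportionality constant 2π linking grain size to inverse peak width is an assumed Scherrer-like convention; the text states only the inverse proportionality.

```python
import numpy as np

def peak_to_realspace(q_peak, peak_width):
    """Convert a fitted diffraction peak to real-space quantities."""
    d = 2 * np.pi / q_peak        # in-plane inter-nanorod spacing, d = 2*pi/q
    xi = 2 * np.pi / peak_width   # grain size ~ 1/width (assumed prefactor)
    return d, xi

# Hypothetical fit values in inverse Angstroms (not from the reported experiment)
d, xi = peak_to_realspace(q_peak=0.025, peak_width=0.002)
print(f"d = {d:.0f} A, grain size xi = {xi:.0f} A")
```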
The collected data are corrupted by non-i.i.d. measurement noise. While all signals are corrupted by noise, we draw attention to the peak position q because it shows the most obvious correlation between non-i.i.d. measurement noise and model certainty. The green circles in Figure 8B (middle panel) and C (right panel) highlight the areas where the measurement noise affects the Gaussian-process predictive variance significantly. Note that we have not used q for steering in this case, but the general principle we want to show remains unchanged across all experimental results. Figure 8A shows the time evolution of the exploration of the model and the impact of non-i.i.d. noise on the model as well as on the uncertainty. If q had been used for steering without taking non-i.i.d. noise into account in the analysis, the autonomous experiment would have been misled, because the predictive uncertainty due to high noise levels would not have been taken into account. Figure 8 shows that the next suggested measurement depends strongly on the noise. We remind the reader at this point that the next optimal measurement happens at the maximum of the GP predictive variance. The locations of the optima (Figure 8C) are clearly different when non-i.i.d. noise is taken into account. The objective function without measurement noise (Fig. 8C, left panel) shows no preference for regions of high noise (green circles in Fig. 8B, middle panel), where preference means higher function values of the GP predictive variance. In contrast, the variance function that takes measurement noise into account (Fig. 8C, right panel) gives preference to regions (green circles) where the measurement noise of the data is high. This is a significant advantage and can only be accomplished by taking non-i.i.d. measurement noise into account. In conclusion, the model that assumes no noise looks better resolved, which communicates a wrong level of confidence and misguides the steering. The model that takes non-i.i.d. noise into account finds the correct most likely model and the corresponding uncertainty. The algorithm also took advantage of anisotropy by learning a slightly longer length scale in the x direction, which increased the overall model certainty. Note that the algorithm used an objective-function formulation that put emphasis on high-amplitude regions of the parameter space. This led to a higher resolution in areas of interest.
The above autonomous SAXS experiment also revealed interesting features from the material fabrication perspective. First, a somewhat surprising result is that the grain size is not observed to change significantly with surface energy (Figure 8A). Previous work on the assembly of polystyrene-grafted spherical gold nanoparticles [7] demonstrated a significant decrease in nanoparticle ordering when fabricating films on lower-surface-energy substrates (greater polymer-substrate interactions). Although the surface energies used in this study are similar, a different silane was used to modify the glass surface (phenylsilane vs. octyltrichlorosilane), which may differ in its interaction with polystyrene. We also note that PGN-substrate interactions will be sensitive to the molecular orientation of the functional groups, which is known to be highly dependent on the functionalization procedure [13]. Second, an unexpected well-ordered band was identified at 20 < x < 35 mm and y > 15 mm (between the blue lines in Figure 8A), corresponding to the sub-monolayer region with an intermediate surface-energy range. We believe that this effect arises from instabilities associated with the solution meniscus near the middle of the coating blade (x ∼ 25 mm). Rapid solvent evaporation often leads to undesirable effects including the generation of surface tension gradients, Marangoni flows, and subsequent contact-line instabilities. This can result in the formation of non-uniform morphologies, as demonstrated by the irregular region of larger grain size centered in the middle of the film and spanning the entire velocity range. Further investigations into these issues are currently in progress.

Discussion and Conclusion

In this paper, we have demonstrated the importance of including inhomogeneous (i.e. non-i.i.d.) observation noise and anisotropy in Gaussian-process-driven autonomous materials-discovery experiments.

It is very common in the scientific community to rely on Gaussian processes that ignore measurement noise or only include homogeneous noise, i.e. noise that is constant for every measurement. In the experimental sciences, and especially in experimental materials science, strong inhomogeneity in the measurement noise can be present; accounting only for homogeneous (i.i.d.) measurement noise is therefore insufficient and leads to inaccurate models and, in the worst case, wrong interpretations and missed scientific discoveries. We have shown that it is straightforward to include non-i.i.d. noise in the steering and modeling process. Figure 5 clearly shows the benefit of including non-i.i.d. measurement noise in the Gaussian process analysis. Figure 6 supports the conclusion we drew from Figure 5 by showing a faster error decline.
The case for allowing anisotropy in the input space can be made whenever there is reason to believe that the data vary much more strongly in certain directions than in others. This is often the case when the directions have fundamentally different physical meanings; for instance, one direction can represent a temperature while another represents a physical distance. In such cases, accounting for anisotropy can be vastly beneficial, since the Gaussian process will learn the different length scales and use them to lower the overall uncertainty. Figure 7 shows how common anisotropy is, even in cases where it would normally not be expected, and how including it decreases the approximated error of the Gaussian-process posterior mean. In our example, all axes carry the unit of temperature; even so, anisotropy is present and accounting for it has a significant impact on the approximation error.

In our autonomous synchrotron x-ray experiment we have seen how misleading the no-measurement-noise assumption can be. While the Gaussian-process posterior mean that assumes no noise is much more detailed in Figure 8, it is not supported by the data, which are subject to non-i.i.d. noise. In addition, we have seen that the steering actually accounts for the measurement noise if it is included, which leads to a much smarter decision algorithm that knows where data are of poor quality and have to be substantiated. We showed that without accounting for non-i.i.d. noise this behavior would not arise; we would therefore place measurements sub-optimally, wasting device access, staff time, and other resources.

It is important to discuss the computational costs that come with accounting for non-i.i.d. noise and anisotropy. While non-i.i.d. noise can be included at no additional computational cost, anisotropy potentially comes at a price. The more complex the anisotropy, the more hyperparameters have to be found. The number of hyperparameters translates directly into the dimensionality of the space over which the likelihood is defined. The more hyperparameters we have to find, the longer the training process takes; the cost per function evaluation, however, does not change significantly. Therefore, instead of avoiding the valuable anisotropy, we should make use of modern, efficient optimization methods.
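As a hedged illustration of how anisotropy enters the model, the sketch below (synthetic data and names of our choosing) uses a product kernel with a separate length scale for each input direction, following the construction described in the caption of Figure 4 (an exponential factor along x1 for a non-differentiable direction, a squared-exponential factor along x2 for a smooth one), and learns both length scales by maximizing the log marginal likelihood:

```python
import numpy as np
from scipy.optimize import minimize

# Hedged sketch: anisotropic product kernel with per-direction length scales
# l1, l2, trained by maximizing the GP log marginal likelihood.

rng = np.random.default_rng(0)
X = rng.uniform(size=(60, 2))
y = np.abs(X[:, 0] - 0.5) + np.sin(3 * X[:, 1]) + 0.05 * rng.standard_normal(60)

def kernel(A, B, l1, l2):
    d1 = np.abs(A[:, 0, None] - B[None, :, 0])   # exponential: non-differentiable
    d2 = A[:, 1, None] - B[None, :, 1]           # squared exponential: smooth
    return np.exp(-d1 / l1) * np.exp(-0.5 * (d2 / l2) ** 2)

def neg_log_marginal_likelihood(log_l):
    l1, l2 = np.exp(log_l)                       # optimize in log space
    K = kernel(X, X, l1, l2) + 1e-4 * np.eye(len(X))
    L = np.linalg.cholesky(K)
    a = np.linalg.solve(L.T, np.linalg.solve(L, y))
    return 0.5 * y @ a + np.log(np.diag(L)).sum()

res = minimize(neg_log_marginal_likelihood, x0=np.log([0.3, 0.3]))
print("learned per-direction length scales:", np.exp(res.x))
```

Each additional length scale is one more hyperparameter, which is exactly the training-cost trade-off discussed above: the likelihood is optimized over a higher-dimensional space, while the cost of each likelihood evaluation stays essentially the same.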
While our results have shown that accounting for non-i.i.d. noise and anisotropy is highly valuable for the efficiency of an autonomously steered experiment, we have only scratched the surface of possibilities. Both proposed improvements can be seen as part of a larger theme commonly referred to as kernel design. The possibilities for improving and tailoring Gaussian-process-driven steering of experiments are vast. Well-designed kernels have the power to extract sub-spaces of the Hilbert space of functions, which means we can put constraints on the functions we want to consider as our model (see the short sketch following the figure captions below). We will look into the impact of advanced kernel designs on autonomous data acquisition in the near future.

Figure 8 (panels B and C): (B, middle row, from the left) An exact Gaussian-process interpolation of the complete measured data set for the peak position q. The data are corrupted by measurement errors, which corrupt the model if standard, exact interpolation techniques (including GPR) are used. The green circles mark the regions of the largest variances in the model and the corresponding high errors (measurement variances) that were recorded during the experiment. On the right is the Gaussian-process model of q that takes the non-i.i.d. measurement variances into account; this model does not show any of the artifacts that are visible in the exact GPR interpolation. (C, bottom row) The final objective functions for no noise and for non-i.i.d. noise in q, which have to be maximized to determine the next optimal measurement. If the experiment had been steered using the posterior variances in q without accounting for non-i.i.d. observation noise, the autonomous experiment would have been misled significantly.

Figure 1: Schematic of an autonomous experiment. The data-acquisition device in this example is a beamline at a synchrotron light source. The measurement result depends on parameters x. The raw data are sent through an automated data-processing and analysis pipeline. From the analyzed data, the autonomous-experiment algorithm creates a surrogate model and an uncertainty function whose maxima represent points of high-value measurements; they are found by employing function-optimization tools. The new measurement parameters x are then communicated to the data-acquisition device and the loop starts over. The main contribution of the present work is that the model computation and uncertainty quantification account for the anisotropic nature of the model function and the input-dependent (non-i.i.d.) measurement noise. The surrogate model (bottom) shows how the model function evolves as the experiment is steered and more data (N) are collected. The red dots indicate the positions of the measurements, and their size represents the varying associated measurement variances. The numbers l_x and l_y indicate the anisotropic correlation lengths that the algorithm finds by maximizing a log-likelihood function; the ellipses show the found anisotropy visually. The take-home message for the practitioner is that the method will find the most likely model function given all collected data with their variances. The model function will not pass directly through the points but will find the most likely shape given all available information.
Figure 2: Figure emphasizing the distinction between the spaces and functions involved in the derivation. (a) A function over X. This can be the surrogate model ρ(x), the latent function f(x) to be approximated through an experiment, the function describing the measurements y(x), or the predictive mean function m(x). x_1 and x_2 are two experimentally controlled parameters (e.g., synthesis, processing or environmental conditions) that the measurement outcomes potentially depend on. (b) The Gaussian probability density function over H which gives GPR its name. For noise-free measurements, y = f at the measurement points, meaning that we can directly observe the model function. Generally this is not the case, and the observations y are corrupted by input-dependent (non-i.i.d.) noise.

Figure 3: Three one-dimensional examples with (a) no noise, (b) i.i.d. noise and (c) non-i.i.d. noise, respectively. For the no-noise case, the model has to explain the data exactly. In the i.i.d.-noise case, the algorithm is free to choose a model that does not explain the data exactly but allows for a constant measurement variance. In the non-i.i.d.-noise case, the algorithm finds the most likely model given varying variances across the data set. Note the vertical axis labels: y(x) are the measurement outcomes, m(x) is the mean function, i.e., the most likely model, ρ(x) is the surrogate model, often assumed to equal the mean function, and f(x) is the "ground-truth" latent function.

Figure 4: Model function with different length scales and different orders of differentiability in different directions. In the x_1 direction we have assumed that the model function is not differentiable and therefore used the exponential kernel. In the x_2 direction, the model can be differentiated an infinite number of times; we therefore chose the squared-exponential kernel. For other orders of differentiability, other kernels can be used. Fixing the order of differentiability also gives the user the ability to incorporate domain knowledge into the experiment.

Figure 5: The result of the diffusion-coefficient example on a three-dimensional input space. The figure shows the result of the GP approximation after 500 measurements for three different nanoparticle radii. While the measurement results are always subject to differing noise, the model can take noise into account in different ways. Most commonly, noise is ignored (left column). If noise is included, it is common to approximate it by i.i.d. noise (middle column). The proposed method models the noise as what it is, namely non-i.i.d. noise (right column). The iso-lines of the approximation are shown in white while the iso-lines of the ground truth are shown in red. Observe how the no-noise and the i.i.d.-noise approximations create localized artifacts. The non-i.i.d. approximation does a far better job of creating a smooth model that explains all data, including the noise.

Figure 6: The approximation errors of the surrogate model during the diffusion-coefficient example (Figure 5) for the three different noise models noted in the legend. The bands around each line represent the standard deviation of this error metric computed by running repeated synthetic experiments.
Figure 7: Visualization of the grain size as a function of temperature history for a simple model of block-copolymer grain-size coarsening. The figure demonstrates that when describing physical systems in high-dimensional spaces, strong anisotropy is frequently observed; only by taking this into account when estimating errors will experimental guidance be optimal. (a) 10,000 simulated temperature histories and their corresponding grain size represented by color. The majority of histories terminate in a small grain size (blue lines); a small select set of histories yields large grain sizes (dark red lines). (b) Example two-dimensional slice through the 11-dimensional parameter space; the anisotropy is clearly visible. (c) A different two-dimensional slice with no significant anisotropy present. (d) The estimated maximum standard deviation across the 11-dimensional domain as a function of the number of measurements during a synthetic autonomous experiment.
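As a small, generic illustration of the kernel-design theme raised in the concluding paragraphs above (standard textbook constructions, not code from this work), the sketch below draws samples from Gaussian-process priors whose kernels constrain the admissible functions: a periodic kernel yields only periodic samples, and a symmetrized kernel yields only even functions:

```python
import numpy as np

# Hedged sketch: the kernel selects the sub-space of functions a GP can model.

x = np.linspace(-1.0, 1.0, 200)
rng = np.random.default_rng(1)

def rbf(a, b, ell=0.3):
    return np.exp(-0.5 * ((a[:, None] - b[None, :]) / ell) ** 2)

def periodic(a, b, p=0.5, ell=0.7):
    d = np.abs(a[:, None] - b[None, :])
    return np.exp(-2.0 * np.sin(np.pi * d / p) ** 2 / ell**2)

def even(a, b):
    # covariance of g(x) = (f(x) + f(-x)) / sqrt(2): supports only even functions
    return 0.5 * (rbf(a, b) + rbf(a, -b))

f_per = np.linalg.cholesky(periodic(x, x) + 1e-6 * np.eye(x.size)) @ rng.standard_normal(x.size)
f_sym = np.linalg.cholesky(even(x, x) + 1e-6 * np.eye(x.size)) @ rng.standard_normal(x.size)
print("max |f(x) - f(-x)| for the even-kernel prior:", np.max(np.abs(f_sym - f_sym[::-1])))
```

Up to the tiny jitter term, every draw from the second prior is an even function, illustrating how kernel design puts hard constraints on the model.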
Roadmap on photonic, electronic and atomic collision physics: III. Heavy particles: with zero to relativistic speeds

We publish three Roadmaps on photonic, electronic and atomic collision physics in order to celebrate the 60th anniversary of the ICPEAC conference. Roadmap III focusses on heavy particles: with zero to relativistic speeds. Modern theoretical and experimental approaches provide detailed insight into the wide range of many-body interactions involving projectiles and targets of varying complexity, ranging from simple atoms, through molecules and clusters, complex biomolecules and nanoparticles, to surfaces and crystals. These developments have been driven by technological progress, and future developments will expand the horizon of the systems that can be studied. This Roadmap aims at looking back along the road, explaining the evolution of the field, and looking forward, collecting nineteen contributions from leading scientists in the field.

To celebrate the 60th anniversary of the ICPEAC conference, we publish a series of three Roadmaps on photonic, electronic and atomic collision physics, one for each of the three classes of projectile that comprise the breadth of topics encompassed by ICPEAC: I. Light-matter interaction; II. Electron and antimatter interactions; and III. Heavy particles: with zero to relativistic speeds. Each of the Roadmaps is intended to provide an overview of the present status of the field, how it was arrived at, and to address current and future challenges faced by those working in the broad area of research. As with all IOP Roadmaps, the three articles have been authored collaboratively by leading researchers in the areas, and each aims to provide an impression of current trends in the respective field of research. Roadmap III: Heavy particles is dedicated to recent advances in collisions involving heavy particles, from highly relativistic ions to extremely cold atoms and molecules, from exotic Rydberg atoms to extremely highly-charged heavy ions, from simple atoms and molecules to clusters, large biomolecules, nanoparticles, novel 2D materials, surfaces and solids. This field of research is at the heart of ICPEAC. The present Roadmap glimpses into the future of the field by explaining important and promising theoretical and experimental trends and developments. It comprises 19 contributions by leading scientists distributed over four topic sections: topic section 1 on Rydberg atoms and cold collisions, topic section 2 on collisions involving heavy projectiles, topic section 3 on highly-charged ions, and topic section 4 on new facilities for new challenges.

In topic section 1, Merkt describes the use of molecular Rydberg states for ion chemistry using three examples: PFI-ZEKE and MQDT-assisted Rydberg spectroscopy for accessing a broad range of electronic, vibrational, rotational and hyperfine levels of the cation states and/or their energy intervals, and a chip-based Rydberg-Stark deflector and decelerator for ion-molecule reactions within the Rydberg molecule.
Gallagher looks back on the history of resonant energy transfer with Rydberg atoms, via dipole-dipole interactions without collisional motion, and discusses applications ranging from quantum gates and simulators to the construction of ultralong-range molecules. Dunning describes the current status of studies involving high-Rydberg atoms and future directions such as studying the dynamics of strongly-coupled Rydberg atoms, two-electron excited states (planetary atoms) and ultralong-range Rydberg molecules. Scheier and Echt discuss future directions of research using doped helium nanodroplets (HNDs). In particular, they outline the prospects of using large (up to 1 μm), monodisperse droplets to synthesize and characterize novel structures such as nanowires, Coulomb crystals or ligand-free catalysts.

In topic section 2, Kirchner sketches the path to a fully non-perturbative treatment of ion-atom collision processes. Advances in computational methods have already allowed us to answer some long-standing questions on the role of electron-correlation effects in multiparticle dynamics; however, many-electron systems will remain a challenge into the future. Fritzsche and Surzhykov show how high-resolution angle- as well as polarization-resolved studies in relativistic ion-atom collisions can lead to new insights into quantum dynamics in extremely strong fields if combined with advanced theoretical techniques. The FAIR facility will also make such polarization studies possible for negative-continuum dielectronic recombination (NCDR), a process in which a free electron is captured into a bound state of a heavy high-Z ion while an electron-positron pair is emitted. Ma discusses the future challenges for experimental investigations of ion-atom collisions. On his list are new concepts to extract relative scattering phases from two-center interference patterns in ion-molecule collisions, as well as the incorporation of photon detection into reaction microscopes. Rivarola and Fojon further elaborate on these interference effects in ion-molecule collisions. From a theoretical point of view, they suggest investigating the role of electron correlation not only in the description of molecular orbitals but also in the molecular continuum in the exit channels; experimentally, they propose a complete mapping of all resulting particles, including the orientation of the molecular target. Tribedi deals with collisions between ions and very large molecules. In the case of larger molecules or clusters, the presence of further collision partners, to which energy can be transferred, leads to the appearance of new decay mechanisms such as interatomic Coulombic decay. In his contribution, he points out that it will be a great challenge in the future to conduct collision experiments with biomolecules in their natural (liquid) environment. Such processes play an important role in radiation therapy, as well as in the use of metallic nanoparticles to enhance the biological effectiveness of ion irradiation via collective excitation effects. With her contribution, Lamour opens a window onto a hitherto unexplored collision regime: the interaction of ions with ions in the intermediate velocity regime. Such experiments and reliable theoretical models are still completely missing, but will become feasible with the upcoming facilities SPIRAL2 and FAIR in France and Germany, respectively. In topic section 3, Crespo López-Urrutia bangs the drum for the use of highly charged ions (HCIs) in fundamental high-precision experiments.
In particular, with the first demonstration of sympathetic laser cooling of HCIs re-trapped in a cryogenic radio-frequency trap, and the rapid development of narrow-band lasers in the x-ray regime, extreme frequency metrology (XFM) will become possible, with HCIs as adequate frequency standards and excellent study cases for fundamental interactions. Litvinov sketches the possibilities for precision investigations of highly charged radionuclides. Their decay processes are extremely sensitive to the interplay of atomic and nuclear structure and therefore promise a large potential for new discoveries once the next generation of low-energy storage rings and traps becomes available. Shabaev explains how high-precision experiments with HCIs provide stringent tests for non-perturbative QED methods, leading to the most precise determinations of the electron mass, nuclear magnetic moments and nuclear radii, or even an independent determination of the fine-structure constant. Cederquist and Zettergren describe the challenges and prospects for ion-cluster collision studies. Improved control over properties such as cluster size, charge and internal energy will be the key to a more fundamental understanding. Schleberger and Wilhelm motivate the reasons and explain the challenges for moving from the irradiation of 3D bulk to novel 2D materials in HCI-surface interaction studies. Modern multi-coincidence spectroscopies are already in use to disentangle some of the basic interaction processes, and the development of truly time-resolved ion-scattering techniques will be the next step to study hollow atoms and 2D materials in a highly non-equilibrium state in unprecedented detail. Azuma briefly reviews the exciting phenomenon of resonant coherent excitation (RCE) in ions traveling through periodic crystals and suggests that RCE may become a new tool for high-precision spectroscopy and dynamics of HCIs once new high-energy storage rings become available. Boduch describes the latest efforts to simulate in the laboratory the effect of cosmic rays on complex organic molecules (COMs) embedded in icy mantles on dusty grains in interstellar media or at the surface of comets. This is of particular interest since such molecules might have reached the early Earth via comets and meteorites, contributing to the formation of life. In topic section 4, Schmidt describes the novel purely electrostatic and cryogenic ion-storage rings which have recently become operational. They allow experiments with rotationally cold ions and therefore offer unprecedented control of all internal and external degrees of freedom. Stöhlker outlines the challenging atomic-physics program foreseen at FAIR. Since this future facility will offer beams of cooled HCIs in energy regimes where no such experiments have so far been possible, it will substantially enlarge the research capabilities for the exploration of atomic matter within the realm of extreme and ultra-short-duration electromagnetic fields. Although the contributions to this Roadmap, outlined above, come from a few selected researchers, we believe that they are representative of many other related scientific activities. The study of collision processes with heavy projectiles is a broad field and offers great potential for new discoveries in basic and applied research in the future.

Acknowledgments

KU acknowledges support from the five-star alliance and the IMRAM project. ES thanks SFI and the EU CALIPSOplus programme for support.

High Rydberg states and ion chemistry

Status.
This contribution focuses on the use of high Rydberg states of neutral molecular systems to characterize the structure, dynamics and reactivity of molecular cations. Rydberg states can be defined as the states of atoms and molecules having spectral positions ν_n that can be approximated by Rydberg's formula

ν_n = E_I(α⁺)/(hc) − R_M/(n − δ_l)².   (1)

In equation (1), n is the principal quantum number, δ_l is the quantum defect of the series with electron orbital-angular-momentum quantum number l, R_M is the mass-dependent Rydberg constant, and E_I(α⁺) represents the series limit as n approaches infinity, and thus the onset of the ionization continuum associated with the production of a singly charged ion in the quantum state α⁺ (see figure 1(a)). Equation (1) implies independent, separable motions of the Rydberg electron and the ion core and becomes increasingly accurate as n and l increase. Most properties of Rydberg states scale rapidly with n. For instance, their polarizability scales as n⁷, their electric dipole moment scales as n², and the minimal electric field needed for their efficient ionization scales as n⁻⁴ (see [1]). Rydberg states form series that converge on the different electronic, and in the case of molecules also vibrational (v⁺) and rotational (N⁺), quantum states of the molecular ion core (see figure 1(a)). At high n values, the Rydberg electron does not influence the ion core but maintains the charge neutrality of the molecular system. The use of Rydberg states to characterize positively charged ions relies on these properties. The measurement of Rydberg series and their extrapolation yield the energy-level structure of cations [2]. The energy-level structure of cations can also be obtained by the selective pulsed-field ionization (PFI) of very high Rydberg states, as illustrated in figure 1(c). With their large electric dipole moments (beyond 1500 debye at n = 25), Rydberg atoms and molecules can easily be accelerated in inhomogeneous electric fields, and numerous devices have been developed that enable the deceleration, deflection and reflection of beams of Rydberg atoms and molecules, and the storage of cold Rydberg atoms and molecules in electric traps [3]. These devices have considerable potential for studies of ion-molecule reactions at low temperatures.

Current and future challenges

Precision measurements of ionization energies and ionic level structures by Rydberg-series extrapolation. High-resolution spectra of high-n Rydberg states can be used to determine the series limits, and thus the ionization energies of atoms and molecules, with high precision. In atomic systems with a closed-shell ion core (Li, Na, K), the extrapolation can be performed with equation (1). Using ultracold atomic samples to suppress Doppler and transit-time broadening, in combination with photoexcitation by frequency-comb-stabilized single-mode continuous-wave lasers, the ionization energy of Cs [5] was determined to be E_I/(hc) = 31 406.467 7325(14) cm⁻¹. To reach this level of accuracy, it is necessary to measure and compensate stray electric and magnetic fields to better than 1 mV cm⁻¹ and 1 mG, respectively. Pressure shifts and shifts arising from dipole-dipole interactions with neighboring Rydberg atoms must also be quantified. The extrapolation of Rydberg series in atoms with open-shell ion cores and in molecules necessitates the consideration of interactions between series converging on different ionic states.
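As a hedged numerical illustration of series extrapolation with equation (1), the short sketch below fits synthetic line positions (constants chosen only to resemble the Cs example above; not actual experimental data) and recovers the series limit E_I and quantum defect δ by least squares:

```python
import numpy as np
from scipy.optimize import curve_fit

# Hedged sketch: extrapolating a Rydberg series with equation (1).
# All numbers are synthetic and only loosely modeled on the Cs ns series.

R_M = 109736.86                          # mass-corrected Rydberg constant (cm^-1)
E_I_true, delta_true = 31406.4677, 4.049

n = np.arange(25, 80, dtype=float)
nu_obs = (E_I_true - R_M / (n - delta_true) ** 2
          + 1e-5 * np.random.default_rng(0).standard_normal(n.size))

def series(n, E_I, delta):
    return E_I - R_M / (n - delta) ** 2

(E_I_fit, delta_fit), cov = curve_fit(series, n, nu_obs, p0=(31000.0, 4.0))
print(f"extrapolated series limit: {E_I_fit:.6f} cm^-1, quantum defect: {delta_fit:.4f}")
```

In real measurements the extrapolation must additionally contend with stray-field shifts and channel interactions, which is why, as discussed next, the most reliable extrapolations rely on MQDT rather than on equation (1) alone.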
The most reliable extrapolations are not carried out with equation (1) but with multichannel quantum defect theory (MQDT) [6,7], which can even be used to determine the hyperfine structure of the cations [8]. Molecular hydrogen has been the test system for the application of MQDT to molecules [7]. The MQDT analysis of recent measurements of the Rydberg spectrum of H₂ has allowed its ionization and dissociation energies, and rotational and hyperfine intervals in H₂⁺ [8,9], to be determined with high accuracy. The most accurate measurements of Rydberg series in H₂ reach an accuracy of 65 kHz [10] and are precise enough to be sensitive to the finite size of the proton [11]. Future steps in precision Rydberg spectroscopy for the determination of accurate ionization energies include (i) the development of methods and light sources to record high-resolution Rydberg spectra of a broader range of molecules, (ii) the generation of corresponding cold-molecule samples, and (iii) a rigorous establishment of the uncertainties of the Rydberg-series limits extrapolated by MQDT. In the future, MQDT-assisted Rydberg spectroscopy of cations is expected to contribute not only to solving problems of molecular structure and dynamics; when applied to few-electron systems, precision Rydberg spectroscopy may also contribute to testing theory at its most fundamental level, improving fundamental constants, or discovering new effects.

Ion-neutral reactions within the orbit of a Rydberg electron. At sufficiently high n values, the ion core can undergo chemical reactions with neutral molecules located within the Rydberg-electron orbit. The Rydberg electron does not affect the reaction but provides charge neutrality to the reaction system and shields it from stray electric fields. Heating of the ions by uncontrolled acceleration through stray fields is thus avoided. This opens up the prospect of studying ion-neutral reactions at very low temperatures. Instead of observing the reaction A⁺ + B → C⁺ (or C⁺ + D), one observes the reaction A* + B → C* (or C* + D), where * symbolizes a high Rydberg state. In the limit where the Rydberg electron acts as a spectator, both reactions take place at the same rate [12]. This idea has recently been used to study the H₂⁺ + H₂ → H₃⁺ + H reaction [13] and the radiative-association reaction H⁺ + H → H₂⁺ + hν [14] at temperatures where quantum-mechanical effects strongly affect the reaction rates (figure 2). In the study of the H₂⁺ + H₂ → H₃⁺ + H reaction, a chip-based Rydberg-Stark deflector and decelerator [13] was used to merge a beam of cold, velocity- and state-selected Rydberg H₂ molecules with a supersonic beam of cold ground-state H₂ molecules and to vary their relative velocity. In this way, the energy dependence of the reaction cross section could be measured at collision energies as low as k_B × 300 mK, where deviations from classical Langevin behavior were observed [13] (inset of figure 2(b)). The H⁺ + H → H₂⁺ + hν reaction was studied following excitation of H₂ to high (n ≈ 200) Rydberg states near the dissociative-ionization threshold of H₂ in a half-collision approach [14]. By observing the shape resonances of H₂⁺ and analyzing their role in the dissociation of the H₂⁺ ion core, the radiative-association cross section and rate coefficient could be determined in the collision-energy range between 10 mK and 10⁵ K [14] (figure 2(a)).

Concluding remark.
Progress in the development of tunable narrow-band light sources, frequency-calibration methods, cold-molecule preparation techniques and new devices to manipulate the translational motion of Rydberg atoms and molecules is currently opening new perspectives for investigations of ions, with applications ranging from metrology to cold ion-neutral chemistry.

Figure 2: (a) Cross sections of the H⁺ + H (green) and D⁺ + D (red and blue) radiative-association reactions for approach on potentials of the ground (X⁺) and first excited (A⁺) states (from [13]).

University of Virginia, United States of America

Status. The study of dipole-dipole interactions of Rydberg atoms began as an effort to explore resonant energy transfer systematically. A classic example of resonant energy transfer is the HeNe laser, which is based on resonant collisional energy transfer from the metastable 2s states of He to the ground state of Ne, resulting in the selective population of the upper laser levels of the HeNe laser [17]. While the HeNe laser works, it does not tell us much about the sharpness of the resonance, only that the match between the He and Ne energy levels is close enough. One approach to probing resonance in collisional energy transfer is to examine fine-structure-changing collisions of Br with different molecular collision partners. The different frequencies of the molecular vibrational transitions straddle the Br 5p_1/2 → 5p_3/2 interval and provide a measure of tunability [18]. Rydberg atoms, with the 1/n³ scaling of their energy levels, provide a more systematic approach; here n is the principal quantum number of the Rydberg atom. A series of experiments with Xe Rydberg atoms and polar molecules, such as NH₃, showed sharp collisional resonances in which the changes in the binding energy of the Rydberg electron matched the frequencies of rotational transitions of the molecule [19]. In these collisions the impact parameters are approximately equal to the diameter of the Rydberg atom or smaller, so the interaction responsible is not simply the dipole-dipole interaction; several multipoles are involved. An ideal way to probe a collisional resonance is to tune the energy of the collision partners continuously through resonance, and the enormous Stark shifts of Rydberg atoms provide a simple way to do so. Perhaps the best-studied example is the resonant energy transfer between two Rydberg atoms, for example the process [20]

Na 16s + Na 16s → Na 16p + Na 15p,   (2)

which is tuned into resonance in fields of E ∼ 600 V cm⁻¹. In most cases static fields have been used, but the AC Stark shifts from a near-resonant microwave field provide a way to alter only a few states with the field [21]. The process of equation (2), observed with a thermal atomic beam, yields sharp collisional resonances with widths of ∼1 GHz and cross sections of σ ∼ 10⁹ Å². The resonance widths imply that the duration, or time, of the collisions is τ = 1 ns, and the cross sections imply impact parameters one hundred times larger than the size of the atoms. The energy transfer is due to the dipole-dipole interaction, and it is straightforward to show that for the process of equation (2) the cross section is approximately σ ≈ μ₁μ₂/(ℏv), where μ₁ and μ₂ are the dipole matrix elements of the two transitions and v is the collision velocity. Compared to gas-kinetic collision times of one picosecond, one-nanosecond collision times are extremely long, allowing the systematic study of radiative collisions, those in which photons are absorbed or emitted during the collisions [22].
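The quoted numbers can be checked with a few lines of arithmetic. The sketch below (our illustration, using only the values stated above for the Na n = 16 resonance) converts the ∼1 GHz width into a collision time and the σ ∼ 10⁹ Å² cross section into an impact parameter, and compares the latter with the classical size of the atom:

```python
import numpy as np

# Hedged consistency check of the numbers quoted in the text.
a0 = 0.529e-10                   # Bohr radius (m)
n = 16
diameter = 2 * n**2 * a0         # classical diameter of the Rydberg atom (m)

sigma = 1e9 * 1e-20              # sigma ~ 1e9 Angstrom^2, converted to m^2
b = np.sqrt(sigma / np.pi)       # implied impact parameter (m)

tau = 1.0 / 1e9                  # ~1 GHz width -> ~1 ns collision time (s)

print(f"atom diameter    : {diameter * 1e10:8.0f} Angstrom")
print(f"impact parameter : {b * 1e10:8.0f} Angstrom ({b / diameter:.0f}x larger)")
print(f"collision time   : {tau * 1e9:.1f} ns")
```

The implied impact parameter comes out roughly two orders of magnitude larger than the atomic diameter, consistent with the "one hundred times" statement above.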
Collisions in which multiple microwave photons are emitted and absorbed have been observed and characterized [23]. The experiments described above were conducted with thermal beams, in which the collision velocity is determined by the velocity spread of the beam. Velocity selection has been used to produce beams with internal temperatures of 1 K, leading to collisional resonances as narrow as 1 MHz [24], and shortening the time during which the atoms can collide to less than one microsecond produces transform-limited collisions, with known beginning and ending times [24]. The first cold-atom experiments, conducted in magneto-optical traps (MOTs), were also resonant-energy-transfer experiments, often termed Förster resonances [25,26]. The properties of a MOT, temperature T = 300 μK and density ρ = 10⁹ cm⁻³, combined with the 3 μs lifetime of an n = 20 Rydberg atom, bring the experiments into a new regime. In a MOT the typical atomic spacings and velocities are 10⁻³ cm and 10 cm s⁻¹, so in the one-microsecond duration of a typical experiment the atoms move ∼1% of their typical spacing. In short, they are frozen in place. The atoms do not collide; instead, the interactions are between static atoms, as in an amorphous solid. In the initial experiments, energy-transfer resonances many MHz wide were observed. For example, the process [26]

Cs 23p + Cs 23p → Cs 24s + Cs 23s   (6)

has a width of 80 MHz. The dominant interaction is still the dipole-dipole interaction, and the observed widths of the energy-transfer resonances, such as that of equation (6), are primarily due to pairs of atoms which are close together. However, unlike in collisions, other atoms play a role. For example, the diffusive processes

Cs 23p + Cs 23s → Cs 23s + Cs 23p   (7)

and

Cs 23p + Cs 24s → Cs 24s + Cs 23p   (8)

also contribute to the widths. In addition to changing energy transfer from a binary to a many-body problem, the use of cold atoms allows the Förster resonances to be examined by coherent laser spectroscopy, specifically Ramsey interferometry [27]. The enormous range of the dipole-dipole interaction between Rydberg atoms has led to proposals for their use in a broad spectrum of applications, from quantum gates and simulators to the construction of ultralong-range molecules [29,30]. One of the gate notions, the dipole blockade, has proven to be widely applicable [28]. In the blockade, the strong dipole-dipole or van der Waals (off-resonant dipole-dipole) interaction prevents the excitation of more than one Rydberg atom within a blockade radius, forming a superatom in each blockade zone in which one of the many atoms is in the Rydberg state, the rest remaining in the ground state. Superatoms behave like two-level systems, and multiple superatoms in a sample can be observed to self-organize into regular patterns [30]. While the first cold Rydberg-atom experiments resembled an amorphous solid, it is possible to put Rydberg atoms into well-defined arrays. The first such arrays were linear chains of three atoms, in which the excitation was observed to travel back and forth along the chain [31]. Much longer linear chains have been constructed and used to simulate many-body phenomena under controllable conditions [32,33].

Concluding remarks. Finally, methods of placing atoms in well-defined positions in two-dimensional arrays have been developed, and these arrays have been used to compare thermalization in straight (linear) and zig-zag chains [34]. In the latter, each atom has four, not two, nearest neighbors.
The combination of the rapid advance in the ability to manipulate atoms with the enormous dipole-dipole interactions of Rydberg atoms suggests that there will be a continuing stream of fascinating discoveries.

Rice University, United States of America

Status. High-n Rydberg atoms possess physical characteristics quite unlike those associated with atoms in ground or low-lying excited states. They are physically very large, allowing many of their properties to be discussed using the classical Bohr model of the atom. At high n the excited electron orbits far from the nucleus, and its motion can be strongly perturbed by the application of even modest external electric fields. In addition, as n increases the classical electron orbital period, T_p (∼4 ns at n ∼ 300), increases rapidly, allowing high-n states to be manipulated (and probed) using conventional fast-pulse electronics and carefully-tailored series of electric-field pulses whose characteristic times (duration and/or rise/fall times) are less than T_p [35]. Such pulse series have been used to engineer quasi-one- and two-dimensional Rydberg states and, with periodic driving, to create non-dispersive wave packets whose behavior mimics that of classical particles [36,37]. This work has enabled ongoing studies of strongly-interacting Rydberg systems and of the production of so-called planetary atoms. Scattering of the Rydberg electron from neighboring ground-state atoms can lead to the formation of ultralong-range Rydberg molecules [38]. Such scattering can be described using a Fermi pseudopotential and results in a molecular potential that can (see inset in figure 3) support multiple vibrational levels with binding energies (typically) of a few to a few tenths of a megahertz. At sufficiently high atom densities, multiple ground-state atoms can be bound to a single Rydberg atom, allowing the formation of not only dimers but also trimers and tetramers (see figure 3). In a dense Bose-Einstein condensate (BEC) the Rydberg orbit can, for n ≳ 50, enclose tens to hundreds of ground-state atoms [39]. Rydberg excitation introduces an 'impurity' into the BEC and elicits a collective response, resulting in the formation of polaron-like quasiparticles comprising the impurity dressed by excitations of the background BEC [40]. Rydberg-molecule production provides new opportunities for the study of particle correlations and the effects of quantum statistics.

Future directions and challenges

Dynamics of strongly-coupled Rydberg systems. Long-range interactions in many-body systems give rise to a rich variety of phenomena of fundamental importance to many areas of physics. Rydberg atoms offer advantages for the study of strongly-coupled systems because their extreme dipole moments result in strong long-range interactions whose strength can be controlled simply by manipulating the atomic states. Recent experiments show that single very-high-n, n ∼ 300, Rydberg atoms can be prepared in well-defined locally-blockaded regions [41], which provides a convenient starting point for future detailed study of strongly-interacting Rydberg systems. If two atoms are prepared in reasonably-well-separated blockaded regions, a tailored series of short electric-field pulses may be used to increase their mutual interactions by exciting them to states of much higher n, the degree of coupling being tuned by varying the final target state (and the initial interatomic spacing).
Given that the two atoms are excited simultaneously by the same electric-field pulses, the initial conditions are particularly well-defined. Questions of interest relate to the dynamics of energy interchange and its dependence on the degree of coupling, which might be further controlled by varying the relative orientations of the states involved. For example, pairs of quasi-one-dimensional states may be formed that are oriented parallel or perpendicular to their line of centers. Excitation to Rydberg orbits whose size is comparable to the interatomic spacing leads to the formation of transient Rydberg 'molecules' whose stability against autoionization through electron-electron scattering can be examined. Since the excited-electron motions can be locked to an external periodic drive field [37,38], long-lived configurations might exist where, due to their correlated motions, the electrons remain far apart. Initial experiments have been undertaken in an atomic beam, but relative motions associated with the distribution of atomic velocities in the beam limit the time over which measurements of Rydberg-Rydberg interactions can be undertaken. The use of cold atomic gases can overcome this limitation, but the optical access required for cold-atom trapping presents a challenge in reducing the stray fields present in the trapping volume to the level required, ∼50 μV cm⁻¹, for the study of very-high-n states. To date, Rydberg studies in cold-atom traps have been limited to atoms with values of n ≲ 200 [39]. Nonetheless, if stray fields can be controlled successfully, the use of a 'pancake-shaped' trap and multiple blockaded regions would make possible detailed studies of interactions involving two, three, or more Rydberg atoms and their dependence on the geometrical arrangement of the atoms.

Figure 3: Rydberg excitation spectrum in the vicinity of the 5s38s ³S₁ strontium Rydberg state. Inset: calculated molecular potential for the 5s38s ³S₁-5s² ¹S₀ atom pair. Wavefunctions for the v = 0, 1, and 2 molecular vibrational states are included.

Two-electron excited states. Interest in two-electron excited states stems from early attempts to quantize many-electron systems using the Bohr-Sommerfeld quantization rules, which met with little success due to the dynamical instability of the proposed structures. Early studies of two-electron excited states focused on Group II elements and were limited by the dipole selection rules to states of low L, which undergo rapid autoionization. However, it is now possible to engineer very-high-n, n ∼ ℓ, near-circular states for which the overlap between the excited Rydberg electron and the inner core electrons is negligible, even if one of the inner core electrons is itself in a (lower-lying) excited state. The production of two-electron excited states in which both electrons remain far from the core ion, and from each other, in near-classical orbits reduces the autoionization rate and admits the possibility of creating long-lived so-called 'planetary atoms' [42]. One possible strategy for the production of such atoms involves the creation of a localized very-high-n, n ∼ 600, wavepacket traveling in a near-circular Bohr-like orbit, followed by excitation of a second 'inner' electron to a state of lower n, n ∼ 200. The 'outer' electron, as it rotates, polarizes the orbit of the 'inner' electron, creating a dipole that rotates in step with the 'outer' electron and whose field preserves the localization of the 'outer' electron.
Classical trajectory calculations suggest that such correlated motion can result in long-term stability against autoionization. Initial experiments show that autoionization rates are dramatically reduced when the 'outer' electron is in a high-ℓ state, to the point that the lifetime of the two-electron excited state is governed by radiative decay of the 'inner' electron [43]. However, in these experiments, undertaken in strontium, the 'inner' electron was only excited to the 5p state, and multiple additional lasers will be required to excite it to high-n states. Simulations suggest the existence of a second long-lived two-electron excited state, termed the frozen-planet configuration, which embodies very different electron dynamics and which should also be amenable to study using carefully-engineered high-n Rydberg atoms [42].

Ultralong-range Rydberg molecules. Correlations in atomic gases become difficult to observe when their characteristic length scales fall below the wavelength of light. However, ultralong-range Rydberg molecules promise a means to investigate such correlations in this previously inaccessible regime. The wavefunction for the molecular ground vibrational state is strongly localized near the outermost well in the molecular potential (see figure 3). The probability of creating a ground-state dimer molecule therefore depends strongly on the likelihood of finding a pair of atoms with the appropriate separation. Thus, measurements of dimer formation as a function of n, i.e., of the position of the outermost potential well, can provide information on finding atom pairs with different separations and hence on the pair correlation function, g⁽²⁾(R). Values of g⁽²⁾(R) for thermal gases of non-interacting identical bosons, fermions, and non-identical (or classical) particles are presented in figure 4 and diverge markedly for interparticle separations, R, less than the thermal de Broglie wavelength, λ_dB. For T ∼ 900 nK, λ_dB ∼ 200 nm, which corresponds to the radius of an n ∼ 45 s state, suggesting that measurements of dimer formation over the interval 30 ≲ n ≲ 45 can be used to probe the effects of quantum statistics on g⁽²⁾(R) for R < λ_dB. ⁸⁷Sr, which has a nuclear spin I = 9/2, is attractive for such studies and can be conveniently optically pumped to the m_F = 9/2 state to obtain a gas of identical fermions. Since the dimer-formation rate depends not only on the number of ground-state atoms present but also on their spatial distribution, i.e., density, a major challenge will be to account for such factors in the analysis of the data. Dimer formation might also be used to examine two-atom correlations that arise in a quantum gas from resonant atom-atom scattering.

Concluding remarks. Although Rydberg atoms have been studied for many years, new applications continue to emerge that address important physical questions related to few- and many-body interactions.

Universität Innsbruck, Austria

Status. Helium nanodroplets (HNDs) are readily formed by expanding helium through a nozzle into vacuum. The average droplet size can be varied by varying the helium pressure and nozzle temperature. In vacuum, HNDs rapidly cool by evaporation to about 0.37 K. They are superfluid and can be doped efficiently by passing the droplet beam through a pick-up cell containing a low-density vapor of atoms or molecules. The technique has led to an entirely new approach to synthesizing and characterizing complex systems [44,45].
Multiple collisions with dopant species lead to the formation of clusters; their size can be controlled by varying the droplet size or the vapor pressure in the pick-up cell. Uniquely shaped species such as ultrathin metallic nanowires or nanofoam form in very large HNDs [45,46]. Two successive pick-up cells may be used to form binary aggregates with a distinct core-shell structure [47], or to study reactions. Equally impressive are the novel ways in which doped droplets can be investigated. HNDs form an ideal matrix for spectroscopy because of their weak interaction with the dopant, optical transparency, and low temperature. Problems inherent to bulk (liquid) helium, namely agglomeration and accumulation of dopants at walls, are avoided because most dopants will preferentially reside near the center of HNDs; a low dopant concentration (≲1 per droplet) will avoid agglomeration. Furthermore, the weak binding of helium (bulk cohesive energy 0.62 meV) opens a path to highly sensitive action spectroscopy, because the absorption of a photon will cause the ejection of one or more helium atoms [48]. Reaction dynamics have been followed by pump-probe spectroscopy, ion imaging and other techniques [49]. Other new venues have been explored. For example, the superfluid environment makes it possible to orient polar dopants in an external electric field, providing a molecular goniometer for electron diffraction of single molecules [50].

Current and future challenges. Research involving HNDs offers many tracks. Some use HNDs as vehicles to synthesize, manipulate or characterize the species of interest; others explore quantum phenomena in superfluid systems of finite dimensions, or ultrafast dynamics in helium [51]. Still others seek to generate ever larger HNDs, or more intense beams of HNDs [52]. All these tracks are meritorious and actively being pursued. In this short contribution we outline the prospects of yet another, new track, namely forming intense beams of large, size-selected HNDs. Novel structures are formed upon doping very large HNDs, including nanowires, nanofoam, and granular, multi-center clusters [45,46]. However, as discussed further below, well-defined nanostructures cannot be formed by doping unless the HNDs are large and monodisperse. In the early years of HND research, few experimental groups had the technical means of generating HNDs containing more than a few hundred or thousand atoms. Improved designs, pumps with larger capacity, and the use of pulsed supersonic nozzles by some groups have now pushed the limit beyond n ≈ 10¹¹, corresponding to droplet diameters of ≈2 μm [44,52]. At this size, new forms of matter such as core-shell clusters, nanowires or nanofoam may form inside doped helium droplets [45-47]. Moreover, these objects can be deposited on surfaces and imaged by STM or TEM; the HND provides a disposable cushion that prevents break-up of the object upon landing. We have recently formed and detected very large, multiply charged HNDs. Figure 5 displays the yield of HND cations obtained by electron ionization at 40 eV and ion deflection in an electrostatic sector field. The approach uses the fact that the velocity distribution of the neutral droplets is narrow and independent of the droplet size. The deflection voltage has been converted to a mass-to-charge ratio n/z; the ion yield has been corrected for an n^(2/3) dependence of the ionization cross section.
The distributions are markedly dependent on the emission current because large droplets may collide with multiple electrons, leading to charge states z > 1. The charge state z of these ions can be explored by post-ionization or electron attachment. Values up to z = 19 have been identified; the size of the charged droplets extends to n = 10¹². Large, monodisperse HNDs will also enable the investigation of so-called Coulomb crystals, which may form at low temperatures when several free ions, or quasi-free ions in superfluid helium, are spatially confined. In ion traps these crystals enable measurements of state-selective reaction rates in a diverse range of systems [53]. At least n_c = 2×10⁵ atoms are needed to support two charges in a HND. The appearance of ordered charge arrangements might be detected by electron scattering. Alternatively, one may dope the HND by collisions with several charged atomic or molecular ions, or multiply ionize droplets that have been doped with neutral multi-center clusters [45]. The charges would thus be localized at the impurities which, for sufficiently large separation, would interact mainly via Coulombic forces. Their spatial arrangement could be measured indirectly by soft-landing on a surface followed by TEM or STM imaging.

Advances in science and technology to meet challenges. Controlling the growth of nanostructures requires large, monodisperse HNDs. Doping is a statistical process; for a fixed droplet size n, the size distribution of aggregates grown in the HND would follow Poisson statistics, whose relative width decreases with average size ⟨n_d⟩ as 1/√⟨n_d⟩. The dopant distribution would have a width of only about 3% if ⟨n_d⟩ = 10³. In practice, two factors ruin narrow distributions. First, HNDs shrink upon doping: for example, a release of 2 eV, a typical cohesive energy per atom for metallic clusters, would cause the evaporation of some 3000 He atoms. Second, the size distributions of HNDs are broad; depending on the expansion conditions they are either log-normal or bimodal [44]. The first problem is avoided if large HNDs are being doped (the loss of 3×10⁶ atoms from a droplet containing 10⁸ or more atoms would be acceptable), but the second problem necessitates the use of monodisperse HND beams. We foresee three possible ways of achieving this.

1. HNDs can be efficiently ionized by electron impact. We have realized situations where droplets carry up to 19 charges due to successive collisions with electrons, and the size-to-charge ratio exceeds n/z = 10⁸. Their size distributions are broad. Narrow slices of the distribution have been selected with an electric sector field, but the presence of different charge states implies multimodal size distributions.

2. The production of size-selected clusters has always been the holy grail of cluster science. Size selection of charged clusters by mass spectrometry wastes a large fraction of all clusters in the beam. A much less wasteful approach would be to select the singly charged, nearly monodisperse HNDs that are ejected from highly charged droplets as a result of electrostriction combined with Coulomb repulsion. Although not in the n ≈ 10⁸ regime that is needed to grow large, strongly bound structures, they could be used to synthesize smaller, less strongly bound systems.

3. Toennies et al have formed HNDs containing n = 10¹⁰-10¹² atoms by expanding pressurized liquid helium through a nozzle below 4.2 K [44].
Rayleigh-oscillation-induced breakup of the liquid jet produces droplets with an exceedingly narrow angular and velocity distribution, but the width of the size distribution, said to be monodisperse, was not characterized. Also, the generation of droplets much larger than needed may have undesirable side effects, namely large gas loads even if the flux of droplets is modest.

Concluding remarks. Research involving or merely using HNDs is a burgeoning field. New opportunities such as messenger spectroscopy of He-tagged complex ions or electron diffraction of single aligned molecules are just some of many new research directions. In this contribution we have outlined another one, namely the prospects and use of large, monodisperse droplets. With sizes between 100 nm and 1 μm they would cover the mesoscopic regime; they could serve as vessels to synthesize and characterize, in situ or after deposition on surfaces, novel structures such as nanowires, Coulomb crystals, or ligand-free catalysts. Preliminary experiments by several groups have already shown the viability of this approach.

Non-relativistic ion-atom collision theory

Tom Kirchner, York University, Canada

Status. One may say that the ultimate goal of non-relativistic ion-atom collision theory is the development of approaches that provide a complete and accurate description of a given collision system. Complete means that all reaction channels are taken into account simultaneously, and accurate that the numerical results explain existing experimental data and make trustworthy predictions in regions where there are none. This is difficult to achieve, since in many situations of interest a number of channels, direct and rearrangement, are open and contribute at the same time. Methods for the description of individual channels, such as excitation, ionization, or electron transfer, are usually perturbative in nature [54]. These methods have enjoyed many successes, in particular when based on distorted-wave approaches, but they are somewhat limited in scope. A full account of all, potentially competing, processes calls for a fully non-perturbative treatment implemented via advanced numerical techniques. Despite significant progress over a long period of time [55], it is perhaps fair to say that the feat of a truly complete calculation has only recently been achieved for the prototypical one-electron proton-hydrogen collision system. A fully quantum-mechanical convergent close-coupling (QM-CCC) approach has been developed and applied to this fundamental three-body problem of two Coulomb-interacting protons and one electron [56]. As an example, figure 6 shows the total ionization cross section obtained from the QM-CCC method in comparison with a selection of previous non-perturbative calculations and the available experimental data. Interestingly, the overall agreement is not satisfactory in the 30-100 keV impact-energy region in which the cross-section maximum occurs. One may conclude that further investigation, both theoretical and experimental, is warranted before this case can be considered understood and closed [56]. This is all the more true for the study of differential cross sections, which in the past has been the domain of perturbative and classical-trajectory methods, but for which fully non-perturbative calculations have started to emerge as well (see [57] and references therein).
A large body of theoretical ion-atom work, including all but the QM-CCC calculations of figure 6, makes use of the semiclassical approximation (SCA), according to which the nuclei are assumed to move classically (most often on straight-line trajectories) and only the electronic motion is treated quantum mechanically [55]. Over a wide range of impact energies the SCA is essentially exact, at least as far as cross sections that are integrated over the projectile deflection are concerned. For the case of projectile-angular differential cross sections there is a well-established procedure, called the eikonal approximation [55], which re-introduces quantum mechanics into the heavy-particle scattering and often gives very accurate results. In the framework of the SCA, the scattering problem has similarities with the problem of laser-field-induced excitation and ionization, in that a time-dependent Schrödinger equation has to be solved to find the electronic wave function (a minimal numerical sketch is given below). Indeed, some cross-fertilization between both areas in terms of method and code development has happened (see, e.g., [58]).

Most ion-atom collision problems involve few- or many-electron atoms. It has long been known that the independent-particle model (IPM) is often not sufficient to understand experimental observations in these systems. Explicit methods to deal with beyond-IPM situations are mostly restricted to two-electron systems, i.e., collisions involving helium atoms, whose double ionization famously cannot be properly described without taking electron-correlation effects into account (see, e.g., [58]). Recent work has shown that three-electron problems can also be studied with non-perturbative methods that are based on correlated many-electron wave functions [59]. Figure 7 shows an example: the angular-differential cross section for electron transfer from the target ground state to the projectile ground state in He⁺-He collisions at 60 keV amu⁻¹ impact energy [59]. While a correlated, fully non-perturbative three-electron calculation (which uses the eikonal approximation to extract the differential cross section) is in excellent agreement with the experimental data, results from previous perturbative efforts are not.

Current and future challenges. The above discussion suggests that there is still work to be done to reach a quantitative understanding of one-electron problems. But the bigger challenge lies in the treatment of many-electron systems, given that the majority of them will remain beyond the reach of explicit methods for some time to come. Perhaps the most promising alternative approach is time-dependent density functional theory (TDDFT). The time-dependent (so-called Kohn-Sham) equations in TDDFT calculations look like IPM equations, but they involve a potential that, in its exact form, includes electron-correlation effects. Similarly, the exact outcome analysis of the Kohn-Sham calculations involves a correlation integral. Both issues, correlation in the potential and correlation in the method used to extract observables from the Kohn-Sham solutions, are areas of active research. Despite recent advances there is room for improvement of the current models (see, e.g., [60] and references therein).

Figure 6: Total ionization cross section for proton-hydrogen collisions as a function of projectile energy. References to the indicated previous works are given in [56]. Reproduced with permission from [56]. © IOP Publishing Ltd. All rights reserved.
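To make the SCA picture described above concrete, the following one-dimensional toy sketch (our illustration under stated simplifying assumptions: soft-core potentials in place of 3D Coulomb interactions, a fixed straight-line projectile trajectory, atomic units throughout; not one of the production codes cited above) propagates the electronic wave function with a split-operator method while a screened projectile passes the target, and reports the ground-state survival probability:

```python
import numpy as np

# Hedged sketch of the semiclassical approximation (SCA) in one dimension:
# the internuclear motion is a prescribed straight line; only the electron
# obeys a time-dependent Schroedinger equation (atomic units throughout).

nx, dx = 2048, 0.2
x = (np.arange(nx) - nx // 2) * dx
k = 2.0 * np.pi * np.fft.fftfreq(nx, d=dx)
dt, b, v, zp = 0.05, 2.0, 1.0, 1.0     # time step, impact parameter, speed, charge

def v_total(t):
    """Soft-core target potential plus the passing soft-core projectile."""
    d_proj = np.hypot(x - (v * t - 50.0), b)   # projectile starts 50 a.u. upstream
    return -1.0 / np.sqrt(x**2 + 2.0) - zp / np.sqrt(d_proj**2 + 2.0)

# Approximate target ground state via imaginary-time relaxation.
psi = np.exp(-(x**2))
for _ in range(2000):
    psi = psi * np.exp(0.05 / np.sqrt(x**2 + 2.0))          # exp(-V*dtau), V < 0
    psi = np.fft.ifft(np.exp(-0.5 * k**2 * 0.05) * np.fft.fft(psi))
    psi /= np.sqrt(np.sum(np.abs(psi) ** 2) * dx)

# Real-time split-operator propagation while the projectile passes.
psi0 = psi.copy()
for step in range(2000):
    t = step * dt
    psi = psi * np.exp(-0.5j * dt * v_total(t))
    psi = np.fft.ifft(np.exp(-0.5j * dt * k**2) * np.fft.fft(psi))
    psi = psi * np.exp(-0.5j * dt * v_total(t + dt))

survival = np.abs(np.sum(np.conj(psi0) * psi) * dx) ** 2
print(f"ground-state survival probability ~ {survival:.3f}")
```

From the overlap with the initial state one obtains a crude elastic (survival) probability; extracting excitation, capture and ionization channel by channel, and doing so with correlated many-electron wave functions, is precisely where the difficulty discussed above lies.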
TDDFT methods are also promising tools to deal with ion-molecule collision systems, which are of relevance in the context of radiation biology [61] (see section 9 for an experimental perspective). Beyond conducting proof-of-principle studies, it will be important for work in this area and in other, e.g. plasma-related, applications to quantify the uncertainties of calculated cross sections. This is more challenging than it may sound: while it is straightforward (albeit perhaps tedious) to determine the numerical accuracy of a calculation, quantifying the uncertainties associated with the use of a particular many-body model is only poorly understood [62].

Experimental advances, most notably the development of recoil-ion momentum spectroscopy and the reaction microscope, have led to a wealth of highly-differential ion-atom and ion-molecule measurements (see section 7). They often show rich structure and probe theory, and our fundamental understanding of the few-body problem, on deeper levels than integrated cross section data. They may also point to aspects of the collision problem that are less well understood than assumed. A recent example is the observation of effects that appear to be related to the coherence properties of the projectile beam [63]. This observation challenges a basic tenet of atomic collision theory in that it suggests that one can choose the set-up of an experiment such that the projectile beam is not fully coherent and the standard way of calculating a cross section does not apply.

Advances in science and technology to meet challenges. A large fraction of ion-atom collision theory research has been motivated by experimental studies, and this is not expected to change. Accordingly, active experimental research programs which generate accurate differential and integrated (see figure 6) data will be indispensable for progress on the theoretical side. It will be particularly useful to maintain or even expand on the breadth of the collision systems studied in the laboratory; the use of magneto-optical traps to allow for reaction-microscope studies of alkalis and the use of antiproton beams to initiate electron dynamics in a variety of atoms and molecules are important examples. Together with the ongoing advances in numerical method and code development, plus the ever-increasing available computer power, they will allow theorists to move forward in the areas outlined above. A few good ideas, e.g. on how to incorporate correlation effects in TDDFT models or on how to account for incomplete projectile coherence in such a way that it can be distinguished from other effects, will be required as well.

Concluding remarks. Ion-atom collision theory may be a mature field, but it remains highly relevant given the deep science questions it addresses and the data it produces. One can perhaps say that, thanks to advances in computational methods and resources, we have just started to answer some long-standing questions in a quantitative fashion, e.g. on the role of electron correlation effects in the few-particle dynamics, and can now produce (semi-)accurate data for more complex collision systems. This will continue. Moreover, similar questions are being addressed in neighboring research fields, and knowledge and insights obtained from collisional studies may help answer them as well, just as progress in those areas will benefit the advancement of atomic collision theory.
In collisions of fast bare ions with atoms, for example, initially quasi-free target electrons often recombine radiatively with the ions under the emission of a photon: the time-reversed process of Einstein's well-known photoionization [64,65]. If the electron is captured into the 2p3/2 level of a hydrogen-like ion, it will subsequently decay by emission of a (second) photon, the Lyman-α1 (2p3/2 → 1s1/2) line. This Lyman-α1 radiation of heavy ions is very sensitive both to the population of the magnetic sublevels |2p3/2 μ⟩ produced in the capture process and to the E1-M2 multipole mixing, i.e. the coupling of the bound electron density to the different (multipole) components of the radiation field. Indeed, combined measurements of the angular distribution and linear polarization of the characteristic Lyman-α1 photons help to disentangle the formation of the excited ionic state from the subsequent decay, and to determine the M2/E1 amplitude ratio without further assumptions about the population mechanism of the 2p3/2 state [66]. The M2/E1 amplitude ratio can be tested against quantum electrodynamics (QED), which is the fundamental theory for describing the electron dynamics of atoms and ions in the presence of strong (Coulomb) fields.

Bremsstrahlung, the emission of photons due to the inelastic scattering of (quasi-)free electrons at ions and atoms, is a similarly fundamental process in relativistic collisions, but with the 'capture' of the electron into the Dirac continuum. Detailed calculations of the bremsstrahlung spectra, angular distribution and polarization Stokes parameters of the emitted photons are, however, still a challenge to atomic theory, as they require a reliable representation of the (positive-energy) continuum for both the initial and final electronic states, as well as an accurate evaluation of 'free-free' transition amplitudes, especially at high electron and photon energies [67]. Measurements of the linear polarization of bremsstrahlung radiation, using electrons of variable polarization incident on neutral gold atoms, confirmed the need for accurate computations.

Distinct peaks in the observed x-ray spectra from high-Z ion-electron collisions arise especially due to the dielectronic recombination (DR) of the ions, a competitive but resonant process to radiative electron capture (EC). This resonant capture of electrons leads to the excitation of the bound electron density and often to a subsequent photon emission. Indeed, DR in fast ion-electron collisions provides an important tool for studying the relativistic, i.e. magnetic and retardation, contributions to the electron-electron interaction, known in atomic physics as the Breit interaction. To reveal details about the relativistic interactions between the electrons in the presence of strong Coulomb fields, measurements were proposed on the angular distribution and linear polarization of the 1s2s²2p1/2 (J=1) → 1s²2s² (J=0) (electric-dipole) photon emission following the resonant EC into initially lithium-like ions. Multi-configuration Dirac-Fock (MCDF) calculations for this particular DR process demonstrated that the Breit interaction can dominate over the pairwise Coulomb repulsion among the electrons and may even cause a qualitative change of the characteristic photon emission [68].
For the DR of high-Z ions, for instance, the Breit term leads to a significant reduction of the (absolute value of the) linear polarization of the 1s2s²2p1/2 (J=1) → 1s²2s² line; a behavior that has meanwhile been confirmed experimentally [69], see figure 8.

Current and future challenges. Any detailed interpretation of fast ion-atom collision experiments requires, of course, a deep understanding of the underlying mechanisms and their quantum(-field) theoretical formulation. Therefore, various theoretical methods have been developed during the past two decades in order to explain such polarization and correlation phenomena in heavy-ion collisions. In particular, density matrix theory, combined with the calculus of spherical tensors, has been found to be a very versatile tool for incorporating all major relativistic and many-body contributions into the computational framework. The density matrix approach is naturally built upon many-electron amplitudes of the electron-electron interaction and of the coupling of the electrons to the radiation field and thus makes it quite easy to incorporate many-body and non-dipole contributions into the theoretical predictions for all observables of the emitted x-rays. When combined with MCDF wave functions, this approach has indeed been found versatile enough to explain the electronic correlations in high-Z ion-electron collisions. For example, figure 9 displays the calculated angle-differential cross sections for the radiative capture of electrons into the 1s²2s² ¹S₀ state of (initially) lithium-like uranium U89+ ions at projectile energies Tp = 2.18 and Tp = 218 MeV/u. While the correlated motion of the electrons appears to be negligible at 218 MeV/u, it alters the angular distribution at lower energies [69].

Closely related to the ion-electron collisions above is the (so-called) radiative double electron capture (RDEC), in which two electrons are simultaneously captured under the emission of a single energetic photon. Such a capture can occur only due to the interelectronic interactions in the continuum. A few measurements have been carried out [70], but with rather large uncertainties and, until now, in disagreement with theory. Further detailed angle-resolved RDEC experiments will provide a stringent test for theory here, owing to the demand to deal with two correlated electrons in the continuum.

Advances in science and technology to meet challenges. Recent years have seen significant progress in detecting (hard) x-rays by microcalorimeters and by position- as well as energy-sensitive solid-state detectors designed for advanced photon spectroscopy [71]. These detectors nowadays achieve a sub-millimeter spatial resolution as well as excellent time and energy resolution in the hard x-ray energy regime above 15 keV, and they have helped advance both x-ray polarimetry and medical imaging. Besides EC and transfer processes, these detector advances will facilitate further QED studies and even investigations of parity-violating interactions in forthcoming years. A number of ion-atom (ion-electron) collision experiments have been proposed recently to scrutinize these interactions. For example, the parity mixing between the 1s2p1/2 ³P₀ and 1s² ¹S₀ states of helium-like ions leads to a rotation of the linear polarization of the x-rays emitted in the L-shell EC [72], an experimental proposal that will likely become feasible in the near future.
At FAIR, the Facility for Antiproton and Ion Research currently under construction in Darmstadt (Germany), such polarization measurements will also become possible for negative-continuum dielectronic recombination (NCDR), in which a free electron is captured into a bound state of a heavy high-Z ion while the energy is released by an electron-positron pair [73]. The NCDR process indeed provides an important benchmark for theory describing the electron and positron dynamics in critical fields. Of special interest here is the formation and subsequent radiative decay of excited ionic states, which requires a QED treatment right from the beginning. Finally, excited ionic states are also created by Coulomb excitation of few-electron ions in collisions with target atoms and molecules. Again, the subsequent decay of these states typically results in an anisotropic angular distribution and non-vanishing polarization of the characteristic radiation. For high-Z ions, these distributions of the characteristic x-ray lines will help explore especially the magnetic corrections to the Coulomb interaction, i.e. further details of the Liénard-Wiechert potential.

Concluding remarks. Angular and polarization studies in relativistic ion-atom collisions help reveal new insights into the quantum dynamics in strong Coulomb fields [74]. Especially the comparison of high-resolution experiments with advanced theoretical techniques will provide an efficient route to further enhance our knowledge about the interaction of matter in extreme fields.

Xinwen Ma, Chinese Academy of Sciences, People's Republic of China

Status. Experimental cross sections of ion-atom collisions over a wide range of impact energies play important roles in testing many-body theories and in applications to astrophysics, plasma modeling, radiation damage, ion-matter interaction, etc. The products of a collision usually consist of emitted photons, emitted electrons, recoil ions, and charge-(un)changed scattered ions, and the ions may be in excited states. By detecting part of the particles in the final states, experimentalists can extract essential information on the collision dynamics. However, it is not quite that easy. The energies of emitted photons and outgoing electrons vary greatly depending on the collision system and the collision energy; it is therefore difficult to find one universal technique to detect photons, electrons and ions with sufficient resolution in one experiment. Furthermore, due to the very limited photon collection efficiency (solid angle and quantum efficiency of a detector system) compared to charged-particle detection, coincidence measurements between photons and charged particles are generally difficult. There are usually two types of experimental approaches for collision studies: photon detection with high resolution, and charged-particle coincidence measurement. Few measurements have been reported with photon and ion in coincidence. Here the focus will be on the charged-particle measurements. Before the invention of the reaction microscope [75] or COLTRIMS [76], only total cross sections and partial cross sections were obtained. Owing to the powerful multiple-coincidence measurement of charged particles, the reaction microscope/COLTRIMS opens the way to perform multifold and even fully differential cross section (FDCS) studies with reasonably good energy/momentum resolutions.
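To make this state-resolving power concrete: for single electron capture, the standard COLTRIMS kinematics (a textbook relation quoted here for orientation; sign conventions vary between papers) links the longitudinal momentum of the recoil ion to the inelasticity Q of the reaction, in atomic units,

\[ p_{R,\parallel} \simeq -\frac{Q}{v_p} - \frac{v_p}{2}, \]

where v_p is the projectile velocity. Since each final state of the captured electron has its own Q value, the measured longitudinal recoil-momentum spectrum splits into discrete lines, which is what allows the principal quantum numbers, subshells and even spin states discussed below to be separated.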
The cross sections differential in scattering angle provide rich information on the impact-parameter-dependent collision dynamics, and electron emission mechanisms have been studied there over a broad range of collision velocities [77]. In low- to intermediate-energy EC collisions, it is now possible to perform studies of quantum-state-resolved charge exchange processes with high resolution. Figure 10 shows single EC in collisions of 63 keV Ne7+ ions with He studied using COLTRIMS [78], where the state-resolved differential cross sections have been determined with respect to the principal quantum number, subshell levels and spin states of the captured electron. Furthermore, it is possible to exploit the high-resolution method to study the electron correlations in double and multiple EC processes, and to gain new insight into the mechanisms, e.g. symmetric/asymmetric population, sequential/non-sequential processes. At high impact energies, only a few measurements of FDCSs have been made. In a regime where the first Born approximation is assumed to be valid, Schulz et al [79] found unexpected discrepancies between theory and experiment in the electron angular distribution in the perpendicular plane for single ionization of He by 100 MeV/u C6+. This 'C6+ puzzle' frustrates the community and needs to be solved in future experiments.

When more electrons are involved in a charge exchange process, electron correlation plays an important role, which is essential to understand many-body dynamics. For instance, in 80 keV/u Ne8+ on He collisions involving double-electron transfer with one electron ionized, distinct features of the correlated processes can be well separated from the uncorrelated ones [80]. With the multiple-coincidence technique, Gao et al [81] obtained fully differential data for the transfer-ionization process in which one electron is transferred and two electrons are ionized (a five-body process). It is found that the sum electron momentum and the individual electron momenta are surprisingly strongly correlated with the total momentum transfer. Dedicated experiments on multiple ionization are expected to shed further light on these phenomena, and rigorous theoretical treatments are needed as well.

In the past decades, the targets used in ion collisions have been extended from simple atoms to molecules and even to atomic/molecular dimers/trimers, etc. The structure information of these complex systems serves as extra dimensions for the investigation of structure-dependent collision dynamics. The molecular frame can be determined from the coincidence detection of the charged particles produced in these processes. Consequently, the matter-wave character of massive particles has also been observed at the atomic scale: Fraunhofer diffraction in atomic collisions and interference in molecule-involved collisions. Schmidt et al [82] and Zhang et al [83] revealed that the interference pattern, in contrast to the optical double-slit experiment, resulted from the π phase shift due to the parity change of the molecular hydrogen ion in the collision. Later, the double-slit interference effects were used to extract the phase information in collisions with asymmetric diatomic molecules [84]. Figure 11 shows the asymmetry and shift of the two-center interference patterns of He2+ on CO. At two different collision energies, the scattering phase differences of He2+ on C compared to He2+ on O can be extracted.

Current and future challenges. Despite many successes in ion-atom collision studies, there are still challenges.
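In the simplest heuristic two-center picture (a sketch for orientation only, not the actual analysis of [84]), emission from an asymmetric diatomic such as CO is a coherent sum of amplitudes from the two centers separated by the internuclear vector d,

\[ \frac{\mathrm{d}\sigma}{\mathrm{d}\Omega} \;\propto\; \big|f_{\mathrm C} + f_{\mathrm O}\,e^{\,i(\mathbf{q}\cdot\mathbf{d}+\Delta\phi)}\big|^{2} \;=\; |f_{\mathrm C}|^{2}+|f_{\mathrm O}|^{2}+2\,|f_{\mathrm C}||f_{\mathrm O}|\cos(\mathbf{q}\cdot\mathbf{d}+\Delta\phi), \]

with q the momentum transfer. For a homonuclear molecule (|f_C| = |f_O|, Δφ = 0) the fringes are symmetric; for CO the amplitude imbalance and the relative scattering phase Δφ shift and distort the pattern, and it is from this asymmetry and shift that the phase difference can be extracted.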
At the very least, here are some possible perspectives that one can look forward to in the near future. First, photon measurements in coincidence with charged particles. In ion collisions, photon emission from residual ions (recoil ions and/or scattered ions in multiply excited states) can provide additional information about the collision dynamics. However, sophisticated techniques in which photons and charged particles are recorded in coincidence remain quite challenging. Second, the phase information of the collision processes. The scattering phase contains important information about the collision dynamics. For atomic transitions induced by laser fields, it has been shown that phase measurements have great potential for a deeper understanding of the few-body problem. However, the long-range character of the Coulomb field of the projectile in ion collisions makes it nearly impossible to define the start/end points of the interaction as in photon experiments. So far there is only one successful scattering-phase measurement, achieved via two-center interference effects in ion collision processes [84], and it still challenges the community to find proper ways to extract dedicated phase information on ion-atom collision dynamics. Last but not least, electron correlation still challenges the understanding of collision dynamics. HCIs have a strong Coulomb potential and can thus capture many electrons simultaneously when interacting with atoms at low and intermediate impact energies. These electrons usually populate highly excited states and will decay via autoionization or radiation, where electron correlation plays a crucial role. High-precision measurements are needed to understand the dynamics and decay processes. When an HCI with relativistic velocity interacts with atoms/molecules, the interaction time is on a sub-attosecond time scale, similar to an ultrafast and strong electromagnetic pulse, and target electrons can be kicked out instantly. This may create the ideal Heisenberg condition for studying electron correlation in an atom. On the other hand, strong electromagnetic fields and relativistic effects will provide unprecedented extreme conditions and will lead to new phenomena in collision processes. All these investigations will challenge present experimental technologies.

Advances in science and technology to meet challenges. The theoretical description of ion-atom collision dynamics is making progress (see section 5) thanks to more experimental differential data and the rapid increase of computation power. On the experimental side, MOTRIMS will provide much higher resolution for collision studies, although it is applicable to only a few atomic species. Some attempts have been made to incorporate photon detection into the reaction microscope to obtain more accurate electronic-level information. We are developing new ideas for scattering-phase measurements, which will help to study details of the collision dynamics. We have pointed out a novel, very fast breakup mechanism of dicationic acetylene dimers resulting from intermolecular proton transfer induced by ions [85], and we even expect breakup resulting from heavy-ion transfer processes in other dimers. These are important and relevant to DNA double-strand breaks induced by radiation. Furthermore, the rapid development of laser technology in recent years may help to realize laser-assisted collision studies.

Figure 10. Single EC in collisions of 63 keV Ne7+ ions with He studied using COLTRIMS [78]. Reproduced from [78]. © IOP Publishing Ltd. All rights reserved.
In the meantime, new storage-ring facilities now under construction, which will be equipped with a reaction microscope and a high-resolution electron spectrometer, will provide HCIs with relativistic velocities and will thus create excellent opportunities for the investigation of collision dynamics under extreme conditions.

Concluding remarks. Ion-atom collision processes are by far the best choice for exploring the few-body problem. First, in atomic physics the underlying Coulomb force between charged particles is precisely known and structure properties can be obtained with high accuracy; this guarantees that any discrepancies between experimental observations and theoretical predictions result from the quantum models. Second, in ion-atom collisions the number of actively interacting particles can be well controlled, which is essential to examine few-body processes. The few-body problem is a long-standing fundamental topic in science; more degrees of freedom will become accessible in experiments with the newly developing technologies, and the investigation of few-body dynamics in atomic collisions is definitely needed.

Instituto de Física Rosario (CONICET-UNR), Argentina

Status. A controversy between the corpuscular and the wave nature of electrons remained for a long time. Based on the oscillatory structure observed in measurements of photoabsorption cross sections of the diatomic N2 and O2 molecules, in 1966 Cohen and Fano [86] suggested that this behavior arises from the fact that an electron is coherently emitted from the proximities of the nuclei of the targets. This mechanism was associated with the two-slit scenario of the Thomas Young experiment [87], where the coherent superposition of electrons demonstrated the wave character of these quantum objects. The effect was measured 35 years after the Cohen and Fano prediction, but for impact of 60 MeV/u Kr34+ on H2 molecules [88]. In order to make the effect visible, experimental double differential cross sections (DDCS) as a function of the final electron velocity, at fixed emission angles, were divided by the corresponding theoretical ones for two independent effective H atoms (see figure 12; see also [89] for a case where experimental measurements on atomic H were employed for comparison). Fitted straight lines were drawn to enhance the visibility of the spectral structure. As a result, interference patterns were observed and a simple theoretical interpretation was given. Encouraged by this seminal work, numerous experiments and theoretical models were developed in the following years. In particular, evidence for the existence of the effect was also given for the impact of photon and electron beams, showing that the essence of this coherent behavior resides in the two-center character of the target and depends only in a secondary way on the projectile type (for a general review see [90]). Usually for ion beams, perturbative approximations were employed to describe the reaction within an independent-electron approximation, assuming that the residual target (including the non-ionized electrons) remains frozen until the projectile-target interaction ends. This assumption is supported by the fact that the collision times considered are on the sub-femtosecond scale, while the vibrational and rotational ones are much longer. Consequently, the orientation of the molecule remains fixed in space.
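The essence of the original Cohen-Fano prediction can be written compactly (an orientation-averaged, first-order sketch; the detailed models described next go well beyond it): coherent emission from the two identical centers modulates the molecular cross section relative to twice the atomic one as

\[ \frac{\sigma_{\mathrm{mol}}(k)}{2\,\sigma_{\mathrm{at}}(k)} \;\approx\; 1+\frac{\sin(kd)}{kd}, \]

where k is the momentum of the emitted electron and d the internuclear distance. It is precisely this slowly oscillating factor that the DDCS-ratio procedure of figure 12 is designed to expose, and its period in k is what allows bond lengths to be estimated from measured oscillations.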
In general, the molecular bound orbitals are described as linear combinations of different bound states of the atoms that compose the target, and the molecular continuum in the exit channel is described by means of effective continuum states. In order to obtain DDCS, the matrix elements describing the collision are integrated over the momentum transfer and then averaged over all possible molecular orientations. Further theoretical work for H2 targets focused on describing the dependence of the angular distribution of emitted electrons on the molecular orientation, for impact of highly energetic bare carbon ions. Thus, triple differential cross sections (TDCS) were calculated for a coplanar geometry, where the molecule, the emitted electron and the projectile velocity are all in the same plane. It was shown that the presence of interference patterns is favored for a molecular orientation coinciding with that of the initial projectile velocity v. The contribution from different momentum transfers tends to obscure the corresponding oscillations as the molecule is aligned perpendicularly to v. FDCSs were also calculated for proton beams in order to investigate the contribution of fixed values of the momentum transfer. They showed that, under certain kinematical conditions, when the molecular orientation is perpendicular to the impact velocity the so-called recoil peak tends to disappear due to destructive interference contributions. Concerning the influence of the molecular orientation, experimental and theoretical results for the electron spectra as a function of projectile scattering angle at fixed electron losses suggest that for small scattering angles a transverse molecular orientation dominates, whereas for large scattering angles a parallel orientation is preferred.

Further research for proton impact was extended to the case of multi-orbital targets like N2 molecules in a coplanar geometry [91]. The theoretical studies show the presence of interference patterns separately for each of the molecular orbitals in the angular distribution of emitted electrons (see figure 13). The number of lobes revealing the appearance of coherent emission tends to increase as the electron energy increases, and coherent emission appears to be favored for a molecular orientation parallel to v (directed to 0°), as is the case for H2. The oscillations corresponding to the different orbitals are shifted with respect to one another, so that the DDCS resulting from summing all their contributions do not present any signature of interference effects, as observed in existing experiments [92].

Current and future challenges. A considerable number of improvements are necessary on both the theoretical and experimental sides. A tentative short list is given in the following. From the theoretical side, it appears relevant to investigate the role played by electron correlation not only in the description of the molecular orbitals but also in an appropriate description of the molecular continuum in the exit channel, especially in the case of multi-electron orbitals. The inclusion of vibrational and rotational motions will also be necessary in the future. Furthermore, measurements of multiple differential cross sections that simultaneously take into account the ejected-electron coordinates and the orientation of the molecule, including or not the projectile scattering angle, appear fundamental for a more complete understanding of the effect.
The signatures of constructive and destructive interference contributions in FDCS for different molecular orientations with respect to the momentum transfer vector, in coplanar and non-coplanar geometries, are relevant. For the case of electron-impact ionization of H2 in coplanar geometries, it is observed that under certain kinematical conditions the binary and recoil peaks may disappear, depending on the gerade or ungerade character of the final residual states [93]. Recent predictions [94] for ions seem to show a behavior similar to the one obtained for electron beams. The sum over all molecular orientations allows an adequate description of the physics involved in existing experimental TDCS, and an increase of the binary peak is observed due to constructive interferences. The influence of the symmetry character of the bound states is a main aspect to consider. For example, the inner gerade and ungerade orbitals of the N2 target show emission spectra in phase opposition. This effect has been shown to come from the mentioned symmetry character. Experimental research distinguishing the initial and final orbitals is an ambitious but useful project. The verification of the possible existence of double-frequency interference effects, associated with the scattering of the ejected electron on both molecular centers, is a matter for further research [95].

Advances in science and technology to meet challenges. As mentioned before, much theoretical work is still necessary for a more complete understanding of the physics present in interference effects due to coherent electron emission from molecular targets. From the experimental point of view, a complete mapping of all resulting particles, including the orientation of the molecular target during the collision, appears as a challenge. It could perhaps be achieved by combining cold target recoil ion momentum spectroscopy (COLTRIMS) and laser techniques.

Concluding remarks. A brief review of the state of the art in the physical interpretation of interference effects due to coherent electron emission from molecular targets impacted by fast ion beams was presented. Future challenges for theoretical and experimental research lines were proposed.

Tata Institute of Fundamental Research, India

Status. Research on heavy-ion collisions with atoms, molecules, clusters and solids is a natural extension of electron-atom collision studies. While electron-induced atomic and molecular collisions offer an excellent tool to probe the basic structure, properties and collisional aspects of atoms and molecules, collisions with ions have a wider scope due to the possibility of varying the perturbation strength on a much wider scale. This allows one to deal with collision mechanisms from the weak- to the strong-perturbation regime. With the gradual development of experimental tools, such as accelerators, ion sources, fast electronics and detection systems, many of the investigations have been renewed owing to their fundamental interest in molecular physics as well as their applications in interdisciplinary sciences: radiation biology, plasma physics, astrophysics and astrochemistry. Besides the well-known mechanisms of molecular interactions, a completely new decay process, interatomic and intermolecular Coulombic decay (ICD), has been discovered [96] in the case of clusters.
It is only recently that Young-type electron interference due to spatial coherence has been observed in a molecular double slit [97,98] under heavy-ion collisions, following the original prediction by Cohen and Fano (1966). The experimental study of electron emission from atomic H under heavy-ion impact was crucial to identify, unambiguously, the interference oscillation. Besides this, a new approach of using the forward-backward asymmetry parameter to identify the interference, and the observation of a second-order interference, were some of the highlights. Synchrotron-based photoionization studies by Uwe Becker et al [99] as well as fast electron-beam-induced electron emission (Tribedi et al PRA 2016) have revealed the spatial-coherence-induced oscillations for N2 and O2 molecules. The interference oscillation has been used to estimate the bond length in the case of simple hydrocarbon molecules. The fragmentation dynamics of multiply charged smaller polyatomic molecular ions is under study using high-resolution recoil-ion momentum imaging, to identify under what conditions concerted or sequential processes are crucial [100].

A new dimension to this research is the widespread experimental and theoretical investigation of the ionization and fragmentation of biologically relevant large and complex molecules under the interaction of swift ions. A detailed and systematic investigation is likely to open new avenues towards applications in related fields, such as radiation biology, in particular for hadron therapy. In heavy-ion radiation therapy, the energy loss of swift ions inside the body exhibits a maximum in the Bragg peak region. Therefore, the study of the ionization or fragmentation of the nucleobases, DNA, RNA, the sugar-phosphate backbone or the water molecule over a wide energy range across this Bragg-peak region will be of great importance. It is now well known that electrons, even with energies lower than the ionization threshold, are primarily responsible for the single- and double-strand breaks in DNA.

Another direction of this research involves the study of plasmon excitation, by detecting the plasmon electron peak in the low-energy part of the electron spectrum [104] for C60 (figure 15(a)) as well as for PAH molecules [105] (e.g. for coronene, as shown in figure 15(b)). Polycyclic aromatic hydrocarbon (PAH) molecules are quite abundant in interstellar space and hence attract a lot of interest in astrophysics and astrochemistry. The recently observed [103] plasmon excitation peak at ∼17 eV for coronene helps to explore the UV photon absorption by the PAHs in the interstellar medium. Besides PAHs, the C60 fullerene and its clusters are of importance in the study of many-body physics and giant plasmon excitation. The clusters of PAHs are also important for nano-electronics and UV plasmonic devices. Ion collisions with such clusters, and the question of instability against fragmentation, have been addressed by a few groups, e.g. at Stockholm, Caen and KVI. Scaling laws to estimate the total cross sections for uracil, water and other large molecules are also being investigated by different authors (e.g. [103,106] and Olson EPJD 2019).

Current and future challenges. From a large set of data on ion-atom and ion-molecule collisions, it appears that the theoretical techniques based on CDW-EIS and its various versions explain the data sets fairly well, particularly for simple, light target atoms.
However, for large molecules including nucleobases, the back-scattering phenomenon is yet to be understood, since one often finds deviations of theory for ejected-electron emission at large scattering angles. In the case of such large molecules, many-electron correlation, size effects or any collective effects need to be included in the models. In view of their potential applications in diverse fields like radiobiology or medical imaging, there is an effort to develop Monte Carlo numerical codes for charged-particle transport and energy loss. Two such approaches, by Champion and Garcia, can be found in EPJD 71, 130 (2017) and Rad Phys Chem 130, 371 (2017). In spite of the many measurements carried out, one finds a lack of accurate experimental data on the total energy loss of charged particles in water, particularly around the Bragg peak. Such measurements will automatically include the contributions of ionization, EC and fragmentation to the total energy loss. Whether the use of water to simulate the biological medium is correct or not needs to be answered experimentally. On the practical front, the challenging task is to carry out experiments which must be tuned to include the realistic environment around the target molecules. For example, most of the ion-collision experiments are carried out with isolated biomolecules in the vapor phase. In reality, the nucleobases are parts of the DNA/RNA present in biological matter along with liquid water.

Advances in science and technology to meet challenges. In a certain class of experiments one uses clusters of biomolecules, water clusters or liquid drops. For example, the study of the fragmentation of clusters of 5Br-uracil indicates a clustering effect, by triggering new pathways for fragmentation, e.g. the loss of the OH group (PCCP 19, 19807 (2017)). Therefore, exploring the interaction dynamics of these biologically active molecules surrounded by an environment, such as solvent molecules, water, alcohol or other smaller biomolecules, is an experimentally challenging task. To this end, a biomolecular ion-beam facility, comprising an electrospray ion source and an ion trap with a cooling buffer gas, capable of delivering low-energy, internally cold molecular ions, could be used as an injector to a storage ring. Such cross-beam experiments involving accelerators are being developed in different laboratories worldwide (e.g. Stockholm, Aarhus, RIKEN).

Metal nanoparticles are proposed as candidate sensitizers in cancer treatment by hadron therapy [104]. The injection of such nanoparticles into tumors increases the biological effectiveness due to the enhanced production of low-energy electrons or radicals. Collective plasmon excitation in the inserted metal is a possible mechanism for such enhancement [107]. In reality this is an extremely challenging experiment. The initial experiments, which are in progress, involve different halo-uracils as targets, such as bromouracil, iodouracil, etc. In a benchmark system such as C60 it has been observed that nearly 50% of the electron emission, and hence a similar fraction of the energy loss, can be accounted for by the collective excitation alone. Such collective excitations are dealt with theoretically by a few groups: Eric Suraud et al (Toulouse, France) and Solovyov and co-workers (Frankfurt group) (e.g. see [105], EPJD 66, 254 (2012), JPCS 490, 012159 (2014)).
From the radiation-biology point of view, the contribution of the ICD process to DNA-strand breakage needs to be investigated in more detail, particularly its role in the case of heavy-ion collisions. The last, but not least, point relates to ion-atom collisions. In the case of electron emission in ion-atom or ion-molecule collisions, the deviation of the models for backward electron emission is yet to be understood. The observed double frequency of the interference oscillation in backward electron emission from H2, compared to the forward emission, is likewise yet to be understood. In interference experiments, the DDCS for the molecule is compared with the atomic DDCS. However, only in one experiment was such an explicit comparison possible, i.e. by using atomic H [98]; for other atomic targets, e.g. N or O atoms, such experiments are awaited.

Concluding remarks. The mature field of ion-molecule interactions has taken on a new shape as an interdisciplinary science due to constant contributions by researchers across different branches of science, i.e. plasma and astrophysics, biology, chemistry and radiobiology. New discoveries, such as ICD in photoionization and electron interference in the molecular double slit, have kept the field vibrant. The theme of ion-molecule collisions continues to offer an active area of research which enriches our basic understanding of large biomolecules, PAH molecules, fullerenes and clusters. Future research may aim towards the interaction with biomolecules attached to an environment and with nano-inserted DNA/RNA molecules, which is, however, a challenging task. The development of a range of sophisticated equipment (ion traps, ion sources, storage rings, ion-beam facilities) will continue to push the field of molecular sciences.

Acknowledgments. I would like to thank my students, colleagues and R Rivarola and C Champion for long-standing theory-experiment collaborations.

Ion-ion collisions in the intermediate velocity regime

Emily Lamour, INSP-Sorbonne University, France

Status. Ion-ion collisions provide a unique scenario for testing our knowledge of fundamental electronic processes such as capture, ionization and excitation. Their study is also motivated by the fact that they are strongly correlated to the ion energy transfer under various plasma conditions. Whereas ion-ion experiments for high-energy physics (like the experiments at CERN) are currently carried out, ion-ion collisions for atomic physics have so far been performed mainly in the context of magnetically confined plasmas [108]. There, in the low-velocity regime, typically at center-of-mass energies of a few keV up to a few 100 keV, cross section measurements of the charge transfer process [109,110] (by far the dominant process) were performed with light ions (up to oxygen) for collisions with bare, hydrogen- or helium-like ions, or with low-charged ions for heavy systems like Bi and Pb, where charge states of up to 4+ have been used. Investigation of the intermediate velocity regime (the pink area in figure 16) is more complicated due to the fact that there all the primary electronic processes (EC, loss and excitation) reach their optimum probability, leading to the maximum of the ion stopping power and, consequently, to the strongest effects on material modifications.
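The location of this regime can be estimated by velocity matching. A minimal back-of-envelope sketch (my own illustrative numbers, assuming the active electron is a hydrogenic K-shell electron with orbital speed Zαc) reproduces the ~8 MeV/u figure quoted below for symmetric Ar collisions:

```python
# Velocity-matching estimate for the intermediate collision regime.
# Assumption (for illustration only): the active electron is a hydrogenic
# K-shell electron, whose orbital speed is v/c ~ Z * alpha.
ALPHA = 1.0 / 137.036   # fine-structure constant
AMU_MEV = 931.494       # atomic mass unit in MeV/c^2

def matching_energy_mev_per_u(z: int) -> float:
    """Projectile kinetic energy per nucleon (MeV/u) at which the
    projectile speed equals the K-shell electron speed of charge z."""
    beta = z * ALPHA                        # v/c of a hydrogenic 1s electron
    gamma = 1.0 / (1.0 - beta ** 2) ** 0.5  # relativistic factor
    return (gamma - 1.0) * AMU_MEV

# Symmetric Ar(Q+) on Ar(q+) collisions, Z = 18:
print(f"Ar K shell: ~{matching_energy_mev_per_u(18):.1f} MeV/u")  # ~8.1 MeV/u
```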
A few ion-plasma experiments, motivated by their consequences for inertial-confinement fusion physics, have been carried out with swift heavy ions in hydrogen or deuterium low-density plasmas and, later on, in laser-generated carbon plasmas of one percent solid-state density. Evidence for drastic differences in ion stopping power when ions interact with either neutral or ionized matter has been clearly established (see, for instance, [111]). Nevertheless, extracting direct information on elementary process modifications is almost impossible, since many charge states may be present at the same time in the plasma. Likewise, in simple ion-(neutral) atom collisions, the presence of many electrons makes the determination of experimental cross sections of a single elementary collision process extremely difficult. It can be achieved only in very specific cases: for instance, taking full advantage of the spin selectivity in excited-state populations, the single 1s→2p excitation cross section has been measured for collisions between 13.6 MeV/u Ar16+ ions and neutral atoms (from He to Xe) [112]. From a theoretical point of view, none of the most sophisticated presently available calculations is able to treat all the processes together on the same footing, pointing to a fundamental issue in our understanding of elementary atomic interactions. This circumstance gives rise to a paradoxical situation where, for this velocity regime, cross sections are very hard to predict, while their knowledge is of critical importance. A significant test of theories can only be performed if the presence of numerous electrons can be avoided (beside the simplest case of a proton on hydrogen collision). Alternatively, the effect of other electrons participating in the processes must be explored. In this context, our knowledge of the intermediate collision regime is really limited. There is a lack of measurements, and the available theoretical calculations are at their limit of validity. In other words, this regime corresponds to a real 'terra incognita' for atomic physics. With ion-ion collision experiments, we deal with 'clean plasmas' of well-known charge states; we have the ability to investigate a large variety of systems and the possibility to easily scan the charge state of each ion partner. Nevertheless, the realization of these experiments remains a real challenge involving several steps that have to be mastered in order to perform absolute cross section measurements.

Current and future challenges. The intermediate regime is reached when the relative target-projectile velocity is of the same order as that of the active electrons in their initial states. For instance, in the case of the symmetric collision ArQ+ on Arq+, it occurs at a collision energy of around 8 MeV/u. Besides the possibility to reach the pure three-body problem (bare ion on hydrogenic target) as a benchmark, adding electrons bound to the target and/or to the projectile, one by one, should allow several effects to be quantified, such as:

1. closure and/or opening of different channels: capture channels, for example, are open for bare projectiles but may be closed (or less likely) for other charge states;
2. electron-electron interactions: besides correlations, the presence of additional electrons can also directly increase (anti-screening) or decrease (screening) the mechanism probabilities;
3. multi-electron processes: often neglected, they can become as large as single processes in some cases (see, for instance, [112]);
4. Coulomb forces acting on the electron cloud in the entrance and exit pathways of the collision (an effect related to the total charge of the collision partners).

Therefore, these kinds of studies should provide original data on the quantum dynamics of N-body systems. So far, no experiment has been performed in this regime, mainly due to experimental issues, among which are the requirements of (i) very high ion-beam intensities of good optical quality with perfect charge-state control of both ion beams, (ii) control of the overlap between the ion beams, and (iii) high-energy ion detectors with a good count-rate capability (up to 1 MHz) that need to be, in particular, radiation resistant. Additionally, an efficient crossed-beam arrangement running under ultra-high vacuum conditions is needed, with the possibility to slightly change the energy of the low-energy ions in the interaction zone in order to tag the true events against those coming from collisions with the remaining atoms in the residual gas. Obviously, to properly analyze the collision products, powerful charge-state dispersion systems coupled to ion detectors are required for multi-coincidence measurements.

Advances in science and technology to meet challenges. For the low-energy channel, an ion source connected to a beam transport line currently provides ions in the keV/u energy range. The line needs to be well adapted to shape the beam and clean it of the non-desired charge states just prior to the collision zone (for background reduction). This beam constitutes a target of rather dilute density (a few 10^14 cm^-3 maximum). Therefore, for the high-energy channel, very intense beams are mandatory. The forthcoming availability of MeV-energy, stable ion beams of high optical quality at the French and German large-scale facilities, GANIL/SPIRAL2/S3 [113] and FAIR/CRYRING ([114,117], see section 19), now opens real opportunities towards the study of the intermediate collision regime. In fact, two experimental approaches have to be considered, depending on the facility used. With SPIRAL2/S3, very intense ion beams of between 10^12 and 10^14 particles per second of medium atomic number (from He to Ar) will be delivered, allowing a 'single-pass experiment' arrangement. In this case, the stripping of very intense ion beams to reach the desired charge state raises the issue of the resistance of the stripper (a thin solid foil) to the ion energy deposition. This can be overcome by using a rotating stripper, so that the beam power is distributed over a much larger volume [115]. With the CRYRING ion storage ring, the effective target density is increased simply due to the revolution frequency of the MeV/u ion beam, in what we call a 'multi-pass experiment' arrangement. CRYRING is equipped with an electron cooler operating with an ultracold electron beam, making it possible to provide ion beams of very high optical quality. With this facility, MeV/u ions heavier than the ones provided by SPIRAL2 (presumably up to U) will be stored, allowing the study of asymmetric collision systems. For the heaviest ions stored at energies in the region of ∼10 MeV/u or less, we also enter the realm where a quasi-molecule can be formed during the collision, with an electric field from the combined nuclear charge of Z_target + Z_projectile > 174 allowing the spontaneous creation of electron-positron pairs (supercritical field) [116].
This offers the possibility to investigate the most controlled environment where fully relativistic QED calculations are required to study atomic processes in the presence of extreme electromagnetic fields (i.e. exceeding the critical Schwinger limit of 2 × 10^16 V cm^-1).

Concluding remarks. When a few-MeV/u ions collide with a few-keV/u ions, a hitherto unexplored collision regime is reached: the regime where the ion energy transfer is at its maximum. There, measurements and reliable theoretical predictions are completely lacking. With the performance of the new upcoming facilities in France and Germany, a complete experimental program of ion-ion collisions is now clearly conceivable, involving a large variety of collision systems with the possibility to tune both the projectile and target charge states over a wide range, up to bare ion on hydrogenic target. There is no doubt that original experimental data on the quantum dynamics of N-body systems are to be expected.

Acknowledgments. This work is not and will not be feasible without the full ASUR team at INSP, our colleagues from CIMAP (Caen, France) and from the Atomic Physics group at GSI and Jena University (Germany). We thank the GANIL staff and the Atomic Physics group at Giessen (Germany). This work is supported by the French National Agency under the two contracts ANR-13-IS04-0007 and 10-EQPX-0046 as well as by the Laboratory of Excellence Plas@Par (Plasma à Paris).

Frequency metrology with trapped and stored HCIs

José R Crespo López-Urrutia, Max-Planck-Institut für Kernphysik, Germany

Status. Arguably, all interactions in the Standard Model of physics leave their watermark on the electronic wave function. The question is, can one access those effects through precision measurements? Until now, the answer in atomic physics has often been yes; witnesses are atomic clocks, hyperfine anomalies, Mössbauer-based general-relativity tests, atomic parity non-conservation and many other examples. An even wider landscape of possibilities is reviewed in [118]. For the most accurate method in science, frequency metrology, the introduction of the frequency comb as well as the use of atoms and singly charged ions as clock references [119], with laser-cooling techniques bringing them into the Doppler-free motional ground state, are enabling stupendous and steady gains in accuracy. With frequency (ν) determinations in optical transitions by far surpassing in accuracy those of classical microwave atomic clocks, novel methods of optical frequency metrology (OFM) are starting to contribute to fundamental and applied research in manifold ways. At the level of Δν/ν ≈ 10^-18, recently demonstrated [120], universally reproducible atomic frequency standards are also equally sensitive to various physical aspects of their environment and become the finest physical sensors available.

Until recently, HCIs stayed outside those developments, since directly laser cooling them is not possible. Fundamental studies with HCIs in the field of QED in extreme fields, nuclear-size effects and the like had the advantage of the high powers of the atomic number Z with which those effects scale up with the charge state (see, e.g. [121,122]). Astrophysics benefited from controlled spectroscopic studies in the laboratory [123], and atomic structure theory was stringently benchmarked [124].
However, HCI spectroscopy in electron beam ion traps [124], merged-beam setups and storage rings was intrinsically limited by Doppler-borne broadening and shifts to an accuracy typically worse than one part per million, more than ten orders of magnitude away from state-of-the-art OFM. None of the proposed techniques, including the use of free-electron lasers and highly monochromatic synchrotron radiation, could suppress this drawback of those devices. Very recently, with the introduction of sympathetic laser cooling [125] in a cryogenic radio-frequency trap (see figure 17) at the Max-Planck-Institut für Kernphysik (MPIK), HCI cooling to the motional ground state has nonetheless become achievable, breaking the present barriers. In addition, this step has suddenly increased the variety of species available for OFM by a large factor, since many isoelectronic sequences in HCIs possess narrow optical transitions suitable for laser excitation from the electronic ground state. Many applications in fundamental research will benefit from this advance [124,126]. Moreover, HCIs have high ionization potentials and can therefore be excited by lasers from the extreme ultraviolet to the x-ray region without being photoionized, making narrow electronic transitions at such high frequencies of the electromagnetic field possible (figure 18). In this way, supported by rapid advances in the generation of narrow-band lasers in those spectral regions based on high-harmonic generation, extreme frequency metrology (XFM) will find in HCIs [127] adequate frequency standards for reference and stabilization, beyond the few that potential nuclear clocks could provide.

Current and future challenges. From a scientific point of view, the foremost task is testing the basic theories that support our understanding of nature, such as QED and general relativity; here also the standard model of physics, parity non-conservation, cosmology and the dark matter question appear. Atomic physics and frequency metrology are already yielding deeper and deeper insights into those, and presumably an extension to higher photon energies will open new possibilities to contribute to these fields. However, our ability to benchmark theoretical developments against the most accurate results from OFM still encounters a conundrum that is both hindrance and opportunity, namely the limitations of our knowledge about the atomic nucleus.

Figure 17. Invisible, a sympathetically cooled Ar13+ ion occupies the left side of a Coulomb crystal of laser-cooled Be+ ions. Bringing down the translational temperature of the HCI by nearly eight orders of magnitude, this key step prepares the ion for subsequent ground-state cooling followed by quantum-logic spectroscopy in a cryogenic radio-frequency trap (Image: MPIK).

Figure 18. Frequency metrology with HCIs: an electron beam ion trap emits HCIs that are decelerated, bunched, and injected into a linear radio-frequency trap, where sympathetic cooling with a previously prepared Coulomb crystal of several Be+ ions takes place. Then, an ensemble of one single HCI with one single Be+ ion is prepared and cooled to its motional ground state. Resonant excitation of forbidden transitions in the HCI, and detection of those excitations by means of quantum-logic spectroscopy, follow. In this method, interrogation of the Raman side-band transitions of Be+ yields the state of the HCI (Image: MPIK).
A very good example is the proton-radius issue, where the superior sensitivity of laser techniques has challenged long-standing high-energy electron scattering results, albeit not all problems have been solved yet. QED in stable, bound systems is the most accurate theory of physics, but further advances will require a better understanding of nuclear-size contributions to the experimentally determined atomic transition frequencies. On the other side, proposed combinations of such measurements can provide exactly that information on the nucleus, thus facilitating tests of nuclear theory and deeper benchmarking of QED. This would be more propitious than it appears at first sight: as the primordial quantum-field theory, any improvements in the methods of QED will lead to advances in other parts of the standard model, both from the point of view of the mathematical methodology in use and from the better understanding of perturbations that affect measurements at high and low momentum transfer.

Advances in science and technology to meet challenges. Exquisite methods of OFM [119] have been developed in the last decade by pioneering groups worldwide. Transferring those to the newly accessible HCIs [125] will undoubtedly become a task for the next few years. Auspiciously, ground-state cooling of HCIs via sympathetic cooling, and quantum-logic spectroscopy with HCIs, have just been achieved by a collaboration of the QUEST group (Schmidt) at the Physikalisch-Technische Bundesanstalt (PTB) in Braunschweig and MPIK. This step makes HCIs available for the application of OFM at the highest levels of accuracy currently achievable. Other groups pursuing this route will also profit from it. With this clear path for OFM with HCIs already in the near future, the quest for an extension into the XFM domain is still pending [127]. Frequency combs in the VUV and XUV have already been demonstrated by a few groups. Attempts to make them available for use with HCIs are already underway. Perspectives for their application include, in principle, all that has already been shown to work in the optical region, plus fields of research which intrinsically require high photon energies. One interesting example for long-term research is the study of nuclear transitions unaffected by solid-state effects, that is, using isolated HCIs as targets carrying the nuclei of interest. Potentially, HCIs impervious to the x-rays needed to excite low-lying nuclear levels up to 100 keV, as in Mössbauer transitions, could be prepared and cooled. The extremely narrow linewidths of those x-ray transitions would enable even more sensitive probes of nuclear forces and fundamental physics.

Another fruitful approach to fundamental physics with HCIs is their use in high-precision Penning traps (see related references to recent work of various groups in [118,124]). This has delivered outstanding tests of QED through studies of the bound-electron g factor, and atomic-mass determinations that are sensitive to the binding energies of nuclei and their surrounding electronic shells at a level which is rapidly becoming competitive with standard x-ray spectroscopy of HCIs. Furthermore, in combination with rare-isotope beam facilities, Penning traps, which are already workhorses of nuclear physics, will also experience a stupendous push in accuracy from the application of sympathetic laser-cooling and quantum-logic techniques to the trapped HCIs.

Concluding remarks.
Beyond our daily perspective, HCI constitute the bulk of baryonic matter for the majority of the chemical elements in the universe. Due to their properties, they control radiation transport in stellar cores, and are the strongest spectral emitters in x-ray astrophysics, from stellar coronae to black-hole and active-galactic-nuclei environments. As isolated quantum systems, they are excellent probes for fundamental interactions, with the triple advantage of strongly scaled-up QED and nuclear-size effects, simpler electronic structure than atoms, and extreme suppression of sensitivity to spurious external perturbations of the electromagnetic field. Nowadays, their use in the laboratory is becoming easier, with smaller and more practical sources demonstrating their performance for exactly those purposes. Now, tamed for XFM by means of re-trapping followed by sympathetic cooling, their variety will constitute an invaluable advantage for fundamental and applied research. The periodic table acquires height with the ionic charge state; on the pathways across these largely pristine ranges, new spots at high vantages will offer broad scopes and far sights of the physics landscape. Leaving them unexplored would be a lost opportunity.

Acknowledgments. Stimulating discussions with J Berengut, K Blaum, D Budker, A Derevianko, V Flambaum, M Kozlov, M Safronova, P O Schmidt, and other colleagues driving this field with their ideas are gratefully acknowledged.

Highly-charged radionuclides
Yuri A Litvinov, GSI Helmholtz Center, Germany

Status. Highly-charged radionuclides (HCR) are systems with no or just a few atomic electrons, for instance hydrogen-like (H-like), helium-like (He-like) or lithium-like (Li-like) ions. These are clean nucleus-plus-lepton(s) systems with well-defined quantum numbers. Studies of isolated HCRs should help in understanding complicated processes involving HCRs in hot and dense stellar objects. The first major research subject with HCRs is devoted to studies of nuclear decay properties [128]. Here, decay channels known in neutral atoms can disappear (for instance, orbital electron capture (EC) is disabled in fully-ionized nuclides), while new decay channels may open up. One example of the latter is bound-state beta decay (β_b). Different from ordinary β− decay, the electron is not emitted into the continuum but occupies one of the free bound orbitals. To date, β_b decay has been measured for 5 nuclides (bare 163Dy66+, 187Re75+, 205Hg80+ and 206,207Tl81+), providing, among other important results, the first β_b/β− ratio that can be compared to time-mirrored EC/β+ ratios [128]. Very demanding is the measurement of the β_b decay of 205Tl81+, which is important for the determination of the pp solar-neutrino capture probability into the 2.3 keV state of 205Pb and can also be used to constrain the very end of the s-process (slow neutron capture) nucleosynthesis. Excellent examples illustrating the importance of the interplay of atomic and nuclear structure in describing weak decays are provided by first experiments addressing allowed Gamow-Teller EC decays in H- and He-like ions [129]. The EC decay rate in H-like 140Pr58+ and 142Pm60+ ions was found to be about 50% larger than in the respective He-like ions. This result is explained by considering the conservation of the total (nucleus + leptons) angular momentum and the defined helicity of the emitted electron neutrino [129]. A surprising consequence of the latter is that Gamow-Teller 1+ → 2+ EC transitions are disabled in H-like 122I52+ ions [129].
By selecting specific nuclei and transitions, forbidden decays and other subtle effects in weak decay can be addressed in the future. One specific example is 111Sn where, by studying the β decay rate in bare and H-like ions, a first direct measurement of electron screening in β decay can be achieved [129]. Concerning the electromagnetic decays, the atomic charge state can also have a significant influence on the decay rate. It is straightforward that the de-excitation of nuclei via internal conversion (IC) is disabled in fully-ionized nuclides. A new decay mode, the bound-state internal conversion (BIC) [130], can open up in HCRs. Here, an excited nuclear state resonantly transfers its excitation energy to a bound electron, which is excited to a bound atomic level at a higher energy. The time-reversed processes of IC and BIC are, respectively, nuclear excitation by electron capture and nuclear excitation by electron transition (NEEC/T). These exotic decay modes have been searched for over many decades. They may play an essential role in the population of nuclear excited states in plasmas, which in turn are the reason for the so-called stellar enhancement factors in nucleosynthesis modeling. Furthermore, efficient induced de-excitation of a nuclear isomer through the NEEC/T process is one of the dreams for the application of nuclear isomers to energy storage. A first observation of the NEEC process was reported in 2018 [131] and calls for an independent verification. Suggestions for searches of other exotic decay modes, like Pauli-forbidden transitions and bound electron-positron de-excitations, have been proposed and await their realization [132]. The second research subject is dielectronic recombination (DR) on HCRs [129]. The DR spectra of the in-flight generated Li-like 237U89+ and 234Pa87+ ions have been obtained [133]. The resonant nature of the DR process, which is sensitive to nuclear quantum numbers, might be used to select/purify nuclides of interest from unresolved contaminants [133]. This may be employed to investigate the lowest-energy isomeric states (229Th, 235U) [134]. The experimental verification of the existence of the 'nuclear clock' isomeric state in 229Th [135] makes its future extraction and trapping highly relevant. It is also interesting to measure the lifetime of the 229Th isomeric state as a function of the atomic charge state, that is, with enabled/disabled hyperfine splitting (HFS). The last application of HCRs addressed here concerns astrophysical reactions, where the nuclear reaction cross section is deduced by normalizing to the significantly better-known theoretical cross section for atomic K-shell radiative electron capture (K-REC). The first proton-capture reactions were performed on the stable nuclides 94Ru44+ and 124Xe54+ [136]. In the latter case, a center-of-mass energy as low as 6 MeV/u was achieved [137], thus approaching the Gamow window of the astrophysical p-process (proton capture process). The available experimental data are scarce, and such investigations are therefore in high demand.

Current and future challenges. Experimentally, precision investigations on HCRs are complicated, since the exotic nuclides have to be produced in a specific high atomic charge state and then be purified from inevitable contaminants [128,129]. Furthermore, except for very short lifetimes, all experiments require HCRs to be stored for an extended period of time in a preserved atomic charge state. The experimental studies with HCRs are presently routinely conducted at heavy-ion storage rings.
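The K-REC normalization mentioned above can be stated compactly: if nuclear-reaction events and K-REC x rays are recorded with the same beam, the same target and the same integrated luminosity, efficiency-corrected count ratios equal cross-section ratios. A minimal sketch in Python, where every number is an illustrative placeholder rather than a measured value:

```python
# Cross-section normalization to K-REC (all values illustrative only)
Y_reaction = 120        # counted proton-capture events
Y_krec     = 4.8e4      # counted K-REC x-ray events from the same run
eff_ratio  = 1.0        # assumed ratio of detection efficiencies
sigma_krec = 3.2e-21    # cm^2, theoretical K-REC cross section (assumed)

# The shared luminosity cancels: sigma_r / sigma_KREC = Y_r / Y_KREC
sigma_reaction = (Y_reaction / Y_krec) * eff_ratio * sigma_krec
print(f"sigma_reaction ~ {sigma_reaction:.1e} cm^2")   # ~8e-24 cm^2
```

The strength of the method is that the poorly known absolute luminosity and target thickness never enter; only the theoretical atomic cross section and relative detector efficiencies are needed.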
We note, though, that besides the storage rings there are also proposals for studying decays of HCRs in electron-beam ion traps (EBITs). Therefore, the main challenges are connected with the development of trapping devices, EBITs and storage rings, coupled to radioactive-ion beam facilities. Furthermore, different experiments require different energies and sophisticated beam manipulations. All experiments count on an excellent quality of electron-cooled ion beams. To keep HCRs stored, the detectors must be nondestructive or must intercept reaction products at special locations defined by the storage-ring ion optics. Since the experiments are conducted on rare ion species, an essential goal is to improve the sensitivity and efficiency of detectors, ideally up to ultimate sensitivity to individual ions and 100% efficiency. One example of the latter is illustrated in figure 19, where the novel resonant non-destructive Schottky detector is applied to detect EC decays of each stored H-like 142Pm60+ ion [129]. For the particle detectors, the electron-cooled beams have tiny transverse sizes, and so does the hit area on the detectors. Furthermore, the experiments at lower energies need detectors located directly in the ultra-high-vacuum environment of a storage ring, which is often a severe technological challenge [136,137]. The relevant center-of-mass energies for the astrophysical p-process are 2-6 MeV/u. Efficiently slowing down HCRs produced at high energies to these values is still a challenge [117]. Ultra-pure, thin, internal gas-jet targets are needed. The handling of low-energy beams interacting with gas targets, background due to Rutherford scattering, cooling efficiency, duty cycles, etc are still to be investigated. If successful, even lower energies relevant for, e.g., the rp-process (rapid proton capture process) can be envisioned. So far, only proton-capture reactions have been addressed. However, all kinds of proton- and alpha-induced reactions are of high interest.

Advances in science and technology to meet challenges. Presently there are only two operational heavy-ion storage rings capable of performing the relevant experiments. These are the experimental storage ring (ESR) at GSI in Darmstadt and the experimental cooler-storage ring at IMPCAS in Lanzhou [128,129]. However, both rings are designed for routine operation at high energies of several hundred MeV/u. Although the ESR is capable of slowing the ions down to energies as low as 3 MeV/u, there is an obvious need for dedicated low-energy storage rings. At GSI, the CRYRING@ESR project is being finalized ([117], section 19). Several pilot experiments have been approved to run in the near future. In addition to the ESR and CRYRING, the new-generation Facility for Antiproton and Ion Research (FAIR) will add a collector ring and a high-energy storage ring (HESR) ([128,129], section 19). Mainly due to the significantly increased secondary-beam intensities from the new powerful radioactive ion-beam facility Super-FRS, various studies of decays of HCRs will become possible, especially for nuclides relevant for astrophysics. Furthermore, the HITRAP facility will enable experiments on HCRs at rest (section 19). With all these new trapping devices, GSI/FAIR will offer HCRs in a wide range of energies, from essentially at rest (HITRAP) up to about 5 GeV/u (HESR) (section 19). In China, the High-Intensity Accelerator Facility (HIAF) is being planned [138]. This complex will include several storage rings and versatile experimental capabilities. A unique project to install a low-energy storage ring at ISOLDE/CERN has been proposed [134].
Such a ring would enable a broad range of experiments with HCRs. One distinct advantage of the project is that HCRs are injected into the ring at the required energy, omitting the inefficient deceleration process. In particular, the EC decay of H- and He-like 7Be would be measured, which is important for constraining the solar neutrino flux [134]. The project is presently postponed. A new storage-ring facility, R3, has been constructed behind the fragment separator at RIKEN in Japan. Atomic mass measurements and lifetime studies of very exotic HCRs are planned there [138].

Concluding remarks. The interest in HCRs is found at the intersection of atomic, nuclear, and plasma physics. Precision investigations with HCRs allow for investigating basic phenomena under very clean experimental conditions, where the atomic charge state and the corresponding quantum numbers are well defined. Thus, the complex processes found in hot and dense plasmas can be constrained. Addressed here are the decays of HCRs, which are extremely sensitive to the interplay of atomic and nuclear structure. Furthermore, DR and reaction studies on HCRs have huge discovery potential. Left aside here are atomic mass measurements [138]. All measurements will profit dramatically from the new possibilities offered by the next-generation accelerator complexes. Especially the low-energy storage rings and traps will boost the research field in the near future [117,134].

Tests of QED with highly charged ions
St. Petersburg State University, Russia

Status. Basic principles of QED were formulated by the beginning of the 1930s by merging quantum mechanics with special relativity. The discovery of the Lamb shift in 1947 stimulated theorists to complete the creation of QED by developing the renormalization technique. Until the beginning of the 1980s, tests of QED were mainly restricted to light atomic systems, where the calculations were performed in the weak-field approximation, which corresponds to small values of the parameter αZ (α is the fine structure constant and Z is the nuclear charge number). A unique opportunity to test QED in the strong-field regime, which requires calculations without any expansion in αZ, appeared when high-precision experiments with heavy few-electron ions became feasible. High-precision tests of QED effects with HCIs were first performed for the binding energies. The ground-state Lamb shift in H-like uranium, which is defined as the difference between the exact energy and the point-nucleus Dirac energy, was measured to be 460.2(4.6) eV [139]. The comparison of this experiment with the theoretical result, 463.99(39) eV (see [140] and references therein), provides a test of QED in a strong Coulomb field at the 2% level. Higher accuracy was achieved in experiments with Li-like ions. The present status of theory [141] and experiment [142] for the 2p1/2-2s transition energy in Li-like uranium provides a test of QED at the 0.2% level. For both H- and Li-like uranium ions, the theoretical uncertainty is presently defined by some uncalculated contributions of two-loop QED diagrams (figure 20) and by the uncertainty of the nuclear charge radius. To date we have a number of high-precision measurements of the HFS in heavy H-like ions (see, e.g. [143] and references therein). The main goal of these experiments was to test QED in a unique combination of strong electric and magnetic fields.
But, because of a large theoretical uncertainty due to the nuclear magnetization distribution correction (the so-called Bohr-Weisskopf effect), it turned out that the QED tests are possible only via studying a specific difference of the HFS values of H- and Li-like ions [144]. The recent measurements of this difference in Bi [143] revealed a large discrepancy between experiment and theory. This discrepancy was explained in [145] by an incorrect value of the nuclear magnetic moment, which was widely used in the literature. New calculations of the magnetic shielding factor and new measurements of the nuclear magnetic moment in 209Bi(NO3)3 and 209BiF6- [145] led to good agreement between theory and experiment. However, more precise measurements of the nuclear magnetic moments and the HFS values are needed to provide stringent QED tests. High-precision measurements of the g factor of H- and Li-like low- and middle-Z ions (see, e.g. [146] and references therein) have provided stringent tests of the QED effects in the presence of a magnetic field. Combined with the related theoretical predictions, these experiments have also provided the most precise determination of the electron mass [146]. Recently [147], the isotope shift of the g factor of Li-like calcium ions was measured. This experiment allowed the first test of the relativistic theory of the nuclear recoil effect on the g factor of highly charged Li-like ions. One of the most interesting and intriguing issues of modern fundamental physics is related to tests of QED at supercritical fields. According to QED theory, a static and spatially uniform electric field should create electron-positron pairs, provided its strength is close to the Schwinger limit, E = 1.3×10^16 V/cm. One might expect that the desired field can be achieved using strong laser fields. Recent developments in laser technologies have triggered great interest in calculations of pair production in strong fields, especially for the case of colliding laser pulses. However, the maximal field strength which can be achieved by modern lasers is 3-4 orders of magnitude smaller than the Schwinger limit. Another access to QED at supercritical fields can be gained in the Coulomb field created by an extended nucleus with a nuclear charge number exceeding the critical value, Z_c = 173. At this critical value, the 1s level should 'dive' into the negative-energy Dirac continuum. If the 1s level were empty, its diving would result in the spontaneous creation of two positrons. Since there are no nuclei with such high Z, the only way to access the supercritical regime is to study low-energy collisions of heavy ions with a total nuclear charge larger than the critical value. The corresponding experiments were performed many years ago at GSI (Darmstadt). However, for a number of reasons [148], these experiments could not prove or disprove the spontaneous pair creation. Plans to return to investigations of this phenomenon at the FAIR (Germany), HIAF (China), and NICA (Russia) facilities have triggered new theoretical studies of relativistic quantum dynamics in low-energy heavy-ion collisions.

Current and future challenges. On the experimental side, a central goal is gaining the 1 eV accuracy in the 1s Lamb shift in H-like uranium. The Lamb shift in heavy H-like ions should be considered as the main reference point for QED tests at strong fields. This is due to the simplicity of H-like ions compared to few-electron ions as well as the simplicity of the Lamb shift compared to other QED effects.
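Both critical scales quoted above can be reproduced with elementary formulas. The sketch below (Python) evaluates the point-nucleus Dirac ground-state energy, E_1s = mc²·sqrt(1 - (αZ)²), which ceases to exist already at αZ = 1 (Z ≈ 137); the extended nuclear charge distribution, not modeled here, is what shifts the diving point to Z_c ≈ 173. The Schwinger field follows from a potential drop of mc² over one reduced Compton wavelength:

```python
import numpy as np

alpha = 1 / 137.035999
mec2  = 0.51099895e6        # eV, electron rest energy

def dirac_1s(Z):
    """Point-nucleus Dirac 1s energy relative to mc^2, in keV.
    Returns None once alpha*Z >= 1, where the point-nucleus solution breaks down."""
    x = (alpha * Z) ** 2
    if x >= 1:
        return None
    return mec2 * (np.sqrt(1 - x) - 1) / 1e3

for Z in (1, 82, 92, 137, 138):
    E = dirac_1s(Z)
    print(f"Z = {Z:3d}: " + ("no bound 1s (point nucleus)" if E is None
                             else f"{E:9.2f} keV"))

# Schwinger limit: mc^2 per reduced Compton wavelength (3.8616e-11 cm)
print(f"E_Schwinger ~ {mec2 / 3.8616e-11:.2e} V/cm")   # ~1.3e16 V/cm
```

The Z = 92 value of about -132 keV is the point-nucleus reference energy from which the measured 460 eV Lamb shift is counted, which puts the targeted 1 eV accuracy in perspective.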
Such Lamb-shift tests are needed to validate the non-perturbative QED methods, which can then be applied to calculations of other important properties of HCIs. From the theoretical side, to improve the accuracy of the ground-state Lamb shift, the last two diagrams in figure 20 have to be calculated beyond the free-electron-loop approximation. Also, rigorous evaluations of the second-order two- and three-electron QED contributions in highly charged Be-like ions are urgently needed to meet the experimental accuracy achieved in recent measurements of the transition energies. Accurate measurements of the HFS transition energies in H- and Li-like ions must be accompanied by independent high-precision determinations of the nuclear magnetic moments. From the theoretical side, the corresponding calculations for B-like ions including all second-order two- and more-electron contributions are required. Together with the corresponding calculations for H- and Li-like ions, these calculations can also be used for tests of QED in a combination of the strongest electric and magnetic fields. High-precision measurements of the g factors of heavy H-, Li-, and B-like ions are anticipated in the near future at the Max-Planck-Institut für Kernphysik (MPIK) in Heidelberg and at the HITRAP/FAIR facilities in Darmstadt. Combined with the related theoretical predictions, these measurements should provide stringent tests of QED at strong fields. It was recently shown that the study of the g factors of H- and Li-like lead ions can provide a test of the QED nuclear recoil effect at the few-percent level. This would give the first test of QED in the strong-coupling regime beyond the Furry picture. Measurements of the g factor of ions with non-zero nuclear spin will result in the most precise determinations of the nuclear magnetic moments. Also, an independent determination of the fine structure constant from the g-factor experiments with heavy H- and B-like ions is feasible, provided the corresponding theoretical calculations are performed to the required accuracy. The study of QED at supercritical fields created in low-energy heavy-ion collisions demands the development of theoretical methods that allow one to investigate in full detail such processes as pair production, electron excitation and ionization, charge transfer, and x-ray emission. Special attention should be paid to calculations focused on finding signatures of the 'diving' scenario, which inevitably leads to spontaneous pair production.

Advances in science and technology to meet challenges. The progress in high-precision QED calculations for HCIs has always been stimulated by the related progress in experiment. There is no doubt that any further substantial progress in Lamb-shift experiments with heavy ions, one of the main topics of many current conferences and workshops, will motivate theorists to advance the calculations of two-loop QED contributions in ions with one and more electrons. Precise measurements of the g factors of heavy few-electron ions at MPIK in Heidelberg and at GSI/FAIR in Darmstadt will stimulate theoretical calculations of the corresponding higher-order QED and nuclear effects. The recent advances in calculations of pair-creation probabilities in low-energy heavy-ion collisions beyond the monopole approximation give hope for further development of theoretical two-center methods, which are needed to study in detail the quantum dynamics of electrons in strong and supercritical fields.

Concluding remarks.
High-precision measurements with HCIs, combined with the corresponding theoretical calculations, provide stringent tests of non-perturbative QED methods. Together, theory and experiment can also be used for the most precise determinations of the electron mass, nuclear magnetic moments, nuclear radii, etc. They also have the potential for an independent determination of the fine structure constant. The study of the quantum dynamics of electrons in low-energy heavy-ion collisions can provide a unique possibility for tests of QED in the supercritical regime.

Interactions of HCIs with clusters
Stockholm University, Sweden

Status. The title 'Interactions of HCIs with clusters' makes most ICPEAC participants think about situations where atomic ions, stripped of many of their electrons, collide with some neutral aggregation of matter: a cluster. Typically, ICPEAC clusters have consisted of metal atoms [149], noble-gas atoms [150], or small molecules [151]. Carbon clusters, and in particular C60, have also often been used as targets. A recent trend is to study interactions with clusters of larger molecules, where the bonds between the individual molecules are much weaker than within the molecules themselves. Examples are loosely bound clusters of biomolecules [152], clusters of C60 [153] or of PAHs, polycyclic aromatic hydrocarbon molecules [154]. The interest in the latter two targets is partly motivated by astrophysical observations of characteristic emission in the micrometer wavelength region, where C60, PAHs and other aromatic molecules are known, or expected, to play important roles. Here we will mainly focus on collisions between slow, highly charged positive ions and neutral clusters of molecules. In this context, a collision is slow when the velocity of the ion is lower than the typical velocities of the outermost target electrons. As indicated in figure 21, one or several electrons may be transferred from the cluster to the ion already at large distances. The higher the charge state q, the larger the ion-cluster distance at which electron transfer processes may occur, and the larger the cluster ionization cross section. The essence of this behavior is well described by over-the-barrier models, in which one considers the potential-energy barrier for an electron moving in the electric field created by the ion, the active electron itself, and the ionized cluster (see [155] and references therein). These considerations become particularly simple for surfaces and for spherical or close-to-spherical targets, and it has been shown that C60 can be modeled as a classical metal sphere [155]. The same model also explains the high charge mobility within clusters of fullerenes and the ultrafast (sub-femtosecond) timescales for such processes [155,156]. At first glance, one would expect distant electron-transfer processes to lead to very little heating of the cluster, but with HCIs strong heating of individual molecules emitted from clusters has been observed [154]. This is most likely due to heating during cluster Coulomb-explosion processes [154]. In penetrating collisions, the incoming HCI is first neutralized and may then transfer considerable amounts of energy in electronic and nuclear stopping processes as it passes through the cluster. In particular, the projectile may knock out single atoms in prompt Rutherford-like atom-atom scattering processes (sub-femtosecond timescale). This gives highly reactive molecular fragments, which are likely to bind to other molecules or fragments in the cluster breakup phase (see [157] and references therein).
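The q-scaling of the capture distance described above can be illustrated with the simplest classical over-the-barrier estimate for an atomic-like target, R_c ≈ (2√q + 1)/I in atomic units. This is only a sketch: the classical-metal-sphere refinement for C60 discussed in [155] is not included, and the C60 ionization energy of 7.6 eV is an assumed input rather than a value from the text:

```python
import numpy as np

a0 = 0.529177e-8          # cm, Bohr radius
Eh = 27.2114              # eV, Hartree energy

def cob_radius(q, I_eV):
    """Critical distance (a.u.) for the first electron transfer in the
    simplest classical over-the-barrier model for an atomic-like target."""
    return (2 * np.sqrt(q) + 1) / (I_eV / Eh)

for q in (5, 10, 20, 40):
    R = cob_radius(q, 7.6)             # I(C60) ~ 7.6 eV, assumed
    sigma = np.pi * (R * a0) ** 2      # geometric capture cross section
    print(f"q = {q:2d}: R_c ~ {R:5.1f} a.u., sigma ~ {sigma:.1e} cm^2")
```

Already at q = 20 this gives capture radii of roughly 35 a.u. and geometric cross sections above 10^-13 cm^2, far exceeding the physical size of the molecule, in line with the qualitative picture of distant electron transfer sketched above.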
The fast knockout processes discussed above are non-statistical in nature, as the knocked-out atom is removed before local excitations (at the knockout site) have time to distribute over the whole molecule. Here, a key point for the survival of molecular-growth products is that the excess energy is shared among many of the molecules in the cluster. This scenario is supported by close agreement between experimental results and classical molecular dynamics (MD) simulations assuming neutral projectiles. In these simulations, the trajectories of all individual atoms in the system are followed for a few picoseconds, and the agreement with the experimental results then indicates that growth products also survive on typical experimental time scales of microseconds (see [157] and references therein). Given the interest from the astrophysics community in, for example, carbon-based molecules and their formation and destruction in interstellar space, planetary atmospheres and supernova shock waves, and for understanding fundamental molecular growth processes as such, it is of great interest to study the long-term stabilities of such fragments and to study how they react with neutral, charged, and multiply charged molecules or clusters. Related to this, it will be most interesting to investigate the role of the incident ion charge in situations like the one depicted in figure 21.

Figure 21. Schematic of distant and penetrating (close) collisions between a highly charged ion and a cluster of C60 fullerenes. Distant collisions lead mainly to charge transfer with little heating of the target, but in penetrating collisions much more damage can be inflicted through nuclear and electronic stopping processes.

Current and future challenges. So far, most ion-cluster collision studies have been performed with keV atomic ions colliding with thermal neutral cluster targets, where the distributions of cluster sizes have always been rather wide. One challenge is thus to perform experiments with size-selected clusters. Such experiments will make it possible to control internal cluster temperatures and to study how size-selected clusters interact with atoms, molecules, or other clusters in completely new velocity domains using merged-beams techniques. A further challenge on the experimental side concerns spectroscopic studies of molecular-growth products resulting from close interactions between clusters of molecules and ions in both high and low charge states. On the theoretical side, long-term goals are to simulate knockout-driven intra-cluster reactions in a wide range of systems using quantum mechanics instead of classical force fields, and to include the effects of electronic excitations and the incident ion charge state in the simulations. These are extremely challenging tasks, but some steps in this direction have already been taken [157]. Studies of ions interacting with aggregates of carbon-based matter relate to one of the main issues in fullerene research: how are the classical fullerenes predominantly formed, and why is it that C60 and C70 become so dominant in so many different types of experimental situations and in nature? In figure 22, which is adapted from [158], parts of mass spectra recorded for collisions between 400 keV Xe20+ and small (blue spectrum) and large (red spectrum) neutral clusters of k C60 molecules, [C60]k, are shown. In both cases there are broad size distributions of neutral clusters before the collision, but with a larger average target cluster size in the experiment yielding the red spectrum.
The much wider distribution of reaction products in the red spectrum is particularly striking. There are also enhanced intensities for so-called magic fullerenes like, for example, C84+ and C70+. These results are consistent with a picture in which large, hot giant fullerenes form in a confined hot carbon plasma along the projectile-ion trajectories through the cluster. The C60 molecules that are not directly struck by the projectile mostly remain intact initially and are then either evaporated from the heated system or react with, e.g., carbon atoms or molecules from the plasma. Hot giant fullerenes may then decay by fragmentation and photon emission, and some of them will shrink towards smaller sizes including, for example, C70+ and C60+. Such magic fullerenes have somewhat higher stabilities than their neighbors. It has been suggested, however, that the large abundances of C60 in many different situations are rather due to exceptionally fast radiative cooling processes. Products like [C60]3+ in figure 22 are due to multiple ionization of much larger [C60]k clusters followed by the emission of many intact C60 molecules. A current challenge is to investigate molecular growth starting from much smaller carbon-based molecular building blocks and also to study the conditions for C60 formation as functions of cluster size, cluster charge, mass and velocity; a future challenge is to study how internally cold cluster ions in different charge states interact with neutrals and with other ions at subthermal and higher collision energies.

Advances in science and technology to meet challenges. It is of great interest to improve, e.g., the experiment behind the spectra in figure 22. As has already been indicated above, this can be done by colliding size-selected [C60]k clusters and clusters of other carbon-bearing molecules with atomic targets. This will yield cluster-size-specific distributions of reaction products and, in addition, possibilities to register several reaction products in coincidence for each cluster size. Furthermore, one could prepare beams of ultracold [C60]k+ clusters inside helium nanodroplets (HNDs), which would lead to a much better definition of the cluster structure than in earlier experiments. By placing a gas target on the injection lines of cryogenic electrostatic ion-storage devices, it will become possible to study the stabilities and photo-absorption properties of fragments and molecular-growth products resulting from single collisions in the target gas. In some ion-storage devices, it may also become possible to study interactions between internally cold complex (cluster) ions of opposite charge states using merged-beams techniques. Interactions between highly charged atomic ions and internally cold clusters of molecules can be studied in single-pass merged-beams experiments.

Concluding remarks. We foresee highly interesting developments where molecular growth processes, and in particular the C60 formation mechanism, are investigated with control of cluster size, charge, internal energy, structure, and velocity for different masses of the target. In these studies, the charge will be carried into the collision by the cluster, and it will be crucial to be able to handle the system charge in simulations of the reaction processes. Stabilities and optical properties of molecular-growth products will be studied with cryogenic electrostatic ion-storage devices. In the near future, these devices will allow studies of pairwise interactions between molecular clusters using merged-beams techniques.
Acknowledgments. This work was supported by the Swedish Research Council through Grants No. 2015-04990 and 2016-0418. Michael Gatchell, Stockholm University and Innsbruck University, is gratefully acknowledged for his input and for discussions.

Figure 22. Mass spectra from collisions between 400 keV Xe20+ ions and neutral [C60]k clusters. The red data is for experimental conditions that give, on average, larger cluster sizes k than the blue data. Reprinted from [158], with the permission of AIP Publishing.

Interaction of HCIs with surfaces and 2D materials
TU Wien, Austria

Status. Nowadays, the modification of solids with ion beams represents the backbone of semiconductor manufacturing with respect to tuning electrical properties by implantation and shaping surface structures by ion lithography. However, the ever-smaller layer thicknesses and lateral dimensions in modern electronics demand new approaches [159]. Reducing the ion's kinetic energy to confine the energy deposition to only a few atomic layers seems to be a straightforward solution, but serious challenges arise from the broad energy spread of the ion sources and from beam focusing at these low energies. Slow HCIs carry an additional potential energy, which can be even larger than the kinetic energy [160]. Upon neutralization, HCI deposit a large amount of this potential energy in a shallow region at the material's surface and thus provide an ideal tool to tailor the properties of surfaces and 2D materials without the need to go to ultralow kinetic energies [161]. It was shown on surfaces of wide-band-gap insulators that HCI can induce extremely large sputtering yields of a few thousand atoms per ion, or even induce local phase transitions on the nanoscale, depending on the material properties [162]. For insulators, the modification mechanisms are well described by the formation of an inelastic thermal spike, surface melting, and resolidification; or by electronically stimulated desorption (desorption induced by electronic transitions, DIET). In general, HCI neutralization leads to heating of the electronic subsystem of the material. A coupling between electrons and phonons heats the lattice on a later time scale. This two-temperature model (TTM) provides an extremely powerful description of the initial energy-deposition processes, while the details of the electrical and thermal properties of the material play a decisive role for the outcome of the ion impact. This is also the reason why metal surfaces are typically not prone to nano-structuring by HCI: the energy deposited in the electronic subsystem dissipates in the material before electron-phonon coupling can effectively set in. When moving towards 2D materials supported by arbitrary substrates, new phenomena already emerge, as the non-perfectly matched electronic and lattice subsystems of the substrate decouple the 2D layer [163]. Quantum confinement effects for the excited electrons prevent energy dissipation into the third dimension. This points towards an even more efficient potential-energy-driven process leading to modifications similar to those for bulk insulators, and the few experiments conducted so far support this assumption [164,165]. In the purely freestanding case of 2D materials, where a substrate is entirely absent, one may expect that even metal-like materials become prone to HCI modifications due to quantum confinement effects. However, this has not yet been observed experimentally [166].
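The delayed lattice heating underlying this argument can be illustrated with a zero-dimensional version of the TTM, i.e. two coupled heat baths without any spatial transport. This is only a sketch: all parameter values below are order-of-magnitude placeholders, not fitted to any material discussed in the text.

```python
# 0D two-temperature model: electron bath heated by HCI neutralization,
# lattice heated via electron-phonon coupling g; no heat diffusion.
gamma_e = 70.0       # J m^-3 K^-2, electronic heat capacity C_e = gamma_e * T_e
C_l     = 2.5e6      # J m^-3 K^-1, lattice heat capacity
g       = 2e16       # W m^-3 K^-1, electron-phonon coupling constant

T_e, T_l = 2.0e4, 300.0              # K, hot electrons just after impact
dt, t, t_end = 1e-15, 0.0, 1e-10     # s, forward-Euler integration
while t < t_end:
    flow = g * (T_e - T_l)           # W m^-3, energy flow electrons -> lattice
    T_e -= flow / (gamma_e * T_e) * dt
    T_l += flow / C_l * dt
    t += dt
print(f"after {t*1e12:.0f} ps: T_e ~ {T_e:.0f} K, T_l ~ {T_l:.0f} K")
```

With these numbers, the electron bath cools on a timescale of tens of picoseconds while the lattice is driven to several thousand kelvin. Whether the electronic energy instead escapes laterally before this equilibration (as in bulk metals), or is confined (as in insulators and decoupled 2D layers), decides whether a permanent modification is left behind.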
In their freestanding state, 2D materials also constitute a perfect model system to study the material in an (ion-induced) highly non-equilibrium state in unprecedented detail [167]. While the TTM allows for a powerful description of the processes in a thermodynamic sense only, a detailed and quantifiable atomistic model is still missing due to the complexity of the neutralization processes and the subsequent surface electron and atom dynamics.

Current and future challenges. One of the greatest challenges in HCI irradiation and spectroscopy with 2D materials currently lies in the fact that HCI beams are only available as broad beams. Additionally, the presence of surface contaminations becomes increasingly important. Because 2D materials are surface-only materials, adsorbed hydrocarbons and residuals, either from wet-chemical transfer or from the growth substrate despite chemical etching, all affect the available measurement techniques, i.e. scanning-probe and electron microscopy, and spectroscopy of the scattered/transmitted ions, sputtered target recoils, as well as emitted electrons and photons (see figure 23).

Figure 23. ... show different susceptibility to HCI-driven modification, whereas further research for 2D materials is needed. In the future, more complex integrated multi-coincidence spectroscopy and in situ microscopy, in combination with refined theoretical descriptions, will reveal details of the interaction mechanisms.

Another important problem and research issue are defects in 2D materials. Those have a great influence on the material's structural, optical, and electronic properties, and sometimes even dominate them. The presence of intrinsic defects, i.e. single and multi-atomic vacancies, small and extended rotational defects, grain boundaries, foreign atoms, creases, wrinkles, etc, needs to be controlled during all steps of sample preparation, i.e. material growth and transfer onto a substrate, onto another 2D material and/or onto a supportive grid in the case of a suspended sheet. This is why homogeneous, defect-free 2D materials currently cannot easily be produced on the mm scale. Some 2D materials are not even stable in air, e.g. black phosphorus or silicene, and thus require a complicated in-vacuum transfer to an HCI facility and to an analysis device. On top of this, some 2D materials are highly susceptible to damage by electronic excitations, which is beneficial for HCI-induced modification but hampers atomically resolved electron-based microscopy. In principle, suspended 2D materials allow direct spectroscopy of the ion, still in a non-equilibrium state after transmission through the material, together with the spectroscopy of emitted secondary particles, i.e. sputtered atoms, electrons, and photons. Spectroscopy of (quasi-)particles from the interaction process with a suspended 2D material can give, for the first time, experimental access to the dynamics of ion-induced electronic processes on the femtosecond time scale. However, the current challenge lies in the measurement of all particle energies and momenta, ideally in coincidence, plus the determination of the energy retained in the material. To probe even longer time scales, i.e. the atomic dynamics in the layer after the ion transmission, a truly time-resolved ion-scattering spectroscopy would be needed. Apart from the experimental difficulties, theory also faces serious challenges.
The models for ion-solid interaction nowadays adequately describe equilibrium processes, whereas the impact of an HCI on a surface represents a highly perturbative non-equilibrium process. In this regime, little is known about the dynamics of the ion neutralization as well as the kinetic energy loss. However, to model and predict the energy deposition of HCI in various materials, challenges in simulating multi-electron processes must be overcome.

Advances in science and technology to meet challenges. To meet the challenges, the integration of HCI facilities into an ultra-high-vacuum (UHV) set-up will be necessary. Currently, some labs are pursuing this path. Further, the combination of HCI beams with electron-based microscopes would allow true in situ measurements minimizing the influence of surface adsorbates, even though volatile hydrocarbon molecules are always abundant, even in UHV. For small-area 2D materials or crystalline grains, the development of focused HCI devices with beam spot sizes well below 1 μm will be advantageous. Some first efforts have been undertaken, but further development is much needed in this field. To further complement spectroscopic measurements with data from emitted secondary particles, a reaction-microscope setup suitable to hold a suspended 2D sample will be necessary. To measure the energy retained in the material, a bolometer-type temperature measurement at liquid-He temperature could be performed, thus making the full kinematics of energy deposition and emission by the projectile and secondary particles accessible. To disentangle processes during the ion impact on the material, such as its neutralization and primary energy deposition, from processes on much longer time scales, such as atom sputtering and thermalization, a truly time-resolved methodology is required. Whether this will rely on absolute time resolution or on a stroboscopic technique is currently under debate. For such highly perturbative interaction phenomena, new theoretical methods for their description must be developed. The most promising methodologies are based on time-dependent density functional theory (TD-DFT) and hybrid models, i.e. MD simulations for large simulation cells coupled to an electronic temperature bath. While MD can capture atomic motion in large cells for an extended period of time (several nanoseconds), TD-DFT can give ab initio information on the energy deposition into the electronic system through charge-state-dependent ion stopping and neutralization. Coupling TD-DFT and MD goes beyond the Born-Oppenheimer approximation and therefore represents a powerful tool for situations where the electronic and atomic subsystems are strongly coupled on similar time scales [168].

Concluding remarks. Over recent years, HCI-surface interaction studies have moved from 3D to 2D and have thus increased drastically in complexity with respect to sample manufacturing, handling, analysis, and ion-spectroscopic methodologies (see figure 23). The combination of an increasing number of all-UHV and in situ methods, 2D-material complexity, as well as multi-coincidence techniques will continue in the future. While simulations will soon be able to describe the HCI-surface interaction from the sub-fs to the ns time regime, the next disruptive step in experiments will be the development of truly time-resolved ion-scattering techniques.

Acknowledgments. MS acknowledges funding by the DFG SFB1242 'Non-Equilibrium Dynamics of Condensed Matter in the Time Domain', project C5.
R W acknowledges support from the DFG through project WI 4691/1-1 (no. 322051344). We want to express our gratitude to A Reichert for contributing to the visualization.

Resonant coherent excitation of HCIs in crystals
RIKEN, Japan

Status. Ever since energetic heavy ions have been available, their interaction with solids has been an important topic to be explored. Energy transfer and damage to the solid are especially critical to the material and biological sciences. HCIs in solids experience successive collisions with the constituent atoms, leading to the formation of tracks in the solid. This is a far cry from the single-collision conditions of ion-atom collisions in the gas phase. Nowadays, a large amount of information has been compiled on ion-induced effects in solids, including electronic excitation and elastic collisions. The traveling ions' behavior, such as ionization, EC, and excitation, has also been investigated in detail. Nevertheless, owing to the difficulty of observing the ion state in the solid in situ, many issues still remain unresolved. For crystal targets, a unique property known as channeling is available. Guided by the crystal potential, traveling ions have a chance to pass through the open space of the crystal without hard collisions with atoms. This phenomenon has been widely applied as a powerful tool for diagnostics of crystal quality. From the viewpoint of atomic collision physics, one of the most exciting phenomena in crystal targets is the Okorokov effect, often also called resonant coherent excitation (RCE). Ions passing through a periodic lattice (ordered rows or planes) of a crystal feel a time-dependent perturbation of the crystal potential. When one of the frequencies of the perturbation corresponds to the difference of internal energy levels of the ions, transitions may take place. The theoretical prediction of this effect can be traced back more than 50 years to an idea proposed by Okorokov [169]. A concrete experimental observation was reported by Datz et al [170]. They observed this phenomenon by measuring the change of the charge state of H-like or He-like HCI (Z = 5-9) in the MeV/u energy range channeling through a thin Au or Ag crystal at the resonance condition of the 1s-2p transition of the HCIs. The resonantly excited electron in the 2p state is more easily ionized by collisions with atoms in the crystal, leading to an enhancement of ionization under the resonance condition. The high ion energies used and the thinness of the crystals were two major reasons for the successful observation; otherwise, charge exchange after the resonance obscures the effect. In this experiment under the axial channeling condition, the ion energy was scanned to match the resonance condition, which was an experimentally difficult procedure for accelerator operation. They then proposed a new idea to satisfy the resonance condition, what we now call 2D-RCE [171]. The excitation was induced by the two-dimensional periodic array of crystal strings under planar channeling conditions, where the resonance condition was achieved just by tilting the crystal about a rotation axis perpendicular to the crystal plane, enabling a precise scan of the resonance profile. Since then, the experiments have made progress by adopting higher-energy HCIs, as shown in figure 24(a). In 1998, using HCIs with much higher energy, i.e. 390 MeV/u, supplied by the HIMAC synchrotron in Japan, this type of experiment was performed with H-like Ar17+ combined with a thin Si crystal [172].
The oscillation frequencies of the periodic crystal fields in the projectile's frame are equivalent to those of about 3 keV photons. The usual atomic collision processes, especially EC, are indeed suppressed, and highly resolved resonance profiles with rich structure have been observed. A series of successive experiments revealed clear DC Stark effects on the excited levels, due to strong static crystal electric fields of the order of GV/cm, and the corresponding evolution of the wavefunction depending on the position of the ions in the crystal. At this stage, it was recognized that not only in the channeling condition but also in the non-channeling condition, such highly energetic ions can penetrate the crystal without suffering severe collisions. Taking full advantage of this fact, a great leap in the research occurred. As shown in figure 24(b), in the non-channeling condition, ions have a chance to be excited by the three-dimensional periodic array of crystal planes, which we call 3D-RCE [173]. This phenomenon was again confirmed with H-like Ar17+ ions at an energy of 391 MeV/u passing through a 1 μm thick silicon crystal, and the resonance profiles observed via the charge-state distribution or the x-ray emission yield are much narrower than those of planar channeling ions (2D-RCE), due to the absence of the large DC Stark shift caused by the planar potential. The HCIs traveling through a crystal target simultaneously feel a variety of periodic crystal fields, including higher-harmonic components. The direction of research then naturally headed for double resonances using these different oscillating fields. Double resonance is a key technique in many fields like quantum optics or pump-probe spectroscopy in chemistry, and it has now become reality in the x-ray energy region. Under the same configuration of the ion trajectory and the crystal direction, the double-resonance condition of 3D-RCE can be satisfied by scanning two parameters, the rotation angles of the crystal, which is in a sense analogous to two-color, energy-tunable x-ray laser excitation. One example of successful double-resonance experiments is the selective production of a doubly excited state in He-like Ar16+ ions [174]. The Ar16+ ions were resonantly excited sequentially from the ground state to the 1s2p state and then to the 2p^2 state by the (X-X) double resonance, which was confirmed by Auger electron emission. The other example is a demonstration of the Autler-Townes doublet as a novel type of coherent interaction of atoms, not with photons but with a periodic crystal field [175]. It was observed by the (X-VUV) double resonance. The states strongly coupled in the VUV region were probed by the excitation in the x-ray region. The characteristic spectra are well interpreted in analogy with the dressed-atom concept often adopted for atom-photon interactions.

Current and future challenges. Figure 25 schematically shows that, as the energy of the HCI beam increases, the resonance peaks become sharper due to the lack of collisions of the HCI with atoms in the crystal, which are the usual origin of decoherence effects. The typical resonance width, ΔE/E, for RCE of few-hundred-MeV/u HCIs is of the order of 10^-3. This leads to the new idea that RCE may be a tool for high-precision spectroscopy of the Lamb shift in the energy levels of HCI, to test bound-state quantum electrodynamics (BS-QED).
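The resonance condition behind these experiments can be checked on the back of an envelope: in the simplest one-dimensional picture, the k-th harmonic of the lattice perturbation seen by the ion has energy E_k = k·γ·h·v/d, where d is the period of the atomic strings. The sketch below (Python) uses an assumed spacing of d = 3.84 Å for a Si string and a nonrelativistic Rydberg estimate for the Ar17+ 1s-2p energy; the fine tuning via the crystal tilt angles exploited in 2D/3D-RCE is not modeled here.

```python
import numpy as np

amu = 931.494e6            # eV, atomic mass unit
hc  = 12398.4              # eV * Angstrom

E_per_u = 390e6            # eV/u, beam energy used at HIMAC
gamma = 1 + E_per_u / amu
beta  = np.sqrt(1 - 1 / gamma**2)

Z = 18                           # H-like Ar
E_1s2p = 0.75 * 13.606 * Z**2    # eV, Rydberg estimate, ~3.3 keV

d = 3.84                   # Angstrom, assumed atomic spacing along a Si string
for k in (1, 2):
    E_k = k * gamma * beta * hc / d   # eV, k-th harmonic in the ion frame
    print(f"k = {k}: {E_k/1e3:.2f} keV (target: {E_1s2p/1e3:.2f} keV)")
```

With these assumptions, the k = 1 harmonic already lands within a few per cent of the transition energy; in practice, the tilt of the crystal supplies the remaining tuning across the resonance.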
To explore the feasibility of such Lamb-shift spectroscopy, experiments using uranium (Z = 92) ions were started at the accelerator and storage-ring facility of the GSI Helmholtz Centre for Heavy Ion Research in Darmstadt, Germany, using the heavy-ion synchrotron (SIS) [176]. The 2s-2p3/2 transition of 4.5 keV in 192 MeV/u Li-like U89+ ions was excited, and the resonance profiles were measured via the de-excitation x-ray yield. Clear resonances with a width of 4.4 eV were demonstrated. However, it was also realized that the resonance width was affected by the energy distribution of the provided ion beam. Narrowing of the ion velocity distribution was then achieved by introducing the electron-cooling technique. He-like U90+ ions extracted from SIS were injected into the ESR ion storage ring, where the stored ions were merged with a mono-energetic electron beam for electron cooling. The resulting Li-like U89+ ions produced via EC were extracted and guided to the crystal target. As a result, the resonance width was narrowed down to 1.4 eV with a Lorentzian distribution. The next challenge in the near future is to determine the absolute energy within 1 ppm. Recent HFS measurements of Bi ions stored in the ESR ring were successful in determining the absolute ion energy by precisely measuring the electrode voltage of the electron cooler [177]. This technique will be adopted in the coming experiments. A new synchrotron, SIS100, is under construction in the ongoing FAIR project, as introduced in section 19 of this Roadmap. This will provide 1 GeV/u uranium beams, bringing within reach the determination of the contribution of the 1s Lamb shift (~460 eV) in the 1s-2p3/2 transition of 140 keV using the RCE technique. The HESR is also planned for storing cooled beams at energies of up to a few GeV/u, and this may open a chance to perform new RCE experiments. This technique can in principle be applied to any specific ion or transition. In particular, the double-resonance technique will be useful for preparing metastable states. Alignment of HCIs, taking advantage of the linearly polarized nature of the perturbation of the crystal potential, will be applied to high-Z ions, where relativistic effects like the Breit interaction play a significant role in the polarization of the emitted x-rays. Nuclear excitation of low-lying levels of nuclei, including nuclear quadrupole resonance, is also within scope. Apart from RCE with high-energy ions, an alternative approach to suppress decoherence is to use the surface channeling condition, where ions at glancing incidence to the crystal surface experience mirror reflection above the surface, avoiding collision processes with surface atoms. This idea was indeed verified by two groups [178], and extensive application of this technique is expected.

Advances in science and technology to meet challenges. As already described, critical new techniques and devices, like higher-energy ion accelerators, electron-cooling techniques, and extremely thin crystals of good quality, were introduced whenever important progress in dynamics and spectroscopy was made. This trend will not change in the future. Finally, it is also noted that, as a new approach, magnetic resonance of traveling neutral alkali atoms was achieved in the microwave frequency region using artificial periodic structures [179]. This technique also has the potential for a variety of future applications.

Concluding remarks.
Understanding of the interaction of HCI with crystals has deepened dramatically over the last decades, far beyond a diagnostic tool for crystal quality or damage. RCE, utilizing a periodic crystal field, has offered a new paradigm in atomic collisions involving other research fields. Its significance will be strengthened in the future as an alternative approach to lasers for exploring the spectroscopy and dynamics of HCI.

Acknowledgments. I would like to acknowledge valuable collaborations with Y Nakano, Y Nakai, A Hatakeyama, K Komaki, Y Yamazaki, A Bräuning-Demian, T Stöhlker, and many coworkers.

17. COMs and irradiation effects in solid phase for astrophysics: radiolysis and radio-resistance of nucleobases
Philippe Boduch, CIMAP-GANIL/University of Caen-Normandie UNICAEN, France

Status. Several theories claim that complex organic molecules (COMs) may have reached the early Earth via comets and meteorites [180]. Indeed, during the recent ROSETTA space mission, the simplest amino acid, glycine, was detected in the coma of comet 67P/Churyumov-Gerasimenko [181]. Additionally, analyses of carbonaceous meteorites found on Earth show the presence of COMs (e.g. nucleobases), a strong indication of the existence of such molecules in outer space. Concerning the formation of complex molecules, irradiation of ices containing simple molecules such as H2O, CO and NH3 can lead to the formation of COMs [182,183]. Among COMs, nucleobases are particularly important. They are part of DNA and are essential for the emergence of life, possibly via exogenesis. Even if nucleobases have not yet been observed directly in space, their presence in meteorites on Earth is a strong indication of their existence in space environments. Thus, it is important to study irradiation effects on such molecules in order to determine their radio-resistance.

Current and future challenges. All the nucleobases (adenine, uracil, cytosine, guanine and thymine) have been irradiated with swift heavy ions at the GANIL (Caen, France) and GSI (Darmstadt, Germany) facilities in order to determine their destruction cross sections as a function of the electronic stopping power of the projectiles. The goal of these experiments with high-energy projectiles is to simulate in the laboratory the effect of cosmic rays on COMs embedded in icy mantles on dust grains in the interstellar medium (ISM) or at the surface of comets. The samples have been irradiated at low temperature (around 10 K), and the irradiation-induced modifications are followed in situ by Fourier transform infrared spectroscopy (FTIR). Figure 26 shows the evolution of the cytosine column density inside the sample as a function of the local dose (deposited energy per molecule) for four projectiles (Ca, Ni, Xe and U). The fact that all these curves fall together strongly suggests that the local dose is a key parameter and that the evolution of the cytosine sample depends mainly on the deposited energy. The same result was obtained for all of the nucleobases (for adenine, see [184]). From the evolution of the column density as a function of the fluence, it is possible to determine the destruction cross section of the nucleobases for each projectile, as sketched below. We distinguish two kinds of nucleobases: (i) purine nucleobases (guanine and adenine), formed of two heterocyclic rings, and (ii) pyrimidine nucleobases (cytosine, uracil and thymine), which are formed by just one heterocyclic ring.
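In the usual single-exponential picture, the column density decays as N(F) = N0·exp(-σ_d·F) with fluence F, so the destruction cross section σ_d is the (negative) slope of ln N versus F; an astrophysical half-life then follows as t_1/2 = ln2/(σ_d·φ) for an effective cosmic-ray flux φ. A minimal Python sketch with made-up data points and an assumed illustrative flux, not the measured values behind figures 26 and 27:

```python
import numpy as np

# Hypothetical FTIR data: fluence (ions/cm^2) vs column density (molecules/cm^2)
F = np.array([0.0, 1e11, 3e11, 1e12, 3e12])
N = np.array([1.00, 0.97, 0.91, 0.74, 0.41]) * 1e17

# Linear fit of ln N vs F gives the destruction cross section
slope, lnN0 = np.polyfit(F, np.log(N), 1)
sigma_d = -slope
print(f"destruction cross section ~ {sigma_d:.1e} cm^2")   # ~3e-13 cm^2

# Half-life against an assumed effective cosmic-ray flux (illustrative only)
phi = 1e-2                                   # cm^-2 s^-1
t_half = np.log(2) / (sigma_d * phi) / 3.156e7
print(f"half-life ~ {t_half:.1e} yr")        # ~7e6 yr
```

With a cross section of a few 10^-13 cm^2, this reproduces the ~10^7 yr order of magnitude quoted below for adenine.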
The destruction cross sections obtained in this way are of the same order of magnitude (between 1 and 5 × 10^-13 cm^2 for the experiments performed at GSI with 190 MeV calcium projectiles). Nevertheless, the destruction cross sections of purine nucleobases are smaller than those of pyrimidine nucleobases. Guanine is the most radio-resistant nucleobase. This is in agreement with the fact that guanine is also the most abundant nucleobase detected in carbon-rich meteorites analyzed on Earth [185]. From all the experiments performed with swift heavy ions at GANIL and at GSI, it is possible to determine the evolution of the nucleobase destruction cross section as a function of the electronic stopping power. Figure 27 shows, as an example, the results obtained for adenine with heavy ions at two different facilities (GSI and GANIL) and with two different experimental setups, which are in good agreement. This is also the case for the cross sections obtained with electron beams [186]. The cross sections follow a power law with the electronic stopping power, σ ≈ A·S_e^n, with n typically around 1.2. This law can be used for astrophysics applications in order to determine the lifetime of nucleobases in the ISM. In the case of adenine, the half-life found is around 10 million years [184]. This value is directly comparable to the typical lifetime of dense clouds in the ISM, the clouds which are the birthplaces of stellar systems. These results suggest that this type of molecule has a high survival probability and could be detected in the ISM in the future.

Technology to meet challenges. To simulate the effect of cosmic rays on complex molecules and to perform such 'online' experiments with in situ analysis, it is mandatory to work at very low temperature under ultrahigh-vacuum conditions to ensure a controlled preparation of the targets and a clean monitoring of their evolution under irradiation, in particular for mass spectrometry. In addition, contamination by water may influence cross-section and sputter-yield measurements. Recently, a new ultrahigh-vacuum device has been built at CIMAP-GANIL (Caen, France). The performance of this new device is described in [187]. It is equipped with three spectrometers (FTIR, UV and a quadrupole mass spectrometer) and can be installed on several GANIL beam lines. This setup is open to the scientific community and will help to address new challenges concerning irradiation effects in ices, COMs, and other materials. These challenges are numerous: for example, studying COM irradiation in a more realistic situation. In the ISM, grains are covered with a thin icy mantle mainly composed of H2O molecules. Under these conditions, it is important to study irradiation effects on COMs trapped in a water matrix at low temperature in order to characterize the effect of the matrix. A second challenge consists in increasing the size and the complexity of the COMs. New experiments have been started on nucleosides (nucleobase + sugar). A sample of uridine (uracil + ribose) has recently been irradiated, and the associated irradiation cross section will be compared with those obtained for uracil samples. This kind of study will be extended to more complex molecules in the future, peptides for example. For astrophysics, polycyclic aromatic hydrocarbons (PAHs) are very important complex molecules. They have been detected in space and represent an important source of carbon there (up to 30%).
Since carbon is the key element in the evolution of prebiotic materials [188], it would also be very interesting to study PAH radio-resistance and the associated radiolysis products in water matrices of different concentrations. Concluding remarks. Nucleobases are essential for the emergence of life. The analysis of nucleobase irradiation experiments shows that these molecules are rather radioresistant, and the associated lifetimes indicate a high survival probability in the ISM. Recent technical developments (see e.g. [189]) now allow these studies to be continued under more realistic conditions, with water matrices and with more complex molecules. 18. Cryogenic electrostatic storage rings Henning T Schmidt Stockholm University, Sweden Status. With the introduction of ion-storage rings in atomic and molecular physics research in the late 1980s, the option to study ensembles of ions for extended time periods exceeding the μs range was for the first time combined with the efficient detection of decay/collision products offered by fast beams. While these relatively large (typically 50 m circumference) magnetic-confinement storage rings were mostly operated at MeV energies, some of the most prominent studies involved low-energy collisions between the stored ions and electrons in merged-beams configurations. The electron beams' original purpose was to cool the translational degrees of freedom of the stored ions, but at the same time they constituted an excellent target for electron-ion recombination studies. In molecular recombination, the long-time storage and vibrational relaxation of the molecular ions were combined with the excellent electron-beam quality to provide unprecedented experimental conditions [190]. During the late 1990s, amid the intense activity at the storage rings, many experiments were performed that made use only of the extended storage times and of the ease with which products of interactions or spontaneous processes could be detected from a fast ion beam. It was realized that such experiments, which required neither high (MeV) energies nor electron cooling, could be performed in simpler, less costly devices. At the same time, a growing interest emerged in clusters and complex molecules, calling for a development towards the storage of higher-mass particles. For these new demands, a purely electrostatic ion-storage ring in an ultra-high vacuum environment was an ideal solution, and this led to the development of the ELISA storage ring in Aarhus [191]. Here a single example will represent all the pioneering ELISA experiments with complex systems: in studies of hot metal cluster anions with a large spread in internal energies, it was found that the spontaneous decay, monitored through neutral product detection, followed a power law rather than an exponential [192]. ELISA was followed by other electrostatic rings where spontaneous processes and processes induced by interactions with photons or electrons are studied [193]. Current and future challenges. The general advantage of all storage ring experiments is the combination of temporally extended confinement, allowing the relaxation of excited-state ions, with the efficient detection of reaction products. The relaxation is efficient for most metastable states in atomic systems with millisecond lifetimes and for vibrationally excited states of small molecular ions, for which the few seconds of storage time easily available in room-temperature rings are sufficient.
In many cases, however, even rotational excitations of molecular ions can have a strong influence on the results of experimental investigations, and if fundamental properties are sought, one needs ways to control the rotational degrees of freedom as well. The nature of this problem is twofold. First, rotational relaxation processes are generally quite slow, of the order of minutes even for the smallest molecules; thus, very long storage times are needed. Second, the excitation energies involved are so low that many rotational levels are occupied at room temperature, even if thermal equilibrium were to be attained. A central motivation behind the recently developed cryogenic electrostatic storage rings is that both of these problems related to rotational excitation are addressed simultaneously. Cooling the systems down to the 10 K temperature range makes the inner walls of the vacuum chambers act as efficient cryopumps for most of the gases otherwise present in ultra-high vacuum systems. In this way, residual-gas densities lower than one molecule per cubic millimeter have been reached. This extreme vacuum has led to storage lifetimes of keV ion beams in cryogenic electrostatic rings approaching one hour. With this, waiting for even the last steps of rotational cooling towards the ground state is within reach for many molecular systems. Evidently, the low temperature itself is also necessary for the thermally equilibrated molecular ions to have a high fraction of the population in the (few) lowest rotational levels. While the details are system-dependent, the amount of rotational excitation is strongly decreased for all systems by going from room temperature to a cryogenic environment. Advances in science and technology to meet challenges. Recently, three cryogenic electrostatic storage ring facilities have become operational: DESIREE (Double ElectroStatic Ion-Ring ExpEriment) at Stockholm University [194], the CSR (Cryogenic Storage Ring) at the Max Planck Institute for Nuclear Physics in Heidelberg [195], and RICE (RIKEN Cryogenic Electrostatic ring) at RIKEN, Japan [196]. Common to all three is their basic construction, with an entirely separate all-cryogenic vacuum chamber surrounded by in-vacuum thermal insulation. This ensures the very high vacuum and the long storage lifetimes. In other respects, the three facilities differ strongly. RICE, with its 3.0 m circumference, is by far the smallest. It is well suited for studies of spontaneous processes and for laser interaction with the stored ion beams. Furthermore, there are advanced plans for merging a beam of neutrals with the stored beam to study reactions between neutrals and positive ions at low and well-controlled energies [196]. Of the three, the CSR is the biggest system, with a 35.1 m circumference. This large ring is equipped with an electron cooler for merged-beams ion-electron recombination studies; a merged neutral beam is being developed, and a reaction microscope is to be included [195]. DESIREE [194] (see figure 28) is special in being a double electrostatic ring: it consists of two ion-storage rings of 8.6 m circumference each. The design is based on the ability to merge beams of positive and negative ions stored in the two rings and to measure the products of mutual neutralization and related processes. The long-time storage of both beams will make it possible to study pairs of complex molecular ions close to thermal equilibrium with the cryogenic surroundings.
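The gain from cryogenic operation can be estimated with a simple rigid-rotor Boltzmann model. The sketch below computes the fraction of the population outside the J = 0 level for an OH⁻-like rotor; the rotational constant B ≈ 18.7 cm⁻¹ is an assumed literature value, and nuclear-spin statistics are neglected. The output is consistent with the OH⁻ excitation fractions discussed below.

```python
# Rigid-rotor Boltzmann estimate of the fraction of ions NOT in the J = 0
# rotational ground level, illustrating why a ~10 K environment matters.
# B_CM = 18.7 cm^-1 is an assumed rotational constant (roughly OH-);
# level degeneracies are (2J+1) and nuclear-spin statistics are neglected.
import numpy as np

B_CM = 18.7       # rotational constant in cm^-1 (assumption)
CM_TO_K = 1.4388  # hc/k_B in kelvin per cm^-1

def excited_fraction(T, j_max=50):
    j = np.arange(j_max + 1)
    energies_K = B_CM * CM_TO_K * j * (j + 1)      # level energies in kelvin
    pops = (2 * j + 1) * np.exp(-energies_K / T)   # unnormalized populations
    pops /= pops.sum()
    return 1.0 - pops[0]                           # everything above J = 0

for T in (300.0, 20.6, 12.3):
    print(f"T = {T:6.1f} K -> fraction above J = 0: {excited_fraction(T):.3f}")
# ~0.9 at room temperature, ~0.18 at 20.6 K, and ~0.036 at 12.3 K.
```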
While the merged-beams geometry provides the opportunity to control the inter-molecular degrees of freedom of the collision partners, the intra-molecular degrees of freedom are controlled by the low temperature and the long-time storage. For all the cryogenic storage rings, the ability to demonstrate rotational cooling, and the extent to which thermal equilibrium can be reached, is essential. In 2017, both the CSR and DESIREE teams reported the observation of stored beams of OH⁻ with well over 90% of the population in the lowest (J = 0) quantum state [197]. Figure 29 displays data from a threshold photodetachment experiment in DESIREE. By comparing signal rates for wavelengths just above and just below the threshold for ground-state photodetachment, and using prior knowledge of the photodetachment cross sections, the fraction of OH⁻ ions not in the J = 0 ground rotational level was deduced as a function of time after ion injection. For the data set shown in green, the population of ions in excited rotational levels was reduced artificially by depletion through selective photodetachment. For the first 260 s after injection, this depletion laser was on, resulting in an asymptotic excitation fraction of 3.6% ± 0.3%, corresponding to a rotational temperature of 12.3 ± 0.2 K. After the laser light was blocked at t = 260 s, the excitation fraction increased through the absorption of thermal photons from the 13.5 ± 0.5 K surroundings and through collisions with the few remaining residual-gas molecules. Finally, an asymptotic excitation fraction of 18.1% ± 0.9%, corresponding to a rotational temperature of 20.6 ± 0.5 K, was reached. These results, and similar findings at the CSR, hold great promise for future experiments, at the facilities mentioned here and at any future cryogenic storage rings, in which cold molecular ions interact with light, electrons, neutral particles or other ions [194-197]. In parallel to the electrostatic ion-storage ring developments, compact electrostatic ion-beam traps have been developed in which fast-moving ions are confined between two opposing focusing electrostatic mirrors. Here we mention the recent studies from the Weizmann Institute of ultraslow isomerization in C10⁻ molecular ions [198] and the first experiment with a cryogenic electrostatic storage device, in which the He⁻ lifetime was accurately measured [199]. Concluding remarks. Novel cryogenic electrostatic ion-storage rings have become operational, and cooling of the rotational degrees of freedom has been demonstrated. New insights are expected in the very near future from experiments with rotationally cold ions interacting with photons, electrons, neutral atoms and other cold ions. Figure 28. DESIREE [194] at Stockholm University consists of two electrostatic ion-storage rings in a common cryogenic vacuum chamber, with a common straight section where beams of opposite charge polarity can be merged. With the low internal energies that can be attained by waiting for the ions to reach thermal equilibrium, and the high degree of control of the inter-particle motion obtained from the merged-beams configuration, unprecedented control of all internal and external degrees of freedom can be achieved. Figure 29. Measured fraction of the OH⁻ ions not in the rotational ground state as a function of time after injection in DESIREE (blue data points and curve).
The green data points and curve show the same fraction, but with a cw laser beam merged with the ion beam for the first 260 s, tuned to a wavelength at which only non-ground-state ions can be photodetached [197]. 19. Frontiers of extreme electromagnetic fields: atomic physics with HCIs at FAIR (SPARC collaboration) Thomas Stöhlker Friedrich-Schiller-University Jena, Germany; Helmholtz Institute Jena, Germany; GSI Darmstadt, Germany Status. The international Facility for Antiproton and Ion Research (FAIR) [200-202], currently under construction, will allow for a broad variety of experiments over a wide range of ion energies and intensities. The intensities can be varied from single ions up to the highest particle densities, at energies ranging from rest in the laboratory up to the highly relativistic collision domain, corresponding to the strongest and shortest electromagnetic field pulses available. Moreover, stable and exotic nuclei up to the heaviest ones are available in any arbitrary charge state, enabling the extension of atomic physics research across virtually the full range of atomic matter. Of particular interest for atomic physics, FAIR is unique in its ability to deliver these highly intense, brilliant beams with excellent momentum definition. For this purpose, a unique portfolio of trapping and storage facilities will be available at FAIR, covering ion energies that span more than 10 orders of magnitude (figure 30). This will enable experiments with cooled ions aimed at exploring atomic structure and electron dynamics in the realm of extreme electromagnetic fields close to, and even beyond, the Schwinger limit. In order to cope with these challenges, the SPARC collaboration (Stored Particle Atomic physics Research Collaboration) has been formed. SPARC has over 430 members and is dedicated to the scientific exploitation of the unique discovery potential of FAIR [201,202], in close interaction between experiment and theory. Accelerator-based atomic physics has opened new, widely unexplored fields of research. Heavy-ion storage cooler rings, such as the ESR at GSI, have provided first access to the basic processes associated with strong electromagnetic fields in collisions of heavy HCIs. Experiments with stored and cooled one- and few-electron high-Z ions interacting with atomic targets, (polarized) electrons, or laser photons uniquely reveal the effects of relativity, electron correlations and QED on the dynamics and structure of elementary atomic systems in the presence of extremely strong fields. These research activities focus on the structure and dynamics of atomic matter subject to extreme electromagnetic fields and ultrafast electromagnetic interactions, in particular those described by non-perturbative QED. On the side of fundamental interactions, highly sensitive tests of atomic structure theory are conducted with various experimental approaches, such as precision measurements of the 1s Lamb shift [139], the 1s hyperfine structure [143], the bound-state g-factor [146], and the masses of heavy few-electron ions. For electrons bound in the strong Coulomb fields of heavy nuclei, the quantum dynamics of electrons subject to extreme fields has been studied. This was achieved by precision studies of DR [203], photon-correlation studies of radiative EC (see section 6) [204], state-selective excitation and EC studies [205], as well as studies of the correlated electron dynamics of atoms interacting with the intense [206], ultra-short fields of high-Z relativistic ions.
Moreover, atomic physics techniques have been applied to yield the most precise test of time dilation, using stored and cooled Li⁺ ions at the ESR storage ring [207]. Last but not least, at the intersection of atomic and nuclear physics, novel decay modes of nuclei have been discovered, such as bound-state β-decay, which may play an important role in nuclear astrophysics and more specifically in nucleosynthesis (see section 12) [208]. Current and future challenges. The storage and trapping facilities of FAIR will substantially enlarge the research capabilities for the exploration of atomic matter in the realm of extreme and ultra-short electromagnetic fields, and they have key features that offer a range of new and challenging research opportunities. The investigations will focus on atomic structure, e.g. new concepts for QED in extreme fields (see section 13); on collision studies at moderate and highly relativistic energies (ionization, capture and pair production) (see section 6); on insights into correlated many-body dynamics generated via ultra-short, super-intense field pulses (<10⁻¹⁸ s); and on avenues towards the discovery of new electromagnetic interaction processes. Further examples include laser spectroscopy exploiting the large Doppler boost (both in the photon frequency and in the time domain) associated with relativistic ions, RCE of ions passing through crystal lattices at relativistic speed (see section 16), as well as precision x-ray spectroscopy [209]. Furthermore, experiments at the border between atomic and nuclear physics will be performed, with an emphasis on rare nuclear decay modes only possible for nuclei in high atomic charge states. At the same time, SPARC will apply these accurate atomic-physics techniques as powerful tools for the determination of nuclear parameters, such as nuclear radii and moments, and of fundamental constants (see section 12). More specifically, SPARC will exploit the research potential of the following four research facilities at FAIR (see also figure 31). Figure 30. Main FAIR facilities for storage and trapping within the SPARC collaboration, covering more than 10 orders of magnitude in particle kinetic energies [202]. Reprinted from [202], Copyright 2015, with permission from Elsevier. HITRAP [210]: HCIs (up to bare uranium) from the ESR storage ring will be extracted into the HITRAP facility, where they are decelerated and cooled. The cold ions are delivered to a broad range of atomic physics experiments at very low energies or even in ion traps (see section 11). This facility is ready for operation and is waiting for first beams. CRYRING [211]: a low-energy storage ring dedicated to precision experiments with HCIs that has already proven its potential in a multitude of experiments in Stockholm. CRYRING has been transferred to GSI/FAIR, and ions decelerated in the ESR will be provided to CRYRING for experiments. A unique feature of CRYRING is its ultra-cold electron cooler. CRYRING is currently being commissioned and will be available for experiments in 2019. ESR [201,202]: the ESR is the most versatile storage ring with respect to beam energies, acceptance, beam manipulations, and detection instrumentation. It allows for electron, stochastic, and laser cooling of beams of stable and exotic nuclei. Moreover, beams for HITRAP and CRYRING will be prepared and provided (decelerated) by the ESR.
HESR [201,202]: the HESR will give access, for the very first time, to cooled HCIs at up to a γ-factor of 6 for precision experiments in atomic physics and neighboring fields. Advances in science and technology to meet challenges. Storage and cooling are essential for the studies discussed, and the development and improvement of the corresponding dedicated tools for particle storage, preparation and detection are part of the current technical challenges. Within the SPARC collaboration, there are numerous efforts to improve instrumentation, including devices for the deceleration and cooling of ions in storage rings and particle traps, non-destructive detection of ions and their reactions, and spectrometers in the optical and x-ray regimes. Newly developed calorimetric low-temperature detectors for spectroscopy of the Lyman x-rays of HCIs, which yield an energy resolution comparable to that of crystal spectrometers while exhibiting a large wavelength acceptance, have been implemented and tested in first experiments at the ESR storage ring. Techniques for the cooling of ions from keV to μeV energies (i.e. by more than 8 orders of magnitude) and their crystallization within seconds have been devised through application in Penning traps [212]. For laser spectroscopy, a power-scalable nonlinear compression scheme, which will provide few-cycle pulses at up to 1 kW of average power, has been developed [213]. Thus, efficient high-harmonic generation up to the water window will be feasible, as well as isolated attosecond pulse generation. A portable XUV source for experiments at heavy-ion storage rings at GSI/FAIR will be set up and installed at CRYRING [211]. At a later stage, such a system will serve as a light source for experiments at the HESR. Single- and multi-species ion crystals, as presently produced in Penning traps, will be used for the sympathetic cooling of externally produced HCIs (HITRAP and the EBIT at FAIR) and will allow high-precision optical spectroscopy of magnetic dipole transitions as probes of QED calculations in the magnetic sector. Future activities related to micro-calorimeter detectors will focus on large pixel arrays, e.g. 1000 pixels, and on the optimization of the electronic read-out schemes. Furthermore, besides metallic magnetic calorimeters [214], superconducting tunnel junction detectors, with their high count-rate capability and unique performance at XUV photon energies, will be considered. Concluding remarks. The SPARC collaboration is pursuing a challenging atomic physics program in the realm of strong electromagnetic fields, exploiting the present and future trapping and storage facilities of GSI/FAIR. The focus is on collision experiments of HCIs with atoms, electrons, and photons, as well as on spectroscopic studies in the x-ray, optical and microwave domains. Besides experiments on stable ions, experiments at the border between atomic, nuclear and astrophysics will also be conducted. Since the future FAIR facility will offer beams of cooled HCIs in energy regimes where no such experiments have been possible so far, the SPARC physics program exhibits a high discovery potential, especially considering the advances in experimental equipment.
39,545.8
2019-08-09T00:00:00.000
[ "Physics" ]
Diagnostic advantage of thin slice 2D MRI and multiplanar reconstruction of the knee joint using deep learning based denoising approach The purpose of this study is to evaluate whether thin-slice, high-resolution 2D fat-suppressed proton density-weighted imaging of the knee joint using a denoising approach with deep learning-based reconstruction (dDLR) and MPR is more useful than 3D FS-PD multi-planar voxel imaging. Twelve patients (13 knees) who underwent MRI of the knee at 3T were enrolled. The denoising effect was quantitatively evaluated by comparing the coefficient of variation (CV) before and after dDLR. For the qualitative assessment, two radiologists evaluated image quality, artifacts, anatomical structures, and abnormal findings on both 2D and 3D images using a 5-point Likert scale. All results were statistically analyzed. Gwet's agreement coefficients were also calculated. For the scores of abnormal findings, we calculated the percentages of cases with agreement with high confidence. The CV after dDLR was significantly lower than that before dDLR (p < 0.05). As for image quality, artifacts and anatomical structure, no significant differences were found except for flow artifact (p < 0.05). Agreement on abnormal findings was significantly higher in 2D than in 3D (p < 0.05), and the percentage with high confidence for abnormal findings was also higher in 2D than in 3D (p < 0.05). By applying dDLR to 2D, image quality almost equivalent to 3D could be obtained. Furthermore, abnormal findings could be depicted with greater confidence and consistency, indicating that 2D with dDLR can be a promising imaging method for knee joint disease evaluation. […] use of z-direction phase encoding. Because 3D FSE sequences collect data with a small isotropic voxel size, they can provide MPR (multiplanar reconstruction) images and arbitrary cross-sectional images. Additionally, they have a smaller partial volume effect than ordinary 2D FSE sequences 2-4. However, because of blurring and low resolution, earlier reports have described the depiction of meniscus injuries as inferior to that with 2D FSE sequences 5,6. In clinical practice, MR images should be acquired within an acceptable time, and there are several ways to obtain high-resolution images under such constraints. The super-resolution technique, which transforms low-resolution images into high-resolution images, is one of them. Recent reports describe a super-resolution technique using multi-contrast MR images, which made it possible to obtain a high-resolution image from an under-sampled image acquired in a short scan time, using auxiliary information from a fully sampled sequence 7,8. Another approach is to obtain high-resolution images with a short scan time and then denoise them. Recently, many denoising techniques based on deep learning algorithms for MRI have been reported 9-12. One of these techniques involves training a neural network to computationally generate images that closely resemble high-quality training images from noisy input images, leading to the creation of a deep convolutional neural network (DCNN) 9,10. By installing the DCNN in a diagnostic imaging system, low signal-to-noise ratio (SNR) images acquired by the system can be denoised and converted to high-SNR images 9,10,13.
Advanced intelligent Clear-IQ Engine (AiCE, Canon Medical Systems Corporation) is a state-of-the-art denoising technique with deep learning-based reconstruction (dDLR) that has previously been applied to various sites, including the pelvic region and liver 14-20. dDLR makes it possible to obtain low-noise, thin-slice 2D MR images within an acceptable scan time. Using this technique, we can obtain almost isotropic voxel data in a high-resolution 2D manner and create MPR images. Such images, with both high in-plane resolution with less blurring and a multi-planar view, can be ideal for knee joint imaging. This study was designed to evaluate whether thin-slice 2D FS-PDWI of the knee joint after dDLR, with MPR, was more useful than 3D FS-PD MPV imaging. Materials and methods Study design. This prospective study was approved by the institutional review board (Ethics Committee of Kyoto University Graduate School and Faculty of Medicine) and was registered with the UMIN Clinical Trials Registry (UMIN000036700). It was performed in accordance with the ethical standards laid down in the 1964 Declaration of Helsinki and its later amendments. Written informed consent was obtained from all individual participants included in the study. The primary end point was to assess the denoising effect of dDLR on thin-slice 2D FS-PDWI. The secondary end points were the evaluation of the performance of dDLR-applied thin-slice 2D FS-PDWI of the knee joint with MPR images in comparison with 3D FS-PD MPV imaging in terms of image quality, artifacts, the visualization of anatomical structures, and the detection of abnormal findings. This study did not require direct arthroscopic observation as a validated reference standard. Subjects. We recruited patients who were going to receive MRI for knee problems between January and March 2020. Inclusion criteria were an age of at least 20 years and agreement to participate in our study. Exclusion criteria included general contraindications for MRI, prior knee surgery, and the use of a different coil for body size. MR image acquisition. MR scans were conducted using a 3T scanner (Vantage Galan 3T/ZGO, Canon Medical Systems, Tochigi, Japan) with a 16-channel knee coil. As our institutional protocol for routine knee MRI, sagittal and coronal proton density-weighted (repetition time msec/echo time msec, 4326/20; matrix, 384 × 448; 3 mm thickness), sagittal and coronal T2-weighted (repetition time msec/echo time msec, 5600/80.5; matrix, 320 × 448; 3 mm thickness), coronal STIR (repetition time msec/echo time msec, 4443/60; inversion time, 200 ms; matrix, 320 × 448; 3 mm thickness) and sagittal 3D FS-PD MPV images (voxel dimension, 0.7 × 0.7 × 0.7 mm) were acquired. Additionally, we performed coronal thin-slice 2D FS-PDWI (resolution, 0.5 × 0.5 mm; 1 mm thickness; −0.3 mm gap) (Table 1). dDLR application to MRI data. dDLR, a denoising technique used as a product (AiCE, Canon Medical Systems Corporation), was applied to the scanned coronal thin-slice 2D FS-PDWI in this study. An MRI denoising method based on the denoising convolutional neural network (CNN) approach has been reported: the shrinkage convolutional neural network (SCNN) 9. Both the denoising CNN and the SCNN employ residual learning and batch normalization in the hidden layers. Unlike the denoising CNN, the SCNN can adjust the noise power of the input image using a CNN with soft-shrinkage activation functions.
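As described next, dDLR thresholds the high-frequency components of a discrete cosine transform. The following is a toy illustration of soft-shrinkage applied to DCT coefficients, not the proprietary dDLR/AiCE algorithm: in dDLR the threshold is learned by a CNN, whereas here it is a fixed, arbitrary assumption, and the synthetic image stands in for real MR data.

```python
# Toy illustration of soft-shrinkage denoising on DCT coefficients.
# This is NOT the proprietary dDLR/AiCE algorithm: there the shrinkage
# threshold is learned by a CNN; here it is a fixed assumption.
import numpy as np
from scipy.fft import dctn, idctn

def soft_shrink(x, thresh):
    """Soft-shrinkage: shrink coefficient magnitudes toward zero by `thresh`."""
    return np.sign(x) * np.maximum(np.abs(x) - thresh, 0.0)

def dct_denoise(img, thresh=30.0, keep_lowfreq=8):
    coeffs = dctn(img, norm="ortho")
    low = coeffs[:keep_lowfreq, :keep_lowfreq].copy()  # preserve coarse structure
    coeffs = soft_shrink(coeffs, thresh)               # attenuate noisy high frequencies
    coeffs[:keep_lowfreq, :keep_lowfreq] = low
    return idctn(coeffs, norm="ortho")

rng = np.random.default_rng(0)
clean = np.zeros((64, 64)); clean[20:44, 20:44] = 100.0   # synthetic "anatomy"
noisy = clean + rng.normal(0, 15, clean.shape)
denoised = dct_denoise(noisy)
print(f"noise std before: {np.std(noisy - clean):.1f}, after: {np.std(denoised - clean):.1f}")
```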
The SCNN can adapt to various noise levels in a single network by setting an appropriate noise level for each input image. This means that there is no need to train a separate CNN for each noise level. dDLR is based on the SCNN but differs in its denoising technique: while the SCNN performs noise reduction directly in the image domain, dDLR performs noise reduction by learning the noise threshold of the high-frequency components extracted by a discrete cosine transform (DCT) 9,10,15,16. MPR image. Sagittal and transverse images were created from the coronal thin-slice 2D FS-PDWI on the MRI console (slice thickness, 1 mm; slice spacing, 1 mm). Similarly, coronal and transverse images were created from the 3D FS-PD MPV sagittal image (slice thickness, 0.7 mm; slice spacing, 0.7 mm). Quantitative image evaluation methods. To evaluate the image noise before and after the application of dDLR, two board-certified radiologists with 15 years of experience (R.S. and T.K.) placed regions of interest (ROIs) on the femur, the pars intermedia of the lateral meniscus, and the ACL on a medical professional imaging viewer (EV Insite, PSP Corporation, Tokyo, Japan) and obtained the mean, standard deviation (SD), and coefficient of variation (CV, SD divided by mean) of the signal in the coronal thin-slice 2D FS-PDWI before and after the application of dDLR. The ROI sizes were 50 mm², 4 mm², and 5 mm², respectively. Qualitative image evaluation methods. Corresponding thin-slice 2D FS-PDWI and 3D FS-PD MPV image datasets were anonymized and randomized. Each study was evaluated independently by two board-certified radiologists with 9 and 15 years of experience (H.T. and T.K.). Both readers used a medical professional imaging viewer (Centricity Universal Viewer, GE Healthcare, Chicago, IL). Before the evaluation, the readers mutually discussed the evaluation method and agreed on how to proceed. Reading was performed at an ambient illuminance of approximately 1 lx using a 21.3-inch diagnostic-quality color liquid crystal display monitor calibrated to Digital Imaging and Communications in Medicine standards (RadiForce RS240, EIZO Corporation, Ishikawa, Japan). All data were displayed in a 2 × 2 layout on the display monitor. Both readers were allowed to set the windows and levels as desired, as well as to magnify and scroll freely. Both the thin-slice 2D FS-PDWI and the 3D FS-PD MPV image could be viewed using an interactive MPR mode. The evaluation items included image quality assessment, artifacts, and anatomical structure visualization, all evaluated on a 5-point Likert scale. Abnormal findings were also evaluated using a 5-point Likert scale 21-23. For image quality, we evaluated edge sharpness, contrast resolution, and fluid brightness among the coronal, sagittal and transverse images. We also evaluated the uniformity of image quality by comprehensively assessing the three image planes. A 5-point Likert scale was used as the evaluation criterion: 1 point, very bad; 2 points, bad; 3 points, sufficient; 4 points, good; 5 points, very good. With regard to artifacts, noise, motion, flow and aliasing artifacts were evaluated by comprehensively assessing the three image planes using a 5-point Likert scale: 1 point, severe (difficult to assess for diagnosis); 2 points, moderate (remarkable artifacts are present); 3 points, mild (artifacts are seen, but not readily apparent); 4 points, minimal (few artifacts are visible); 5 points, none.
Aliasing artifacts were considered not particularly problematic as long as they did not overlie the structures being evaluated. Delineation of the following anatomical structures was evaluated: femur (coronal, sagittal, and transverse images); medial and lateral menisci (coronal, sagittal, and transverse images); articular cartilage of the tibiofemoral and femoropatellar joints (tibiofemoral joint on coronal and sagittal images; femoropatellar joint on sagittal and transverse images); anterior and posterior cruciate ligaments (ACL and PCL) (coronal, sagittal, and transverse images); and medial and lateral collateral ligaments (coronal images). A 5-point Likert scale was used as the evaluation criterion: 1 point, very poor (the anatomical structure is completely obscured); 2 points, bad (some anatomical structures are unclear); 3 points, sufficient (anatomical details are not sufficiently clear); 4 points, good (anatomical structures are mostly clear at a minimum of detail); 5 points, very good (details of all anatomical structures are clear). Depiction of the following abnormal findings for each structure was evaluated. 1. Femur (coronal, sagittal, and transverse images): assessment of osteochondral lesions related to arthritis, including bone marrow edema-like lesions and subchondral cyst-like lesions. A bone marrow edema-like lesion was defined as a non-cystic subchondral area of ill-defined hyperintensity on fluid-sensitive sequences. Subchondral cyst-like lesions were well-defined rounded areas of fluid signal intensity 24. 2. Medial and lateral menisci (coronal, sagittal, and transverse images): only tears were evaluated. A tear was defined as meniscal distortion or increased intrasubstance signal intensity unequivocally contacting the articular surface 25. 3. Articular cartilage (tibiofemoral and femoropatellar joints) (coronal, sagittal, and transverse images): cartilage injury was evaluated and defined based on the Noyes classification system 26,27. 4. ACL and PCL (coronal, sagittal, and transverse images): complete and partial tears were evaluated. Complete ACL and PCL tears were defined as complete discontinuity of fibers or an irregular contour with increased signal intensity. A partial ACL tear was defined as abnormal intra-ligament signal, bowing of the ligament, and an inability to identify all fibers. A partial PCL tear was defined as hyperintense signal alterations without complete disruption of the ligament 28,29. Also, ACLs with ganglion cysts were evaluated; these were defined as cysts that were either on the surface or within the substance of the ligament and showed water signal 30-33. 5. Medial collateral ligament (MCL) (superficial and deep layers) and lateral collateral ligament (LCL) (coronal image): injury and tear were evaluated and defined based on the grading system 29,34. The images specified above were viewed and evaluated overall. The evaluation criteria were used with the following confidence levels: 1 point, certainly absent; 2 points, probably absent; 3 points, equivocal; 4 points, probably present; 5 points, certainly present 35. If no abnormal signal intensity was found in the anatomical structures above, they were considered normal. Table 1. MR pulse sequence protocol. MR indicates magnetic resonance; 2D, 2-dimensional; FS-PDWI, fat saturated-proton density weighted image; 3D, 3-dimensional; FS-PD MPV, fat saturated-proton density multi planar voxel. Statistical analysis.
Statistical analyses were performed using JMP Pro 15.2.0 software (SAS Institute, Cary, NC). For the quantitative evaluation, intraclass correlation coefficients (ICCs) of the CV measured by the two radiologists were obtained. The mean CV values of the two radiologists before and after the application of dDLR were compared using the Mann-Whitney U test. For the evaluation of image quality, artifacts, and anatomical structure visualization, scores of 3, 4, and 5 were classified as "acceptable" and scores of 1 and 2 as "non-acceptable". For this dichotomous classification, the results were statistically analyzed with the chi-squared test. We defined "abnormal findings" as findings "on both thin-slice 2D FS-PDWI and 3D FS-PD MPV images", "on two or more image planes (coronal image only for the medial and lateral collateral ligaments)", and "scored 4 or higher by the two readers". For "abnormal findings", we calculated the percentages of both scores of 5 (5/5) relative to the number of image planes in the evaluated structures and analyzed them with the chi-squared test. P values of 0.05 and less were considered statistically significant. Results Thirteen patients (4 males and 9 females; mean age 64.5 years, range 35 to 87 years) who received MRI of the knee joint were enrolled between January and March 2020. Fourteen knees (3 right and 11 left) were examined because both knees were examined in one patient. We excluded one patient, who was examined using a different coil because of knee size, from the evaluation. Therefore, MRI scans of thirteen knees (2 right and 11 left) were evaluated. In the quantitative image evaluation, the ICCs of the CV measured by the two radiologists were 0.82-0.96 for all ROI sizes, which is recognized as excellent interobserver reproducibility 37. The CV after dDLR was significantly lower than that before dDLR for the femur, lateral meniscus, and ACL (p < 0.05 for each) (Figs. 1 and 2). The inter-rater reliability coefficients show that the evaluations of image quality, artifacts, and delineation of anatomical structures in thin-slice 2D FS-PDWI and 3D FS-PD MPV images produced excellent agreement between the two radiologists in terms of Gwet's AC2. For the dichotomous evaluation of image quality and artifacts, the only significantly different parameter between the two image types was that related to flow artifacts (p < 0.05) (Fig. 3). Otherwise, no other significant differences were found (Table 2). No significant difference was found in the evaluation of anatomical structure visualization (Table 3). The inter-rater reliability coefficients for abnormal findings are shown in Table 4. Statistically, the inter-rater reliability coefficients for abnormal findings, which were analyzed by pooling all image planes, were significantly higher for thin-slice 2D FS-PDWI than for the 3D FS-PD MPV image (p < 0.05). Discussion It would previously have been impossible to acquire thin-slice 2D FS-PDWI with fine MPR images within a clinically acceptable scan time. However, we inferred that thin-slice 2D FS-PDWI fulfilling the requirements of ideal image quality would become possible through the SNR improvement afforded by dDLR. Thin-slice 2D FS-PDWI was compared with 3D FS-PDWI, which is now performed routinely. Thin-slice 2D FS-PDWI was scanned in the coronal plane as the original image, from which MPR images were created.
By contrast, the 3D FS-PD MPV image was scanned in the sagittal plane, and MPR images were created from it.
Figure. Results of CV measurement of the femur, the pars intermedia of the lateral meniscus and the ACL in coronal thin-slice 2D FS-PDWI before and after the application of dDLR. The CV after dDLR was significantly lower than that before dDLR at all sites (p < 0.05 for each). CV indicates coefficient of variation; ACL, anterior cruciate ligament; 2D FS-PDWI, 2-dimensional fat saturated-proton density weighted image; dDLR, denoising approach with deep learning-based reconstruction.
The reason for scanning thin-slice 2D FS-PDWI in the coronal plane was to minimize the effects of flow artifacts from the popliteal artery. By setting the in-plane resolution to a high 0.5 × 0.5 mm, the slice thickness to 1 mm, and the gap to −0.3 mm, we were able to create fine MPR images. Many approaches were considered for obtaining the high-resolution images. Simply increasing the resolution would result in a decrease in SNR. Increasing the number of excitations would improve the SNR, but it would also increase the acquisition time and would not be acceptable in clinical practice 38. dDLR uses multilayered CNNs, which are trained by optimizing the CNN parameters so that the result of processing a low-SNR image with the CNNs approaches a high-SNR teacher image 9,10. A decrease in SNR was anticipated owing to the decreased voxel volume and the effects of a cross-talk artifact from the overlapping gap 39; this problem could be resolved by dDLR. We demonstrated this resolution through the significantly lower CV, which is regarded as one indicator of image noise 40, in the quantitative evaluation of thin-slice 2D FS-PDWI. For image quality, including edge sharpness, contrast resolution, fluid brightness, and uniformity of image, no significant differences were found between thin-slice 2D FS-PDWI and the 3D FS-PD MPV image. One radiologist (Reader A) evaluated the sagittal and transverse images for edge sharpness in thin-slice 2D FS-PDWI as "non-acceptable" (Table 2), but these were MPR images of the same case; the coronal image was evaluated as "acceptable". In this case, motion artifacts were strong. The evaluation was acceptable only on the original coronal image, not on the reconstructed sagittal and transverse images. It is noteworthy that, in this case, both radiologists rated the motion artifact as "non-acceptable". This result was derived from an accidental motion artifact created during the scanning of the thin-slice 2D FS-PDWI. We expect that the same results as those obtained in the other cases would have been obtained if the patient had not moved. Depending on the method of k-space data sampling, motion artifacts are more likely to appear in the slice direction and in the phase direction in 3D images than in 2D images 41,42. Therefore, in thin-slice 2D imaging, the effects of motion artifacts on the original scanned image are slight, and the effects on the MPR images can be expected to be small. In 3D imaging, however, both the original image and the MPR images are strongly affected. For flow artifacts, thin-slice 2D FS-PDWI was significantly worse than the 3D FS-PD MPV image. For the evaluation of anatomical structure visualization, no significant difference was found between thin-slice 2D FS-PDWI and the 3D FS-PD MPV image. One radiologist (Reader A) evaluated the sagittal images of the femur, femoropatellar joint, ACL, and PCL as "non-acceptable" in thin-slice 2D FS-PDWI (Table 3).
This was, in fact, the same case with motion artifacts as described above. Another noteworthy point is that the significant flow artifacts observed in thin-slice 2D FS-PDWI did not negatively influence anatomical structure visualization when compared with the 3D FS-PD MPV image. Regarding the inter-rater reliability coefficients for abnormal findings, thin-slice 2D FS-PDWI was found to be superior to the 3D FS-PD MPV image in overall Gwet's AC2. In terms of the cartilage delineation of the femoropatellar joint, the 3D FS-PD MPV image was inferior to thin-slice 2D FS-PDWI. This might be attributable to the blurring effects of 3D images. Therefore, thin-slice 2D FS-PDWI is expected to indicate abnormal findings more consistently than the 3D FS-PD MPV image does. For the evaluation of abnormal findings for each anatomical structure, the percentages of both scores of 5 (5/5) were higher in thin-slice 2D FS-PDWI than in the 3D FS-PD MPV image. Therefore, we can point out abnormalities with higher confidence in thin-slice 2D FS-PDWI than in the 3D FS-PD MPV image. The case described above with the motion artifact was not particularly bad for detecting abnormal findings. The higher consistency and confidence indicated that abnormal signals can be depicted more clearly in thin-slice 2D FS-PDWI, in spite of flow artifact effects. This study has several limitations. First, the number of patients included in the study was small; future studies should examine data from more patients. Second, we evaluated only the image findings and did not compare them with surgical findings; further studies examining surgical findings must be conducted. In conclusion, we were able to obtain high-resolution thin-slice 2D FS-PDWI using dDLR and to create fine MPR images. This is the first attempt to make use of the high resolution and high contrast of a 2D image and treat it like a 3D volume. Because the acquisition time for thin-slice 2D FS-PDWI was shorter than that for the 3D FS-PD MPV image and was within the acceptable range, it is readily applicable to clinical practice. With regard to the evaluation of abnormal findings, we indicated findings on thin-slice 2D FS-PDWI with greater consistency and confidence. Thin-slice 2D FS-PDWI with dDLR would afford an ideal image and can be considered an alternative to the 3D FS-PD MPV image for evaluating knee joint abnormalities. Finally, because dDLR is adaptable to various sequences, it is expected to be applicable to other joints. Table 2. Evaluation of image quality and artifacts of thin-slice 2D FS-PDWI and 3D FS-PD MPV image by two readers. Acceptable, scores 3, 4 and 5; non-acceptable, scores 1 and 2. Coronal and sagittal images were scanned as original images in thin-slice 2D FS-PDWI and the 3D FS-PD MPV image, respectively; the others were MPR images. 2D FS-PDWI indicates 2-dimensional fat saturated-proton density weighted image; 3D FS-PD MPV, 3-dimensional fat saturated-proton density multi planar voxel. *p value < 0.05 by the chi-squared test. **"Uniformity of image" and "Artifacts" were evaluated by referring to all three planes. Table 5. Agreement with high confidence in "abnormal finding". "Abnormal findings" were defined as "on both thin-slice 2D FS-PDWI and 3D FS-PD MPV images", "on two or more image planes (coronal image only for the medial and lateral collateral ligaments)", and "scored 4 or higher by two readers". *5/5 indicates that the two readers both gave a score of 5.
¶ This represents the number of image planes defined to evaluate the abnormal findings of each location, as described in Qualitative image evaluation methods. † p value < 0.05 by the chi-squared test.
5,272.4
2022-06-20T00:00:00.000
[ "Medicine", "Engineering", "Computer Science" ]
Exploring Metabolic Characteristics in Different Geographical Locations and Yields of Nicotiana tabacum L. Using Gas Chromatography–Mass Spectrometry Pseudotargeted Metabolomics Combined with Chemometrics The quality of crops is closely associated with their geographical location and yield, which is reflected in the composition of their metabolites. Hence, we employed GC–MS pseudotargeted metabolomics to investigate the metabolic characteristics of high-, medium-, and low-yield Nicotiana tabacum (tobacco) leaves from the Bozhou (sweet honey flavour) and Shuicheng (light flavour) regions of Guizhou Province. A total of 124 metabolites were identified and classified into 22 chemical categories. Principal component analysis revealed that geographical location exerted a greater influence on the metabolic profile than yield. Light-flavoured tobacco exhibited increased levels of intermediates related to sugar metabolism and glycolysis (trehalose, glucose-6-phosphate, and fructose-6-phosphate) and of a few amino acids (proline and leucine), while sweet honey-flavoured tobacco exhibited increases in the tricarboxylic acid (TCA) cycle and the phenylpropane metabolic pathway (p-hydroxybenzoic acid, caffeic acid, and maleic acid). Additionally, metabolite pathway enrichment analysis conducted at different yields showed that both Shuicheng and Bozhou exhibited changes in six pathways, four of which were the same, mainly involving C/N metabolism. Metabolic pathway analysis revealed higher levels of intermediates related to glycolysis and to sugar, amino acid, and alkaloid metabolism in the high-yield samples, and higher levels of phenylpropane-related metabolites in the low-yield samples. This study demonstrated that GC–MS pseudotargeted metabolomics-based metabolic profiling can effectively discriminate tobacco leaves from different geographical locations and yields, thus facilitating a better understanding of the relationship between metabolites, yield, and geographical location. Consequently, metabolic profiles can serve as valuable indicators for characterizing tobacco yield and geographical location. Introduction Tobacco (Nicotiana tabacum L.) is widely distributed across China's growing regions and serves as an important model plant for studying plant genetics, breeding, and biochemistry [1]. Tobacco leaves contain abundant metabolites, including saccharides, organic acids, alkaloids, and free amino acids, which play important roles in determining the quality and flavour of tobacco [2,3]. This chemical composition is strongly influenced by environmental conditions and geographical location. Therefore, investigating the geographical variation of metabolites will offer novel perspectives on the formation of regional style characteristics [4]. Metabolomics has been extensively used to trace and analyse the quality of agricultural products from various geographical locations; for instance, previous studies have investigated the bioactive components present in Glycyrrhiza uralensis taproots from different locations. Glycycoumarin and licoricone were found predominantly in Jiuquan, while neoliquiritin, isolicoflavonol, isoisoflavone alcohol, and glycerol were mainly detected in Lanzhou [5]. Similarly, Zhao et al.
identified 43 differentially expressed metabolites, such as fructose, glycine, and serine, between tobacco leaves originating from Guizhou Province and those from Yunnan Province. These metabolites exert a substantial impact on tobacco leaf flavour [6]. Guizhou tobacco exhibits distinct characteristics in different geographical regions, such as sweet honey, light, and burnt sweet flavours [7], and distinct metabolic profiles may be observed for the different flavour types. Therefore, it is imperative to investigate the relationship between biochemical components and flavour types via metabolomics. Yield, an important evaluation index of crops, is also closely related to metabolites. During the formation process, crop yield is affected by the synthesis and degradation of metabolites such as carbohydrates, proteins, and fats [8]. Many studies have proposed improving plant productivity and yield by increasing the photosynthetic rate and capacity [9,10]. The enhancement of rice productivity and stress resistance under favourable moisture conditions has been demonstrated through the regulation of sugar transport and metabolism, as well as through the improvement in photosynthetic capacity associated with high-yield rice gene expression, resulting in a remarkable 30% increase in grain yield [11]. Previous research has found a close correlation between carbon and nitrogen metabolism and plant growth and yield [12]. Therefore, C/N metabolic pathways are intricately associated with plant yield. Significant variations were observed in the phenotypes of tobacco leaves with different yields within the same geographical region. Compared with those of low-yield tobacco plants, the leaves of high-yield tobacco plants are broader and thicker [13]. Consequently, differences in yield inevitably lead to the redistribution of metabolites, causing changes in metabolic pathways [14]. However, studies on metabolic alterations at varying yields are limited. By investigating the metabolic disparities among high-, medium-, and low-yield tobacco leaves, we can identify distinct profiles, as well as biomarkers, that influence metabolic pathways and unravel the correlation between yields and metabolic networks. In recent years, pseudotargeted metabolomics has emerged as a pivotal tool for investigating plant disease resistance and cultivating superior varieties [15,16]. This technology is designed to rapidly, reliably, and sensitively conduct systematic and comprehensive analyses of characteristic metabolites produced in organisms, tissues, cells, and other systems by monitoring the dynamic changes in plant metabolites and their metabolic pathways [17]. The primary analytical platforms for pseudotargeted metabolomics include gas chromatography-mass spectrometry (GC-MS), liquid chromatography-mass spectrometry (LC-MS), and capillary electrophoresis-mass spectrometry (CE-MS) [18]. Among them, GC-MS is the most widely employed owing to its excellent reproducibility, high precision, extensive dynamic range, and mature metabolite database [19,20]. In 2012, Li et al. first proposed the retention time-locking GC-SIM-MS pseudotargeted metabolomics method and applied it to characteristic metabolites in tobacco leaves from different geographical locations [21]. Cai et al.
utilized GC-MS pseudotargeted metabolomics to accurately analyse metabolites in Oryza sativa soil, and they demonstrated that this approach enhances the specificity, sample throughput, and coverage of the detected metabolites [22]. This method combines the benefits of both targeted and untargeted approaches, providing high sensitivity, precise quantification, and a broad linear range, and it represents a promising technique that has been successfully employed for studying metabolic profiling across various tissue samples [23,24]. Therefore, the utilization of pseudotargeted metabolomics enables more accurate and sensitive monitoring of tobacco metabolites from different geographical locations and yields, with better discernment of metabolic characteristics. In this study, pseudotargeted GC-MS metabolomics was used to investigate the effects of geographical location and yield on the metabolic characteristics, aiming to resolve the following issues: 1. the interaction effects of geographical location and yield on metabolites in tobacco leaves; 2. the influence of different geographical regions on tobacco flavour; and 3. the changes in metabolic profiles under different yields. Sample Preparation Guizhou Province is situated in the southwestern region of China and is characterized by a gradual decrease in elevation from west to east and an increase in average annual rainfall from north to south. Bozhou and Shuicheng are the primary tobacco-cultivation areas for sweet honey and light flavours in Guizhou Province, respectively, and exhibit distinct differences in geographical and climatic conditions. Bozhou is located in the middle of Guizhou Province and has higher temperatures, abundant rainfall, and shorter daylight hours, whereas Shuicheng, located in western Guizhou Province, has lower temperatures, less rainfall, and more intense sunlight. The annual ecological factor data for 2021 were provided by the Guizhou Meteorological Bureau (Table S1). Thirty-six fresh flue-cured tobacco samples (cultivar: Yunyan 87) were collected from Bozhou and Shuicheng at three yield levels in 2021. Samples of low-yield (90-110 kg/mu), medium-yield (120-140 kg/mu), and high-yield (150-170 kg/mu) flue-cured tobacco were collected from more than 500 mu of contiguous tobacco fields, with six biological replicates per treatment. During sampling, the middle leaf was identified as the tenth leaf counting from top to bottom. The base and tip of each leaf were removed, and the middle portion was retained. Subsequently, each leaf was divided into two halves along the main vein, wrapped in tin foil, and flash-frozen in liquid nitrogen. The samples were then freeze-dried and ground into a powder at low temperature. After passing through a 40-mesh sieve, the samples were stored at −80 °C in an ultralow-temperature refrigerator. In addition, quality control (QC) samples were obtained by thoroughly blending equal amounts of each sample.
Metabolite Extraction and Derivatization Leaf powder (50 mg) was added to a 10 mL centrifuge tube, followed by the addition of 40 µL of internal standard solution (hexanedioic acid at 10 mg/mL, phenylglucoside at 8.04 mg/mL, and L-norvaline at 4.9 mg/mL in methanol-water, 1:1, v/v). Subsequently, 3 mL of the extraction solution (methanol-chloroform-water, 2.5:1:1, v/v/v) was added. After vortexing for 1 min, ultrasonic extraction was performed at 4-10 °C for 40 min, after which the mixture was centrifuged at 3000-5000 rpm for 5 min. Three hundred microlitres of the supernatant was dried under an N2 flow at room temperature and then dried completely by adding three hundred microlitres of dichloromethane. Following this step, the derivatization reaction was carried out with a MEOX/pyridine solution (40 µL of 25 mg/mL) as the oximation agent (40 °C, 120 min), which protected carbonyl groups and reduced the ring reactions of sugars to minimize isomer formation. Trimethylsilylation was subsequently performed by adding BSTFA reagent containing 1% TMCS (81 °C, 90 min), after which 90 µL of acetonitrile was added (81 °C, 90 min) to improve the derivatization efficiency of the amino groups. The samples were then centrifuged at 10,000 rpm for 3 min, and the supernatant was subjected to GC-MS analysis. GC-MS Pseudotargeted Metabolomics GC-MS analysis was performed on an Agilent 7890A-5975C instrument (Palo Alto, CA, USA) equipped with a CTC PAL autoinjection system. Separation was achieved using an HP-5 MS capillary column (60 m × 250 µm × 0.25 µm film thickness). The injector port temperature was maintained at 280 °C, and a sample volume of 1 µL was injected through the autosampler at a split ratio of 1:10. The flow rate of the helium carrier gas was kept constant at 1.0 mL/min. A temperature gradient program was employed for the oven: starting at 60 °C for 2 min, the temperature was increased at 5 °C/min to 230 °C and held for 5 min, then further increased at 8 °C/min to 290 °C and held for 21.5 min, for a total run time of 70 min. The ion source and quadrupole temperatures were set to 230 °C and 150 °C, respectively, while the transfer line temperature was maintained at 280 °C. The mass spectrometer was operated in electron ionization (EI) mode at 70 eV. The full-scan acquisition mode was adopted for identification within the mass range of 45-600 m/z with a solvent delay time of 11.90 min. Pseudotargeted metabolomics incorporates an algorithm designed to choose ions for selected ion monitoring (SIM) from the identified metabolites. The SIM data were acquired based on the published literature [21], and AMDIS software version 2.73 (Automated Mass Spectral Deconvolution and Identification System) was used for the selection of characteristic ions. The detailed peak table is shown in Table S2. The metabolites in the QC sample were identified using standard mass spectrometry databases (NIST14 and Wiley08 libraries), the literature, and the linear retention index (LRI). Hexanedioic acid (10.00 mg/mL), phenyl beta-D-glucopyranoside (8.04 mg/mL), and L-norvaline (4.90 mg/mL) were used as ISs for quantification, and the correction factor for relative quantification was F = 1.
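With a correction factor of F = 1, this relative quantification reduces to scaling each analyte's peak area by the IS peak area and the known IS concentration. The sketch below illustrates the arithmetic; the peak areas and the choice of IS are hypothetical examples, not values from this study.

```python
# Sketch of internal-standard (IS) relative quantification as described above:
# relative amount = (analyte peak area / IS peak area) * IS concentration * F,
# with correction factor F = 1. Peak areas below are hypothetical examples.
IS_CONC_MG_PER_ML = {"hexanedioic acid": 10.00,
                     "phenyl beta-D-glucopyranoside": 8.04,
                     "L-norvaline": 4.90}
F = 1.0  # correction factor for relative quantification

def relative_amount(analyte_area, is_area, is_name):
    """Relative concentration of an analyte against a chosen IS."""
    return (analyte_area / is_area) * IS_CONC_MG_PER_ML[is_name] * F

# Hypothetical peak areas from one chromatogram:
print(relative_amount(analyte_area=2.4e6, is_area=8.0e6,
                      is_name="L-norvaline"))  # -> 1.47 (mg/mL equivalent)
```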
Statistical Analysis
Chemometric analysis included different multivariate data analysis methods, such as principal component analysis (PCA) and partial least-squares discriminant analysis (PLS-DA). Simca software 13.0 (Sartorius, Umeå, Sweden) was utilized to construct these models. Metabolic pathway analysis, heatmap analysis, and volcano map analysis were carried out using Metware Cloud (https://cloud.metware.cn/), accessed on 16 August 2023. To normalize the data, log transformation and Pareto scaling were performed. The screening of highly characteristic metabolites among the samples was conducted according to the standard of Cai et al. [22]. Chromatograms of the QC samples were generated using Origin 2021 software version SR1 (OriginLab Corp., Northampton, MA, USA).
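The stated normalization (log transformation followed by Pareto scaling) can be sketched as follows, assuming a samples-by-metabolites intensity matrix; the array shapes and names are illustrative, not from the paper:

```python
import numpy as np

def log_pareto(X):
    """Log-transform, then Pareto-scale each metabolite (column)."""
    X = np.log1p(X)                    # log transform; log1p guards zeros
    mean = X.mean(axis=0)
    std = X.std(axis=0, ddof=1)
    return (X - mean) / np.sqrt(std)   # Pareto: centre, divide by sqrt(SD)

# 36 samples x 124 metabolites, matching the dataset dimensions in the text
X = np.random.lognormal(mean=2.0, sigma=1.0, size=(36, 124))
X_scaled = log_pareto(X)
print(X_scaled.shape)  # (36, 124)
```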
The identified metabolites included saccharides (17, 13.7%), sugar acids (9, 7.3%), phosphate esters or phosphate compounds (9, 7.3%), sugar alcohols (8, 6.5%), dicarboxylic acids (7, 5.6%), polyhydroxy carboxylic acids (6, 4.8%), and phenolic acids (6, 4.8%). Short-chain fatty acids, long-chain fatty acids, polyamines, and saccharolactones contributed 3.2% each. The amino acid group comprised twenty-one proteinogenic amino acids as well as four nonproteinogenic amino acids or derivatives, namely gamma-aminobutyric acid, pyroglutamic acid, 5-hydroxytryptophan, and pipecolinic acid. The saccharide group consisted of thirteen monosaccharides, including hexoses, pentoses, tetrose, and triose, along with four disaccharides. The tobacco pseudotargeted metabolomics approach thus facilitated the detection of a wide range of metabolites from various chemical classes representing key metabolic pathways for tobacco metabolic profiling.

Interaction Effect of Geographical Location and Yield on Metabolites
PCA was employed to discriminate between the geographical locations of the tobacco leaves from Bozhou and Shuicheng (Figure 2A). The first two principal components (PCs) explained 50.1% of the total variance, with PC1 and PC2 explaining 31.0% and 19.1% of the variance, respectively. PC1 distinguished the geographical location, while PC2 distinguished the yield. The QC samples were tightly clustered in the centre of the score plot, indicating that the sample analysis results were precise. Based on the distinct separation, the thirty-six samples were categorized into two groups. Significant differences were observed between the Shuicheng and Bozhou regions along PC1. However, samples from the same region but with different yields could not be completely distinguished along PC2, such as medium versus low yields in Bozhou and medium versus high yields in Shuicheng. This observation was further supported by the conversion of the PCA data into the corresponding metabolic trajectories (Figure 2B), suggesting that the influence of geographical location on the metabolite levels may outweigh that of yield variation.
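A minimal sketch of the PCA step described above, using standard scikit-learn calls on a scaled matrix like X_scaled from the previous sketch; on the real data the reported variance fractions (31.0% and 19.1%) would appear in explained_variance_ratio_:

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
X_scaled = rng.normal(size=(36, 124))   # placeholder for the scaled data

pca = PCA(n_components=2)
scores = pca.fit_transform(X_scaled)    # (36, 2) score matrix for Figure 2A
print(pca.explained_variance_ratio_)    # ~[0.31, 0.19] on the real dataset
```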
Based on the loading factors of the PCA, the contributions of tobacco metabolites to geographical location and yield differentiation were analysed (Table 1). The metabolites that contributed significantly to discriminating the geographical location (absolute loading value > 0.12) included saccharides, sugar acids, sugar alcohols, and phosphorylated sugars, which indicated that regional factors mainly affected carbohydrate metabolism and phosphorylation. On the other hand, the metabolites that contributed significantly to discriminating the yield factors (absolute loading value > 0.12) were mainly nitrogenous metabolites, such as amino acids and polyamines, which showed that nitrogen metabolism was a key determinant for achieving the desired crop yields. In brief, geographical location had a greater influence than yield on the metabolic changes in tobacco leaves, and these critical metabolites play crucial roles in plant development and growth regulation.
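The loading-based screening described above (keeping metabolites whose absolute loading on a component exceeds 0.12) reads directly as code; the threshold comes from the text, everything else is illustrative:

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(1)
X_scaled = rng.normal(size=(36, 124))   # placeholder for the scaled data

pca = PCA(n_components=2).fit(X_scaled)
loadings = pca.components_.T            # (metabolites, 2): columns PC1, PC2

# PC1 separated region, PC2 separated yield; screen by |loading| > 0.12
region_contributors = np.where(np.abs(loadings[:, 0]) > 0.12)[0]
yield_contributors = np.where(np.abs(loadings[:, 1]) > 0.12)[0]
print(len(region_contributors), len(yield_contributors))
```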
Metabolic Profiling in Different Geographical Locations
To further visualize the differences in metabolites between the two geographical locations (flavour types), we applied PLS-DA for discrimination (Figure 3A). The score plot clearly demonstrated the distinct separation between the Shuicheng and Bozhou samples, with PC1 and PC2 accounting for 32.4% and 13.1% of the total variance, respectively. A volcano map with variable importance in projection (VIP) was drawn according to the screening standards of p value < 0.05, FC > 1.5, and VIP > 1.2; on this basis, 31 characteristic biomarker metabolites were found (Figure S1). These biomarkers included primary metabolites, such as maleic acid, threonic acid, proline, and phenylalanine, as well as secondary metabolites, such as caffeic acid and quinic acid. Subsequently, heatmap analysis was performed on these characteristic metabolites (Figure 3B), which revealed two distinct groups. Group A mainly consisted of the Bozhou samples with greater abundances of phenylpropane metabolism (salicylic acid, VIP = 1.72; p-hydroxybenzoic acid, VIP = 1.

Characteristic Metabolites and Their Metabolic Pathways at Different Yields
The characteristic metabolites in tobacco leaves with different yields were analysed. According to Figure S2, the three yields in the Shuicheng area were effectively distinguished, whereas distinguishing between the middle and low yields in Bozhou was challenging. To further investigate the differences among the three yields in Bozhou and Shuicheng, PLS-DA with VIP > 1.2, FC > 1.5, and p < 0.05 was used to screen the characteristic metabolites (Figures S3 and S4), and metabolite pathway enrichment analysis was subsequently conducted at the different yields (Figure 4). The main enrichment pathways of the Bozhou samples were phenylalanine metabolism; phenylalanine, tyrosine, and tryptophan biosynthesis; starch and sucrose metabolism; glycine, serine, and threonine metabolism; alanine, aspartate, and glutamate metabolism; and isoquinoline alkaloid biosynthesis (Figure 4A). The main enrichment pathways of the Shuicheng samples were phenylalanine metabolism; arginine biosynthesis; starch and sucrose metabolism; glycine, serine, and threonine metabolism; alanine, aspartate, and glutamate metabolism; and linoleic acid metabolism (Figure 4B). These findings suggest that the samples from both Shuicheng and Bozhou exhibited changes in six pathways, four of which were shared and mainly involved C/N metabolism.
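The three screening criteria used here (p < 0.05, FC > 1.5, VIP > 1.2) combine as a simple conjunction; a sketch, assuming per-metabolite arrays of p values, fold changes, and VIP scores computed upstream (e.g., VIP from the PLS-DA model):

```python
import numpy as np

def screen(p_values, fold_changes, vip_scores):
    """Keep metabolites meeting all three criteria from the text."""
    mask = (p_values < 0.05) & (fold_changes > 1.5) & (vip_scores > 1.2)
    return np.where(mask)[0]

rng = np.random.default_rng(2)
p = rng.uniform(0, 1, 124)            # illustrative inputs
fc = rng.lognormal(0, 0.5, 124)
vip = rng.uniform(0, 2.5, 124)
print(screen(p, fc, vip))             # indices of characteristic metabolites
```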
Further analysis of these metabolic pathways (Figures 5 and S5) revealed higher levels of intermediates related to glycolysis and sugar metabolism (e.g., glucose, sucrose, fructose, fructose-6-phosphate, and glucose-6-phosphate) in the high-yield tobacco samples than in the other samples. However, sugar alcohols exhibited a significant increase in both the middle-yield and low-yield tobacco samples in the two regions. The content of amino acid metabolism intermediates, such as serine, threonine, and valine, increased in the high-yield tobacco leaves. Moreover, the levels of phenylalanine, tryptophan, and shikimic acid were also elevated, indicating that phenylpropane metabolism was enhanced in the high-yield tobacco leaves of the two regions. High-yield tobacco leaves also showed an enhanced urea cycle in both regions, thereby increasing the content of polyamines and nicotine alkaloids.
Metabolite Identification and Method Evaluation
In general, methanol-chloroform-water is an effective extraction system for water-soluble and hydrophobic metabolites in plant matrices [23]. The accurate identification of 124 metabolites belonging to 22 chemical categories was achieved in tobacco leaves; these metabolites are significant contributors to the C/N metabolic cycle. In addition, reproducibility is a crucial aspect when evaluating the quality of an analytical method [25,26]. All metabolites were normalized using an internal standard for relative quantification. As indicated in Table S2, 97.6% and 87.9% of all metabolites had a relative standard deviation (RSD) below 20% for repeatability and reproducibility, respectively. The repeatability and reproducibility were considered acceptable and compare favourably with the values commonly found in plant metabolomics (ca. 25-35%) [27]. All the results indicated that tobacco pseudotargeted GC-MS analysis is a dependable approach for metabolic profiling.

Although the precision met the requirements for relative quantification, an appropriate internal standard for each chemical classification is recommended to further improve accuracy and precision; isotopic or homologous internal standards are the best choice for further research [22]. Furthermore, pseudotargeted analysis cannot detect metabolites that have not been identified in advance. Untargeted metabolomics is therefore a complementary approach for discovering crucial signals from unknown metabolites in tobacco.
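The RSD-based precision check reads directly as code; a sketch assuming a replicates-by-metabolites array of internal-standard-normalized responses for the QC sample (all values illustrative):

```python
import numpy as np

def rsd_percent(qc):
    """Per-metabolite relative standard deviation across QC injections."""
    return qc.std(axis=0, ddof=1) / qc.mean(axis=0) * 100.0

qc = np.random.lognormal(2.0, 0.1, size=(6, 124))  # 6 QC injections, 124 metabolites
rsd = rsd_percent(qc)
print(f"{(rsd < 20).mean():.1%} of metabolites below 20% RSD")
```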
Characteristic Metabolites of Different Geographical Locations and Their Effects on Flavour Type
Previous studies have shown that light-flavoured tobacco is characterized by freshness, floral notes, and acidity, while full-flavoured tobacco predominantly possesses a high aroma profile with a rich and pure fragrance [28,29]. Carbohydrates constitute the most significant precursors of aroma in tobacco, accounting for 40-50% of its weight [30,31]. These compounds generate flavour components and acidic substances in mainstream smoke that mitigate the harsh taste during smoking while enhancing the overall flavour characteristics and aroma perception [32]. Screening the characteristic metabolites of the two geographical locations showed that the abundances of saccharides and phosphorylated sugars in the Shuicheng samples were greater than those in the Bozhou samples. Notably, trehalose, fructose-6-phosphate, and glucose-6-phosphate were identified, the latter two of which are intermediate products of glycolysis (the oxidation of glucose to pyruvate). Additionally, proline and leucine were more abundant in the Shuicheng samples than in the Bozhou samples; proline contributes to freshness and floral attributes, while leucine significantly enhances acidic notes [33]. Therefore, these metabolic characteristics may lead to the formation of light-flavoured tobacco leaves. It is widely recognized that a decreased nitrogen nutrition level promotes the formation of a delicate aroma profile in flue-cured tobacco, whereas an increased nitrogen nutrition level enhances the expression of a strong and abundant aroma style [34]. The abundance of organic acids in the Bozhou samples was much greater than that in the Shuicheng samples. Organic acids play a crucial role in smoke equilibrium and tobacco pH regulation, ultimately influencing the aroma quality indirectly [35]. For instance, Bozhou has a higher concentration of oleic acid than Shuicheng, and high levels of unsaturated fatty acids can enhance the flavour of acidic wax and fat [34]. Moreover, the abundance of caffeic acid, an intermediate product of the phenylpropane metabolic pathway, was also greater in the Bozhou samples than in the Shuicheng samples. Phenylpropanoid biosynthesis in most plants initiates the conversion of phenylalanine to cinnamic acid, yielding diverse aromatic compounds and affecting the aroma of tobacco leaves [36]. Furthermore, a high concentration of ester compounds leads to a stronger irritant taste, and the L-arabonic acid-1,4-lactone in the Bozhou samples may be one of the reasons for their abundant aroma [37]. Therefore, these metabolic characteristics may underlie the sweet honey and light flavour types in Guizhou.

Characteristic Metabolites and Metabolic Pathways of Different Yields
The metabolic pathways involved in C metabolism, such as sugar metabolism, glycolysis, the TCA cycle, and shikimate-phenylpropanoid metabolism, play crucial roles in generating energy that plants can utilize for growth and development, while simultaneously improving plant resistance and providing the carbon skeletons necessary for various biosynthetic processes [38]. It is widely acknowledged that the production of carbohydrates in source organs and their utilization in sink organs are tightly coordinated processes that ultimately determine the yield [39].
Starch and sucrose metabolism comprised the main enrichment metabolic pathway, and there was an increase in the abundance of glucose, fructose, sucrose, fructose-6-phosphate, glucose-6-phosphate, and other saccharides in the high-yield tobacco samples from the two regions. Starch and sucrose metabolism comprises complex biochemical processes that rely on the synergistic action of multiple enzymes [40]. Phenylpropane metabolism intermediates are crucial for plant growth and the long-distance transport of water and nutrients while also aiding plant defence against abiotic and biotic stresses [41,42]. For instance, quinic acid plays a vital role as an antioxidant by protecting enzyme structures within plants [43]. Notably, low-yield tobacco leaves exhibited increases in caffeic acid, caffeoylquinic acid, and quinic acid levels, indicating a higher expression of the phenylpropane metabolic pathway. Under cultivation stress, this pathway can produce abundant antioxidants and protect tobacco plants from the stress effects. Conversely, high-yield tobacco showed a significant increase in phenylalanine but a decrease in phenylpropane metabolites, possibly due to inhibition of phenylalanine ammonia lyase activity [44].

The metabolic pathways involved in N metabolism, such as amino acid metabolism, polyamine metabolism, and the urea cycle, serve as crucial physiological mechanisms that regulate the synthesis and decomposition of nitrogen-containing compounds in plants [45]. Amino acids serve as precursors for numerous nitrogen-containing compounds [46]. The content of most amino acids in the Bozhou samples decreased from high yield to middle yield to low yield. Notably, the altered metabolic pathways included alanine, aspartate, and glutamate metabolism and glycine, serine, and threonine metabolism in the two regions. Glycine and serine, which are essential components of photorespiration, contribute to the provision of one-carbon (1-C) units that actively engage in diverse metabolic pathways, such as polyamine metabolism and nucleic acid metabolism [47]. Furthermore, high-yield tobacco in Bozhou significantly enhanced the urea cycle, leading to increased contents of polyamines and nicotine while altering the isoquinoline alkaloid biosynthesis pathway. However, the breeding objective of tobacco has always been to reduce the levels of nicotine and related alkaloids [48]. Furthermore, a previous study indicated that treatment of tobacco plants with polyamine biosynthesis inhibitors can reduce the polyamine content and ameliorate the phenotype [49]. Hence, polyamine and nicotine biosynthesis in tobacco involves complex interactions that affect the quality of tobacco leaves.

To summarize, the variation in plant metabolites is primarily influenced by geographical location and yield [50]. Xu et al. revealed, using NMR, that the metabolic differences of E. purpurea were related to geographical location (latitude and longitude) and environmental variables (climate and soil) [51], while Benmahieddine et al. used HPLC-DA to identify the metabolic characteristics of Pistacia atlantica Desf. with respect to gender, organ type (roots, buds, and fruits), geographical location, and stage of ripening [52]. The factors influencing metabolites are highly complex. Therefore, further research should consider the effects of additional environmental factors and different harvest times on metabolic characteristics and tobacco flavour types.
Conclusions
A total of 124 metabolites were identified in Guizhou tobacco leaves of different geographical locations and yields by GC-MS pseudotargeted metabolomics and were divided into 22 chemical categories. Multifactor analysis revealed that geographical location had a greater influence on the metabolites than the yield factors. A screening of the characteristic metabolites in tobacco leaves from the different regions revealed that the levels of intermediate products of sugar metabolism and glycolysis and of amino acids were greater in the Shuicheng samples (light flavour), while the contents of organic acids, sugar acids, and glycolactones involved in phenylpropane metabolism and the TCA cycle were greater in the Bozhou samples (sweet honey flavour). Metabolic pathway analysis revealed that glycolysis and sugar, amino acid, and alkaloid metabolism were maintained at higher levels in the high-yield samples, while higher expression of phenylpropane metabolism was maintained in the low-yield samples.

Figure 2. Interaction effect of geographical location and yield on metabolites. (A) PCA score plot; (B) metabolic trajectory diagram.
Figure 3. Analysis of metabolites in different geographical locations. (A) PLS-DA analysis. (B) Heatmap analysis of characteristic metabolites.
Figure 4. Metabolite pathway enrichment analysis for tobacco leaves with different yields. (A) Bozhou. (B) Shuicheng. The larger the circle and the darker the colour, the more significantly enriched the metabolites are in this pathway.
Figure 5. Metabolic pathway plot of the differentially abundant metabolites among the three yields in Bozhou. Red, yellow, and green indicate the relative concentrations of metabolites at low, intermediate, and high yield, respectively. △, *, and □ represent metabolites whose p values (nonparametric tests) were less than 0.05 for the low-yield vs. high-yield, low-yield vs. middle-yield, and middle-yield vs. high-yield comparisons, respectively.
Table 1. Metabolite contributions to geographical location and yield factors.
7,340.2
2024-03-22T00:00:00.000
[ "Environmental Science", "Chemistry", "Biology", "Agricultural and Food Sciences" ]
Ontological Approach for Effective Generation of Concept Based User Profiles to Personalize Search Results : Problem statement: Ontological user profile generation is a semantic approach for deriving richer concept-based user profiles, and it depends on the semantic relationships among concepts. This study focuses on ontology to derive concept-oriented user profiles based on user search queries and clicked documents. It proposes a topic ontology from which concept-based user profiles are derived more independently, making it possible to improve search engine processing efficiency. Approach: The process covers individual users' interests, the topical categories of those interests, and the relationships among the concepts. The proposed approach is based on topic ontology for concept-based user profile generation from search engine logs. A spreading activation algorithm is used to optimize the relevance of search engine results, and the topic ontology is constructed to identify user interests by assigning activation values and to explore the topical similarity of user preferences. Results: A spreading activation algorithm is proposed to update and maintain the interest scores. User interests may change over time, and this change is reflected in the user profiles; according to the profile changes, the search engine is personalized by assigning interest scores and weights to the topics. Conclusion: Experiments illustrate the efficacy of the proposed approach, and with the help of topic ontology, user preferences can be identified correctly. This improves the quality of search engine personalization by identifying users' precise needs.

INTRODUCTION
Web search engines do an excellent job when queries are understandable and exact. In general, however, user queries are short, ambiguous, and not well formed (Silverstein et al., 1999; Cronen-Townsend and Croft, 2002; Jansen et al., 2000). Ambiguous queries confuse the search engine and do not satisfy the specific needs of the user. Search engines should provide precise search results to the end user. When queries are issued, search engines return the same results to the same query irrespective of topical interest or context. Different users may submit queries that are short and ambiguous, and sometimes the same query is issued for different information needs and purposes; in such cases, the system cannot serve users' precise needs and responds only generically. Personalization of the search engine is not effective on some queries. The search engine responds with a list of pages ranked by relevance to the query, so search engines generate user profiles to identify users' actual needs. Effective user profile generation is an important task in customizing a search engine to return outputs related to the personal interests of a user (Shen et al., 2005; Dou et al., 2007). Search engine personalization is an active research area that deals with the automatic generation of user profiles from the query history and browsed documents. User profiles assist the search engine in disambiguating queries and retrieving relevant documents based on users' interests; an effective user profiling strategy therefore plays a key role in search engine personalization. To address this problem, an ontological approach is proposed to optimize the relevance of search engine results. This study builds user profiles by assigning interest scores to the topics of a topical ontology.
User interests may change over time, so the profiles must be maintained and updated. An improved recommendation system using Profile Aggregation based on Clustering of Transactions (iPACT) shows better prediction accuracy than the previous methods PACT and Hypergraph (Almurtadha et al., 2011). To maintain the interest scores of the user profile, a spreading activation algorithm is proposed to analyse ongoing browsing activities (Sieg et al., 2007a). Ontology affords a meaningful structure for relating user interests, a rich conceptualization of the topics of interest, and a way to admit newly interesting topics into the structure (Gauch et al., 2003). The access time, browsed pages, and mouse activities may indicate user interests, and the content of a document may contain the topics of the user's interest (Bhowmick et al., 2010). The topic ontology is utilized to calculate users' topic preferences (positive or negative) based on their queries and visited pages. To classify a user's recent topic preferences, the semantic similarity between the user's present query and similar query pages is exploited, and the similarity and interest of the query with respect to the topics is calculated from the viewed pages. The ranking function utilizes the topic preferences in order to rank the search results. All users might not have equal interests, but each has a small set of topic preferences in which they have shown interest, and these preferences yield major improvements in the quality of the search results. User interests may be identified by observing a user's surfing activities over a period of time (Stamou and Ntoulas, 2009). The user activities are matched with the topics present in the topic ontology and the relations among the topics. At first, the topic scores in a profile keep varying; however, the change in interest scores should diminish once sufficient information has been collected for profiling (Pretschner and Gauch, 1999). The depth of the ontology is commonly used to signify user interests for related search activities, and the interest score can be assigned by accumulating the weights of its topics. The increasing use of web search engines requires mechanisms to select the best matches based on the user's needs. A search session collects search queries over a period of time; the user profile is embodied by the user's search record in a search session and is constructed over that session to personalize search (Daoud et al., 2008).

Previous work: The existing profile-based personalized search approaches are not consistent when compared with click-based methods. To allow a large-scale assessment of personalized search, an evaluation framework was developed on query logs; this approach improved search precision on a few selected queries but harmed others and remained far from optimal search (Dou et al., 2007). Existing personalized search includes constructing models of the user context as ontological profiles by assigning interest scores to the concepts present in a domain ontology. To maintain the interest scores based on the user's ongoing activities, a spreading activation algorithm is used; since it is focused on implicit models for user profiles, the profiles have to be adjusted over time (Sieg et al., 2007b). Another existing strategy uses the user context to tailor search results by re-ranking the outputs returned from the search engine. The context model of the user is characterized as instances of a reference domain ontology in which the concepts are annotated with interest scores derived and updated entirely from the user's behaviour (Gauch et al., 2003).
In another existing approach, an ontology-oriented user model is proposed in the context of personalized information access. A static user profile specifies the user's interests in a focused way, while dynamic user profiling adds flexibility through a hybrid method. During browsing sessions, the dynamic user profile employs data sources such as the usage log and mouse operations, and these usage logs are used to score the concepts. A concept age monitor tracks concept usage in the user profile, and the scores of concepts not considered by the user are reduced (Bhowmick et al., 2010). A user profile can also be generated over time by exploring browsed pages to identify content and time; here, the size of a browsed page may be ignored when the interest in a page is inferred (Pretschner and Gauch, 1999). In another existing system, the search engine returns document contents based on a combination of keyword matching and concepts. Documents are categorized to establish the concepts to which they correspond; a document that contains a matching keyword but whose concept is judged irrelevant by semantics or content is discarded, so a minimal number of relevant documents is returned for each query, and many documents belonging to a topic are irrelevant (Gauch et al., 2004). Many user profiling approaches have been evaluated that make use of click-through data extracted from Web snippets to construct concept-based user profiles. Users' positive and negative preferences were captured, but the relationships among users and concepts were not exploited, and the concept-based user profiles were not integrated with search engine ranking (Leung and Lee, 2010). An existing ontology-based retrieval model exploited a complete domain ontology and knowledge base to support semantic search in document repositories. This method was in direct relation with the quantity and quality of information inside the knowledge base. The latest developments in automated ontology construction and document annotation are promising; however, the proposed annotation weighting method does not take advantage of document relevance fields, and difficulties arise when interoperation associations among different structures from dissimilar sources are concerned (Castells et al., 2007). An existing search process integrates users' interests to improve search results. User profiles are organized as a concept hierarchy, which allows the automatic creation of large structured user profiles; the length of a browsed page is ignored when the interest in a page is inferred with this strategy (Vallet et al., 2006). An approach was proposed for personalized search that involves building models of the user's context as ontological profiles by assigning derived interest scores to the topics present in a domain ontology; the stability of the user profiles was not evaluated (Sieg et al., 2007a). Most existing systems propose the spreading activation algorithm with domain and reference ontologies. This research addresses these problems by proposing a topic ontology with a spreading activation algorithm. By assessing the pages a user browses, the user profile is generated over time to identify the user's content interests and the time spent on them; when pages are repeatedly visited by the user, this embodies the user's interest in the subject. The objective is to adapt search results to a particular user based on the user's interests and topic preferences. A session is introduced to capture the user's browsing activities, and the profile of a user refers to the user's interests in a specific search period.
Spreading interest scores activates related topics and is continued within the same search period. This study explores how the user profiles achieve improvements in search engine performance.

MATERIALS AND METHODS
This study builds the user profile from the user's interested topics. A topic consists of various concepts, which form the next level of the ontology, and a concept may belong to one or more topics. The concepts are interrelated within the ontology by topic semantic relationships. More accurate information about a user's interests can be obtained from the user's surfing behaviour. Positive or negative weights on the concepts specify their interestingness or uninterestingness to the user. The spreading activation algorithm is applied to preserve and modify the record of the user's clicking and browsing based on ongoing surfing activities, and it updates the interest scores of the topics. The main functions of spreading activation are scoring user interests and finding negative and positive preferences; it also evaluates session-based user interests and topic similarity. Spreading activation is a procedure for retrieving and ranking related information by activating query items and propagating their activation along interrelated topics. Profiling can be done based on the search session and on changes in user interest. The user search history denotes the profile of the user in a period of search. The user profile is initialized through the topic ontology for the primary query of the session; new topics can also be identified and added to the profile according to the browsing session. Interest score weighting was chosen to provide weighted topics in the topic ontology.

Problem statements: It is difficult for users to discover the most appropriate information for their search query because of the growing number of internet users and web pages, and it takes time to find results for users' particular needs. To get relevant information, users turn to a public search engine and submit their query, but this often renders mostly irrelevant results. Users may be confused by the results, and the problem arises because their actual needs are not served in a well-formed structure. Previous methods are not accurate in capturing user interest, and profiling is not reliable. User interest may (or may not) vary from session to session; creating a single user profile and applying it to all users is not reliable, and users' actual needs will not be met. Many user profiling approaches consider only users' positive preferences and neglect the unclicked documents whenever a page of ranked results is returned. To overcome these problems, meet users' actual needs, and maintain the accuracy of user profiles over time, a topic ontology method is proposed with an advanced spreading activation algorithm. It constructs user profiles over time and per search session. Interest scores are propagated to the topics to identify the weighted topics, which can be updated whenever a new (or existing) keyword or query is issued. Negative preferences (i.e., topics the user is not interested in) can also be captured.

Proposed work: This research aims at advancing search engine personalization by proposing a topic ontology to classify web pages based on their content. This study constructs a topic ontology with hierarchical relationships among concepts and proposes an advanced spreading activation algorithm to calculate the interestingness and uninterestingness of topics for a particular user.
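A minimal sketch of the structures this section describes: a topic ontology held as a weighted relation graph, plus per-topic interest scores carrying positive (interesting) or negative (uninteresting) weights. The topic names reuse the paper's later example; all values are illustrative:

```python
# Topic ontology as a weighted graph plus interest scores per topic.
relations = {}   # topic -> {neighbour topic: relation weight}
interest = {}    # topic -> interest score (may be negative)

def add_relation(a, b, weight):
    """Record a symmetric weighted relation between two topics."""
    relations.setdefault(a, {})[b] = weight
    relations.setdefault(b, {})[a] = weight

add_relation("programming", "java language", 0.8)
add_relation("programming", "c++ language", 0.7)
add_relation("programming", "OOPS", 0.6)

interest["java language"] = 1.0   # clicked and dwelled on: positive
interest["c++ language"] = -1.0   # returned but never clicked: negative
```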
In the topic ontology, topics are structured hierarchically. More exact topics are present deeper in the hierarchy, and large numbers of pages are categorized into the topic ontology; this directory computes semantic associations among huge records of web pages and topics (Dou et al., 2007). The spreading activation algorithm is used together with the constructed topic ontology to update the topic interest scores in the user profiles. The ontology acts as a semantic network, and the interest scores are restructured based on the activation values. Spreading activation retrieves relevant topics in the topic ontology given primary concepts and their corresponding initial activation values (Bhowmick et al., 2010). This proposal uses a specific, advanced spreading activation for the purpose of preserving the interest scores inside the user profile. An activation value is assigned to the specific topics and to adjacent topics as well, which are activated through the collection of weighted relationships during propagation. Adjacent activated topics that are not present in the priority queue are inserted into the queue and then updated (Bhowmick et al., 2010). In this study, the concepts of a topic and their relations are monitored. The search engine ranks pages and the concepts in their links based on the user profile. The time between clicking activities is important, since the interest score of a page depends mainly on clicking: users tend to spend more time on interesting topics than on uninteresting ones. When a user finds an interesting topic and the page has links to related topics, the user may click on those links; if a topic interests the user, the link chain may go deeper. A session is introduced to capture the user's browsing behaviour during search, and the user's negative preferences are also captured to generate a more precise user profile.

Proposed algorithms: Nowadays, thousands of users search for numerous topics and can have different expectations, and user interests can also change over time. Web search engines need to identify and suit user expectations effectively. Therefore, search engines personalize the results to surface interesting topics based on the user's search session. The search engine generates a profile for each individual; if a profile is available for a user's query, it ranks the topics and returns the relevant pages (Dou et al., 2007). The construction of the topic ontology with the advanced spreading activation algorithm serves to (i) compute topic similarity for evolving user profiles and (ii) identify the user's accurate topic preferences by assigning interest scores, which includes (a) capturing the user's negative preferences and (b) constructing profiles over the session for search engine personalization.

Topic ontology construction: The topic ontology is a graph in which each node represents a topic. It contains the concepts, organized by semantic relationships, and a hierarchical relationship is maintained among the concepts within the topics. In this study, the topics of a user's interest are used to construct the user profiles, so a user profile consists of the topics' semantic relationships expressed through the ontological approach. The topic ontology is built from terms or keywords; terms constitute the smallest concepts. Topic relevance is calculated through the semantic similarity between ontological concepts, and the user profile is generated from the user's interested topics (i.e., the search intent). Similarity between concepts is represented by the extent to which they share information (Zhou et al., 2006a; 2006b).
The topic ontology is constructed in the following way. Define the keywords and their frequencies: the set of keywords is K = (k1, k2, ..., kn), and kf(d, k) denotes the frequency of a keyword k in a document d. Here, S is a pattern; let keywordset(S) = {k | (k, f) ∈ S} be the keyword set of S. Given a pattern S = {(k1, f1), (k2, f2), ..., (kn, fn)}, its normal form {(k1, w1), (k2, w2), ..., (kn, wn)} is given by Eq. (1):

wi = fi / Σ(j=1..n) fj, for all 1 ≤ i ≤ n (1)

The topic ontology is denoted by T. A relational weight determines the measure of association between two concepts. The hierarchy is formed by recognizing is-a relationships between the concepts, and the hierarchical structure presents an understanding of those relations: if keywordset(S1) ⊆ keywordset(S2), an "is-a" association exists between S1 and S2. The hierarchy of all keywords in K can thus be obtained. T is called a group of primitive objects; the concepts are constructed from the primitive objects, and the ontology contains primitive and compound classes, which are inherited by resultant classes (Zhou et al., 2006a; Li and Zhong, 2004). For keywords k1 and k2, the score function f through a relation r depends on an Association Score (AS) between the keywords and the relation weight. The association score of a keyword pair (k1, k2) is derived from the co-occurrence frequency of the keywords, as given in Eq. (2):

AS(k1, k2) = log(p(k1, k2) + 1) / (Nf(k1) · Nf(k2)) (2)

Here, p(k1, k2) represents the probability of the keyword pair (k1, k2), and Nf(k) is a normalization factor for keyword k (Stamou and Ntoulas, 2009). Associations initiate relations between topics. The most commonly used associations are binary; associations can also include any number of topics and are then said to be "n-ary". An association at the top of the ontology is characterized by an association group <support, β> from T such that β(S) = {(k1, w1), (k2, w2), ..., (kn, wn)}, where β(S) is S's normal form. The association group maps a pattern to a keyword set and provides keyword weights for the keywords in that set (Zhou et al., 2006a). Some patterns discovered from relevant documents correspond to groups of keyword-frequency pairs:

d1 = {(java language, 4), (programming, 6)}
d2 = {(c++ language, 5), (programming, 15)}
d3 = {(OOPS, 3), (programming, 7), (others, 10)}

Compound objects are obtained using the is-a relationships e1 and e2 from d1-d3, where d1 → e1, d2 → e1, and d3 → e2; e1 and e2 are the expanded patterns, and an arrow represents the "is-a" relationship. The user profile includes a hierarchical structure made of "is-a" links: here the C++ and Java languages belong to OOPS, and programming relates to computing (Li and Zhong, 2004).

Compute topics similarity for evolving user profiles: Figure 1 depicts the user profiling method based on the topic ontology. In general, for each topic, the topic ontology provides a way to classify relevant documents. Once the relevant documents are accessible, appropriate performance measures reflecting various aspects of the effectiveness of topical search can be determined. The Topic Similarity (TS) values specified by the topic ontology are used to calculate the similarity between queries and the estimated topics and to obtain the average similarity for those topics. The Semantic Similarity (SS) of topics can be calculated between the Set of Topics (SOT) and the Expected Set of Topics (ESOT) (Stamou and Ntoulas, 2009), where nl indicates the total number of topics measured.
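Eqs. (1) and (2) translate directly into code; a sketch using the d1 pattern from the example above, where the pair probability and normalization factors are illustrative inputs:

```python
import math

def normalize(pattern):
    """Eq. (1): map (keyword, frequency) pairs to (keyword, weight) pairs."""
    total = sum(f for _, f in pattern)
    return [(k, f / total) for k, f in pattern]

def association_score(p_pair, nf_k1, nf_k2):
    """Eq. (2): AS(k1, k2) = log(p(k1, k2) + 1) / (Nf(k1) * Nf(k2))."""
    return math.log(p_pair + 1) / (nf_k1 * nf_k2)

d1 = [("java language", 4), ("programming", 6)]
print(normalize(d1))                      # weights 0.4 and 0.6
print(association_score(0.02, 1.5, 2.0))  # illustrative inputs
```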
Using the cosine similarity measure, a keyword vector is calculated for document di for each topic Tj in the user profile. According to this measure, the similarity between pages in topics that belong to different top-level groups is zero even though the topics may be clearly relevant; thus, this measure is used to derive semantic relationships among the thousands of web pages stored in this topic ontology (Sieg et al., 2007b). Given a topic Tj, its similarity with the SOT is calculated by Eq. (4):

TopicScore(Tj) = cos(di, SOT) (4)

The topic list computes semantic associations among a huge number of topic pairs. The collection of relevant documents for a specified topic is categorized in order to determine whether a topical search is effective; the clicked pages are categorized into topics, and their semantic relationships are derived from the topical ontology.

Identify the users' accurate topic preference by assigning interest scores: The user interest is constructed for a particular query, and spreading activation updates the topic interest scores in the user profiles.

Fig. 1: Topic ontology representation

Initially, spreading activation takes the related topics, a primary group of topics, and their initial activation values. The topic ontology of the user profile thus acts as a semantic network, and the interest scores are updated according to the activation values. Spreading activation methods specify the particular relations between keywords or topics. The interest scores of the topics in which the user has shown interest are obtained by observing the user's browsing behaviour, and the weights of candidate topics are accumulated so that they can be brought to the top of the user profile representation (Sieg et al., 2007a). The algorithm starts from a primary group of topics in the topic ontology-based user profile, and an initial activation value is assigned to those topics. The key objective is to trigger other topics by following a collection of weighted relations during propagation, finally obtaining a collection of topics and their relevant activations. A given topic propagates its activation to its neighbours, and the weight of the relation between the source and the target determines the activation sent through the network. For each topic, the initial activation value is reset to zero in the user profile. Topics whose similarity score sim(di, Tj) is greater than zero are inserted into a priority queue kept in non-increasing order of topic activation values. The activation value of topic Tj is assigned as IScore(Tj) · sim(di, Tj), where IScore(Tj) is the topic's existing interest score. The topic with the highest activation value is removed from the priority queue, and the activation quantity spread to each neighbour is proportional to the weight of the relation. Activated neighbour topics not present in the priority queue are appended to the queue and recorded. The process continues until there are no more topics to process. The neighbours reached by spreading are considered related topics; the related topics that are activated are appended to the priority queue and ordered by their activation values (Sieg et al., 2007b; Haveliwala, 2003).

Spreading activation algorithm: Input: the topic ontology for the user profile, with interest scores IScore(Tj), and a collection of topics SOT = {T1, ..., Tn}. Output: the topic ontology for the user profile with modified activation values. Spreading activation is assigned to the input keywords, and activation is then sent from node to node throughout the network over a number of cycles. Web users are likely to submit the same queries several times.
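A runnable sketch of the priority-queue spreading activation just described, using plain-dict relation and interest structures like the earlier sketch; the decay factor and threshold are illustrative choices, not values given in the paper:

```python
import heapq

relations = {
    "programming": {"java language": 0.8, "c++ language": 0.7},
    "java language": {"programming": 0.8},
    "c++ language": {"programming": 0.7},
}
interest = {"java language": 1.0}

def spread(seeds, decay=0.5, threshold=0.05):
    """Propagate activation from seed topics along weighted relations."""
    activation = dict(seeds)
    heap = [(-a, t) for t, a in seeds.items()]  # max-heap via negated values
    heapq.heapify(heap)
    done = set()
    while heap:
        neg_a, topic = heapq.heappop(heap)
        if topic in done:
            continue                  # stale queue entry: already processed
        done.add(topic)
        for nbr, w in relations.get(topic, {}).items():
            out = -neg_a * w * decay  # share proportional to relation weight
            if nbr not in done and out > threshold:
                activation[nbr] = activation.get(nbr, 0.0) + out
                heapq.heappush(heap, (-activation[nbr], nbr))
    for t, a in activation.items():   # fold activation back into interest
        interest[t] = interest.get(t, 0.0) + a
    return activation

print(spread({"java language": 1.0}))  # activates programming, then c++
```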
In this study, the user's current query is matched with existing queries. The classification and hierarchy of pages with respect to their topics underlie a new user query when estimating the topic preference (Stamou and Ntoulas, 2009). The interest scores for topics are modified using spreading activation, and the activation spreads to a wider set of topics; conversely, the larger the related set of a topic, the more related pages it receives. The weight scores are calculated with respect to the various topics (Haveliwala, 2003). The activation weight of a word is a combination of the topic weights in the user queries and in the documents, respectively. The primary activation load propagates through a group of relations originating at the preliminary node; finally, every document node is activated using the topic weights of all the topics present in the document (Salton and Buckley, 1988). For instance, when a user searches for information, the topics of the user's interests and their frequencies can be verified. The search engine retrieves a document list obtained by keyword matching during the search process; the similarity of the relations of user interests can then be evaluated to obtain the documents with the set of related topics. In particular, each topic and relation in the topic ontology is given a particular value representing the user's interests (Jiang and Tan, 2006).

Capturing negative preferences: Negative preferences may include unclear or inconsistent topics in the topic ontology. The user's negative preferences can be captured by considering unclicked pages. If no interest score is assigned to a topic, or the score is negative, this is represented as a negative preference. Under this preference, pages that are returned by a search but not clicked and visited by the user indicate topics that may be uninteresting or unrelated to the user. Given a set of results for a user's query, if results ti and tk are clicked and result tj is not clicked, with rank ti < tj < tk, then the topics T(tj) of result tj are considered less relevant than the topics T(tk) and T(ti). Negative preferences are treated as irrelevant to the user. If the interest score is negative (i.e., IScore(Tj) = -1), no interest information is available, and if sim(di, Tj) < 0, there is no similarity between the topics; finally, the negative weight is added to the queue (Queue.Add(Tj)) (Leung and Lee, 2010).

Constructing the user profile over a search session: This study generates topic ontology-based user profiles from users' surfing behaviour, assigning interest scores to the topics within a particular session. Using the topic similarity measure, the session activity proceeds as follows. When a user issues a query to the search engine at time t, the search engine returns a top-ten ranked list. The user profile is constructed within the search period based on the user's interests, inferred from clicking and viewing, and it is also maintained within the same period. Once a new query is submitted, a possible session is identified for generating the topic ontology-based user profile. Ranking can be done within the same search period based on the highest interest scores together with topic similarity. Accumulating nodes and edges in the user profile allocates the topics in which the user is interested within the search session (Daoud et al., 2008). A session begins when the server receives a user query and ends when the user quits the website or a session timeout occurs. Within a session, information is collected from users by the web server, and the frequency of query occurrences in the session is calculated by assigning interest scores. A session is measured as the total time for the user to complete a set of transactions, and a group of user requests in the form of URLs is called a user session.
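The skipped-result rule above (a result left unclicked between two clicked results signals a negative topic) can be sketched as follows; function and variable names are illustrative:

```python
def negative_topics(ranked_topics, clicked_ranks):
    """Return topics of results skipped between the first and last click.

    ranked_topics: topic label per result, in rank order.
    clicked_ranks: set of 0-based ranks the user clicked.
    """
    negatives = []
    if clicked_ranks:
        first, last = min(clicked_ranks), max(clicked_ranks)
        for rank, topic in enumerate(ranked_topics):
            if first < rank < last and rank not in clicked_ranks:
                negatives.append(topic)  # skipped between two clicks
    return negatives

# result 1 ("coffee") skipped between clicks on results 0 and 2
print(negative_topics(["java language", "coffee", "c++ language"], {0, 2}))
```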
The browsing time is updated based on each user's information access. The request session of each user is transformed into HTTP requests connected within the web server (Speretta and Gauch, 2005). Generally, personalization is done by creating and maintaining sets of user interests, stored in profiles, to give better results. For effective personalization, these profiles differentiate between long-term and short-term user interests; the purpose of the user profiles is to indicate users' preferences rather than merely their interests. User search histories are often used to generate interest profiles. User profiles are characterized as topic-weighted hierarchies in which the topics are defined by the topic ontology, and the search results are categorized into the same topic structure based on the session. To compute the topical similarity between a document and the user's interests, the document profile is compared with the user profile (Sampath et al., 2004). According to the frequency of query occurrences, a Search Session (SS) is defined as in Eq. (5):

SS = {qf1, qf2, qf3, ..., qfn} (5)

RESULTS
In this study, midstream was developed to group all user information in order to evaluate the user profiling methods; midstream is intended to facilitate experimentation. The test queries were randomly chosen from 5 different topical classifications to avoid bias. Table 1 illustrates the topical classifications from which the test topics were selected. The methods developed in this study can be incorporated into any search engine to present personalized user profiles. The 100 test queries, which have ambiguous representations, are listed in Table 2. Human judges determined a typical group for each query, and the topics obtained from the above algorithms were evaluated to ensure their accuracy. To search for the answers to the 100 test topics, the 5 users were requested to use midstream. The top 50 results are returned to the users when a query is submitted to midstream; the users click on the outputs and mark those relevant to their queries. Statistics of the clicked data were collected, and the user profiles were exploited to group similar topics together according to the users' needs. Each topic's profile was described by the first 5 related documents retrieved, as listed in Table 3. It is observed that the improvement is much greater when the user profile is generated using the top-ranked documents returned by the search engine for the topic. This shows that the proposed topic-ontological user profile achieves efficient search engine personalization. A session-oriented assessment setup incorporating the search period as a series of subtopics produced for a particular topic is also defined. The user profiling was evaluated successfully, and the proposed approach is effective; the performance is reported as average search precision values.

Experimental evaluation: The topic preference pairs obtained from spreading activation, topic similarity, and the weights of the topics are evaluated, and the user profiling strategies using those methods over a session are then compared. Finally, the performance of the topic ontology with the spreading activation algorithm relies on the following: the effectiveness of search engine personalization in improving the quality of search engine results is evaluated; to evaluate the efficiency of search engine personalization, query and click sessions are recorded; and the user profile is then evaluated over the search session.
The user profiles are employed by the similarity method to group similar topics together according to the users' precise needs. User profiling methods that integrate negative topic weights produce rankings extremely close to the optimal points obtained. The best interest scores are compared to the reference topic scores using Eq. (6) and (7):

Precision = |T_related ∩ T_retrieved| / |T_retrieved| (6)

Recall = |T_related ∩ T_retrieved| / |T_related| (7)

Where:
t = the input topic
T_related = the collection of topics that exist in the topic ontology for t
T_retrieved = the collection of topics whose interest scores are generated by the spreading activation algorithm

Precision and recall are averaged to compare the effectiveness of the user profiles. Table 4 gives the average precisions of the topic preference pairs obtained using spreading activation, topic similarity, and topic weights.

Evaluating users' accurate topic preferences using spreading activation: Figure 2 illustrates the evaluation of the effectiveness of topic preferences under spreading activation through the assignment of interest scores. The interest scores of topics in the ontology-based user profile are updated whenever the user shows interest in a new web page, so the interest scores in a profile change continually. In the experiment, the interest score in a user profile was initialized to zero, the resulting changes in the interest score of the topic and of other topics were measured, and the final interest scores were recorded. The clicked documents were used as the profile set for the experimentation. When verifying the interest score of a user's topic, the user's interests from the topical group and the semantic relevance of topic preferences are used. When ranking pages, the method exploits the user's topical preferences, derived from the click record together with keywords, to recognize the likely topic of a query. The topic ontology is first used to identify the topics of the visited pages, and individual topic preferences are then measured. Compared with user profiles built on the reference ontology, the proposed user profiling depends heavily on the categorization to allocate a proper topical group to viewed pages. The topical hierarchy improved the effectiveness of the system in interpreting users' topical preferences and captured the users' topical interests. The topics and interests obtained through the topical ontology improved relative to search averaged over topics, with interest scores rising to 350 under spreading activation. The evaluation results show that the search engine delivers better personalization when the search results are ranked accordingly.

Evaluating the topics similarity for evolving user profiles: Figure 3 shows the topic similarity measure. When topic preferences are assigned higher interest scores, the topic is removed from the priority queue; as interest scores are updated, a topic can be appended back to the queue. The Association Score (AS) of a keyword pair (k1 and k2) is defined by the occurrence frequency of the keywords. In particular, the topic preferences of web pages are associated with mouse clicks on search results, and the search results are finally ranked according to the users' topical interests. Topic weights are updated after every page view rather than after the user session completes. The averages of the relevant topics in the user profiles are ordered by topic weight and by the number of web pages related to each topic. The similarity score between similar topics, plotted in the chart, is calculated from the topic preferences using the cosine similarity calculation. Figure 4 depicts the average topic weights; some topics may or may not have the same weight, and the weighted scores are calculated with respect to the various topics.
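The topic-similarity scores above are cosine similarities between topic-weight vectors, and the evaluation uses the set-based precision and recall of Eq. (6) and (7). The following is a minimal sketch of both computations; the dictionary-based vector representation and the function names are assumptions for illustration.

```python
import math

def cosine_similarity(weights_a, weights_b):
    """Cosine similarity between two topic-weight vectors (dicts: topic -> weight)."""
    shared = set(weights_a) & set(weights_b)
    dot = sum(weights_a[t] * weights_b[t] for t in shared)
    norm_a = math.sqrt(sum(w * w for w in weights_a.values()))
    norm_b = math.sqrt(sum(w * w for w in weights_b.values()))
    if norm_a == 0.0 or norm_b == 0.0:
        return 0.0
    return dot / (norm_a * norm_b)

def precision_recall(t_related, t_retrieved):
    """Set-based precision (Eq. 6) and recall (Eq. 7) over topic sets."""
    related, retrieved = set(t_related), set(t_retrieved)
    hits = len(related & retrieved)
    precision = hits / len(retrieved) if retrieved else 0.0
    recall = hits / len(related) if related else 0.0
    return precision, recall
```

Two profiles that weight the same topics similarly score near 1.0 under cosine_similarity, while disjoint profiles score 0; precision_recall then quantifies how well the retrieved interest scores cover the ontology's related topics.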
Highly weighted and lightly weighted topics represent, respectively, what the user finds interesting and uninteresting.

Evaluating the user profiles over the search session: The profile is constructed from the scores spread over the set of related topics and is maintained unchanged within a single search session. Session boundaries are recognized through a semantic-relationship measure between topics based on keyword relations, while newly interesting topics are still permitted to enter the profile. The session-based activation scores of the topics are tabulated in Table 5. For each user session, the lowest and highest numbers of requests were found. The graph (Fig. 5) shows that the majority of sessions contain a small number of user requests (85 or fewer), although in this study some sessions exceed 100 requests. This plays an important role in learning about users' browsing behaviours.

Comparing obtained preference pairs for positive, negative preferences and topics similarity: This experimental setup compares the preference pairs obtained for positive preferences, negative preferences, and topic similarity. These are used to measure the topic preference pairs from keyword occurrence frequencies, and the pairs measured by the different methods are evaluated to obtain the fraction of correct user preferences. The topic with the highest interest score in the resulting topic preference is then removed from the queue to avoid uncertainty. Figure 6 shows the precisions of the topic preferences, negative preferences, and topic similarity, averaged over 14 different users. Interest scores and negative preferences reach 9-11%, while topic similarity reaches 13%; the method is thus capable of finding more accurate negative preferences. Any change in the similarity values is propagated as an update, and with more accurate negative preferences a more dependable set of negative topics can be determined. Table 6 presents the topic preferences (i.e., similar and dissimilar topics) from the browsing data collected from 3 users, where -1 represents similar topics and 0 represents dissimilar topics.

CONCLUSION

Search engine personalization based on a topic ontology was presented, using concept-based user profiles that identify the user's search interests. The approach works on the user's visited pages, also captures the user's negative preferences, and can provide efficient search engine personalization. A major benefit of the proposed approach is that it calculates more appropriate, relevant topics for relating page contents to users' search profiles. A profile reflects the user's attention within a particular search session; profiles are modified over time, and the updates to the interest scores are guaranteed to be exact. The evaluation results reveal that the topic ontology with concept-based user profiles improves the accuracy of search results. The similarity of the topics is calculated using the cosine similarity function. The user profiles were evaluated using the different methods and compared with each other, and the experimental results show that the profiles capture both positive and negative preferences of users. Beyond improving the quality of search engine personalization, the negative preferences in the proposed system also help to separate related from unrelated topics.