6032005
pes2o/s2orc
v3-fos-license
What is the possible role of PSA doubling time (PSADT) and PSA velocity (PSAV) in the decision-making process to initiate salvage radiotherapy following radical prostatectomy in patients with prostate cancer? This article is an attempt to present a contemporary view on the role of the kinetics of PSA levels, as defined by PSA doubling time (PSADT) and PSA velocity (PSAV), in the decision-making process to initiate salvage radiotherapy in patients with prostate cancer after radical prostatectomy (RP). The dynamics of the rise in PSA levels may be an early endpoint parameter, preceding the diagnosis of distant metastasis or death due to prostate cancer based on a single PSA determination. Thus, it seems reasonable to include the kinetics of PSA levels, and not only single PSA determinations, in the decision-making algorithm. In a group of patients after RP, PSADT might be an early endpoint that could replace the cause-specific survival rate as a late endpoint. PSADT allows distinguishing subgroups of patients at high risk of distant metastases and death, which in turn may lead to a change in the further treatment strategy. Therefore, patients with a short PSA doubling time should constitute a subgroup in which hormonal therapy should be considered. To date, there is no consensus on the criteria for assessing the dynamics of PSA levels as determinants of treatment in case of recurrence following RP. However, a number of non-randomized clinical trials in patients after RP suggest it would be useful to include these parameters in the decision-making process. For instance, a relationship was found between increased PSA velocity (>2 ng/mL/year) before initiation of oncological treatment and an increased (12-fold) risk of death. A number of well-documented retrospective analyses show that PSADT is one of the most important parameters describing disease aggressiveness. It has to be stressed that a single determination of PSA levels is much less precise in describing the biological aggressiveness of prostate cancer than PSADT. Of course, the question regarding the need to include PSA kinetic parameters as crucial elements of patient management algorithms can be answered definitively only by randomized clinical trials. Introduction One possible option for radical prostate cancer treatment is the highly efficacious radical prostatectomy (RP). However, treatment failures still occur in a significant percentage of patients [1,2]. Therefore, methods to improve treatment results continue to be sought; one of these methods is adjuvant radiotherapy (RT) [3,4]. The initiation of RT in prostate cancer treatment is based on the assessment of a number of prognostic factors, such as pT, pN, or post-operative margin status, all included in the histopathology protocol following RP, and in addition on the determination of PSA levels [5]. In patients after RP at high risk of prostate cancer progression, the options are early RT initiated without signs of biochemical progression, delayed RT initiated upon detection of biochemical progression, or hormonal therapy in case of systemic progression [6,7]. When recurrence risk factors are present (positive post-operative margin, infiltration outside the prostatic capsule [pT3a], infiltration of the seminal vesicles [pT3b]), the preferred treatment method is early RT [8,9]. The use of salvage RT, reserved for cases of biochemical progression, is a less efficacious method [10,11]. 
However, one must assume that the number of patients in whom early RT was abandoned despite the presence of the disease progression risk factors defined in the pathology report could be significant. At the same time, it must be stressed that this group is highly heterogeneous as regards the biological aggressiveness of the neoplastic process. One subgroup consists of patients in whom tumor microdissemination, undetectable by available diagnostic methods, had occurred before the surgery. Another subgroup of patients after RP may consist of patients at very high risk of systemic progression and, at the same time, at very low risk of local progression. The last subgroup includes patients with biochemical progression after RP at very high risk of local progression and, at the same time, very low risk of systemic progression. Therefore, RT in this last subgroup (high risk of tumor cell presence in the surgical site) is likely to be associated with therapeutic benefit. However, in clinical practice it is very difficult to assign a patient to one of these groups, and therefore additional tools that would allow this to be done in the best possible way are being sought. A patient after radical prostatectomy: diagnostic dilemmas The goal of the surgical treatment is to remove the entire pool of tumor cells present in the prostate gland, in the seminal vesicles, 
and, less commonly, in lymph nodes. However, the local efficacy of RP is not always sufficient, leading to biochemical recurrence, preceding or accompanying a simultaneous local recurrence. Postoperative assessment of PSA levels is an early measure of RP efficacy, commonly used in clinical practice [12]. Unfortunately, a single determination of PSA levels does not allow defining the failure site (two sequential determinations are used to define biochemical failure). It must also be stressed that non-lesioned fragments of the prostate gland may be retained after the surgery in a group of patients. This may result in maintenance of the PSA level above the accepted cut-off values, which, when exceeded, suggest biochemical failure. In general, documentation of biochemical failure requires defining the failure "geography". Firstly, we should define whether the neoplastic process is limited only to the post-operative site, involves that site with accompanying distant metastases currently beyond the detection capacity of diagnostic methods, or reflects only a "chip" of non-lesioned prostate. This list shows that basing the therapeutic decision on the pathology report and a single determination of post-operative PSA levels suggestive of biochemical failure is still associated with a high risk of initiating suboptimal treatment. In patients after RP, RT is an established adjuvant treatment method, leading to a reduction of biochemical failure incidence by ca. 50% [13]. Unfortunately, most randomized clinical trials conducted to date did not bring evidence of a statistically significant effect of early RT on the improvement in overall survival rates [14][15][16][17][18]. An exception is the analysis of survival rates of patients in the SWOG study, presented at the ASCO conference in 2008 [19]. Despite an overwhelming number of publications suggesting that early RT is more efficacious than salvage RT, many urology centers hold to the belief that since there are no clinical trial results explicitly suggesting an increase in overall survival rates thanks to early RT, it should be used only in case of biochemical progression. Unfortunately, diagnostic tools helping to define the source of biochemical failure, and thus helpful in qualifying patients for RT, are imprecise due to their low sensitivity and specificity. The role of the simplest of these tools, i.e. the digital rectal examination (DRE), in the diagnosis of local recurrence in patients after RP is very limited; in the absence of biochemical recurrence this examination provides no useful information at all [20]. Also, the usefulness of imaging examinations in defining local recurrences is low, even in cases of significant increases in PSA levels exceeding 0.2 ng/mL. Even when biochemical recurrence has been diagnosed, the sensitivity and specificity of TRUS and, for determining dissemination or isolated local recurrence, of examinations such as bone scan, CT, or MRI with surface or endorectal coils are of limited value [21,22]. 
Hope for more sensitive and specific detection of micrometastases or minute local recurrences lies with molecular imaging methods, namely [11C]choline PET/CT [23,24]. Recent reports support its use, showing good correlation between imaging data and pathology [25]. Distinguishing the failure source (isolated local recurrence versus distant metastasis ± local recurrence) is important because it allows, on the one hand, initiation of efficacious treatment in the form of salvage RT and, on the other hand, avoidance of unnecessary RT in patients with distant metastasis and referral of such patients to clinical trials assessing novel systemic treatments. One of the parameters most commonly used in clinical practice, owing to its availability and low acquisition cost, is determination of the dynamics of PSA levels, expressed by PSA doubling time (PSADT) and PSA velocity (PSAV). Therefore, a number of articles have been published in recent years on the usefulness of these parameters in the therapeutic decision-making process in patients after RP. One of the most important research teams is the D'Amico team, which assessed the usefulness of measuring the kinetics of PSA levels prior to RP for the assessment of patients' outcomes after prostatectomy in a group of 1,095 prostate cancer patients [26]. The authors found that the 28% of patients in whom the PSA velocity (PSAV) exceeded 2 ng/mL/year had a 10-fold higher risk of death due to prostate cancer than the group of patients with PSAV <2 ng/mL/year. Interestingly, this risk was practically independent of other clinical and pathological parameters describing the prostate cancer. According to the authors, adjuvant RT in cases when PSAV exceeds 2 ng/mL/year brings little benefit due to the large risk of disease dissemination. Therefore, the authors consider it advisable to weigh initiation of systemic treatment in this group of patients (PSAV >2 ng/mL/year). The next study by D'Amico et al. was very important in terms of defining the role of prognostic factors in patients after RP and RT [27]. In this study, clinical parameters associated with the risk of death due to prostate cancer in case of biochemical recurrence after radical treatment were singled out. For this purpose, an analysis of 8,669 patients (5,918 patients after RP, 2,751 patients after radical RT) was conducted, with mean observation times of 7.1 years after RT and 6.9 years after RP. The results of the statistical analysis showed that PSADT <3 months (found in 12% of patients in the operative treatment group and in 20% of patients in the radiotherapy group) was an independent prognostic factor of the risk of death due to prostate cancer (HR = 19.6; 95% CI: 12.5-30.9). Therefore, the authors claim that documentation of PSADT <3 months indicates the advisability of considering initiation of systemic treatment. In addition, the authors stress the potential usefulness of PSADT as an early endpoint, which might replace the assessment of prostate cancer-specific survival rates in clinical trials. The use of this parameter might lead to a significant reduction in the waiting time for the results of clinical trials evaluating novel treatments. However, it must be highlighted that the most important premise stemming from this study is that patients with short PSADT should constitute a group in which hormonal therapy or participation in clinical trials evaluating novel treatments should be considered. 
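For orientation, the sketch below computes PSAV and PSADT from serial post-operative PSA measurements using their standard definitions (PSAV as the slope of a linear fit of PSA versus time; PSADT as ln(2) divided by the slope of a fit of ln(PSA) versus time). The function, the regression approach, and the example values are illustrative assumptions and are not taken from the cited studies.

```python
import math

def psa_kinetics(times_years, psa_ng_ml):
    """Estimate PSAV (ng/mL/year) and PSADT (months) from serial PSA values.

    PSAV is the slope of a least-squares fit of PSA vs. time;
    PSADT is ln(2) divided by the slope of a fit of ln(PSA) vs. time,
    converted from years to months.
    """
    n = len(times_years)
    if n < 2 or len(psa_ng_ml) != n:
        raise ValueError("need at least two paired PSA measurements")

    def slope(xs, ys):
        mx, my = sum(xs) / n, sum(ys) / n
        sxx = sum((x - mx) ** 2 for x in xs)
        sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        return sxy / sxx

    psav = slope(times_years, psa_ng_ml)                              # ng/mL/year
    log_slope = slope(times_years, [math.log(p) for p in psa_ng_ml])
    psadt_months = (math.log(2) / log_slope) * 12 if log_slope > 0 else float("inf")
    return psav, psadt_months

# Example: PSA rising from 0.2 to 0.8 ng/mL over 12 months (hypothetical values)
psav, psadt = psa_kinetics([0.0, 0.5, 1.0], [0.2, 0.4, 0.8])
print(f"PSAV = {psav:.2f} ng/mL/year, PSADT = {psadt:.1f} months")
```

With these example values the PSA doubles twice in one year, so the sketch returns a PSADT of about 6 months, i.e. a value that the studies discussed above would place in an intermediate-to-high risk range.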
This is especially important since the probable cause of treatment failure is associated with micrometastases, present even before the radical treatment (RP). The study by Zhou et al. also assessed the usefulness of PSADT as a prognostic factor in patients after RP and radical RT. Based on observation of 1,159 patients with prostate cancer (498 patients after RP, 661 patients after radical RT), a PSA doubling time shorter than 3 months was associated with a relative risk of death due to prostate cancer of 54.9 (16.7-180.0) in RP patients, and 12.8 (7.0-23.1) in radical RT patients [28]. Tollefson of the Mayo Clinic analyzed the treatment results of 1,064 patients after RP. For analytic purposes, the author differentiated three disease progression risk subgroups: a high risk subgroup, when PSADT was shorter than 12 months; a medium risk subgroup, when PSADT was between 1 and 10 years; and a low risk subgroup, when PSADT was longer than 10 years [29]. The relative risk of distant metastases was 21.7 (8.0-58.6) and 6.8 (2.3-19.8) in the high and low risk groups, respectively. The author suggests that patients from the high risk group should firstly be potential candidates for initiation of systemic therapies, while patients at medium risk (PSADT between 12 and 120 months) should be qualified for adjuvant RT, and patients at low risk should remain under observation. Freedland et al. analyzed the relationship between PSADT and the risk of death due to prostate cancer in a group of 5,096 patients after RP [3]. The statistical analysis performed by the authors allowed differentiating patients with a PSADT of less than 3 months, in whom the relative risk of death due to prostate cancer was 27. In the subgroup of patients with PSADT ranging from 3.0 to 8.9 months, the risk of death was 8.76 (3.74-20.50), while in the subgroup of patients with PSADT between 9.0 and 14.9 months, the risk of death was 2.44 (0.88-6.81). Thus, a question arises as to whether early hormonal therapy may improve the survival of patients in the high failure risk subgroup. Experience gathered to date suggests that such an effect is possible, but the lack of results of randomized clinical trials assessing this aspect of hormonal therapy does not allow routine recommendation of this treatment in clinical practice. Thus, future RCTs will probably evaluate all aspects of early HT in patients at high risk of prostate cancer progression determined on the basis of PSADT. PSADT and PSAV seem to be attractive parameters that might significantly improve the optimization of the treatment selection process in patients after RP. However, it must be kept in mind that calculation of these parameters requires a period of observation in patients in whom biochemical progression has been detected. On the other hand, it is a commonly held belief that early initiation of salvage RT, i.e. at the lowest possible PSA levels, is most efficient. Thus, based on the available clinical data it is impossible to assess how the length of the waiting period required to accumulate the PSA measurements needed for PSADT or PSAV calculation might negatively affect the results of salvage RT. It is possible that, in the future, determination of PSA dynamics based on PSA determinations in the range of 0 ng/mL to 0.2 ng/mL will allow early determination of the "geography" of the biochemical failure. Conclusions To sum up, it must be stated that PSADT is a very useful tool for defining subpopulations of patients after RP in case of biochemical failure. 
This endpoint may be used as a potential tool for proposing local or systemic therapy in a subgroup of patients at high risk of distant metastases. Such differentiation would allow abandoning systemic therapy in patients at low risk of systemic progression while proposing salvage RT as a highly efficacious treatment method. Based on the review of studies assessing the kinetics of PSA level changes, it can be stated that in patients with biochemical recurrence, PSADT may be an early treatment efficacy endpoint, which might potentially replace the assessment of cause-specific survival rates, especially in clinical trials [27,29,30]. The most important premise stemming from this bibliographical review is that patients with a short PSADT should constitute a group in which hormonal therapy should be considered, while patients with a long PSADT should be offered salvage RT.
2016-05-04T20:20:58.661Z
2011-06-02T00:00:00.000
{ "year": 2011, "sha1": "779acb7a6c6808ac0649cff1c96824efb6dab057", "oa_license": "CCBYNCND", "oa_url": "https://europepmc.org/articles/pmc3921716?pdf=render", "oa_status": "GREEN", "pdf_src": "PubMedCentral", "pdf_hash": "779acb7a6c6808ac0649cff1c96824efb6dab057", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
252889162
pes2o/s2orc
v3-fos-license
Thermal Lattice Boltzmann Flux Solver for Natural Convection of Nanofluid in a Square Enclosure In the present study, mathematical modeling was performed to simulate natural convection of a nanofluid in a square enclosure using the thermal lattice Boltzmann flux solver (TLBFS). Firstly, natural convection in a square enclosure, filled with pure fluid (air and water), was investigated to validate the accuracy and performance of the method. Then, influences of the Rayleigh number and nanoparticle volume fraction on streamlines, isotherms and average Nusselt number were studied. The numerical results illustrated that heat transfer was enhanced with the augmentation of Rayleigh number and nanoparticle volume fraction. There was a linear relationship between the average Nusselt number and solid volume fraction, and an exponential relationship between the average Nusselt number and Ra. In view of the Cartesian grid used by the immersed boundary method and lattice model, the immersed boundary method was chosen to treat the no-slip boundary condition of the flow field, and the Dirichlet boundary condition of the temperature field, to facilitate natural convection around a bluff body in a square enclosure. The presented numerical algorithm and code implementation were validated by means of numerical examples of natural convection between a concentric circular cylinder and a square enclosure at different aspect ratios. Numerical simulations were conducted for natural convection around a cylinder and a square in an enclosure. The results illustrated that nanoparticles enhance heat transfer at higher Rayleigh numbers, and the heat transfer of the inner cylinder is stronger than that of the square at the same perimeter. Introduction Natural convection has received widespread attention from many researchers because it is relevant to many engineering applications, such as heat exchangers, solar energy and nuclear reactors. Conventional fluids, such as water and ethylene glycol mixtures, are not effective heat transfer media, due to low thermal conductivity. Therefore, nanofluids have gained attention as an alternative and effective heat transfer medium, due to having higher thermal conductivities [1]. There are two main research approaches for studying nanofluids: experiments and numerical simulations. In view of experiments, Song et al. [2] measured the thermal performance of SiC nanofluid in a water pool boiling experiment, and investigated the enhancement of critical heat flux. Nikhah et al. [3] carried out an experimental investigation on the convective boiling of dilute CuO-water nanofluids in an upward flow inside a conventional heat exchanger. Alkasmoul et al. [4] investigated the turbulent flow of Al2O3-water, TiO2-water and CuO-water nanofluids in a heated, horizontal tube with a constant heat flux. The results showed that the efficiency of nanofluids in enhancing heat transfer was not high for turbulent flows. Qi et al. [5] carried out an experimental study on boiling heat transfer of an α-Al2O3-water nanofluid. More researchers have applied numerical methods to study the performance of nanofluids. Khanafer et al. [6] directly solved the macroscopic governing equations to investigate heat transfer enhancement in a two-dimensional enclosure utilizing nanofluids for various pertinent parameters, including Grashof numbers and volume fractions. 
The results indicated that heat transfer increased with the volumetric fraction of the copper nanoparticles in water at any given Grashof number. Fattahi et al. [7] carried out a study on water-based nanofluid, containing Al2O3 or Cu nanoparticles, in a square cavity for Rayleigh numbers 10^3-10^6 and solid volume fractions 0-0.05, by means of the lattice Boltzmann method. The results indicated that the average Nusselt number increased with increasing solid volume fraction and that the effect of solid volume fraction was stronger for Cu than for Al2O3. He et al. [8] applied the single-phase lattice model to simulate convection heat transfer utilizing Al2O3-water nanofluid in a square cavity. Qi et al. [9] applied the two-phase lattice Boltzmann model for natural convection of nanofluid. From the above analysis, the lattice Boltzmann method (LBM) has obtained remarkable achievements in simulating incompressible viscous laminar nanoflow. Saadat et al. [10] developed a compressible LB model on standard lattices to solve supersonic flows involving shock waves, based on the consistent D2Q9 LB model, and with the help of appropriate correction terms introduced into the kinetic equations to compensate for deviations in the hydrodynamic limit. Huang et al. [11] improved the lattice Boltzmann model with a self-tuning equation of state to simulate thermal flows beyond the Boussinesq and ideal-gas approximations. Hosseini et al. [12] derived the appropriate form of the correction term for the space- and time-discretized LB equations, through a Chapman-Enskog analysis for different orders of the equilibrium distribution function. As a mesoscopic approach, LBM can easily recover the macroscopic variables from the distribution functions, and the linear streaming and collision processes can effectively simulate the nonlinear convection and diffusion effects in the macroscopic state. With the development of lattice models in recent years, LBM can solve various flow problems successfully, including incompressible, compressible and thermal flows, by introducing a variety of applicable models. However, solutions for high-Mach-number flows and turbulence problems with complex shapes are limited, because the standard LBM is strictly restricted to a uniform Cartesian mesh due to the lattice uniformity. Recently, the idea of coupling the LBM and conventional methods (including the finite difference method and finite volume method) has been proposed for computational fluid dynamics. It effectively combines the merits of macroscopic and mesoscopic methods. The coupling algorithm can be divided into the whole-region coupling algorithm and the partition coupling algorithm. The whole-region coupling algorithm solves different variables with different numerical algorithms. Nie et al. [13] and Mezrhab et al. [14] used the LBM-FDM coupling method to solve natural convection problems, in which LBM solved the flow and FDM analyzed heat transfer. Chen et al. [15] used the LBM-FDM coupling method to solve the two-phase interface convection problem, in which LBM solved the velocity field and FDM solved the concentration field. Mishra et al. [16] used LBM-FVM to solve heat conduction and radiation problems. Sun and Zhang [17] used LBM-FVM for conduction and radiation in irregular geometry. The partition coupling algorithm divides the whole region into several sub-regions and realizes the coupling function through information transfer between the sub-regions. Luan et al. 
[18][19][20] simulated complex flows in porous media using LBM-FVM. Chen et al. [21][22][23] used LBM-FVM to study multiscale flow, multi-component mass transfer, proton conduction and electrochemical reaction processes. Li et al. [24,25] used LBM-FVM to study natural convection and the solid-liquid variation problem. Feng et al. [26] developed a thermal lattice Boltzmann model with a hybrid recursive regularization collision operator on standard lattices for simulation of subsonic and sonic compressible flows without shock by LBM-FVM. Essentially, the main advantage of the above two coupling methods is to improve the calculation efficiency of LBM and expand the applications of macroscopic computational fluid dynamics. A new coupling idea has been proposed in the past five years. This coupling method adopts the finite volume method to discretize the macroscopic governing equations and uses local lattice Boltzmann equation solutions to calculate the interface flux, taking the migration and collision processes into account. This method realizes the coupling of the macroscopic method and the mesoscopic model and is named the lattice Boltzmann flux solver (LBFS). Yang et al. [27,28] proposed an LBFS based on compressible models, which is suitable for calculating viscous and compressible multi-component flows. Shu et al. [29] and Wang et al. [30][31][32] developed the LBFS for incompressible viscous flow problems. This method integrates the advantages of the macroscopic method and the mesoscopic model, not only realizing a unified solution of inviscid and viscous fluxes, but also improving calculation efficiency without using a uniform grid in the whole calculation domain. Based on the above development, Wang et al. [33] developed the thermal lattice Boltzmann flux solver (TLBFS) and successfully used it to simulate the natural convection problem. Cao [34] proposed a variable-property-based lattice Boltzmann flux solver (VPLBFS) for thermal flows with partial or total variation in fluid properties in the low Mach number limit. In this paper, we built a mathematical model to simulate the natural convection of Al2O3/water nanofluid in a square enclosure using the thermal lattice Boltzmann flux solver (TLBFS), which is a coupling method that uses the finite volume method to discretize the macroscopic governing equations in space and reconstructs flux solutions at the interface between two adjacent cell centers by using the single-relaxation-time lattice Boltzmann model. The top motivating priority of this paper was to establish a simple and effective numerical calculation method to solve natural convection problems. Therefore, it was necessary to introduce a boundary treatment technique in the solver. Tong et al. [35] applied the multiblock lattice Boltzmann method with a fixed Eulerian mesh, and the fouling layer was represented by an immersed boundary with Lagrangian points. The shape change of the fouling layer could be carried out by deforming the immersed boundary, while keeping the mesh of the flow simulation unchanged. Suzuki et al. [36] simulated lift and thrust generation by a butterfly-like flapping wing-body model by means of immersed boundary lattice Boltzmann simulations. The immersed boundary method is an effective and simple method to treat solid surface boundary conditions, and a numerical method based on a non-body-fitted grid can avoid the abundant work involved in grid generation. 
Therefore, the immersed boundary method was applied to implement the no-slip boundary condition of the flow field and the Dirichlet boundary condition of the temperature field for natural convection around a bluff body in a square enclosure, with the purpose of effective treatment of surface boundaries. Natural convection problems were investigated at different Rayleigh numbers and nanoparticle volume fractions. Influences of the Rayleigh number and nanoparticle volume fraction on the streamlines, isotherms and average Nusselt number were studied. The Macroscopic Governing Equations For an incompressible thermal nanofluid, under single-phase and constant-property flow conditions, the macroscopic governing equations of natural convection in a two-dimensional enclosure can be written as a continuity equation, a momentum equation and an energy equation, where ρ, u, p and μ represent fluid density, velocity, pressure and dynamic viscosity, respectively; e stands for the internal energy, defined as e = DRT/2, where D is the dimension, R is the gas constant and T represents the temperature; χ is the thermal diffusivity. The subscript nf denotes the nanofluid. Natural convection heat transfer in nanofluids is studied in a two-dimensional enclosure. Nanoparticles are considered to be spherical, and frictional forces are neglected. The flow is assumed to be laminar with a single-phase homogeneous mixture. The buoyancy force always plays an essential role as an external force. Using the Boussinesq approximation, the force source term is defined in terms of g, the gravitational acceleration, β, the thermal expansion coefficient, and T_m, the average temperature. According to Chapman-Enskog analysis, relationships can be established between the fluxes and the distribution functions of the lattice Boltzmann model. Based on the thermal lattice Boltzmann flux solver (TLBFS), the governing Equations (1)-(3) can be rewritten accordingly. From the above process, the macroscopic flow variables and fluxes can be computed from the equilibrium and non-equilibrium distribution functions of the lattice model for the governing equations of the nanofluid. Equations (8) and (9) are used to solve the macroscopic flow variables, and the fluxes can be evaluated by the thermal lattice Boltzmann flux solver, which is introduced in detail in the next section. The force source term is added at the cell center during the calculation process. Thermal Lattice Boltzmann Flux Solver The governing Equations (5)-(7) are discretized by the finite volume method, where W = [ρ_nf, ρ_nf u, ρ_nf v, ρ_nf e]^T, and dV_i and dS_k are the volume of the ith control volume and the area of the kth interface. For the 2D case, the D2Q9 lattice velocity model [37] is used for the momentum and energy fluxes. The fluxes R_k at the cell interfaces are expressed accordingly. From Equations (13)-(15), it can be seen that the key step in solving the fluxes is to accurately evaluate the f_α^eq, f̂_α and ĝ_α terms. 
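As a hedged point of reference for the equations cited by number above, a standard single-phase Boussinesq formulation consistent with the symbols defined in this section (and not necessarily identical to the paper's exact Equations (1)-(3) and (5)-(7)) is

$$\nabla \cdot \mathbf{u} = 0, \qquad \frac{\partial (\rho_{nf}\mathbf{u})}{\partial t} + \nabla \cdot (\rho_{nf}\mathbf{u}\mathbf{u}) = -\nabla p + \mu_{nf}\nabla^{2}\mathbf{u} + \mathbf{F}, \qquad \frac{\partial (\rho_{nf} e)}{\partial t} + \nabla \cdot (\rho_{nf}\mathbf{u}\,e) = \nabla \cdot \left(\rho_{nf}\,\chi_{nf}\,\nabla e\right),$$

with the Boussinesq buoyancy source $\mathbf{F} = \rho_{nf}\, g\, \beta_{nf}\,(T - T_{m})\,\hat{\mathbf{j}}$, and, after finite-volume discretization over cell $i$ with interface fluxes $R_{k}$,

$$\frac{dW_{i}}{dt} = -\frac{1}{dV_{i}}\sum_{k} R_{k}\, dS_{k} + S_{i}, \qquad W = [\rho_{nf},\ \rho_{nf}u,\ \rho_{nf}v,\ \rho_{nf}e]^{T},$$

where $S_{i}$ collects the source terms added at the cell center.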
The simplified thermal lattice Boltzmann model with the BGK approximation can be written in terms of the equilibrium density distribution function and the equilibrium internal energy distribution function. Using a second-order Taylor series expansion, Equations (16) and (17) can be transformed so that, by the multi-scale Chapman-Enskog expansion, the non-equilibrium density and energy distribution functions can be expressed solely in terms of the equilibrium distribution functions and their temporal and spatial derivatives. From Figure 1, the flow properties at the eight vertices of the D2Q9 model can be evaluated by interpolation from the given flow properties at the cell centers of two adjacent control volumes. The values r_i, r_{i+1} and r are defined as the physical positions of the two cell centers and their interface, respectively. In the interpolation formulation, ψ stands for the flow properties, including ρ, u, v and e. The terms f_α^eq(r − e_α δ_t, t − δ_t) and g_α^eq(r − e_α δ_t, t − δ_t) can be obtained from the corresponding equilibrium density and energy distribution functions. Then, the flow properties at the cell interface can be written accordingly. Next, f_α^eq(r, t) and g_α^eq(r, t) can also be easily obtained from the distribution functions. After obtaining the equilibrium distribution functions, the fluxes can be evaluated according to Equation (13). Computational Sequence The complete numerical simulation procedure for each time step of the proposed method is summarized below. 1. According to the fluid properties of the nanofluid, determine the initial velocity and temperature fields; 2. Based on the grid size, identify a streaming time step at each interface and then the single relaxation parameters, including the dynamic viscosity and the thermal diffusivity; 3. Apply the D2Q9 model to compute the density and energy equilibrium distribution functions f_α^eq(r − e_α δ_t, t − δ_t) and g_α^eq(r − e_α δ_t, t − δ_t) around the middle point r of each interface; 4. Compute the macroscopic flow properties of the nanofluid at the cell interface and then compute f_α^eq(r, t) and g_α^eq(r, t) from the equilibrium distribution functions of the D2Q9 model; 5. Compute the f̂_α and ĝ_α terms, so that the fluxes at the cell interface can be solved by Equation (13); 6. Calculate the force source term and add this term to the fluxes; 7. Update the macroscopic flow variables at the cell centers from the discretized governing equations and advance to the next time step. Problem Description The computational domain and boundary conditions are shown in Figure 2. From this figure, it can be seen that the no-slip boundary condition was applied on all four walls. The adiabatic condition was set on the top and bottom walls, and temperatures of 1 and 0 were applied on the left and right walls, respectively. The non-dimensional parameters, the Prandtl number Pr and the Rayleigh number Ra, were applied to determine the dynamic similarity, where L = 1 is the characteristic length of the square cavity and V_c is the characteristic thermal velocity, which is constrained by the low Mach number limit. In the present simulations, V_c = 0.1 was set in order to ensure incompressible viscous flow. In the present study, Al2O3/water nanofluid was used. The thermophysical properties of the water and nanoparticles are listed in Table 1. The homogeneous model for the nanofluid was adopted. 
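A minimal Python sketch of the D2Q9 ingredients used when reconstructing interface quantities is given below; the internal-energy equilibrium g_α^eq is written in a simplified passive-scalar form, which may differ from the exact expression used in the paper, and all names and values are illustrative assumptions.

```python
import numpy as np

# D2Q9 lattice: discrete velocities e_a and weights w_a (lattice speed c = 1, c_s^2 = 1/3)
E = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
              [1, 1], [-1, 1], [-1, -1], [1, -1]], dtype=float)
W = np.array([4/9] + [1/9] * 4 + [1/36] * 4)
CS2 = 1.0 / 3.0

def f_eq(rho, u):
    """Equilibrium density distributions f_a^eq at one point (rho scalar, u 2-vector)."""
    eu = E @ u                      # e_a . u for each of the 9 directions
    usq = u @ u
    return W * rho * (1.0 + eu / CS2 + 0.5 * eu**2 / CS2**2 - 0.5 * usq / CS2)

def g_eq(rho, e_int, u):
    """Assumed passive-scalar-style equilibrium for the internal-energy distributions g_a^eq;
    the exact thermal equilibrium of the TLBFS may differ."""
    eu = E @ u
    usq = u @ u
    return W * rho * e_int * (1.0 + eu / CS2 + 0.5 * eu**2 / CS2**2 - 0.5 * usq / CS2)

# Taking moments recovers the macroscopic variables at an interface point:
f = f_eq(rho=1.0, u=np.array([0.05, 0.01]))
print(f.sum())        # ~ rho
print(E.T @ f)        # ~ rho * u
```

The zeroth and first moments of f_α^eq reproduce ρ_nf and ρ_nf u, which is exactly the property exploited in steps 3-5 of the computational sequence above when evaluating interface fluxes.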
Physical properties of the nanofluids, including density, specific heat and thermal expansion coefficient, were obtained using the classical formulas developed for conventional solid-liquid mixtures, where φ refers to the volume concentration of nanoparticles and the subscripts s and f denote the particle and the base fluid. The effective viscosity and thermal conductivity of the nanofluid strongly affect the heat transfer rate and flow characteristics of nanofluids. The effective viscosity was estimated by the experimental correlation for 47 nm Al2O3/water nanofluid by Angue Mintsa et al. [38], and the thermal conductivity was given by Gherasim et al. [39]. In the present simulations, convergence criteria were prescribed for the flow field and the temperature field, respectively. Natural Convection of Pure Fluid in a Square Enclosure To verify the accuracy and performance of the lattice Boltzmann flux solver based on the population model, classical natural convection in a square enclosure filled with air and water was studied at Ra = 10^3, 10^4, 10^5 and 10^6. Firstly, a grid independence study was conducted on five different uniform grids of 101 × 101, 151 × 151, 201 × 201, 251 × 251 and 301 × 301 for the natural convection problem at Ra = 10^6 and Pr = 0.7. As shown in Table 2, when the mesh size was 201 × 201, or even larger, the average Nusselt number did not change much, and the value was between the benchmark solutions of Davis [40] and Hortmann et al. [41]. When the mesh size was larger than 151 × 151, the maximum horizontal velocity on the vertical mid-plane, the maximum vertical velocity on the horizontal mid-plane and their locations were in agreement with the benchmark solutions of Davis [40]. The above results illustrated grid independence on the uniform grid of 201 × 201 for the case of Ra = 10^6. Table 2. Grid independence study on uniform grids for natural convection at Ra = 10^6. Based on the above results, the grid independence study was conducted on non-uniform grids using sizes smaller than 201 × 201. Table 3 shows the numerical results of six different non-uniform grids for natural convection at Ra = 10^6. From this table, the results were close to the data of the uniform grid of 201 × 201 when the non-uniform mesh was finer than 121 × 121. In order to ensure the accuracy and efficiency of the numerical simulations, the non-uniform grid of 141 × 141 was chosen to simulate natural convection in a square enclosure. Table 3. Grid independence study on non-uniform grids for natural convection at Ra = 10^6. The average Nusselt number results at different Rayleigh numbers are listed in Table 4, and it can be seen that the numerical simulation results were in good agreement with previous literature results at different Rayleigh numbers. This illustrated the accuracy of the present method for natural convection. Figure 3 shows the temperature distribution at horizontal midsections of the enclosure. For the enclosure filled with air, the results at Ra = 10^5 were compared with the numerical results of Khanafer et al. [6] and the experimental results of Krane and Jessee [43]. For the enclosure filled with water, the results were compared with the numerical results of Lai and Yang [1]. It was noted from the comparisons that the solutions were in excellent agreement. This illustrated that the method in this paper could capture the temperature field very well. 
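The classical mixture relations mentioned above can be sketched as follows. The water and Al2O3 property values are commonly used figures assumed for illustration rather than the values of the paper's Table 1, and the effective viscosity and thermal conductivity correlations of [38,39] are omitted because their coefficients are not quoted in this section.

```python
# Illustrative property values (assumed, not the paper's Table 1)
WATER = {"rho": 997.1, "cp": 4179.0, "beta": 2.1e-4}
AL2O3 = {"rho": 3970.0, "cp": 765.0, "beta": 0.85e-5}

def mixture_properties(phi, f=WATER, s=AL2O3):
    """Classical solid-liquid mixture formulas for nanofluid density, heat capacity
    and thermal expansion coefficient at volume fraction phi:
      rho_nf      = (1-phi)*rho_f + phi*rho_s
      (rho*cp)_nf = (1-phi)*(rho*cp)_f + phi*(rho*cp)_s
      (rho*b)_nf  = (1-phi)*(rho*b)_f + phi*(rho*b)_s
    """
    rho_nf = (1 - phi) * f["rho"] + phi * s["rho"]
    rho_cp_nf = (1 - phi) * f["rho"] * f["cp"] + phi * s["rho"] * s["cp"]
    rho_beta_nf = (1 - phi) * f["rho"] * f["beta"] + phi * s["rho"] * s["beta"]
    return {"rho": rho_nf, "cp": rho_cp_nf / rho_nf, "beta": rho_beta_nf / rho_nf}

print(mixture_properties(0.04))   # e.g. phi = 4%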
The streamlines and isotherms of air and water at various Rayleigh numbers are shown in Figures 4 and 5, respectively. It can be seen that the natural convection and heat transfer between the wall and the fluid were enhanced as Ra increased. For Ra ≤ 10^4, the flow was characterized by a single central vortex. For Ra > 10^4, the central vortex became more expanded and finally broke up into two vortices, so that temperature boundary layers were formed. The above phenomenon agreed well with previous studies. Natural Convection of Nanofluid in a Square Enclosure After validating the numerical method for natural convection in a square enclosure filled with pure fluid, natural convection in a square enclosure filled with Al2O3-water nanofluid with nanoparticle volume fractions φ = 1-4% at Ra = 10^3-10^6 was simulated to validate the present numerical algorithm. The computed average Nusselt numbers were compared with the numerical results of Lai and Yang [1] and are listed in Table 5. The comparison shows good agreement, with relative errors of less than 0.8%, which further illustrated that the present numerical method can simulate the natural convection of nanofluid at different Rayleigh numbers and nanoparticle volume fractions. In the present numerical simulations, the effect of nanoparticle suspensions (Al2O3-water) on flow and temperature characteristics for Ra = 10^3-10^6 and nanoparticle volume fractions φ = 0-10% was studied. The variation of the average Nusselt number against solid volume fraction for different Rayleigh numbers is shown in Figure 6a, and the variation of the average Nusselt number against Rayleigh number for different solid volume fractions is shown in Figure 6b. Numerical results indicated that the average Nusselt number increased with increasing Ra and φ. This illustrated that heat transfer was enhanced with the augmentation of nanofluid thermal conductivity, which indicated that the major mechanism of heat transfer in the flowing fluid was thermal dispersion. At the same Ra, the relationship between the average Nusselt number and solid volume fraction was almost linear. At the same solid volume fraction, the relationship between the average Nusselt number and Ra presented an exponential form. At higher Rayleigh numbers, a greater heat transfer rate could be obtained. Figures 7 and 8 show the isotherms and streamlines of the nanofluid (Al2O3-water) at Ra = 10^3-10^6 and φ = 0%, 5% and 10%, which show the effect of volume fraction and Ra on the flow field and temperature field very well. From Figure 7, it can be seen that heat transfer between the wall and the fluid was enhanced as Ra increased. As the volume fraction of nanoparticles increased, the isotherms changed slightly. That was because the mixture flow became more viscous due to the nanoparticles. The flow velocity was reduced and natural convection therefore weakened. However, the overall heat transfer in the computational domain was enhanced, which was attributed to the augmentation of nanofluid thermal conductivity. From Figure 8, it can be observed that the flow appeared as a central vortex for lower Ra. As Ra increased, the central vortex became more expanded and finally broke up into two vortices, so that temperature boundary layers were formed. For pure fluid, the vortex formed in the enclosure as a result of the buoyancy effect. 
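The reported linear dependence of the average Nusselt number on φ and the exponential (power-law) dependence on Ra can be quantified with two simple least-squares fits; the data points below are hypothetical placeholders, not the values plotted in Figure 6.

```python
import numpy as np

# Hypothetical average Nusselt numbers, used only to illustrate the fitting procedure
Ra = np.array([1e3, 1e4, 1e5, 1e6])
Nu_Ra = np.array([1.2, 2.4, 4.9, 9.4])       # at a fixed volume fraction
phi = np.array([0.0, 0.02, 0.05, 0.10])
Nu_phi = np.array([4.5, 4.8, 5.2, 5.9])      # at a fixed Ra

# Power-law dependence on Ra:  Nu ~ a * Ra^b   ->   log Nu = log a + b log Ra
b, log_a = np.polyfit(np.log(Ra), np.log(Nu_Ra), 1)
# Linear dependence on the solid volume fraction:  Nu ~ c0 + c1 * phi
c1, c0 = np.polyfit(phi, Nu_phi, 1)

print(f"Nu ≈ {np.exp(log_a):.2f} * Ra^{b:.2f};  Nu ≈ {c0:.2f} + {c1:.2f}*phi")
```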
By increasing the volume fraction of nanoparticles, the intensity of the streamlines increased, due to the high energy transport through the flow as a result of the irregular motion of the ultra-fine particles. Problem Description The boundary-condition-enforced immersed boundary method was chosen for the treatment of the solid boundary conditions. Based on the immersed boundary method and the thermal lattice Boltzmann flux solver (IB-TLBFS), the macroscopic governing equations can be rewritten with a force source term f_b added to the momentum equation and a heat source term q_b added to the energy equation, both generated by the immersed boundary. To solve the governing equations, the calculation process is divided into two steps: the first step predicts the state variables without taking account of the boundary, and the second step corrects the velocity and temperature by the immersed boundary method. In this work, the implicit velocity correction scheme proposed by Wang et al. [44] was applied to satisfy the no-slip boundary condition. The implicit heat source scheme proposed by Ren et al. [45] was applied for the Dirichlet boundary conditions of the temperature field. Natural convection of a heated bluff body in a square enclosure was studied. The physical models, computational domain and boundary conditions are presented in Figure 9. No-slip and isothermal boundary conditions were applied on all boundaries. The flow was assumed to be laminar and driven by the temperature difference. Numerical investigations were carried out on two types of bluff bodies, a circular cylinder and a square. The four side walls of the outer square enclosure were cooled isothermally at T_C, and the side length was L. The wall of the inner bluff body was heated isothermally at T_H, and D and a represent the diameter of the circular cylinder and the side length of the square, respectively. For a fixed Rayleigh number, the numerical simulation cases were designed to have a fixed perimeter for the different bluff bodies, and the influences of geometry on the heat transfer are discussed in detail. Natural Convection in the Annulus between Concentric Circular Cylinder and Square Enclosure After validating the numerical algorithm of the thermal lattice Boltzmann flux solver, natural convection in the annulus between a concentric circular cylinder and a square enclosure at Ra = 10^4, 10^5 and 10^6 was simulated to validate the immersed boundary method and code implementation. Numerical simulations were conducted for three different aspect ratios (Ar = 1.67, 2.5 and 5.0). The average Nusselt number was also computed and compared with reference data in the literature. The computed average Nusselt numbers are compared in Table 6 with those of Ren et al. [45], Shu et al. [46] and Moukalled et al. [47]. From this table, it can be seen that the present results of the method combining IBM and TLBFS agreed very well with the reference data. Besides this, the results revealed that the average Nusselt number greatly depended on the Rayleigh number and aspect ratio. Due to buoyancy-induced convection, the average Nusselt number increased with increasing Ra, while it decreased with increasing Ar, due to the effect of the annulus gap space. The streamlines and isotherms in the annulus at various Rayleigh numbers and aspect ratios are shown in Figure 10. Conduction dominated the flow field, and a relatively weak convective flow could be observed in the annulus at the lower Ra. 
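To illustrate the idea of correcting the predicted field at the immersed boundary, the sketch below implements a simplified explicit, direct-forcing style velocity correction at Lagrangian markers on a circular cylinder; it is only a rough stand-in for the implicit velocity correction of Wang et al. [44] and the implicit heat source scheme of Ren et al. [45], and the grid, marker layout, and function names are assumptions for illustration.

```python
import numpy as np

def delta_2pt(r):
    """2-point discrete delta (hat function) used for interpolation and spreading."""
    r = np.abs(r)
    return np.where(r < 1.0, 1.0 - r, 0.0)

def ib_velocity_correction(u, v, X, Y, h, xl, yl, u_wall=0.0, v_wall=0.0):
    """Explicit direct-forcing style correction of the predicted velocity (u, v) on a
    Cartesian grid (node coordinates X, Y, spacing h) so that the Lagrangian markers
    (xl, yl) move toward the prescribed wall velocity. Simplified sketch only."""
    du, dv = np.zeros_like(u), np.zeros_like(v)
    for xb, yb in zip(xl, yl):
        # interpolation/spreading weights of this marker on the grid
        w = delta_2pt((X - xb) / h) * delta_2pt((Y - yb) / h)
        w_sum = w.sum()
        if w_sum == 0.0:
            continue
        ub = (w * u).sum() / w_sum          # predicted velocity at the marker
        vb = (w * v).sum() / w_sum
        du += (u_wall - ub) * w / w_sum     # spread the required correction back
        dv += (v_wall - vb) * w / w_sum
    return u + du, v + dv

# Markers on a circular cylinder of radius 0.2 centered in a unit cavity (assumed setup)
n = 64
theta = np.linspace(0.0, 2.0 * np.pi, 120, endpoint=False)
xl, yl = 0.5 + 0.2 * np.cos(theta), 0.5 + 0.2 * np.sin(theta)
x = np.linspace(0.0, 1.0, n)
X, Y = np.meshgrid(x, x, indexing="ij")
u0 = np.full((n, n), 0.05)                  # some predicted velocity field
u1, v1 = ib_velocity_correction(u0, np.zeros((n, n)), X, Y, x[1] - x[0], xl, yl)
```

The same interpolate-then-spread pattern, applied to temperature with a Dirichlet wall value instead of a wall velocity, conveys how the heat source term q_b enforces the thermal boundary condition.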
As the Rayleigh number increased, the strength of the convective flow grew and the center of the recirculation eddy changed its position. When Ra = 10^6, a relatively stronger convective flow dominated the flow field and a higher temperature gradient could be observed. In contrast, stronger convective flow and higher temperature gradients could be observed in the case of lower values of Ar. Natural Convection of Nanofluid between Bluff Body and Square Enclosure In the present study, numerical investigations of natural convection between a heated bluff body and a square enclosure were conducted for nanoparticle volume fractions of φ = 0%, 2% and 4% and Rayleigh numbers of Ra = 10^4, 10^5 and 10^6. The averaged Nusselt numbers are listed in Table 7. The numerical simulation results indicated that the average Nusselt number increased with increasing Ra and φ, as in the natural convection in a square enclosure. By comparison, the averaged Nusselt number for natural convection around a circular cylinder in an enclosure was greater than that for the square under the same calculation conditions. This illustrated that a smooth geometrical shape was beneficial to heat transfer. Isotherms for natural convection around the circular cylinder and the square were examined at different Rayleigh numbers (Ra = 10^4-10^6) and values of nanoparticle volume fraction (φ = 0 and 0.04). An overview of these isotherms indicated that the thermal fields strongly depended on the Rayleigh number. When Ra = 10^5 or lower, the isotherms at φ = 0 were almost identical to those at φ = 0.04, which illustrated that the nanoparticle volume fraction played a smaller role in heat transfer and flow pattern. When Ra = 10^6, there were significant differences between the isotherms at φ = 0 and φ = 0.04, which illustrated that the nanoparticle volume fraction played a role in heat transfer and flow pattern at high Ra. The thickness of the thermal boundary layer decreased as the volume fraction increased, due to the increase in conduction heat transfer with increasing nanoparticle volume fraction. Figure 13 shows the streamlines for natural convection around a circular cylinder and a square at a nanoparticle volume fraction of φ = 0.04 and Ra = 10^6. From Table 7, it can be seen that a better heat transfer effect was obtained with the cylinder than with the square at the same perimeter. That was because the velocity and temperature gradients around the sharp corners of the square changed dramatically, which hindered heat transfer. 
Then, natural convection around a bluff body in a square enclosure was studied by a method combining the TLBFS and the immersed boundary method. Natural convection problems in the annulus between a concentric circular cylinder and a square enclosure without nanofluid were simulated, which validated the feasibility of the numerical algorithm and the code implementation. Numerical investigations of natural convection between a heated bluff body (cylinder and square) and a square enclosure were conducted for different nanoparticle volume fractions and Rayleigh numbers. The numerical results illustrated that the heat transfer effect increased with increasing Ra and φ. At lower Ra, the heat transfer enhancement from the higher thermal conductivity of the nanofluid was counteracted by the more viscous flow. Nevertheless, nanoparticles played a greater role in enhancing natural convection at higher Ra. The above results indicate that the TLBFS is a promising method for future studies of nanofluid heat transfer.
2022-10-14T15:34:59.868Z
2022-10-01T00:00:00.000
{ "year": 2022, "sha1": "8d8732ab67d6c05c6824b1d34ac0ee7eed75cabd", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/1099-4300/24/10/1448/pdf?version=1665485192", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "689ca3f8040d2fcc737de82407defb42ae0dfb75", "s2fieldsofstudy": [ "Engineering", "Materials Science", "Physics" ], "extfieldsofstudy": [ "Medicine", "Computer Science" ] }
53144728
pes2o/s2orc
v3-fos-license
Genetic Variation between Dengue Virus Type 4 Strains Impacts Human Antibody Binding and Neutralization Summary There are four distinct DENV serotypes, and within DENV4, there are five distinct genotypes. The impact of genotypic diversity is not known, nor is it clear whether infection with one DENV4 genotype results in protective immunity against the other genotypes. To measure the impact of DENV4 genetic diversity, we generated an isogenic panel of viruses containing the envelope protein from the different genotypes. We characterized many properties of these viruses and find that a small number of amino acid changes within the envelope have disproportionate impacts on virus biology. Additionally, we observe large differences in the ability of DENV4 antibodies, immune sera, and vaccine sera to neutralize the panel, suggesting that DENV4 immunity might not be equally protective against all DENV4s. Our results support the monitoring of changing or emerging DENV genotypes and their role in escaping pre-existing neutralizing antibodies in people who have been vaccinated or exposed to natural DENV4 infections. In Brief Gallichotte et al. show that subtle genetic variation within the envelope protein across DENV4 genotype viruses can have disproportionately large impacts on many aspects of virus biology. Additionally, genotype viruses are differentially bound and neutralized by DENV antibodies, suggesting that DENV4 immunity may not be equally protective against all DENV4 viruses. INTRODUCTION Dengue virus (DENV) is a single-stranded positive-sense RNA virus. It is estimated that over one-third of the world's population is at risk for DENV infection, resulting in almost 400 million infections annually (Bhatt et al., 2013). Infection with DENV can result in a range of symptoms, from subclinical or mild disease, to severe DENV hemorrhagic disease and shock syndrome (Halstead, 2015; Katzelnick et al., 2016). There are four genetically and antigenically distinct DENV serotypes (DENV1-DENV4), which co-circulate around the world (Weaver and Vasilakis, 2009; Calisher et al., 1989; Holmes and Twiddy, 2003). Infection with one serotype is thought to provide long-term protection against subsequent infection with the homologous serotype; however, individuals are at risk for infection with the remaining three serotypes (Coloma and Harris, 2015). 
However, there are rare instances of reinfection with the homologous serotype (Forshey et al., 2016; Waggoner et al., 2016), suggesting that homotypic immunity may fail to prevent infection under some conditions (Katzelnick et al., 2015). The four DENV serotypes share approximately 80% homology at the amino acid level across the entire coding region of the genome (Fleith et al., 2016). The envelope glycoprotein is roughly 70% conserved across DENV1-DENV4, containing fully conserved regions with no variation (e.g., the fusion loop), and other regions containing highly divergent sequences (Rey et al., 2018). The molecular and evolutionary drivers of variation between and within serotypes remain uncertain (Bennett et al., 2010; Holmes and Twiddy, 2003). As determined using phylogenetic analyses, within each serotype there are multiple genetically distinct genotypes, which are more closely related to each other than they are to the other serotypes (Weaver and Vasilakis, 2009). DENV4 was first reported in the Philippines and Thailand in 1953, has since spread worldwide, and currently co-circulates with DENV1-DENV3 (Messina et al., 2014). Within DENV4, there are five distinct genotypes (I, II, III, IV, and V), with genotype II being further divided into IIa and IIb (Figure 1) (Chen and Han, 2016). Genotypes I and II currently circulate in human populations throughout the world (Cao-Lormeau et al., 2011; Dash et al., 2011; Fares et al., 2015; Klungthong et al., 2004). Conversely, genotype III, IV, and V infections are relatively rare. Genotype III has been detected sporadically in Asia between 1997 and 2015, and genotype V was primarily detected in India in the 1960s, but has been detected as recently as 2009 (Klungthong et al., 2004; Zhao et al., 2010; Shihada et al., 2017). Genotype IV is sylvatic, with only three known sequences (Durbin et al., 2013; Rossi et al., 2012), and has not yet been shown to spill over into humans, although rare cases of transient spillover have been documented for DENV1-DENV3 (Teoh et al., 2010; Vasilakis et al., 2008b). In this manuscript, we used reverse genetics to generate a panel of recombinant DENV4 viruses that contain an isogenic backbone and differ only by the genotype sequence of the E protein. We used this panel of viruses to evaluate biological and virological properties associated with the E protein, including its impact on neutralization using a well-characterized panel of human monoclonal antibodies, convalescent DENV4 sera, and vaccine sera from human volunteers. Our data reveal clear and significant antigenic differences among the DENV4 genotypes, which is critical for understanding immunity after natural DENV infection and for evaluating vaccine responses. Design of DENV4 Isogenic Envelope Panel Phylogenetic analyses of DENV4 identify six groups designated as genotypes I, IIa, IIb, III, IV (sylvatic), and V (Figure 1). As different isolates and genotypes of DENVs demonstrate variable growth rates and foci morphology in cell culture, hampering comparative studies of E protein variation, we used reverse genetics to construct a panel of recombinant DENV4 viruses. Using our previously described DENV4 molecular clone (genotype IIb) (Gallichotte et al., 2015), we replaced the wild-type (WT) envelope sequence with that from each of the other genotypes (Table S1; Figure 2A). All other structural and non-structural proteins were derived from WT DENV4, resulting in an isogenic panel of viruses that only differ in the E gene sequence (Figure 2A). 
Sequence analyses across the DENV4 genotype viruses reveal significant amino acid variation in EDIII as well as in residues adjoining the hinge region between EDI and EDII (Figures 2B, 2C, and S1). When looking at the representative strain for each genotype, some amino acids differ in only one virus (e.g., position 132), whereas at other sites (e.g., position 351) the residues are variable across multiple genotypes (Figures 2B and 2C). (Figure 1 legend: DENV4 envelope protein sequences were aligned, and the tree was constructed using the neighbor-joining method with 100 replicates based on the multiple sequence alignment; numbers in parentheses following virus names indicate the number of sequences represented at that tree position.)

DENV4 Viruses Differ in Growth Kinetics and Foci Morphology
To recover recombinant viruses, full-length cDNAs were assembled as previously described (Gallichotte et al., 2015). Viruses were isolated by electroporating full-length infectious viral RNA into C6/36 cells and passaging cell-culture supernatant once to produce infectious stocks. When C6/36 insect cells were infected in a multi-step viral growth curve (MOI of 0.01), all viruses replicated with similar kinetics and achieved similar peak titers of about 10^7 ffu/mL after 4 days, with genotype IIa having slightly lower titers at earlier time points (Figure 3A). Growth curves performed at a higher MOI in C6/36 cells showed similar growth kinetics across the panel, although viruses reached peak titers by day 3 post-infection (Figure 3A). The recombinant DENVs displayed more heterogeneous growth kinetics on Vero cells following both low and high MOI infections (Figure 3B). Genotype IIb viruses replicated most efficiently, and genotype V was significantly attenuated in growth, with peak titers 3 logs lower than those of the other genotypes (Figure 3B). While genotype V was highly attenuated in Vero cells, it was the only virus to cause complete syncytia in C6/36 cells (Figure S2). Although speculative, it is possible that this reflects a virus adaptation for increased growth and spread in insect cells and insects. Syncytia formation has been seen with other strains of DENV (Pierro et al., 2006), but we did not observe syncytia with any of the other DENV4 genotype viruses. In addition to virus growth, we also compared viral foci morphology (Figures 3C-3F). C6/36 foci were similar across the entire panel (Figures 3C and 3D). Slightly more variation in foci morphology and size was noted in Vero cells across the panel, with genotype V producing the smallest foci; however, all strains produced foci that were clearly defined and visible (Figures 3E and 3F). The attenuated focus size of genotype V was consistent with its reduced replication in the Vero cell growth curves (Figure 3B).

DENV4 E Genotype Viruses Do Not Differ in Thermostability
The large differences in viral replication of the DENV4 panel in mammalian and insect cells (Figure 3) may be a result of different temperature sensitivities, as previous studies have implicated envelope sequences in virion stability (Lim et al., 2017). A thermostability assay revealed that the DENV4 variants are similarly stable, with little loss of infectivity after incubation at 28 °C and 37 °C; however, all viruses lost approximately 1 log of infectivity after incubation at 40 °C (Figure 4A). These results demonstrate that something other than thermostability contributes to differences in the ability of the viruses to infect and replicate in mammalian and insect cells.
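For readers who want to reproduce the "log of infectivity lost" framing used above, the short sketch below shows the underlying arithmetic. The titer values are hypothetical placeholders, not data from this study; only the calculation (log10 of the control titer minus log10 of the treated titer) reflects how such statements are typically derived.

```python
import math

def log10_reduction(control_titer_ffu_ml, treated_titer_ffu_ml):
    """Log10 drop in infectious titer relative to the 4 °C control."""
    return math.log10(control_titer_ffu_ml) - math.log10(treated_titer_ffu_ml)

# Hypothetical post-incubation titers (ffu/mL); placeholders, not study data.
titers = {"4 C": 1.0e6, "28 C": 9.0e5, "37 C": 8.0e5, "40 C": 1.0e5}

for temperature, titer in titers.items():
    print(f"{temperature}: {log10_reduction(titers['4 C'], titer):.2f} log10 loss vs 4 C")
```

With these placeholder values the 28 °C and 37 °C conditions show essentially no loss, while the 40 °C condition shows a 1-log reduction, mirroring the qualitative pattern described above.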
DENV4 E Variants Differ in Maturation Status, Enhanceability, and Glycosylation Pattern As DENV maturation state may be heterogeneous in vitro, the recombinant panel allowed us to evaluate the role of E protein sequence on maturation status in an isogenic DENV4 backbone. During infection, DENV is assembled within the endoplasmic reticulum as immature virions containing pre-membrane (prM) proteins, which prevent fusion during viral egress. As DENV transits through the trans-Golgi network, pH change triggers cleavage of prM by the host protease furin. As the virus leaves the cell, cleaved pr dissociates, leaving fully mature viral particles. In cell culture, furin cleavage and pr dissociation are inefficient processes and highly cell type dependent, leading to heterogonous population of differentially mature viruses, containing different amounts of uncleaved pr peptide (Pierson and Diamond, 2012). As maturation status can influence infectivity and antibody neutralization (Mukherjee et al., 2014), we compared the maturation status across the DENV4 panel using immunoblotting and ELISAs ( Figures 4B and 4C). Immunoblotting revealed that the levels of pr protein varied across the panel, with genotype I and III being the least mature (most pr present) and genotypes IV and V being the most mature (very little pr protein detected) ( Figure 4B). To corroborate these findings using a different assay, ELISA binding assays were performed, by capturing DENV4 viruses with cross-reactive monoclonal antibodies (mAbs) 4G2 (anti-E) and 2H2 (anti-pr), then probing with a pr-specific antibody (1E16) (Smith et al., 2015). These studies also demonstrated that there are differing levels of pr protein across the panel ( Figure 4C). Consistent with immunoblotting, genotype III was highly immature, whereas genotype V was the most mature. As the DENV4 variant panel contains differing amounts of pr protein ( Figures 4B and 4C), we sought to determine whether the viruses could be enhanced by a non-neutralizing, pr-specific mAb (Smith et al., 2015). An antibody-dependent enhancement (ADE) assay revealed that despite differing levels of pr present on viruses, all viruses are similarly enhanced, although the concentration of antibody needed to achieve peak enhancement and the level of enhancement do vary across the viruses ( Figure 4D). The furin cleavage site (located in prM protein) was not altered across the panel, suggesting that the E protein sequence can impact virus maturation (Pierson and Diamond, 2012). At a neutral pH of the released virus, the pr protein sits over the fusion-loop and is predicted to interact with seven amino acids in EDII (Figures 4E and 4F); however, at low pH, during processing of the virion, pr can make additional contacts across the envelope dimer. Under either condition, none of these amino acids were altered in the panel, suggesting that other residues may function to stabilize pr. Interestingly, there is little variability within the paired pr protein sequences of the variant panel ( Figure S3). Immunoblotting also revealed that the envelope protein of genotype V is slightly smaller than that of the other genotypes ( Figure 4B). The DENV envelope protein contains two glycosylation sites, one in EDII (Asn-67) and one in EDI (Asn-153) (Figure 4E). 
Analysis of the glycosylation site sequences across the panel revealed that a single amino acid change at residue 153 in genotype V disrupts the N-X-T/S glycosylation motif (Figure 4G), resulting in a smaller molecular weight envelope protein (Figure 4B). When looking at all genotype V viruses used in our phylogenetic tree (Figure 1; Table S2), 87.5% have amino acid variability that disrupts the Asn-153 glycosylation site (Table S3), suggesting that this disrupted motif is not unique to the genotype V strain selected within our panel but appears to be conserved across most genotype V viruses. DENV envelope glycosylation can be important for binding to host cell receptors, determining infectivity in different hosts, and binding and neutralization by antibodies (Pokidysheva et al., 2006; Mondotte et al., 2007; Rouvinski et al., 2015). Therefore, genotype V's conserved lack of the second glycosylation site might impact the virus's ability to efficiently infect and be transmitted between vertebrate and invertebrate hosts, and may be the result of adaptation to a different cellular or host tropism (Bryant et al., 2007; Lee et al., 2010). Additionally, the lack of this glycosylation site might contribute to the genotype V C6/36 syncytia phenotype (Figure S2). Genetic alteration of the glycosylation and pr protein sequences, and generation of fully mature and fully immature virus preparations, would allow one to determine the role of glycosylation and maturation status in many aspects of virus biology. Glycosylation status and large differences in viral maturation state have previously been shown to impact antibody binding and neutralization (Mukherjee et al., 2014); therefore, the binding and neutralization differences we see within this panel might be partially attributable to the variation in glycosylation and maturation.

Binding of Serotype-Specific and Cross-Reactive mAbs to DENV4 E Genotype Variants
We next measured the binding of a panel of well-characterized DENV4 serotype-specific and DENV cross-reactive mAbs to our DENV4 viruses by ELISA (Figure 5). DENV4 serotype-specific antibodies D4-126 and D4-131 recognize partially overlapping epitopes in the EDI/II hinge region that have not been fully defined (Figures S4A and S4B) (Nivarthi et al., 2017). All DENV4 genotypes bound similarly to D4-126 and D4-131 (Figure 5A). mAb D4-141, which recognizes an EDIII epitope (Figure S4C), also bound all viruses similarly, despite a large amount of variation in EDIII across the panel (Figure 5A). The non-human primate mAb 5H2, which binds to a well-defined epitope on EDI, displayed highly variable binding across the panel (Figure 5A). Two amino acids (162 and 174) predicted to be 5H2 contact residues were variable across the DENV4 panel (Figure S4A) (Cockburn et al., 2012). Genotype III did not bind 5H2 and contains an amino acid polymorphism at position 174, suggesting that this position is essential for 5H2 binding. The cross-reactive human mAbs C10 and B7 recognize quaternary envelope dimer epitopes (EDEs) that span across the fusion loop of one E monomer into EDIII or EDI of the neighboring monomer (Figures S4A and S4B) (Rouvinski et al., 2015). These mAbs bind all four DENV serotypes, reflecting the highly conserved nature of the epitope across the DENV E protein.
Consequently, it was not surprising that C10 and B7 bound all genotypes within the DENV4 panel with similar efficiencies (Figure 5B), as the differences between genotypes are smaller than those between serotypes. Binding of B7, however, is dependent on the presence of a glycan at position 153 in EDI (Rouvinski et al., 2015). The DENV4 genotype V virus in this panel, which lacked this glycosylation site ( Figure 4E), failed to bind to the B7 antibody ( Figure 5B). Neutralization of DENV4 E Genotype Variants by Serotype-Specific and Cross-Reactive mAbs Next, we evaluated the ability of DENV4 serotype-specific and cross-reactive mAbs to neutralize the DENV4 panel in a Vero cell focus reduction neutralization test (FRNT), and a flow cytometry-based neutralization assay with U937 cells expressing DC-SIGN, a known DENV attachment factor ( Figures 6A, 6B, and S5). We observed a 1-to 2-log difference in antibody neutralization titers of the DENV4 serotype-specific antibodies against the DENV4 panel. Importantly, some mAbs were not able to neutralize select genotypes even at the highest concentrations tested, despite robust binding (e.g., D4-126 and genotype III) ( Figures 5A, 6A, and 6B). Other mAbs have similar neutralization titers despite lower levels of binding (e.g., 5H2 and genotype V). These results reveal that, for each mAb and virus, the amount of mAb sufficient to bind and/or neutralize varies significantly. Consonant with the binding results, EDE mAbs C10 and B7 potently neutralize all viruses similarly ( Figures 6A and 6B), with the exception of B7 and genotype V, due to its missing glycosylation site. The C10 and B7 epitopes are highly conserved across the DENV4 panel, likely explaining the robust and consistent neutralizing titers ( Figures S4A and S4B). Additionally, the range of neutralization titers is smaller for cross-reactive mAbs compared to serotype-specific mAbs, suggesting that the more cross-reactive an antibody is (i.e., the more serotypes it recognizes), the less genotypic diversity matters. The Neutralization of DENV4 E Genotype Variant Viruses by Human Sera from Natural Infection and Vaccination Convalescent immune sera from people who have recovered from primary DENV4 infections contain strongly neutralizing serotype-specific and weakly neutralizing cross-reactive antibodies. We performed neutralization assays with DENV4 convalescent immune sera to measure the breadth of neutralization across different DENV4 E variant genotypes ( Figures 7A and S6A). While the absolute neutralization titers vary across samples by 1-2 logs, all DENV4 immune sera were able to neutralize all genotypes ( Figure 7A). While this suggests that natural infection with any DENV4 genotype elicits antibodies that are neutralizing against other genotypes as well, individuals who have weaker responses may be vulnerable to reinfection due to genotype variation. Individuals who received a genotype II DENV4 monovalent vaccine developed neutralizing antibodies (Figures 7B and S6B). As seen with natural isolates (Durbin et al., 2013), we also observed a larger spread in neutralization titers with the monovalent vaccine immune sera (>2 logs) compared to the natural infection sera. Additionally, for some vaccine sera, neutralizing antibodies were undetectable against some geno-types, despite robust neutralization of other strains (e.g., sample 68 does not neutralize genotype IV or V, but potently neutralizes genotype II viruses). 
Among the currently circulating genotype I, II, and III viruses, vaccine-matched genotype II viruses were most potently neutralized. To determine whether the differential genotype neutralization is driven by serotype-specific or cross-reactive antibodies, we used depletion techniques to specifically remove cross-reactive antibodies (de Alwis et al., 2012) ( Figures 7C, 7D, and S6C-S6E). We find that removing crossreactive antibodies minimally alters neutralization titers, suggesting that the majority of total neutralization comes from serotype-specific antibodies, and that the differences in titers across DENV4 genotypes are primarily driven by DENV4 serotype-specific antibodies as well. We next looked at the DENV4 genotypic neutralizing breadth of individuals that received a tetravalent DENV vaccine. As tetravalent vaccination can result in both DENV4 serotype-specific antibodies, and strongly neutralizing cross-reactive antibodies, depletion techniques were again used to determine the contribution of each population of antibodies to total neutralization. Control depleted sera, containing both serotype-specific and cross-reactive antibodies, differentially neutralized the DENV4 variants, with vaccine-matched genotype II viruses neutralized on average 3-to 20-fold more efficiently than the other genotypes ( Figures 7E and S7A). In addition, some sera failed to neutralize currently circulating genotype I or III variants. When we removed DENV serotype cross-reactive antibodies, we observed only a small reduction in neutralization titers, indicating that the vaccine mainly induced serotype-specific neutralizing antibodies ( Figures 7F and S7B). Importantly, after removing cross-reactive antibodies, we see a similar spread in titers across the panel, suggesting that DENV4 serotype-specific antibodies are driving the differential genotypic neutralization. When all DENV crossreactive and serotype-specific antibodies were depleted, we completely lost neutralization against all viruses ( Figure S7C). DISCUSSION DENV is the most significant arthropod-borne virus, causing significant morbidity and mortality worldwide. Sanofi-Pasteur's tetravalent DENV vaccine, Dengvaxia, has been marketed and used in human populations, and there are two additional commercial tetravalent vaccine candidates under evaluation in phase III human trials, including the NIH tetravalent DENV vaccine. Recent results with Dengvaxia demonstrate high vaccine efficacy in people who were dengue immune prior to vaccination (81.9%), and much poorer efficacy in people who were naive before vaccination (52.5%) (Hadinegoro et al., 2015). Moreover, naive individuals who received the vaccine appear to be at greater risk of developing severe disease, when exposed to a natural DENV infection approximately 24 months or more following the last dose of vaccine. As a result, Dengvaxia is currently recommended only for use in people who have been primed by natural DENV infections (Sridhar et al., 2018). Dengvaxia stimulated high levels of DENV4 serotype-specific neutralizing antibodies (Henein et al., 2017), and overall vaccine efficacy was highest against DENV4. However, in subjects who experienced DENV4 breakthrough infections, molecular analyses indicated that the vaccine had a greater efficacy against vaccine-matched DENV4 genotype II than the co-circulating genotype I virus (Rabaa et al., 2017). 
These data underscore the need for developing viruses and reagents that capture intra-serotype genetic variation when evaluating vaccine immune responses and identifying potential antibody-based correlates of protective immunity. The existence of phylogenetically and antigenically distinct DENV1-DENV4 serotypes is well accepted in the literature (Holmes and Twiddy, 2003); however, the role of genetic diversity across genotypes is less well studied. Many common laboratory DENV strains have either been heavily cell culture adapted and/or differ in sequence from contemporary circulating strains (Dowd et al., 2015; Katzelnick et al., 2015, 2017). Additionally, some laboratory and, importantly, vaccine strains are composed of DENV genotypes that are likely extinct and, consequently, do not circulate in human populations (Katzelnick et al., 2017). While CD8+ T cells, CD4+ T cells, and other mechanisms of cellular immunity are correlated with DENV protective immunity (Mathew and Rothman, 2008), and antibodies against DENV NS1 may also alter disease severity (Hertz et al., 2017), neutralizing antibodies represent the best correlate of protection to date (Katzelnick et al., 2016; Buddhari et al., 2014). Natural DENV infection is thought to provide lifelong protection against symptomatic reinfection with that serotype (Katzelnick et al., 2016; Buddhari et al., 2014); however, it is unknown whether individuals are protected with the same efficacy against all genotypes within the serotype. There are reports of rare, typically asymptomatic, homotypic reinfection in people in Nicaragua and Peru (Forshey et al., 2016; Waggoner et al., 2016), which is potentially driven by genotypic differences between the primary and secondary infecting viruses. Some studies have evaluated the breadth of antibody neutralization against different genotypes elicited by natural infection or vaccination (Blaney et al., 2005; Durbin et al., 2013; Katzelnick et al., 2015; Messer et al., 2012; Vasilakis et al., 2008a). While most individuals exposed to natural infections or vaccines neutralized multiple genotypes within each serotype, absolute levels of neutralizing antibodies vary depending on the individual and the DENV genotypes used. Indeed, even in the current study, we noted that most individuals exposed to natural infections or a vaccine developed antibodies that neutralized the most prevalent DENV4 genotypes, but the levels of neutralizing antibody varied considerably by genotype. (Figure 6 legend: DENV4 Genotypic Variants Are Differentially Neutralized by Monoclonal Antibodies. (A and B) DENV4 serotype-specific antibodies D4-126, D4-131, D4-141, and 5H2 and DENV cross-reactive antibodies C10 and B7 were evaluated for their ability to neutralize DENV4 genotype viruses in (A) a Vero cell focus reduction neutralization test (FRNT) and (B) a flow cytometry-based neutralization assay (mean ± SD of technical duplicates). The y axes represent the concentration of antibody required to neutralize 50% of infectious virus; the dashed line represents the assay limit of detection.)
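The Figure 6 legend above refers to the antibody concentration (or, for sera, the dilution) required to neutralize 50% of infectious virus. The published analysis presumably uses a sigmoidal dose-response fit; the sketch below is only a minimal illustration of the same idea, estimating the 50% endpoint by log-linear interpolation between the two bracketing points of a hypothetical serum dilution series.

```python
import math

def neut50_titer(dilutions, percent_neut):
    """Estimate the reciprocal serum dilution giving 50% neutralization by
    log-linear interpolation between the two dilutions that bracket 50%."""
    for i in range(len(dilutions) - 1):
        p1, p2 = percent_neut[i], percent_neut[i + 1]
        if (p1 - 50) * (p2 - 50) <= 0 and p1 != p2:   # 50% crossed in this interval
            frac = (p1 - 50) / (p1 - p2)              # fractional position between points
            log_d = math.log10(dilutions[i]) + frac * (
                math.log10(dilutions[i + 1]) - math.log10(dilutions[i]))
            return 10 ** log_d
    return None  # 50% never crossed within the tested range

# Hypothetical 4-fold dilution series (reciprocal dilutions) and % neutralization.
dilutions = [10, 40, 160, 640, 2560]
percent_neut = [95, 88, 62, 30, 8]

print(f"Estimated Neut50 titer: 1:{neut50_titer(dilutions, percent_neut):.0f}")
```

For these placeholder data the estimate is roughly 1:270; reporting titers on this reciprocal-dilution scale is what allows the multi-log spreads discussed above to be compared across sera and viruses.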
In 2008, the World Health Organization (WHO) noted that there is little evidence of antigenic drift within DENV serotypes that might lead to resistance of certain strains to post-vaccination neutralization, yet it advised that laboratories consider including multiple virus strains, both laboratory prototype strains and recent clinical isolates, when performing neutralization assays (Roehrig et al., 2008). Our results demonstrate the value of including different DENV genotypes when evaluating vaccine responses. Using reverse genetics, we developed an isogenic panel of DENV4 recombinant viruses that only differ in their E glycoprotein, which was derived from different genotypes. We reconstructed clinically relevant isolates with E protein genes derived from clinical specimens or viruses with a low passage history in culture. Our data demonstrate that DENV4 E protein genotypic diversity can impact many aspects of the virus's cell biology, including growth in cells, glycosylation, syncytium formation, and maturation. Additionally, as all the viruses can be enhanced, this highlights the importance of determining the impact of DENV genetic variation on disease enhancement after infection and vaccination. The chimeric DENV4 virus panel described here is a powerful tool for an initial assessment of the impact of DENV E genotypic variation on virus biology and humoral immunity. As we selected one representative envelope sequence per genotype and utilized a recombinant approach, the viruses here do not capture all E protein sequence diversity within each genotype, and some will never actually be encountered by vaccine recipients. Hence, it will be important to evaluate more contemporary DENV4 genotype I, II, and III strains in future studies, including the use of both natural and recombinantly derived isolates. In agreement with our findings, we note that another study using WT strains of endemic and sylvatic DENV4 viruses demonstrated better neutralization of vaccine-matched endemic genotype II viruses compared to sylvatic viruses using sera from DENV4 monovalent vaccine recipients (Durbin et al., 2013). Our results demonstrate that infection or vaccination with a single DENV4 genotype stimulates variable levels of neutralizing antibodies to different genotypes. (Figure 7 legend, C-F: DENV4 natural infection sera (C), DENV4 monovalent vaccine sera (D), and NIH DENV tetravalent vaccine sera that were control depleted with BSA (E) or depleted of cross-reactive antibodies (F) were evaluated for their ability to neutralize DENV4 genotype viruses. The y axis represents the dilution factor of immune sera required to neutralize 50% of infectious virus (mean ± SD of technical duplicates); the dashed line represents one-half the assay limit of detection.) Currently, there are insufficient data to correlate levels of neutralizing antibodies to protection from DENV disease. Moreover, other immune mechanisms involving T cells, NS1 immunity, and B cell memory may also reduce or eliminate clinical disease (Mathew and Rothman, 2008). However, it is worth noting that the licensed tetravalent DENV vaccine (Dengvaxia) had higher vaccine efficacy against vaccine-matched DENV4 genotype II viruses (~83%) compared to co-circulating DENV4 genotype I viruses (~47%) (Rabaa et al., 2017). We propose that targeted surveillance of changing or emerging DENV genotypes following vaccination will be valuable in assessing the influence of DENV genotype on the frequency of repeat infections and overall vaccine effectiveness.
Phylogenetic Tree
The tree was constructed in Geneious R11 using the neighbor-joining method (Jukes-Cantor genetic distance) with 100 replicates based on the multiple sequence alignment. The radial phylogram was visualized and rendered for publication using CLC Sequence Viewer 7 and Adobe Illustrator CC 2017.

Virus Construction
Chimeric recombinant DENV4 viruses were constructed as described before (Gallichotte et al., 2015). Briefly, DNA encoding isogenic envelope protein sequences was introduced into a quadripartite DENV4 infectious clone system using synthetically derived genes and recombinant DNA approaches. Plasmid DNA was digested and ligated together, and viral full-length genomic RNA was generated using T7 RNA polymerase. Infectious genome-length capped viral RNA transcripts were electroporated into C6/36 cells, and supernatant was harvested and passaged onto C6/36 cells to make viral working stocks.

Cells
C6/36 cells (ATCC CRL-1660) were grown in minimum essential medium (MEM) at 32 °C. Vero cells (ATCC CCL-81) were grown in DMEM, U937 cells and U937 cells stably expressing DC-SIGN (U937+DC-SIGN) were grown in RPMI 1640 medium, and all of these were cultured at 37 °C. All media were supplemented with 5% fetal bovine serum (FBS), which was reduced to 2% during DENV infection. All media were supplemented with 100 U/mL penicillin, 100 µg/mL streptomycin, and 0.25 µg/mL amphotericin B. C6/36 and U937/U937+DC-SIGN media were additionally supplemented with non-essential amino acids, and U937/U937+DC-SIGN medium was further supplemented with L-glutamine and β-mercaptoethanol. All cells were incubated at 5% CO2.

Immune Sera
DENV4-immune non-human primate serum was obtained from BEI Resources (NR-41789). Human dengue immune sera were obtained from a previously described Dengue Traveler collection at the University of North Carolina. Vaccine sera were obtained 180 days post-vaccination from individuals who received a live-attenuated monovalent DENV4 or tetravalent vaccine developed by the NIH, and were provided by A.P.D. and S.S.W. All human sera samples were anonymized and obtained under Institutional Review Board approval.

Viral Titering and Immunostaining
Cells were plated 1 day prior to infection. Growth media were removed, and virus stocks were serially diluted 10-fold, added to cells, and incubated for 1 hr at either 32 °C (C6/36) or 37 °C (Vero). After incubation, 1% methylcellulose in Opti-MEM (supplemented with 2% FBS, 100 U/mL penicillin, 100 µg/mL streptomycin, and 0.25 µg/mL amphotericin B) was overlaid, and cells were incubated for 3-4 days. Cells were washed with PBS and fixed with 80% methanol. Cells were blocked in 5% non-fat dried milk and stained with anti-E (4G2) and anti-prM (2H2) mAbs and a horseradish peroxidase (HRP)-labeled secondary antibody. Foci were developed using TrueBlue substrate, and viral foci were counted manually.

Growth Curves
C6/36 or Vero cells were seeded in 24-well plates 1 day prior to infection. Viruses were diluted to an MOI of either 0.01 or 0.5, added to cells, and incubated for 1 hr at either 32 °C (C6/36) or 37 °C (Vero). Inoculum was removed, cells were washed three times with PBS, and growth media were replaced. Media were sampled daily, replaced with fresh media, and immediately frozen at −80 °C. Samples were titered as described above.

Thermostability Assay
DENV4 viruses were diluted 1:10, then incubated at 4 °C, 28 °C, 37 °C, or 40 °C for 1 hr, then immediately transferred to 4 °C for 15 min.
Viruses were then titered on Vero cells and immunostained as described above.

Immunoblotting
Virus stocks were diluted in PBS, mixed with sample buffer, and heated at 95 °C for 10 min. Samples were run on 4%-20% Protean TGX gels and transferred to polyvinylidene difluoride (PVDF) membranes. Membranes were blocked in 5% non-fat dried milk and probed with anti-E (4G2) and anti-prM (1E16) mAbs. Membranes were washed, probed with HRP-labeled secondary antibodies, and developed using chemiluminescent substrate. Membranes were visualized using a LI-COR C-DiGit Blot Scanner.

Enzyme-Linked Immunosorbent Binding Assay
Plates were coated with anti-E (4G2) and anti-prM (2H2) antibodies in carbonate buffer overnight and blocked in 5% non-fat dried milk, and then virus antigen was added. Primary antibody was diluted in blocking buffer and added to plates for 1 hr at 37 °C. Alkaline phosphatase-labeled secondary antibody was added, and plates were incubated for 1 hr at 37 °C. Plates were developed with p-nitrophenyl phosphate substrate, and color changes were quantified using a Bio-Rad iMark Microplate Absorbance Reader.

ADE Assay
mAbs were serially diluted 5-fold and mixed with virus previously diluted to result in approximately 15% infection in U937+DC-SIGN cells. Virus:mAb mixtures were incubated at 37 °C for 45 min and then added to 5 × 10^4 U937 cells and incubated at 37 °C for 2 hr. After incubation, cells were washed with growth media and then resuspended in fresh growth media. The cells were incubated for 20 hr at 37 °C, washed in PBS, fixed in 10% phosphate-buffered formalin, and then stained with anti-E mAb 4G2 directly conjugated to Alexa Fluor 488. Cells were analyzed on a Guava easyCyte flow cytometer.

Neutralization Assays
FRNT was performed by seeding Vero cells 1 day prior to infection. mAbs or immune sera were serially diluted 4-fold and mixed with virus stocks previously diluted to approximately 40 ffu/well. Virus:Ab mixtures were incubated at 37 °C for 1 hr and then added to cells for 1 hr at 37 °C. After incubation, overlay media were added and plates were incubated for 3 days. Cells were fixed and immunostained as described above. Flow cytometry-based neutralization assays were performed as described above for the ADE assays, except with U937+DC-SIGN cells.

Polyclonal Antibody Depletion Assay
Dynabeads were covalently bound to anti-E mAb 1M7 overnight at 37 °C. The bead:mAb complex was blocked with 1% BSA in PBS at 37 °C and then washed with 0.1 M 2-(N-morpholino)ethanesulfonic acid (MES) buffer. Beads were incubated with BSA (control), purified DENV3 (cross-reactive depletion), or a mix of DENV3 and DENV4 (full depletion) for 1 hr at 37 °C, and then washed three times with PBS. The bead:mAb:DENV complex was fixed with 2% paraformaldehyde in PBS for 20 min and then washed four times with PBS. DENV-specific antibodies were depleted from sera by incubating the beads with sera diluted 1:10 in PBS for 1 hr at 37 °C with end-over-end mixing, for at least two sequential rounds of depletion. Removal of DENV antibodies was confirmed by ELISA.

Data Analysis and Software
All data were analyzed and graphed using GraphPad Prism v7.0a. Protein structures were visualized using MacPyMOL (PyMOL v1.7.6.2). Replicate information is included in the figure legends.

SUPPLEMENTAL INFORMATION
Supplemental Information includes seven figures and three tables and can be found with this article online at https://doi.org/10.1016/j.celrep.2018.10.006.
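As an addendum to the Viral Titering and Immunostaining section above, the focus counts from the 10-fold dilution series are converted to an infectious titer as sketched below. The 0.1 mL inoculum volume per well and the focus count are assumptions for illustration only; neither value is specified in the text above.

```python
def titer_ffu_per_ml(focus_count, dilution_factor, inoculum_volume_ml):
    """Back-calculate infectious titer (ffu/mL) from one countable well.
    dilution_factor is the total fold-dilution of the stock plated in that well."""
    return focus_count / inoculum_volume_ml * dilution_factor

# Hypothetical example: 42 foci in the 1:100,000 well with a 0.1 mL inoculum.
print(f"{titer_ffu_per_ml(42, 1e5, 0.1):.2e} ffu/mL")   # -> 4.20e+07 ffu/mL
```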
Stereolithography Apparatus Evolution: Enhancing Throughput and Efficiency of Pharmaceutical Formulation Development Pharmaceutical applications of 3D printing technologies are growing rapidly. Among these, vat photopolymerisation (VP) techniques, including Stereolithography (SLA) hold much promise for their potential to deliver personalised medicines on-demand. SLA 3D printing offers advantageous features for pharmaceutical production, such as operating at room temperature and offering an unrivaled printing resolution. However, since conventional SLA apparatus are designed to operate with large volumes of a single photopolymer resin, significant throughput limitations remain. This, coupled with the limited choice of biocompatible polymers and photoinitiators available, hold back the pharmaceutical development using such technologies. Hence, the aim of this work was to develop a novel SLA apparatus specifically designed to allow rapid and efficient screening of pharmaceutical photopolymer formulations. A commercially available SLA apparatus was modified by designing and fabricating a novel resin tank and build platform able to 3D print up to 12 different formulations at a single time, reducing the amount of sample resin required by 20-fold. The novel SLA apparatus was subsequently used to conduct a high throughput screening of 156 placebo photopolymer formulations. The efficiency of the equipment and formulation printability outcomes were evaluated. Improved time and cost efficiency by 91.66% and 94.99%, respectively, has been confirmed using the modified SLA apparatus to deliver high quality, highly printable outputs, thus evidencing that such modifications offer a robust and reliable tool to optimize the throughput and efficiency of vat photopolymerisation techniques in formulation development processes, which can, in turn, support future clinical applications. Introduction Three-dimensional (3D) printing is defined as a set of manufacturing technologies used to make parts by adding material in a layer-by-layer fashion [1]. Due to its appealing features, 3D printing has received great interest from the pharmaceutical field, especially following the 2015 FDA approval of the first 3D-printed drug product, Spritam. Since then, interest has aroused fast and, so far, several 3D-printing technologies have been used, understood, and improved [2], and particular emphasis has been posed on its potential applications in delivering personalised medicine [3]. This particular use has been motivated by the intrinsic flexibility of 3D printers, that are able to fabricate solid oral dosage forms with bespoke properties potentially with no need to alter the formulation [4], in contrast to conventional tableting techniques which are not customizable at reasonable costs and only have limited geometries achievable. For example, the recently FDA-approved T19 rheumatoid arthritis drug, designed as a chronotherapeutic drug delivery system targeting the circadian symptoms of the disease, achieves its particular release profile thanks to the complex inner geometry fabricated through 3D printing [5]. Such an approach would complement the standard mass production of medicines, embracing a highly patient-centric method foreseen to revolutionise pharmacotherapy [6]. Promising 3D printing applications currently rely on Fused Deposition Modelling (FDM), Selective Laser Sintering (SLS) and vat photopolymerisation (VP) techniques [7]. 
Each of these technologies differ in the way the layers are built; for example, in FDM a drug loaded filament is thermally extruded into the desired geometry, while in SLS thin layers of powdered raw material are sintered by a laser [8]. VP techniques, such as Stereolithography (SLA) and Digital Light Processing (DLP) instead operate through light-induced curing of photosensible resins [8]. While FDM currently stands as a frontrunner in the advanced development of 3Dprinted solid oral dosage forms, its disadvantages should be considered as, for example, process limitations restrict the number of drugs that can be used due to potential processinduced thermal degradation [24,25]. This is especially true considering that FDM is generally coupled with hot-melt extrusion, thus doubling the incidence of thermal challenge and chance of stability issues [26]. Furthermore, developing drug-loaded filaments with satisfactory mechanical properties for extrusion and 3D printing can be challenging [27]. Similarly, heat-induced degradation could also affect SLS 3D-printed products due to the rise in temperature caused by the sintering activity of the laser [7]. Additionally, the required feed-stock material can suffer from flowability issues, particularly when the powder is thinly spread at the completion of each layer [28]. Such limitations are not shared by VP techniques, as both SLA and DLP do not rely on heat for fabrication and do not require powders. Instead, each layer is manufactured by either a laser beam (SLA) or a digital projector screen (DLP) inducing the polymerisation of a drug-loaded resin. VP is also a very accurate process with high printing resolution, enabling the fabrication of solid oral dosage forms with greater patient acceptability over other techniques such as FDM and SLS [7,29,30]. However, pharmaceutical applications of VP technologies still account for the smaller share and remain underdeveloped [7,31]. This is particularly dependent upon throughput limitations related to the impossibility of printing simultaneously using low volumes of different resins [21], thus making formulation development processes time consuming and cost inefficient. Although some discontinuous methods to overcome this limitation have been suggested [32], the overall process needs to be improved. Furthermore, limitations are posed by the lack of materials suitable for VP; commercially available photopolymer resins have been designed mainly for engineering purposes, where tough and resistant structures are needed with high crosslinking observed in the polymerised networks [33,34]. However, from a pharmaceutical perspective, such mechanical attributes are not desirable as orally administered dosage forms should completely break down to release their drug content and to then be eliminated with no risk of leaving tablet fragments in the gastrointestinal tract [35]. Additionally, despite the existence of biocompatible commercially available resins designed for special applications, such as dentistry [36,37], only a limited number of photopolymer formulations have been investigated for pharmaceutical applications [20,22,32,38]. Such limitations, therefore, lay the foundations for an extensive screening of photopolymer formulations and their respective evaluation. SLA and DLP 3D printers currently on the market are designed to operate using large volumes of a single resin at any one time [21] allowing for large prints, which can be advantageous in prototyping and similar applications. 
This is not required or desirable in pharmaceutical formulation development and, consequently, without addressing such aspects, developing novel formulations would require an unnecessarily large amount of resin resulting in a less than economical process. Hence, the aim of this work was to design and fabricate a novel SLA apparatus able to 3D print solid dosage forms using low volumes and multiple formulations at the same time, with the view to maximising throughput and cost-effectiveness of the technique. Lean production principles of avoiding waste related to 'inventory', 'overproduction' and 'waiting' were followed as a general guideline to identify critical areas to address to improve the technique for pharmaceutical applications [39]. Furthermore, the purpose of developing a novel SLA apparatus arrangement was to employ a high throughput screening of novel pharmaceutical photopolymer resins to address the lack of formulations for VP technologies. Screened formulations were evaluated based on their printability outcomes with the view to develop a pool of multi-purpose, drug-loadable resins that can be flexibly used to deliver safe, effective, and personalised dosage forms. Stereolithography Apparatus A Form 2 SLA 3D printer (Formlabs Inc., Somerville, MA, USA) was used as a desktop stereolithography apparatus to manufacture all the formulations presented in this work. The Form 2 3D printer is equipped with a 405 nm laser and has a build volume of 145 (width) × 145 (depth) × 175 (height) mm. The feedstock material consists of a photopolymer resin contained in a 200 mL vat. Printed objects are formed on a build platform made of aluminium and plastic, with a build area of 21,025 mm 2 and a weight of 635.18 g. Design and 3D Printing of a Modified Build Platform Prototype and Resin Tank An attachment consisting of twelve compartments to be inserted onto the original resin tank was designed on TinkerCAD (Autodesk Inc., San Rafael, CA, USA). In contrast to the original 200 mL resin tank, each compartment was designed to contain 10 mL of photopolymer resin. To match the novel resin tank, a modified version of the build platform featuring twelve build spots ( 12 BP) was also designed using TinkerCAD. Each spot has a build area of 400 mm 2 , allowing the fabrication of single tablets up to 20 mm in diameter. The modified build platform and the resin tank insert were 3D printed with the Form 2 using Clear resin photopolymer; each print was setup using PreForm 2.20.0-Beta 1 (Formlabs Inc., Somerville, MA, USA). The 3D printing of the build platform required 401.38 mL of photopolymer resin and took 31 h and 1 min to be completed. The resin tank insert required 156.33 mL of photopolymer resin and was completed in 9 h and 52 min. Both the modified parts were 3D printed at a resolution (layer thickness) of 100 µm. Following the 3D printing process, each part was placed in propan-2-ol and cleaned in a sonic bath for 20 min to remove any uncured resin. All the necessary supports were removed after drying for 10 min at room temperature. The twelve 3D printed compartments were finally fixed to the silicone layer on the original resin tank using silicone glue, while each spot of the 3D printed 12 BP was covered with 75 µm thick aluminium tape with an adhesion strength of 12N/cm to allow easy removal of printed tablets. 
Design and Fabrication of An Aluminium Multi-Build Platform A twelve spots aluminium build platform (aluminium 12 BP) was designed using SolidWorks (Dassault Systèmes, Vélizy-Villacoublay, France), based on the design of the 3D printed prototype, manufactured through computer numerical control (CNC) milling and finally bead blasted to provide a rough finishing aimed to increase objects' adherence while printing and to facilitate their release once fabricated; the support fixing the build platform to the SLA apparatus was designed using TinkerCAD and 3D printed with clear resin photopolymer. Tablet Uniformity Testing The original build platform (BP), the 3D printed 12 BP and the aluminium 12 BP were connected to the SLA apparatus and used to fabricate cylindrical tablets to evaluate the influence of different build platforms on tablet uniformity. Three batches of twelve tablets each were 3D printed on each platform. All tablets manufactured at this stage were composed of Clear Resin photopolymer V4.0 (Formlabs Inc., Somerville, MA, USA). After 3D printing, ten tablets per batch were randomly picked to carry out tablet uniformity tests. Measurements were taken for tablet weight, thickness and diameter. Tablets were designed using TinkerCAD. A conventional cylindrical geometry with a diameter of 12.0 mm and a thickness of 4.0 mm was selected and tablets were printed both directly on the build platform and oriented to 45 • using printing supports to evaluate the impact of scaffolds on tablet uniformity. Tablet thickness and diameter were measured using a digital caliper; tablet weight was measured on a precision balance. Statistical analyses were performed using SPSS Version 26.0.0.0 (IBM Corp., Armonk, NY, USA). Formulation of Photopolymer Resins and 3D Printing Cylindrical tablet CAD files were uploaded as stereolithographic files (.stl) using PreForm 2.20.0-Beta 1 and set to be printed directly on the build platform. In total, 156 photopolymer formulations were designed based on different combinations of PEGDA 250, PEGDA 575, PEGDA 700, N-vinyl-pyrrolidone, PEG 300, glycerol, propylene glycol and TPO, as described in Table S1. Then, 10 mL of each formulation was prepared by mixing the liquid photopolymers and eventual fillers with the powdered photoinitiator, and stirred for 12 h or until complete dissolution of all the ingredients and were kept away from light sources. Then, twelve formulations per time were loaded in the novel resin tank for 3D printing. Each run took 4.37 h to be completed at a resolution of 25 µm, 1.95 h at 50 µm and 1.1 h at 100 µm. Printability Evaluation Photopolymer formulations' printability outcomes were evaluated according to a six-point arbitrary scale (Figure 1). A printability score (PS) from 1 to 6 was assigned to each formulation based on visual inspection. An extra score was assigned to formulations providing 3D printed tablets with a well-defined lower edge. This was introduced to differentiate between formulations showing overcuring only in the first layers rather than the whole tablet. Inclusion criteria were then based on formulations reaching a printability score of 5 and/or showing a defined edge (*) after printing a cylindrical test tablet. Novel SLA Apparatus Cost-Effectiveness The total time required to screen 156 formulations using the novel SLA apparatus, as well as the volume of formulation samples needed and the cost per each formulation prepared, was noted. 
A cost-effectiveness comparison between the two SLA apparatus was carried out by calculating the time required to screen one formulation at a time using the original apparatus and the costs for preparing 200 mL of each photopolymer formulation, as required by the original 200 mL capacity resin tank.

Resin Recovery Efficiency Evaluation
The original BP and the aluminium 12BP were weighed separately. Each platform was then connected to the 3D printer and a print was initiated. Once the platform was completely lowered in the resin tank and covered in photopolymer resin, the print was aborted to allow the BP to home. As soon as the initial position was reached, a timer was started and the platform was collected to be weighed again at given timepoints (Figure 2). The experimental procedure was carried out at room temperature. The volume of resin adhering to the build platform at each timepoint was calculated using Equation (1):

Vn = (wn − wi)/ρ (1)

where Vn indicates the volume of resin adhering to the build platform at the n-th timepoint, wn is the weight of the build platform at the n-th timepoint, wi is the initial weight of the build platform and ρ is the resin relative density. The economic loss relative to the wasted resin at the n-th timepoint was calculated using Equation (2).
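To make Equations (1) and (2) concrete, the sketch below applies them to hypothetical platform weights. The resin relative density, unit cost, and weight readings are placeholders rather than measured values, and the form assumed for Equation (2) (wasted volume multiplied by resin unit cost) is an assumption, since the original expression is not reproduced here.

```python
def adhered_volume_ml(weight_now_g, weight_initial_g, density_g_per_ml):
    """Equation (1): volume of resin still adhering to the build platform."""
    return (weight_now_g - weight_initial_g) / density_g_per_ml

def economic_loss(volume_ml, cost_per_ml):
    """Assumed form of Equation (2): wasted volume times resin unit cost."""
    return volume_ml * cost_per_ml

W_INITIAL_G = 625.15   # dry weight of the aluminium 12BP, as reported in the Results
DENSITY = 1.10         # g/mL, placeholder relative density for the resin
COST_PER_ML = 0.15     # currency units per mL, placeholder unit cost

# Hypothetical platform weights recorded while resin drains back into the tank.
timepoints_min = [0, 2, 5, 10]
weights_g = [646.0, 640.5, 636.2, 633.9]

for t, w in zip(timepoints_min, weights_g):
    v = adhered_volume_ml(w, W_INITIAL_G, DENSITY)
    print(f"t = {t:2d} min: {v:5.2f} mL adhered, ~{economic_loss(v, COST_PER_ML):.2f} lost")
```

The longer the platform is allowed to drain before removal, the smaller the adhered volume and the lower the economic loss, which is the trade-off this experiment quantifies.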
Stereolithography Apparatus Evolution
In order to address the throughput limitations of conventional vat polymerisation apparatus equipped with a single, large-volume resin tank, the first step in modifying the commercial SLA apparatus was the design of twelve resin compartments and a build platform featuring twelve separate build areas (12BP) (Figures 3 and 4). The dimensions were selected to be the minimum dimensions to allow for tablet printing and resin-depth changes upon submersion of the printing platform. The CAD files for the novel components were then sent to the 3D printer to be manufactured. The resin tank inserts were fixed onto the original resin tank and tested for being watertight by alternately filling the compartments with a green-coloured solution and leaving them overnight to assess any leaks from the filled compartment to the next ones (Figure 5A,B), while the 3D printed 12BP was covered with aluminium tape to allow for ease of removal of printed dosage forms (Figure 5C).
Subsequently, after the 3D printed 12BP was shown to be firmly connected to the printer and compatible with the novel twelve-vat resin tank, a final version of the build platform made of aluminium (aluminium 12BP) was fabricated through CNC milling and fixed to the SLA apparatus using a 3D printed joint, which was easily replaceable in the case of breakage (Figure 6). Aluminium was selected due to its similarity to the original component and its density of 2.70 g/cm3 [40]. The fully assembled aluminium 12BP had a final weight of 625.15 g, a 1.58% decrease in weight compared to the original BP. This weight was estimated before manufacturing and was maintained by drilling holes in the aluminium block (visible in Figure 6A) to obtain a finished product whose weight could not damage the moving parts of the SLA apparatus. With the novel resin tank and build platform in place, a commercial stereolithography apparatus was converted into a piece of equipment able to print multiple formulations at a single time with a fraction of the material originally required (Figure 7). The novel apparatus was designed with the intention of conducting a high-throughput screening of photopolymer formulations aimed at identifying printable candidates for the production of solid oral dosage forms. The modified apparatus' reliability was assessed by printing cylindrical tablets using a commercially available photopolymer resin. Twelve tablets were printed on the aluminium 12BP, with and without supports (Figure 8). The printability score (PS) assigned to both types of fabricated tablets was 5*, indicating a successful print with accurately defined edges in all cases.

Tablet Uniformity Testing
Three batches of twelve tablets each were fabricated using the original BP, the 3D printed 12BP and the aluminium 12BP.
Each batch was 3D printed with and without supports to evaluate their impact on tablet uniformity. Results for the uniformity of weight, thickness and diameter are shown in Figure 9.

Considering a theoretical tablet weight of 0.493 g, estimated from tablet volume and resin density, the percent relative error (%Er) calculated for the original BP, the 3D printed 12BP and the aluminium 12BP was 32.67%, 24.50% and 6.90%, respectively, for tablets printed directly on the build platform, while the relative standard deviation (RSD) was 2.15%, 5.91% and 4.56%, respectively. The introduction of printing supports resulted in more accurate and precise batches, as shown by a decrease in %Er and RSD to, respectively, 8.81% and 0.61% (original BP), 10.05% and 0.46% (3D printed 12BP), and 5.64% and 0.61% (aluminium 12BP). A similar trend was observed for tablet thickness: comparing the original BP and the 3D printed 12BP, the %Er was 21.57% and 15.52%, with an RSD of 2.22% and 5.77%, respectively, when printing without supports. As with tablet weight, introducing printing scaffolds lowered the %Er and RSD to, respectively, 1.46% and 0.69% (original BP) and 2.44% and 0.62% (3D printed 12BP). Tablets printed using the aluminium 12BP showed a %Er and RSD of, respectively, 0.94% and 0.54% (with supports) and −0.71% and 4.34% (without supports), indicating better uniformity when the aluminium 12BP was used. A multivariate analysis of variance (MANOVA), coupled with a Tukey post hoc test, was performed to evaluate the effect of the build platform used; it evidenced a statistically significant difference (p < 0.05) in tablet weight, thickness and diameter when the 3D printed 12BP was compared to the original BP and tablets were printed directly on the BP. Comparing the weight and thickness uniformity of unsupported tablets fabricated with the aluminium 12BP and the original BP also showed a statistically significant difference (p < 0.05), while no difference (p > 0.05) was observed for tablet diameter.
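The two descriptors used above, %Er and RSD, are simple batch statistics. As a minimal illustration, the sketch below computes them for one batch; the tablet weights are hypothetical, and only the formulas and the 0.493 g theoretical weight come from the text.

```python
import statistics

def percent_relative_error(measured_mean: float, theoretical: float) -> float:
    """%Er: deviation of the batch mean from the theoretical value, in percent."""
    return (measured_mean - theoretical) / theoretical * 100

def relative_standard_deviation(values: list[float]) -> float:
    """RSD: sample standard deviation expressed as a percentage of the mean."""
    return statistics.stdev(values) / statistics.mean(values) * 100

theoretical_weight_g = 0.493          # from tablet volume x resin density (as in the text)
batch_weights_g = [0.52, 0.53, 0.51,  # hypothetical measurements for one batch of 12 tablets
                   0.54, 0.52, 0.53, 0.51, 0.52, 0.53, 0.54, 0.52, 0.53]

mean_weight = statistics.mean(batch_weights_g)
print(f"%Er : {percent_relative_error(mean_weight, theoretical_weight_g):.2f}%")
print(f"RSD : {relative_standard_deviation(batch_weights_g):.2f}%")
```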
The results firstly suggest that tablet thickness is the factor most susceptible to inhomogeneity; since the measured thickness was generally higher than the expected value, it is likely that this also led to a gain in weight and, therefore, to inhomogeneity in tablet weight. In particular, the largest deviations were associated with the use of the 3D printed 12BP. A potential explanation can be found in the loss of structural integrity observed in the 3D printed 12BP over time (Figure 10).
The clear photopolymer resin used to manufacture the 3D printed 12BP suffers, in fact, from significant limitations in terms of mechanical properties and tends to deform over time and with light exposure [41-43]. Even a minimal change in the BP geometry could eventually result in a print with poor dimensional accuracy. As aluminium does not share this limitation, it would explain the significant improvement in tablet uniformity when the aluminium 12BP was used.

Figure 10. Bending of the 3D printed 12BP leading to misalignment of the BP in the SLA apparatus.

Secondly, it was found that introducing printing supports considerably improved tablet uniformity when using the original BP and the 3D printed 12BP. In comparison with the original BP, no statistically significant difference (p > 0.05) in weight and thickness uniformity was observed for tablets fabricated on the 3D printed 12BP. Supported tablets printed on the aluminium 12BP also showed no significant difference in uniformity of thickness and diameter when compared to tablets produced on the original BP. Such improvements are compatible with the general recommendation to use printing supports when fabricating objects with minimum risk of size inaccuracies [44]. However, it should be considered that printing scaffolds require extra material to be fabricated and are a primary source of waste (Table 1).

Table 1. Weight of 3D-printed tablets and relative supports produced using the original BP and the 3D printed 12BP. Measurements were taken before and after supports were removed from tablets (n = 10). Material waste percentage is expressed as the ratio of support weight over initial weight.
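As a minimal illustration of the waste metric defined in the caption of Table 1 (support weight divided by the initial printed weight), the sketch below uses hypothetical weights; only the definition of the percentage mirrors the text.

```python
# Hypothetical weights for one batch printed with supports (grams); only the
# waste definition (support weight / initial weight) mirrors Table 1.
initial_weight_g = 16.0        # tablets plus supports, straight off the build platform
tablets_only_g = 13.6          # after the supports have been removed

support_weight_g = initial_weight_g - tablets_only_g
material_waste_pct = support_weight_g / initial_weight_g * 100
print(f"Material waste: {material_waste_pct:.1f}% of the printed mass is support scaffold")
```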
Resin Recovery Efficiency Evaluation

At the completion of each print, the BP is automatically lifted and later removed by an operator to collect the fabricated dosage forms, while any uncured resin remaining on the platform is removed and disposed of. Attempts to manually recover resin adhering to the BP using metal tools could result in accidentally recovering partially cured resin debris, or in scratching the aluminium surface with the risk of contaminating the feedstock material. Although manual removal accounts for most of the final resin loss, the amount of material wasted, and its related cost, have not been defined before. As a variable amount of recoverable resin drops from the BP into the resin tank as soon as a print is finished, it was hypothesized that the time the platform is left in the 3D printer before being removed is a critical parameter for estimating the final material wastage: the longer the BP remains connected to the SLA apparatus, the more photopolymer resin is recovered and saved. Therefore, the impact of the time the BP is left in the 3D printer after a print is completed on the amount of resin eventually wasted was investigated (Figure 11). Both the original SLA apparatus and its modified version were compared to assess potential differences in their capacity to generate time-dependent resin waste. Cost implications of such waste generation were also assessed.

Measurements were taken at 14 time points covering a period of 1 h. At t = 0 s, 16.63 mL of resin adhered to the original BP, while only 3.28 mL were recorded on the aluminium 12BP. At t = 3600 s, the amount of adhered material was quantified as 5.92 and 1.76 mL for the original BP and the aluminium 12BP, respectively. According to the results, leaving the BP in the SLA apparatus at the end of a print for an increasing amount of time has a clear effect on reducing resin waste. Furthermore, the aluminium 12BP used in the novel SLA apparatus reduced the amount of adhering resin by 70.27% in comparison to the original BP; avoiding such waste would allow the saving of enough material to produce an additional 11 and 3 tablets (based on a 0.5 mL tablet volume) using the original and the modified SLA apparatus, respectively. From a cost point of view, the effect of time on material saving, as well as the differences between the original and the modified SLA apparatus, are evident (Figure 11).
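To make the arithmetic behind these figures explicit, the sketch below converts the reported adhered-resin volumes into the material recovered by waiting and the residual waste expressed in 0.5 mL tablets. Only the mL values quoted above come from the text; the reading of the 70.27% reduction as a comparison at t = 3600 s is an inference, made because it reproduces the reported figure.

```python
TABLET_VOLUME_ML = 0.5   # nominal tablet volume used in the text

# Adhered resin volumes reported above (mL): immediately after the print (t = 0 s)
# and after leaving the platform in place for one hour (t = 3600 s).
platforms = {"original BP": (16.63, 5.92), "aluminium 12BP": (3.28, 1.76)}

for name, (v_t0, v_1h) in platforms.items():
    recovered_ml = v_t0 - v_1h                       # drips back into the tank while waiting
    tablets_lost = int(v_1h // TABLET_VOLUME_ML)     # material still wasted after 1 h
    print(f"{name}: {recovered_ml:.2f} mL recovered by waiting; "
          f"{v_1h:.2f} mL still lost (~{tablets_lost} tablets' worth)")

# Reduction in adhering resin offered by the aluminium 12BP, taken at t = 3600 s.
print(f"Adhering-resin reduction vs. original BP: {(1 - 1.76 / 5.92) * 100:.2f}%")
```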
The economic loss due to the resin adhering to the build platforms immediately after a print (t = 0 s) was quantified as GBP 2.00 for the original SLA apparatus versus GBP 0.39 for the modified version. By leaving the platform above the tank until the end of the experiment (t = 3600 s), the value of the wasted resin decreased to GBP 0.71 and GBP 0.21 for the original and the modified build platforms, respectively. It should be noted that this model was based on the use of a commercial photopolymer resin not intended for pharmaceutical applications. The lack of commercially available resins designed for pharmaceutical manufacturing necessitates the on-site production of photopolymer formulations consisting of polymers, photoinitiators, active pharmaceutical ingredients and other excipients, which eventually increases the final cost per mL. For example, considering the highest cost per mL among the formulations discussed in this work (Table S1), and assuming comparable material behaviour, GBP 4.16 worth of photopolymer resin would be wasted at t = 0 s using the original SLA apparatus, while the resin loss using the modified build platform would be GBP 0.82 at the same timepoint. Recovering photopolymer resin from the build platforms for one hour would instead decrease the value of the wasted material to GBP 1.48 and GBP 0.44 using the original and the modified SLA apparatus, respectively. Ultimately, our findings suggest a practical way to minimise photopolymer resin wastage: avoiding the immediate removal of the build platform after the completion of a print allows a certain amount of resin to be recovered and reused over time, with no need for operator intervention.
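These GBP figures follow from multiplying the adhered volumes by a cost per millilitre. In the sketch below, the two unit prices (roughly GBP 0.12/mL for the commercial resin and GBP 0.25/mL for the costliest formulation in Table S1) are back-calculated from the quoted values and should be treated as approximations rather than stated prices.

```python
# Wasted-resin cost = adhered volume (mL) x resin price (GBP/mL).
adhered_ml = {"original BP": {0: 16.63, 3600: 5.92},
              "aluminium 12BP": {0: 3.28, 3600: 1.76}}
price_gbp_per_ml = {"commercial resin": 0.12,          # approximate, back-calculated
                    "costliest formulation": 0.25}     # approximate, back-calculated

for resin, price in price_gbp_per_ml.items():
    for platform, volumes in adhered_ml.items():
        for t_s, vol_ml in volumes.items():
            print(f"{resin:22s} | {platform:14s} | t = {t_s:4d} s | ~GBP {vol_ml * price:.2f} wasted")
```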
While the effect of time and the type of BP used have been evaluated, other factors, such as the viscosity and surface tension of the photopolymer resins, should also be investigated in order to establish a solid model able to universally predict material wastage and to identify the waiting time providing the highest recovery. In fact, the production of personalised dosage forms in clinical settings, such as hospital pharmacies, is likely to have higher costs than the mass production of drugs at an industrial level, and it is, therefore, necessary to maximise the cost-effectiveness of the process [45].

Novel SLA Apparatus Cost-Effectiveness Evaluation

The modified SLA apparatus was used to carry out a printability screening of 156 pharmaceutical photopolymer formulations. The total time required for the screening, the amount of formulation needed, and the related costs are reported in Table 2, together with a comparison of the same parameters estimated for the original apparatus. The modified SLA apparatus proved to dramatically reduce both the time and the sample amount required to conduct a systematic screening of photopolymer formulations. In particular, the use of the novel SLA apparatus resulted in a 91.66% reduction in the time needed to complete the screening and in 95% less raw material being used. These results make the introduction of the modified apparatus into formulation development a promising way to broaden the application of SLA 3D printing in pharmaceutics, which has been limited until now. Furthermore, our aim was to bridge the gap between general-use SLA equipment and equipment designed for research applications, with a view to developing SLA 3D printers specifically designed for pharmaceutical purposes in the future.

Printability Outcomes Evaluation

Based on the inclusion criteria, the whole set of photopolymer formulations screened was classified into four groups (Figure 12). Out of the 156 formulations tested, 96 did not reach a PS of 5, indicating poor printability outcomes (Figure 12, group A), while the remaining 60 formulations met the eligibility criteria by reaching a PS = 5, or showing defined edges (*), with at least one printing resolution, making up a pool labelled as Printable Formulations (PF, n = 60) (Figure 12, group B). Formulations included in group B were then subclassified into groups B1 (n = 35; formulations reaching PS = 5* for at least one printing resolution) and B2 (n = 5; formulations reaching PS = 5* at each printing resolution). Formulations belonging to groups B1 and B2 were jointly labelled as Best Formulations (BF, n = 40). A detailed table, including the composition of each formulation, the printability score assigned at each resolution, and the group to which it belongs, is shown in Table S1.
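The grouping rules above can be made concrete with a short sketch. The representation below (an integer score plus a "defined edges" flag per resolution) and the reading of group B1 as "5* at some, but not all, resolutions" are assumptions inferred from the text, not the authors' actual analysis code.

```python
from dataclasses import dataclass

@dataclass
class PrintResult:
    score: int            # printability score, 5 = successful print
    defined_edges: bool   # the '*' qualifier: accurately defined edges

def classify(results: dict[int, PrintResult]) -> str:
    """Assign a formulation to group A, B (plain PF), B1 or B2 from its
    per-resolution results (keys are layer thicknesses in micrometres)."""
    five_star = [r.score == 5 and r.defined_edges for r in results.values()]
    printable = any(r.score == 5 or r.defined_edges for r in results.values())
    if not printable:
        return "A"          # poor printability at every resolution
    if all(five_star):
        return "B2"         # PS = 5* at each resolution (Best Formulation)
    if any(five_star):
        return "B1"         # PS = 5* at some resolutions (Best Formulation)
    return "B"              # Printable Formulation only

example = {25: PrintResult(5, True), 50: PrintResult(5, False), 100: PrintResult(4, False)}  # hypothetical
print(classify(example))    # -> "B1"
```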
The effect of 3D printing resolution on printability outcomes was also investigated (Figure 13). Selecting a resolution of 25 µm resulted in 43.3% of group B formulations being classified as BF, compared with 30.0% and 33.3% at resolutions of 50 and 100 µm, respectively. In total, 33.3% of group B formulations were instead classified as PF when screened at 50 µm, while a reduction to 28.3% and 20.0% was observed when printing at 25 and 100 µm, respectively. Overall, of all the formulations classified in group B, 71.7% met the targeted printability criteria using a printing resolution of 25 µm, whereas decreasing the printing resolution to 50 and 100 µm reduced the fraction of formulations providing satisfactory outcomes to 63.3% and 53.3%, respectively.

It should, however, be considered that printing with a resolution of 25 µm increases printing time by 55.38% and 74.83% compared to using a layer thickness of 50 and 100 µm, respectively. Despite the better results observed at higher resolution, the increase in production time should not be underestimated. The implementation of SLA 3D printing in clinical settings to produce personalised dosage forms will in fact only be possible if the overall efficiency of the process is optimised, reducing costs and production times while ensuring the safety and efficacy of the printed medicines [45,46]. It is, therefore, essential to identify novel formulations designed to provide the best printability even at low resolution. Our systematic screening has shown how modifying a commercial SLA apparatus addresses the bottleneck of identifying printable resin formulations, with a significant reduction in both time and cost. Furthermore, the application of the modified SLA apparatus in a clinical scenario would allow the printing of multiple formulations at the same time, providing patients with their personalised medicines in a reasonable time.

Conclusions

A commercial SLA apparatus was modified into a novel, multimaterial device specifically designed to address the limitations of SLA 3D printing in pharmaceutical applications. The novel SLA apparatus was tested by carrying out a high-throughput screening to identify pharmaceutical photopolymer formulations with satisfactory printability and proved to considerably reduce the time and economic resources needed.
Furthermore, potential areas of wastage were identified and solutions to address them were described, with a view to enhancing the feasibility of SLA 3D printing at a clinical level. In conclusion, the novel apparatus' ability to 3D print different formulations at the same time may be advantageous not only at the formulation development stage, but also in clinical scenarios where different solid oral dosage forms can be produced together using the same 3D printer, making access to personalised medicines more achievable for patients.

Supplementary Materials: The following are available online at https://www.mdpi.com/article/10.3390/pharmaceutics13050616/s1, Table S1: % w/w composition of the 156 photopolymer formulations prepared and screened, the printability score assigned per formulation at each printing resolution tested, the classification group of each formulation, and the cost/mL per formulation.

Funding: This research was funded by Aston University.

Institutional Review Board Statement: Not applicable.

Informed Consent Statement: Not applicable.

Data Availability Statement: The data presented in this study are contained within this article.
Extracts of Thesium chinense inhibit SARS-CoV-2 and inflammation in vitro

Abstract

Context: The severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) is still spreading rapidly. Relevant research on the antiviral effects of Thesium chinense Turcz (Santalaceae) was not found. Objective: To investigate the antiviral and anti-inflammatory effects of extracts of T. chinense. Materials and methods: The anti-entry and anti-replication effects of the ethanol extract of T. chinense (drug concentrations 80, 160, 320, 640 and 960 μg/mL) against SARS-CoV-2 were investigated. Remdesivir (20.74 μM) was used as positive control, and Vero cells were used as host cells to detect the expression level of the viral nucleocapsid protein (NP) by real-time quantitative polymerase chain reaction (RT-PCR) and Western blotting. RAW264.7 cells were used as an anti-inflammatory experimental model under lipopolysaccharide (LPS) induction, and the expression levels of tumor necrosis factor-alpha (TNF-α) and interleukin-6 (IL-6) were detected by enzyme-linked immunosorbent assay (ELISA). Results: The ethanol extract of T. chinense significantly inhibited the replication (half maximal effective concentration, EC50: 259.3 μg/mL) and entry (EC50: 359.1 μg/mL) of SARS-CoV-2 into Vero cells, and significantly reduced the levels of IL-6 and TNF-α produced by LPS-stimulated RAW264.7 cells. Petroleum ether (EC50: 163.6 μg/mL), ethyl acetate (EC50: 22.92 μg/mL) and n-butanol (EC50: 56.8 μg/mL) extracts showed weak inhibition of SARS-CoV-2 replication in Vero cells, and reduced the levels of IL-6 and TNF-α produced by LPS-stimulated RAW264.7 cells. Conclusion: T. chinense can be a potential candidate to fight SARS-CoV-2, and is becoming a traditional Chinese medicine candidate for treating COVID-19.

Introduction

Coronaviruses are a class of enveloped, positive-sense single-stranded RNA viruses which evolve rapidly due to their high nucleotide substitution and recombination rates and can infect a variety of mammals including humans (Fung and Liu 2019). Since the beginning of the twenty-first century, coronaviruses have appeared periodically all over the world, such as severe acute respiratory syndrome coronavirus (SARS-CoV), Middle East respiratory syndrome coronavirus (MERS-CoV), and the currently circulating SARS-CoV-2 that caused coronavirus disease 2019 (COVID-19). They are all related to major outbreaks of fatal human pneumonia (Kirtipal et al. 2020). With extremely high pathogenicity and infectivity, COVID-19 has rapidly spread to all parts of the world and caused extremely serious respiratory disease, becoming a new public health problem of the twenty-first century. By April 22, 2023, about 686 million people had been infected with SARS-CoV-2, distributed in more than 200 countries and regions around the world, and the cumulative death toll exceeded 6.85 million. Therefore, the prevention and treatment of novel coronavirus strains have become a major scientific challenge in the field of global health. The research and development of drugs (Wang et al. 2022; Wang and Yang 2022) to effectively prevent and treat COVID-19 has become a major strategic and social demand. There is an urgent need to discover and provide safe, original anti-coronavirus candidate drugs (Wang et al. 2022) with clear activity and a high drug completion rate.
Conventional antiviral drugs, of which nucleoside analogs are the most common, include favipiravir, ribavirin and remdesivir. These drugs enter virus-infected cells and form adenine or guanine nucleoside analogs that competitively inhibit key enzymes of viral RNA and protein synthesis, such as the viral RNA polymerase, thus interfering with and blocking viral replication and transmission (De Clercq 2019). Although all of them can inhibit viral RNA replication, there are certain limitations in the safety and effectiveness of their clinical use. The toxic side effects of ribavirin monotherapy exceed the potential benefits (Mo and Fisher 2016), and large doses of ribavirin in SARS-CoV may lead to adverse events (Muller et al. 2007). Clinical use of favipiravir showed a trend towards improved survival in patients infected with the Ebola virus, but the effect of treatment was not statistically significant (Kerber et al. 2019). Remdesivir has no significant clinical or antiviral effect in patients with severe COVID-19, but does show clinical improvement in patients treated early, which still needs to be confirmed in a larger number of studies (Wang et al. 2020). There have been studies showing very low levels of remdesivir resistance (Focosi et al. 2022), but there is also literature pointing to the emergence of resistant strains in clinical cases (Gandhi et al. 2022). However, remdesivir significantly reduces the viral load at the cellular and animal levels (Pruijssers et al. 2020; Williamson et al. 2020). Although scholars hold different opinions on the clinical use of remdesivir, in our preliminary experiments remdesivir showed a good anti-SARS-CoV-2 effect, so we used it as our positive control. There are also natural plants with good antiviral effects (Yang and Wang 2021). For example, glycyrrhizic acid, the active ingredient in licorice root, can significantly inhibit the replication of SARS-CoV and inhibit virus adsorption and penetration (Cinatl et al. 2003). The ethanolic extract of Scutellaria baicalensis Georgi (Labiatae) and its major component baicalein inhibited the activity of SARS-CoV-2 and its 3C-like protease in vitro, and both inhibited the replication and entry of SARS-CoV-2 in Vero cells (Liu et al. 2021). Therefore, it is worth continuing to search for natural plants with an anti-coronavirus effect.

The herb used is the dried whole plant of Thesium chinense Turcz (Santalaceae). It has antibacterial and anti-inflammatory effects, reduces body temperature, detoxifies, and is a broad-spectrum antibacterial Chinese herbal medicine. Because of its rapid curative effects it is known as a 'natural antibiotic', and it is often used clinically to treat mastitis, tonsillitis, pharyngitis, various kinds of pneumonia and upper respiratory tract infections (Parveen et al. 2007). Pharmacological studies have shown that T. chinense has good anti-inflammatory (Sun et al. 2019), antioxidant (Shao et al. 2020), antibacterial (Liu et al. 2018) and analgesic effects, and it has broad application prospects. At present, there is no literature report on the anti-SARS-CoV-2 activity of T. chinense, but kaempferol, a monomeric compound isolated from T. chinense, can inhibit the replication of human coronavirus (Cheng and Wong 1996).
Kaempferol has also been found by molecular docking to have a strong affinity for the S protein and for angiotensin-converting enzyme 2 (ACE2) of SARS-CoV-2 (Yang et al. 2020), and it is one of the main active ingredients in Chinese herbal medicines commonly used for the prevention and treatment of COVID-19 (Li et al. 2020). Bacterial co-infection is a common complication of many viral respiratory tract infections. Some studies have found that COVID-19 patients with mild symptoms and those hospitalized for a long time are more likely to develop bacterial co-infection (Westblade et al. 2021; Davies-Bolorunduro et al. 2022). T. chinense has an excellent effect in treating inflammation, microbial infection and upper respiratory tract disease (Li et al. 2021), and has an obvious inhibitory effect on a variety of bacteria (Liu et al. 2006). Therefore, T. chinense may inhibit a variety of bacterial infections, reduce the severity of COVID-19 and help avoid complications. This indicates that T. chinense may have a unique role in the treatment of diseases caused by SARS-CoV-2.

Infection with coronaviruses can lead to the release of a large number of proinflammatory cytokines and to a cytokine storm caused by dysregulation of their release, which seriously damages host tissues and organs by stimulating inflammatory cell death (Zheng et al. 2021). The cytokine storm caused by SARS-CoV-2 is considered to be the main cause of disease progression (Pum et al. 2021). In clinical studies, serum levels of IL-6 and TNF-α are independent and significant predictors of disease severity and death in critically ill patients with COVID-19 (Del Valle et al. 2020). Therefore, we further studied the anti-inflammatory effects of the different polar fractions of T. chinense, using LPS-stimulated RAW264.7 cells as a model of inflammatory factor production and IL-6 and TNF-α as detection indices. The anti-inflammatory effects of the extracts of the different polar fractions of T. chinense were studied in vitro alongside their anti-SARS-CoV-2 activity.

SARS-CoV-2 is still spreading and mutating around the world, but current anti-coronavirus drugs still have certain limitations. Traditional Chinese medicine has rich experience in the prevention and control of epidemics. Due to its complex composition, it can exert an overall effect through multiple levels, targets and pathways, and it can improve symptoms and reduce mortality and recurrence rates. Therefore, this study explores the antiviral and anti-inflammatory effects of T. chinense.

Sample preparation

The whole dry herb of T. chinense was collected from Xiangyang City, Hubei Province, P. R. China in May 2019. The species was identified by Prof. Kai-Jin Wang at the School of Life Sciences, Anhui University, and a voucher specimen (No. 20190927) was deposited in the School of Pharmacy, Anhui Medical University. A total of 230 g of T. chinense was soaked in 2 L of 95% ethanol solution for 12 h and ultrasonicated at 50 °C for 3 h. The filtrate was then collected, and the above steps were repeated on the filter residue with 85% and 70% ethanol solution. Half of the filtrate was concentrated and then freeze-dried (denoted BRY). The other half was extracted with 1 L of petroleum ether at 20 °C for 30 min, 4 times in total, and the organic phase was collected and concentrated (denoted BS1). The above steps were then repeated with ethyl acetate (denoted BY2) and n-butanol (denoted BZ3) (El-Hilaly et al. 2021).
The filtrate residue after ethanol extraction was extracted twice at 80 °C for 3 h in 2 L of distilled water by ultrasound and then filtered. The filtrate was concentrated and then freeze-dried (denoted BRS). All dry extracts were completely dissolved in dimethyl sulfoxide (DMSO) to obtain a 160 mg/mL mother liquor. Remdesivir (denoted RDV) was purchased from Shanghai YuanYe Biotechnology Co., Ltd and dissolved in DMSO to obtain a 1 mg/mL mother liquor; all samples were stored at 20 °C before use. Ethanol, petroleum ether, ethyl acetate, n-butanol and other organic solvents were purchased from Shanghai Titan Scientific Co., Ltd.

Cell lines and virus

African green monkey kidney cells (Vero) were maintained in Dulbecco's Modified Eagle Medium supplemented with 10% fetal bovine serum, 2% L-glutamine and 1% penicillin/streptomycin (DMEM, Gibco, USA). Mouse monocyte-macrophage leukemia cells (RAW264.7) were maintained in Dulbecco's Modified Eagle Medium supplemented with 10% fetal bovine serum, 2% L-glutamine and 1% penicillin/streptomycin (DMEM, BasalMedia, Shanghai). The cells were cultured at 37 °C and 5% CO2. Both cell lines were obtained from ATCC. SARS-CoV-2 (a virus strain isolated by the laboratory of the Anhui Provincial Center for Disease Control and Prevention from Suzhou patient 005 in 2020) was propagated in Vero cells, and the virus titer was measured as the 50% tissue culture infectious dose (TCID50) using the Karber method based on the cytopathic effect. SARS-CoV-2 infection experiments were conducted in a Biosafety Level 3 (BSL-3) laboratory, and other infection experiments were conducted in a BSL-2 laboratory.

Cytotoxicity assay

The methyl thiazolyl tetrazolium (MTT, Biofroxx, Germany) method was used to detect the cytotoxic effects of the different polar extracts (BRY, BS1, BY2, BZ3, BRS) and the positive drug (RDV) on Vero cells. Vero cells grown as a monolayer in a 96-well plate were incubated with the extracts at the specified concentrations at 37 °C with 5% CO2 for 24 h, and then 20 µL of 5 mg/mL MTT was added to each well. Incubation was continued at 37 °C and 5% CO2 for 4 h. The cell supernatant was discarded, and 100 µL of DMSO was added to each well with gentle shaking (Attallah et al. 2021). The absorbance of each well was measured with a microplate reader at a wavelength of 570 nm. The cell inhibition rate and the half maximal cytotoxic concentration (CC50) were calculated using GraphPad Prism 8.0 software. The cytotoxic effects of the different polar extracts on RAW264.7 cells were determined by the same procedure, and the half maximal inhibitory concentration (IC50) was calculated.

Antiviral activity assay

Vero cells were inoculated in a 96-well plate at a density of 1.5 × 10⁴ cells/well and incubated at 37 °C and 5% CO2 for 24 h. To explore the effect on viral replication, cells were infected with 30 TCID50 of SARS-CoV-2 (TCID50 was 10^−3.5) in an infection volume of 20 µL per well for 1.5 h. The supernatant was then removed, and 100 µL of virus culture medium containing the pre-prepared drugs at different concentrations was added. After 48 h, the infected cells were frozen, thawed and collected. To explore the effect on viral entry, Vero cells were pretreated with virus culture medium containing different concentrations of drugs for 3.5 h. SARS-CoV-2 was then added and co-incubated for 1.5 h, after which the virus and drugs were washed away. After 48 h, freeze-thawed infected cells were collected for RT-PCR detection.
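The CC50, IC50 and EC50 values reported in this work were obtained with GraphPad Prism. As a rough sketch of the same kind of calculation using SciPy instead, the snippet below fits a four-parameter logistic dose-response curve; the inhibition values are hypothetical, and only the concentration range matches the one used for the extracts.

```python
import numpy as np
from scipy.optimize import curve_fit

def four_pl(conc, bottom, top, ec50, hill):
    """Four-parameter logistic dose-response curve (percent effect vs. concentration)."""
    return bottom + (top - bottom) / (1.0 + (ec50 / conc) ** hill)

# Hypothetical % inhibition values at the concentrations used for the extracts (ug/mL).
conc = np.array([80, 160, 320, 640, 960], dtype=float)
inhibition = np.array([12.0, 28.0, 55.0, 78.0, 90.0])   # illustrative only

params, _ = curve_fit(four_pl, conc, inhibition,
                      p0=[0.0, 100.0, 300.0, 1.0], maxfev=10000)
bottom, top, ec50, hill = params
print(f"Fitted EC50 ~ {ec50:.1f} ug/mL (Hill slope {hill:.2f})")
```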
Anti-inflammatory activity assay

RAW264.7 cells were inoculated in a 24-well plate at a density of 5 × 10⁴ cells/well and incubated at 37 °C and 5% CO2 for 24 h. In the sample groups, the culture medium was changed to medium containing LPS (1 µg/mL, Sigma, USA) and different concentrations of drugs to induce RAW264.7 activation. In the blank control group, the culture medium was changed to fresh medium, and in the model group, the culture medium was changed to medium containing LPS (1 µg/mL) alone. The expression levels of TNF-α and IL-6 were detected by ELISA.

Western blot

Vero cells were inoculated in 6-well plates at a density of 2.5 × 10⁶ cells/well and cultured at 37 °C and 5% CO2 for 24 h. To explore the effect on viral replication, SARS-CoV-2 was first added to the Vero cells for 1.5 h in an infection volume of 400 µL per well. The virus was washed away, and sample solutions at different concentrations were then added to the infected Vero cells and incubated for 24 h. RIPA cell lysate (200 µL; Beyotime, Shanghai, China) containing phosphatase inhibitor and protease inhibitor (both Beyotime, Shanghai, China) was added to each well and kept on ice for 30 min to extract cellular proteins. To explore the effect on viral entry, Vero cells were pretreated with virus culture medium containing different concentrations of drugs for 3.5 h; SARS-CoV-2 was then added and co-incubated for 1.5 h, after which the virus and drugs were washed away, and cellular protein was extracted after 24 h. The cell lysate was centrifuged at 12,000 rpm at 4 °C for 20 min. The supernatant was collected, an equal volume of SDS-PAGE sample loading buffer (Beyotime) was added, and the mixture was heated at 100 °C for 10 min. Proteins were separated by 10% SDS-PAGE, transferred to a polyvinylidene fluoride membrane (PVDF, Merck Millipore Ltd., Ireland), blocked with 5% skim milk, washed with TBST 3 times for 15 min each, and incubated overnight at 4 °C with a SARS-CoV-2 nucleocapsid antibody (GTX632269, GeneTex, North America) and an antibody against GAPDH (AF7021, Affinity, USA). After washing with TBST 3 times for 15 min each, the membrane was incubated with horseradish peroxidase (HRP)-labeled secondary antibodies (goat anti-rabbit IgG-HRP and goat anti-mouse IgG, Affinity, USA). Images were obtained by the ECL (Glpbio, Montclair, USA) chemiluminescence method and quantified with ImageJ software (Bio-Rad, USA).

RNA extraction and RT-PCR

For the antiviral experiments, the copy number of viral nucleic acid was detected by TaqMan probe RT-PCR. Viral RNA was extracted from the cell supernatant using an automated nucleic acid extraction system (Tianlong, Hangzhou, China) and reverse-transcribed using 4× TaqMan Fast Virus 1-Step Master Mix (Thermo Fisher, USA). The full-length SARS-CoV-2 N gene was synthesized and cloned into pcDNA3.1. A standard curve was prepared by measuring serial dilutions of the plasmid (3 × 10² to 3 × 10⁶ copies). The primer sequences were as follows: N: F-GGGGAACTTCTCCTGCTAGAAT; R-CAGACATTTTGCTCTCAAGCTG; probe: 5′-FAM-TTGCTGCTGCTTGACAGATT-TAMRA-3′.
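A plasmid dilution series like the one above is typically turned into a standard curve by regressing Ct against log10 copy number and interpolating unknowns from it. The sketch below shows that arithmetic; the Ct values are hypothetical, and only the 3 × 10² to 3 × 10⁶ dilution range comes from the text.

```python
import numpy as np

# Hypothetical Ct values for the plasmid dilution series (3e2 - 3e6 copies/reaction).
copies = np.array([3e2, 3e3, 3e4, 3e5, 3e6])
ct = np.array([33.1, 29.6, 26.2, 22.8, 19.4])

slope, intercept = np.polyfit(np.log10(copies), ct, 1)   # Ct = slope*log10(N) + intercept
efficiency = 10 ** (-1.0 / slope) - 1.0                   # amplification efficiency

def copies_from_ct(sample_ct: float) -> float:
    """Interpolate a sample's copy number from its Ct using the standard curve."""
    return 10 ** ((sample_ct - intercept) / slope)

print(f"Slope {slope:.2f}, efficiency {efficiency * 100:.0f}%")
print(f"Sample with Ct 24.5 ~ {copies_from_ct(24.5):.2e} copies")
```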
ELISA

For the anti-inflammatory experiments, the expression levels of TNF-α and IL-6 were detected with ELISA kits (ProteinTech, China). Standard curves were prepared according to the kit instructions. Based on preliminary experiments, the IL-6 sample wells were diluted 3-fold and the TNF-α sample wells were diluted 20-fold. After washing, detection antibodies were added and incubated at 37 °C for 1 h. After washing, HRP-labeled streptavidin was added, incubated at 37 °C for 40 min, and the wells were washed again. For colour development, TMB substrate solution was added to each well for 15–20 min at 37 °C; the reaction was then terminated by adding stop solution to each well, turning the blue colour yellow. The optical density (OD) of each well was measured at 450 nm with a microplate reader, with 630 nm as the correction wavelength. For data analysis, the blank-well OD value was subtracted from the OD value of each standard and sample. With the concentration of the standard as the abscissa and the OD value as the ordinate, the software ELISACalc was used for four-parameter logistic fitting (4-PL). The concentration corresponding to the OD value of each sample was calculated from the standard curve, and the measured concentration of the sample was then obtained by multiplying by the dilution factor.

Antiviral effect of extracts from different polar parts of T. chinense against SARS-CoV-2

We first investigated the effects of BRY, BS1, BY2, BZ3 and BRS on Vero cell viability. BRS had no significant effect on Vero cell viability at a drug concentration of 640 µg/mL (Figure 1Ae). The inhibitory effects of BRY, BS1, BY2 and BZ3 on Vero cell viability increased with increasing drug concentration, with CC50 values of 834.5 µg/mL (BRY), 268.0 µg/mL (BS1), 25.94 µg/mL (BY2) and 95.14 µg/mL (BZ3) (Figure 1Aa–d). In the antiviral replication assays, BRY showed a highly significant anti-replication effect with an EC50 of 259.3 µg/mL and weak cytotoxicity, giving a therapeutic index (SI) of 3.22 (Figure 1Aa). BS1 (EC50: 163.6 µg/mL, SI: 1.64, Figure 1Ab) and BZ3 (EC50: 56.8 µg/mL, SI: 1.68, Figure 1Ad) also showed anti-replication effects. BY2 (EC50: 22.92 µg/mL, SI: 1.13, Figure 1Ac) had an antiviral replication effect at a certain concentration; however, it also significantly inhibited cell viability at the same concentration, resulting in a low SI. BRS showed no effect against viral replication (Figure 1Ae). Further study showed that BRY also had a significant antiviral entry effect, with an EC50 of 359.1 µg/mL and an SI of 2.32 (Figure 1C). We further examined the NP levels during the replication and entry stages under BRY treatment by Western blotting and found that the NP content decreased with increasing BRY concentration, consistent with the RT-PCR results (Figure 1D,E). In contrast, the positive drug RDV (EC50: 17.21 µM, Figure 1B) had no antiviral entry effect, because it is a nucleoside analog that only inhibits virus replication and does not affect virus entry.

Anti-inflammatory effect of extracts from different polar parts of T. chinense
The anti-inflammatory effects of the different polar fractions of T. chinense were investigated using LPS (1 µg/mL) to stimulate RAW264.7 cells to produce the inflammatory factors IL-6 and TNF-α, which were measured with ELISA kits. The results showed that BRS and BZ3 had no obvious effect on the viability of RAW264.7 cells at a concentration of 640 µg/mL, while BRY, BS1 and BY2 had obvious inhibitory effects on RAW264.7 cell viability, with IC50 values of 706.8 µg/mL (BRY), 129.1 µg/mL (BS1) and 355.8 µg/mL (BY2) (Figure 2A). In the inflammation model group, the content of IL-6 and TNF-α in the medium increased significantly after LPS stimulation. BRY showed obvious dose dependence from 80 µg/mL to 640 µg/mL: with increasing drug concentration, the release of TNF-α and IL-6 was significantly inhibited, and at 640 µg/mL the inhibition was significant compared with the LPS model group (p < 0.0001, Figure 2Ba,b). BS1 significantly inhibited the release of IL-6 (p < 0.0001) and TNF-α (p < 0.01) at concentrations of 80 µg/mL and 160 µg/mL in a dose-dependent manner compared with the model group (Figure 2Bc,d). BY2 showed obvious dose dependence from 20 µg/mL, significantly inhibiting the release of TNF-α and IL-6 with increasing drug concentration (p < 0.0001 versus the model group, Figure 2Be,f). BZ3 significantly inhibited the release of IL-6 at a concentration of 640 µg/mL (p < 0.0001), while its inhibition of TNF-α was weaker (p < 0.01, Figure 2Bg,h). BRS had a weak but statistically significant effect on TNF-α release at drug concentrations of 40 µg/mL and 160 µg/mL. Moreover, BRS did not inhibit the release of IL-6 (Figure 2Bi,j).
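The cytokine concentrations underlying these comparisons were obtained, as described in the Methods, by interpolating blank-corrected OD values on a four-parameter logistic (4-PL) standard curve and multiplying by the dilution factor. The sketch below shows that back-calculation; the fitted parameters and the sample OD are hypothetical, and the curve form is the standard 4-PL, not the authors' ELISACalc output.

```python
def four_pl(conc, a, d, c, b):
    """4-PL standard curve: OD as a function of concentration
    (a = lower asymptote, d = upper asymptote, c = inflection point, b = slope)."""
    return d + (a - d) / (1.0 + (conc / c) ** b)

def conc_from_od(od, a, d, c, b):
    """Invert the fitted 4-PL curve to interpolate a sample concentration from its OD."""
    return c * ((a - d) / (od - d) - 1.0) ** (1.0 / b)

# Hypothetical fitted parameters and a hypothetical blank-corrected sample OD.
a, d, c, b = 0.05, 2.4, 150.0, 1.1        # standard curve on a pg/mL scale
sample_od = 0.85
dilution_factor = 20                       # e.g. the 20-fold dilution used for TNF-alpha wells

measured = conc_from_od(sample_od, a, d, c, b) * dilution_factor
print(f"Sample concentration ~ {measured:.0f} pg/mL")
```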
Discussion

SARS-CoV-2 is still spreading worldwide, and the research and development of related drugs are crucial for epidemic prevention and control. Traditional Chinese medicine has been used to fight disease for thousands of years and has accumulated rich theoretical and practical experience. Natural products are stable in the human gastrointestinal tract, which increases their bioavailability, and they have a long track record in the treatment of respiratory infections. Many natural products have been approved as drugs and are generally safe, so they are often used in combination therapy. In the face of this epidemic, traditional Chinese medicine has also played a significant role in treating COVID-19 (Wang and Yang 2021), and its active intervention has further improved clinical treatment efficacy (Chen and Chen 2020). Therefore, we focused on natural medicines and tried to find a traditional Chinese medicine with anti-coronavirus effects. Based on ancient texts and a literature review, we selected T. chinense, known as a 'natural antibiotic', to explore the antiviral and anti-inflammatory effects of extracts of its different polar fractions.

The replication cycle of SARS-CoV-2 includes adsorption, entry, replication, assembly and secretion. Within this cycle, NP plays a key role in virus replication, assembly and immune regulation (Peng et al. 2020) and is an important indicator of progeny virus formation; it was therefore chosen as the readout in our experiments. In the anti-replication experiments, SARS-CoV-2 was first added to Vero cells for 1.5 h, the virus was then washed away, and different concentrations of the extracts of the different polar fractions of T. chinense were added to the infected Vero cells. The results showed that the ethanol, petroleum ether, ethyl acetate and n-butanol extracts could inhibit the replication of the virus, with the ethanol extract performing better than the others; the petroleum ether, ethyl acetate and n-butanol extracts showed antiviral activity accompanied by strong cytotoxicity. To further explore the entry stage, Vero cells were pretreated with different concentrations of the T. chinense ethanol extract for 3.5 h, incubated with the virus for 1.5 h, and the virus and drug were then washed away. It was found that the ethanol extract of T. chinense could inhibit the entry of the virus. According to the RT-PCR and Western blot results, the amount of NP decreased markedly with increasing drug concentration, indicating that the ethanol extract of T. chinense inhibits virus replication in cells by reducing the generation of NP. Kaempferol, a monomeric compound isolated from T. chinense, has been found to have a strong affinity for the ACE2 receptor used by SARS-CoV-2 (Pan et al. 2020). Glycosidic derivatives of kaempferol have also been shown to be inhibitors of virus release, with the combination with the 3CLpro of SARS-CoV-2 being the most favourable (Chen et al. 2021; Liao et al. 2021), which provides a starting point for exploring the antiviral mechanism of T. chinense in future studies.

Infection with SARS-CoV-2 can lead to the release of a large number of proinflammatory cytokines (Zheng et al. 2021), and the serum levels of IL-6 and TNF-α are significantly increased in critically ill patients with COVID-19, which are important factors affecting disease severity and patient death. Both IL-6 and TNF-α are pleiotropic cytokines that play important roles in the pathogenesis of most acute inflammatory diseases (Kalliolias and Ivashkiv 2016). Many IL-6 and TNF-α inhibitors have been used in clinical practice (Dorner and Kay 2015; Rossi et al. 2015), and several studies have shown that IL-6 inhibitors can effectively treat COVID-19 (Xu et al. 2020). Therefore, we studied the anti-inflammatory effects of the different polar fractions of T. chinense using LPS-stimulated RAW264.7 cells as a model of inflammation, with the expression levels of TNF-α and IL-6 detected by ELISA. In the model group, the content of IL-6 and TNF-α in the medium was significantly increased after LPS stimulation. BRY, BS1, BY2 and BZ3 showed clear concentration dependence within specific concentration ranges, gradually decreasing the LPS-induced expression of IL-6 and TNF-α with increasing concentration. BRY showed the best anti-inflammatory effect: compared with the model group, it significantly inhibited the release of TNF-α and IL-6 at concentrations of 320 µg/mL and 640 µg/mL (p < 0.0001). BRS had no effect on LPS-induced IL-6 expression but had a slight inhibitory effect on LPS-induced TNF-α expression at the low concentration of 160 µg/mL. In previous studies, the ethyl acetate extract of T. chinense showed a significant anti-inflammatory effect on xylene-induced ear edema (Parveen et al. 2007), which is consistent with the results of our anti-inflammatory experiments. Glycosidic derivatives of kaempferol isolated from T. chinense can inhibit the expression of TNF-α, IL-6, IL-1β and PGE2, improve pulmonary edema in mice in vivo, and inhibit the phosphorylation of NF-κB and MAP kinases in vitro (Sun et al. 2019).
In conclusion, the extract of T. chinense has a good anti-inflammatory effect and has the potential to act as an inhibitor of IL-6 and TNF-α.

Bacterial co-infection is a common complication of many viral respiratory tract infections. Viral infections alter the bacterial community in the upper respiratory tract, which may increase susceptibility to secondary infections and disease severity; this often leads to more severe clinical symptoms and significantly increases complication rates and mortality (Gupta et al. 2008; Rattanaburi et al. 2022). Bacterial co-infection is an important factor in almost all influenza deaths. Studies have found that Staphylococcus aureus and Mycoplasma pneumoniae have high infection rates in mild COVID-19 cases, and that patients hospitalized for a long time are more likely to experience bacterial co-infection. Early diagnosis and treatment of bacterial co-infection can reduce the severity of COVID-19 and avoid complications (Davies-Bolorunduro et al. 2022). T. chinense, known as a 'natural antibiotic', has an excellent effect in treating inflammation, microbial infection and upper respiratory tract disease (Li et al. 2021). It also has a broad-spectrum antibacterial effect, significantly inhibiting S. aureus, Aeromonas hydrophila, Sarcina lutea, Bacillus cereus, Bacillus subtilis and Pseudomonas aeruginosa (Liu et al. 2006). T. chinense can therefore inhibit multiple bacterial infections and has considerable potential against bacterial co-infection. On this basis, we further investigated the anti-inflammatory and antiviral effects of T. chinense. The results showed that the ethanol extract of T. chinense significantly inhibited the replication and entry of SARS-CoV-2 in Vero cells and significantly reduced the levels of IL-6 and TNF-α produced by LPS-stimulated RAW264.7 cells. In COVID-19 caused by SARS-CoV-2, T. chinense may therefore not only significantly reduce the viral load and inhibit the activity of a variety of bacteria, but also mitigate the inflammation caused by viral or bacterial infection. T. chinense is a natural antiviral medicine with great potential.

Conclusions

The extracts of T. chinense significantly inhibited the replication and entry of SARS-CoV-2, had good anti-inflammatory effects, and inhibited the expression of the inflammatory factors IL-6 and TNF-α. The petroleum ether, ethyl acetate and n-butanol fractions have a certain antiviral replication activity but also a certain cytotoxicity. In terms of anti-inflammatory effect, the petroleum ether, ethyl acetate and n-butanol fractions showed clear concentration dependence, with the LPS-induced expression levels of IL-6 and TNF-α decreasing gradually with increasing concentration. The aqueous extract had a slight inhibitory effect on LPS-induced TNF-α expression only at the low concentration of 160 µg/mL. Together, these results suggest that T. chinense has good antiviral and anti-inflammatory activities and can be a potential candidate to fight SARS-CoV-2.

However, this experiment also has some limitations.
Although T. chinense has strong antiviral activity, its cytotoxicity is also relatively high. If the separation of monomeric compounds is considered in subsequent experiments, in-depth studies can be carried out to increase efficacy and reduce toxicity through optimization of the chemical structure of the monomers. Regarding the mechanism of action, the present work cannot describe the concrete mechanism of the antiviral action of T. chinense; future studies should explore the specific pathway through which T. chinense exerts its antiviral effect and examine its mechanism of action in more detail.

Figure 1. Antiviral effect of extracts from different polar parts of T. chinense against SARS-CoV-2. A. Effects of the ethanol extract of T. chinense (a), petroleum ether extract (b), ethyl acetate extract (c), n-butanol extract (d) and water extract (e) on Vero cell viability (CC50) and on antiviral replication in Vero cells (EC50). B. Positive drug remdesivir against viral replication in Vero cells (EC50). C. Antiviral entry effect of the ethanol extract of T. chinense in Vero cells. D. Western blot analysis of nucleocapsid levels in Vero cells during virus replication. E. Western blot analysis of nucleocapsid levels during virus entry into Vero cells.

Figure 2. Anti-inflammatory effect of extracts from different polar parts of T. chinense. A. Inhibitory effect of the ethanol, petroleum ether, ethyl acetate, n-butanol and water extracts of T. chinense on RAW264.7 cell viability (IC50). B. Effects of the ethanol extract (a, b), petroleum ether extract (c, d), ethyl acetate extract (e, f), n-butanol extract (g, h) and water extract (i, j) of T. chinense on the expression levels of TNF-α and IL-6 in RAW264.7 cells stimulated by LPS. Compared with the model control group, * p < 0.05; ** p < 0.01; *** p < 0.001; **** p < 0.0001.
2023-09-08T06:17:04.285Z
2023-09-07T00:00:00.000
{ "year": 2023, "sha1": "f2835eef1c599f855ea6655d2bf08ecb9ac510f0", "oa_license": "CCBY", "oa_url": null, "oa_status": null, "pdf_src": "PubMedCentral", "pdf_hash": "db14cbdec89b353c06f1c362fe76bb2325010f39", "s2fieldsofstudy": [ "Medicine", "Environmental Science" ], "extfieldsofstudy": [ "Medicine" ] }
204243507
pes2o/s2orc
v3-fos-license
4D biofabrication of skeletal muscle microtissues Skeletal muscle is one of the most abundant tissues in the body. Although it has a relatively good regeneration capacity, it cannot heal in the case of disease or severe damage. Many current tissue engineering strategies fall short due to the complex structure of skeletal muscle. Biofabrication techniques have emerged as a popular set of methods for increasing the complexity of tissue-like constructs. In this paper, a 4D biofabrication technique is introduced for the fabrication of skeletal muscle microtissues. To this end, a bilayer scaffold consisting of a layer of anisotropic methacrylated alginate fibers (AA-MA) and aligned polycaprolactone (PCL) fibers was fabricated using electrospinning and later induced to self-fold to encapsulate myoblasts. Bilayer mats undergo shape-transformation in an aqueous buffer, a process that depends on their overall thickness, the thickness of each layer and the geometry of the mat. Proper selection of these parameters allowed fabrication of scroll-like tubes encapsulating myoblasts. The myoblasts were shown to align along the axis of the anisotropic PCL fibers and further differentiated into aligned myotubes that contracted under electrical stimulation. Overall, the significance of this approach lies in the fabrication of hollow tubular constructs that can be further developed for the formation of a vascularized and functional muscle. Introduction A growing population, increasing lifespan and an aging society have led to a growing need for donor organs and organ replacement. However, the limited availability of donor tissues, as well as the high risk of immune rejection of the transplant, raises a great need for the development of new methods for tissue engineering [1]. One of the most important tasks in tissue engineering is the replacement of damaged or lost skeletal muscle. Skeletal muscle makes up ca. 40% of an average adult male, making it the most abundant tissue in the body [2]. Skeletal muscle has a complex structure and is formed of bundles of parallel, packed and organized fibers [3]. Even though skeletal muscle shows a high capacity for self-repair, it is unable to regenerate in the case of severe damage or disease, such as tumor ablation and volumetric muscle loss (VML) injuries [4,5]. One big challenge in the engineering of functional skeletal muscle tissue is the formation of highly aligned muscle fibers similar to the native structure. Because of this, skeletal muscle is considered especially important for the further development of tissue-replacement strategies. Multiple attempts have been made to engineer 2D structures mimicking the natural morphology of skeletal muscle tissue using diverse techniques based on (i) patterned substrates and (ii) mechanical/electrical stimulation [6-8]. In the case of chemically/topographically patterned substrates (quasi-2D objects), cells form a continuous layer on a patterned surface (cell density is comparable to that in natural tissue) and tend to align according to the pattern [9,10]. This approach allows fabrication of relatively thin aligned cell sheets. However, scaling up to highly organized and oriented cells in a thick multilayer structure using this method is not trivial. The second strategy consists of the use of hydrogels with encapsulated myoblasts (3D objects) and their exposure to oscillating/constant mechanical deformation or pulsed electrical stimulation [11,12]. The cell density commonly used in this approach is ca.
10⁶–10⁷ cells ml⁻¹ [8,13]. A higher initial cell density might change the stability of the hydrogel and make it brittle [14], while the fabrication of tissues with a high cell density, and supporting the maturation and functionality of these cells, is still a challenge. In this paper, we introduce 4D biofabrication [15,16] to address current challenges in the fabrication of skeletal muscle tissues. 4D biofabrication comprises a variety of fabrication technologies and assumes that the desired structure/shape/morphology is achievable by a shape-transformation of preliminarily fabricated 3D elements. Importantly, the shape-transformation should occur in a controlled manner by applying an external stimulus such as swelling in aqueous media, pH or temperature changes [15,17]. In our approach, we utilize a shape-changing polymer bilayer consisting of stimuli-responsive, biodegradable methacrylated alginate anisotropic fibers (AA-MA, outer layer of the self-folded tube) and aligned, biodegradable electrospun polycaprolactone nanofibers (PCL, inner layer of the self-folded tube). This construct has a number of advantages over shape-changing layers previously used for 4D biofabrication [16, 18-20]. In contrast to the widely used thermoresponsive (meth)acrylamides [18, 20-22] and unstructured poly(ethylene glycol) (PEG)-based scaffolds [16,23], the bottom AA-MA layer is simultaneously sensitive to a cell-compatible signal, the presence of Ca²⁺ ions, and biodegradable [24-26]. Furthermore, we observed that the AA-MA fibers swelled as expected for a thin layer of hydrogel. The aligned PCL is biodegradable, and the electrospun fibers were shown to guide both shape-transformation and cell orientation [27-34], which cannot be provided by isotropic hydrogels and unstructured solid polymers. Moreover, the porosity of an electrospun layer allows for the diffusion of oxygen and nutrients to cells [18,35], which is usually hindered by the solid polymer layers used previously [16,19,36]. There have been extensive studies on cell alignment on various 2D substrates like ribbons and electrospun fibers [5,30]; in this study, however, we describe a self-folding material with the ability to align cells during their growth, which has not been discussed before. We show the ability to fine-tune the diameters of the formed self-folded tubular structures over a wide range (0.1–70 mm), which makes this material suitable for the formation of both single micrometer-scale muscle fibers and large muscle bundles. In this study, we first cultured mouse myoblast cells on such a fibrous bilayer mat (figures 1(a), (b)). Mats roll and form a tubular multilayer structure with a channel inside (figure 1(c)). Finally, myotubes are formed inside the rolled structure after differentiation (figure 1(d)). This process results in the formation of a continuous cell construct inside a rolled fibrous scroll-like tube (figure 1(d)). Synthesis of methacrylated alginate (AA-MA) The methacrylate groups in alginate were introduced using the procedure described before [37]. A 20-fold excess of methacrylic anhydride was added dropwise to a 2% alginate solution. The reaction pH was constantly adjusted to pH 8 using 5 M NaOH. The mixture was incubated at 4°C for 24 h under constant stirring at 800 rpm. AA-MA was precipitated and washed in ethanol to remove the remaining methacrylic acid and anhydride. The clean substance was air-dried for further use.
Electrospinning The electrospinning setup consists of a custom-made multi-syringe pump, a needle holder with a variable distance between the needles, and electrospinning equipment (30 kV voltage controller, two conductive bars and a rotating drum as collectors). Omnifix® 3 and 5 ml syringes were used, and flow rates were adjusted to 0.02 ml min⁻¹. Needles with 0.8 mm inner diameter were used; 15 kV was applied to the tip of the needle, whereas 5 kV was applied to the collector. Electrospun fibers were collected either on the rotating drum (640 rpm) or between two conductive bars (distance between bars 4 cm). The distance between the needle tip and the collectors was kept constant at 15 cm. Bilayer systems were produced by sequential deposition of different polymer solutions during electrospinning. The PCL solution in chloroform with 8.5 wt% concentration was electrospun to obtain PCL fibers. Alginate fibers were electrospun using 3 wt% AA-MA solution containing 10 μl of 0.5% EY in VP and 200 μl of 0.5 M TEA. The spinning solution was also mixed with 5 wt% PEO and 30 wt% Pluronic F127 at a weight ratio of 70/30/2 and stirred overnight [25]. Scanning electron microscopy (SEM) The structure and microscopic features of the fibers were investigated by field emission scanning electron microscopy (FE-SEM) (FEI Teneo, FEI Co., Hillsboro, OR and Carl Zeiss Microscopy GmbH, Germany). Fully dried samples were covered with ∼10 nm gold to ensure conductivity. Dynamic mechanical analysis (DMA) The mechanical properties of electrospun fiber mats were characterized by dynamic mechanical analysis (Anton Paar MCR 702 TwinDrive, Austria). Samples with dimensions of 50×10×0.8 mm³ were prepared, and dual cantilever tension mode was used for the measurement. During the measurement, a static (150 mN) and a dynamic force (130 mN) were applied to the sample. The frequency was kept constant during the measurement (1 Hz). The temperature range used during the measurement was from 20°C to 37°C, and a scanning rate of 2°C min⁻¹ was used to characterize the viscoelastic properties. 2.6. Differential scanning calorimetry (DSC) DSC was performed on a Mettler Toledo DSC 3 (USA). Samples were prepared by loading 5-10 mg of finely cut PCL mat pieces in a closed aluminum crucible. The polymers were scanned in three steps: (1) heating from −10°C to 120°C, (2) cooling down to −10°C, and (3) heating to 120°C again. For all samples, the heating/cooling rate was 10 K min⁻¹. Small-angle x-ray scattering (SAXS) All small-angle x-ray scattering (SAXS) data were measured using the SAXS system 'Double Ganesha AIR' (SAXSLAB, Denmark). The x-ray source of this laboratory-based system was a rotating anode (copper, MicroMax 007HF, Rigaku Corporation, Japan). The data were recorded by a position-sensitive detector (PILATUS 300 K, Dectris). To cover the range of scattering vectors between 0.002 and 1 Å⁻¹, different detector positions were used. The measurements were done with both parallel and perpendicular geometries of the beam relative to the bilayer mat at room temperature. Rheology Rheological properties of the non-crosslinked 3% AA-MA solution were measured using a Rheometer AR G2 (TA Instruments, USA). Cone-plate geometry with a size of 20 mm was used in oscillatory mode. The complex viscosity of the solution as a function of temperature was measured over a temperature range from 20°C to 40°C; the shear rate was kept constant at 3.34 s⁻¹ (the calculated theoretical shear rate in the electrospinning needle).
Storage and loss moduli as a function of angular frequency were measured using frequency sweep measurements, where the angular frequency was varied from 0.1 to 100 Hz at 10% strain. 2.9. Cell culture studies C2C12 mouse muscle cells (passage number <7) were cultured on the aligned PCL fibers and on the bilayer PCL/AA-MA fibrous scaffold. First, fibrous scaffolds were fixed in cell crowns (Scaffdex CellCrown™ inserts) and, after washing with 70% ethanol and PBS, were sterilized using UV light for 30 min. To enhance cell adhesion on the PCL fibers, the PCL side of the bilayer and the PCL scaffolds were coated with sterilized FNC solution (fibronectin, collagen, albumin cocktail solution, Thermo Fisher) for 30 s. Following the coating of the scaffolds, a cell suspension with a density of 10⁵ cells ml⁻¹ was seeded on top and incubated for 30 min for initial attachment of the cells. The growth medium of C2C12 cells, containing DMEM, 10 v/v% FBS, 1% Pen/Strep, 4 mM glutamine and 20 mM HEPES, was added to the samples, and the cell viability as well as the cell alignment was monitored at time points of 1, 3, 5, and 7 days after seeding. After 7 days of culture, myoblast cells cultured on both bilayer and pure PCL fibrous scaffolds were moved to a differentiation medium containing DMEM, 2 v/v% horse serum, 1% Pen/Strep, 4 mM glutamine and 20 mM HEPES. The differentiated cells and myosin expression were evaluated by immunostaining on days 4 and 7 after differentiation, and cells were stimulated electrically on day 7 to evaluate their functionality. Live/dead assay The viability of the muscle cells cultured on bilayers, as well as on PCL fibers, was measured using a live/dead assay. A staining solution containing 1 μl of Calcein AM (Thermo Fisher) and 4 μl of ethidium homodimer-1 (EthD-1, Thermo Fisher) was prepared in 2 ml PBS; samples were covered with the staining solution and incubated for 20 min at room temperature before imaging using a fluorescence microscope (Leica DMi8, Germany). The cell viability was analyzed at different time points (1, 3 and 7 days after seeding) by counting the number of live and dead cells in ten images. Staining of actin filaments and nuclei To quantify the alignment of the muscle cells cultured on bilayers as well as on PCL fibers, the actin filaments and nuclei were stained using DAPI (Thermo Fisher) and Phalloidin DyLight™ 488 (Thermo Fisher). The staining solution, containing 500 μl of 0.1 mg ml⁻¹ DAPI and 250 μl Phalloidin in 10 ml PBS, was prepared according to the number of samples to stain the actin and nuclei of the cells. Firstly, the samples after 1, 3 and 7 days in culture were washed with PBS and then fixed using 3.7% formaldehyde solution for 15 min at room temperature. After fixation of the cells, the samples were washed with PBS and the cell membranes were permeabilized with 0.1% Triton solution for 5 min at room temperature. Next, samples were washed with PBS and fully covered with the staining solution. After 30 min of incubation, samples were washed with PBS and imaged using a fluorescence microscope. To investigate the cell alignment, morphological changes of the nuclei were analyzed. ImageJ and an orientation plug-in were used to analyze ten images taken from different samples. Nuclei with orientation angles of <10° to the fibers were considered aligned.
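A minimal sketch of the alignment quantification just described, assuming the per-nucleus orientation angles (in degrees, relative to the fiber axis) have already been measured with the ImageJ orientation plug-in; the angle values below are hypothetical placeholders, not data from this study.

import numpy as np

# Hypothetical per-nucleus orientation angles exported from ImageJ (degrees)
angles = np.array([2.0, 5.5, -8.0, 14.0, 30.0, -3.5, 9.0, 45.0, 7.0, -12.0])

# Angles within +/- 10 degrees of the fiber direction count as aligned
aligned = np.abs(angles) < 10.0
fraction_aligned = aligned.mean() * 100.0

print(f"{aligned.sum()} of {angles.size} nuclei aligned ({fraction_aligned:.1f}%)")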
Myosin heavy chain staining To investigate the formation of myotubes and the expression of the myosin heavy chains on days 4 and 7 after incubation in the differentiation medium, samples were stained using the following immunostaining protocol. Firstly, samples were fixed and the cell membranes were permeabilized as described previously for actin staining. Then a blocking solution of 5% BSA in PBS was added and incubated at 37°C for 15-30 min. The solution was aspirated and the samples were washed with PBS. Next, the primary antibody MY32 (Thermo Fisher), diluted 1000× in 0.1% BSA, was added and incubated overnight at 4°C. Samples were further washed with PBS, and then a staining solution of a secondary antibody (goat anti-mouse IgG 488, Thermo Fisher) and DAPI at 1000× dilution in 0.1% BSA was added. Samples were protected from light and incubated at 37°C for 1 h. After incubation and washing steps with PBS, images were taken using a fluorescence microscope. The images taken from different samples at different time points were analyzed using ImageJ software by evaluating the myotube length and aspect ratio and counting the number of nuclei in the formed myotubes. To obtain more representative results, ten images of different places on the samples were used. Electrical stimulation Differentiated muscle cells grown on fibrous bilayer scaffolds were stimulated after 7 days of differentiation using a custom-made electrical stimulation device to evaluate the functionality of the myotubes. Briefly, cell-seeded electrospun meshes were taped in 60 mm culture dishes with silicone adhesive tape and, after 7 days of differentiation, were stimulated electrically using a self-made stimulation dish made of two parallel platinum electrodes. The stimulation dish was sterilized before each use under UV light for 15 min; the platinum electrodes were placed perpendicular to the fiber direction and the samples were subjected to previously optimized continuous square electrical pulses (4-5 V, frequency: 1 Hz, duration: 1 ms) [38]. A stimulation medium was prepared by adding 1% MEM non-essential amino acids (Gibco) and 2% MEM essential amino acids (Gibco) to the differentiation medium. The contraction of the muscle cells was captured using time-lapse imaging. 2.9.5. Cell imaging using SEM To investigate cell spreading and morphology after adhesion on the fabricated bilayers, samples were fixed, dehydrated and analyzed using SEM. An alcohol concentration gradient (50%, 70%, 80%, 90%, 100% EtOH) was used to gradually remove water from the samples. Then, samples were covered with a mixture of EtOH and tert-butyl alcohol (1:1 v/v) for 5 min at room temperature. Next, samples were dipped in pure 100% tert-butyl alcohol. To freeze the tert-butanol, samples were put in a −80°C freezer for 1 h. A lyophilizer was used to dry the samples completely. When the samples were fully dried, they were prepared for SEM imaging as described in the SEM section above. Statistical analysis The obtained data are shown as mean ± standard deviation (SD) (three replicates were used). A Student's t-test and one-way analysis of variance (ANOVA) followed by Tukey's multiple comparison test were performed to analyze differences between experimental groups. A value of p < 0.05 was considered statistically significant.
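The statistical workflow described above can be sketched as follows with SciPy and statsmodels; the three groups and their values are placeholders, not data from this study.

import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Placeholder measurements for three experimental groups (n = 3 replicates each)
group_a = np.array([18.5, 20.1, 19.3])
group_b = np.array([24.2, 25.7, 23.9])
group_c = np.array([30.4, 29.8, 31.1])

# Pairwise comparison of two groups (Student's t-test)
t, p = stats.ttest_ind(group_a, group_b)
print(f"t-test A vs B: p = {p:.4f}")

# One-way ANOVA across all groups, followed by Tukey's multiple comparison test
f, p_anova = stats.f_oneway(group_a, group_b, group_c)
print(f"ANOVA: p = {p_anova:.4f}")

values = np.concatenate([group_a, group_b, group_c])
labels = ["A"] * 3 + ["B"] * 3 + ["C"] * 3
print(pairwise_tukeyhsd(values, labels, alpha=0.05))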
Results and discussion We used electrospinning to prepare polycaprolactone (PCL)-methacrylated alginate (AA-MA) bilayer mats. It is essential that the PCL fibers are uniaxially aligned in order to (i) provide orientation cues to cells and (ii) guide the shape-transformation of the bilayer, resulting in the formation of a tube. The orientation of the AA-MA fibers was not crucial, as they were not used as the cell substrate and, due to extensive swelling, no longer had distinctive structural cues for cells to follow. PCL fibrous mats with aligned fibers were fabricated as a first layer by electrospinning, where two different collectors, a rotating drum and parallel bars, were used. We found that the diameter of the PCL fibers varies slightly with the method of deposition and was ∼2.0±2.0 μm and 1.6±1.1 μm when the rotating drum and the parallel bars collector were used, respectively (figures 2(a)-(b), S1(a)-(d) are available online at stacks.iop.org/BF/12/015016/mmedia). AA-MA mats with random fiber orientation were deposited on top of the previously prepared PCL mats. The spinning solution of the AA-MA fibers contained Eosin Y and triethanolamine (TEA) as photoinitiator and crosslinking agent for photocrosslinking under green light (wavelength: 532 nm). We also prepared individual PCL and AA-MA mats for characterization of their individual properties. It was determined that the parallel bars collector resulted in a higher degree of alignment of the PCL fibers (60% of all fibers had a <10° fiber orientation angle) compared to the drum collector (40%). On the other hand, the rotating drum allowed fabrication of a thicker fiber mat (500 μm) in comparison with the parallel bars (100 μm) (figure 2(c)). We observed that the degree of fiber alignment in mats produced by both methods decreased with increasing spinning time and mat thickness, which was caused by fiber thickening and entanglement [39]. Small- and wide-angle x-ray scattering measurements confirmed uniaxial orientation of the polymer chains and lamellae. At low scattering vectors q, the SAXS signal scales with q⁻⁴, as expected for a 3D object such as the mat. The strong anisotropy of the signal in that range proves the preferred orientation of the fibers inside the mat. A shoulder around 0.05 Å⁻¹ is visible in the 1D data, corresponding to a correlation length of about 14 nm; if the mat is measured in the parallel beam direction, this feature is seen in the 2D image as spots on opposite sides. As a result of the preparation method, the peakedness of the shoulder is most pronounced in the parallel geometry as a consequence of the better orientation in the equatorial plane (thinnest dimension) of the mat. The preferred orientation of the fibers itself is reflected by the lozenge-like shape at the lowest q. The degree of crystallinity of the electrospun PCL fibers was about 40%, as revealed by differential scanning calorimetry (DSC) (figure S2(a)). Dynamic mechanical analysis (DMA) revealed the anisotropic mechanical properties of the PCL mats: there is a significant difference in elastic modulus in different directions (figure 2(d)). The elastic modulus measured for the longitudinal orientation of the fibers was 20.7 MPa, significantly higher than that measured for the transverse orientation (E = 6.3 MPa). The higher value of the longitudinal modulus can be explained by the fact that stretching of the oriented fibers requires a larger force than the separation of individual fibers. We separately investigated the swelling properties of a freestanding AA-MA mat. The results show that the swelling degree of the polymer mat decreases with increasing concentration of Ca²⁺ ions (figure 2(f)). Interestingly, while the thickness swelling of the photocrosslinked AA-MA mats strongly depends on the concentration of Ca²⁺ ions (range 300%-1300%) (figure 2(f)), the lateral width changes are almost independent of the Ca²⁺ concentration and are in the range 10%-15% (figure S2(d)).
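For reference, thickness swelling figures such as the 300%-1300% quoted above are commonly computed as the relative thickness increase with respect to the dry mat; a minimal sketch under that assumption is given below (the paper's exact definition and the 60 μm dry thickness used here are assumptions for illustration, not values stated in the text).

def swelling_degree(h_dry_um, h_swollen_um):
    """One common convention: relative thickness increase in percent,
    (h_swollen - h_dry) / h_dry * 100."""
    return (h_swollen_um - h_dry_um) / h_dry_um * 100.0

# Hypothetical thicknesses of a photocrosslinked AA-MA mat at two Ca2+ concentrations
print(swelling_degree(60, 840))   # low Ca2+  -> 1300 %
print(swelling_degree(60, 240))   # high Ca2+ ->  300 %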
The PCL-alginate bilayers demonstrated shape-transformation behavior in an aqueous environment (water, PBS, water with different Ca²⁺ ion concentrations) as they started to roll and form tubular scroll-like structures. Eventually, the diameter of these tubes and the direction of folding of the bilayers depend on both (i) the orientation of the PCL fibers with respect to the main axis of the polymer mat and (ii) the concentration of Ca²⁺ ions (figure 3). We observed that bilayers roll and form tubes in an aqueous environment with a low Ca²⁺ ion concentration. The PCL and AA-MA mats formed the inner and outer surfaces of the tubes, respectively (figure 3(a)). An increased concentration of Ca²⁺ ions resulted in unrolling of the bilayers due to the decreased swelling degree of alginate at different Ca²⁺ concentrations (0.00145-0.08 mol l⁻¹, figure 3(b)). Interestingly, we observed a change in the folding direction of the bilayer towards the AA-MA side at a certain Ca²⁺ concentration (∼0.08 mol l⁻¹ for samples with PCL/AA-MA thickness ratio 0.5-6). In this case, the bilayers flexed such that the PCL layer was under tension, or, in other words, there was convex flexure of the PCL side (figures 3(c)-(e)). The mechanism behind this bending is likely the relaxation of the stretched PCL fibers [18]; during electrospinning the PCL fibers are stretched, and therefore they relax and slightly bend towards the AA-MA side as soon as they are removed from the collectors (dry state). Likewise, after the addition of Ca²⁺ solution (wet state), the relaxation behavior of the PCL fibers restricts the swelling of the AA-MA fibers and bends the bilayer towards the alginate side. The ratio between the thicknesses of the PCL and AA-MA mats, as well as the temperature of the aqueous environment, also affects the tube diameter. As shown in figures 3(d)-(e), an increase in the thickness of the PCL mat first resulted in a decrease of the tube diameter. Symmetric bilayer mats (1:1 thickness ratio of PCL and AA-MA, h(PCL)/h(AA-MA)) form tubes with minimal diameters. Further increase of the PCL mat thickness resulted in an increase of the tube diameter (figures 3(d)-(e)). Qualitatively, this behavior can be explained by considering the intrinsic properties of the materials. For a thin PCL mat (PCL/AA-MA ratio < 0.3), the PCL layer is not stiff enough to resist the swelling of the alginate layer; therefore, the tube diameter is large. A thicker PCL layer (0.3 < PCL/AA-MA ratio < 2) is able to sufficiently restrict the swelling of the AA-MA mat to reduce the diameter of the resulting tube. However, a very thick PCL layer (PCL/AA-MA ratio > 2) is, in contrast, so stiff that the swollen AA-MA layer is not able to deform or bend the bilayer. Further, temperature was shown to have an effect on the tube diameter: generally, the diameter of tubes formed at higher temperature (37°C) was slightly smaller than that of those formed at room temperature (figures 3(f)-(g)), due to the softening and relaxation of PCL at higher temperatures [18]. Therefore, the variation of tube diameter caused by temperature changes is more pronounced when the AA-MA layer is thicker than the PCL layer. Consequently, we can conclude that PCL/AA-MA electrospun bilayer mats are able to form tubes in an aqueous environment, and the diameter of these tubes can be precisely controlled by varying the thickness of the layers, the concentration of Ca²⁺ ions and the temperature.
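The thickness-ratio argument above can be made semi-quantitative with the classical Timoshenko bilayer-bending estimate; this is only a textbook approximation, not the authors' model, and the layer thicknesses, moduli and mismatch strain in the sketch are illustrative placeholders rather than measured values from this work.

def bilayer_curvature(h1_um, h2_um, E1_MPa, E2_MPa, mismatch_strain):
    """Classical Timoshenko bilayer estimate of the bending curvature (1/um).
    Layer 1 = passive layer (e.g. PCL), layer 2 = swelling layer (e.g. AA-MA);
    mismatch_strain is the differential swelling strain between the two layers."""
    m = h1_um / h2_um            # thickness ratio
    n = E1_MPa / E2_MPa          # modulus ratio
    h = h1_um + h2_um            # total thickness
    return (6.0 * mismatch_strain * (1.0 + m) ** 2) / (
        h * (3.0 * (1.0 + m) ** 2 + (1.0 + m * n) * (m ** 2 + 1.0 / (m * n)))
    )

# Placeholder values: 50 um PCL on 50 um AA-MA, stiff PCL vs soft swollen hydrogel,
# 30 % differential swelling strain (all illustrative assumptions)
kappa = bilayer_curvature(50, 50, 20.0, 0.1, 0.30)
print(f"Estimated tube diameter ~ {2.0 / kappa:.0f} um")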
The folding scenario of the bilayer mats also depends on their shape and total thickness (figures S3-S5). In terms of shapes, we examined rectangular, square and circular mats, and in terms of thickness we measured thin (thickness 100 μm) and thick (thickness 500 μm) mats. Thin rectangular bilayer mats (100 μm) always rolled perpendicular to the orientation direction of the fibers (figures S4(a)-(d)). Square mats also fold crosswise with respect to the orientation direction of the fibers (figures S4(e)-(f)). Their folding starts from two opposite sides and results in fibers being perpendicular to the main axis of the formed tube (figure S3). Interestingly, the circular mat started to fold predominantly from one side, and the folding was always parallel to the main orientation axis and the direction of the fibers. In other words, the resulting tubes had fibers oriented along the main axis of the tube (figures S4(g)-(h)). The folding of thicker samples (500 μm) also depends on their shape and thickness (figure S5). Similar to the folding scenario of thin circular mats, thick circular mats form tubes with the PCL fibers oriented along their length (figure S5(c)). Rectangular mats folded predominantly such that the fibers in the formed tube are oriented along the axis of the tube (figures S5(b), (d)). In the case of square-shaped mats, the folding began at opposite corners of the mat, which resulted in a small-angle twist with respect to the length of the tube (figure S5(a)). One can assume that there are four factors that can affect the folding scenario of rectangular and square bilayers: (i) mechanical anisotropy determined by the orientation of the PCL fibers, (ii) shape/edge effects, (iii) thickness and (iv) environmental conditions (temperature, ion concentration). Due to their fully symmetrical shape, the folding of circular mats should not be influenced by shape/edge effects; therefore, the circular shape is the most promising for the fabrication of tubes with parallel orientation of fibers. To study the formation of muscle microtissue on electrospun mats, we used thin (<100 μm) PCL/AA-MA bilayer mats prepared using the parallel bars collector, cut into circles, and seeded with mouse myoblasts (C2C12) on the PCL side of the bilayer (oriented fibers). This system was used because of the 20% higher alignment of the PCL fibers and the smaller diameter of the formed tubes, which is closer to the natural muscle fiber diameter (100 μm) [40]. Cell adhesion to the hydrophobic PCL layer was improved by treating it with a fibronectin, collagen, and albumin (FNC) solution. Previously, we showed that an FNC protein coating improves the adhesion of cells on the surface of hydrophobic materials such as PCL [31]. Non-coated bilayer and PCL mats were used as controls. During the experiments, we observed no cell adhesion on the alginate side of the bilayer, which also clearly shows that the PCL fibers are the side responsible for adhering and aligning the cells. As mentioned above, our porous PCL/AA-MA bilayer mats form tubes due to the swelling of the alginate material and the self-folding behavior. Therefore, to control the cell seeding and to facilitate the imaging process for subsequent steps, samples were fixed in cell culture crowns to avoid instant tube formation.
However, as a result of self-folding, cultured cells could be trapped inside the non-transparent 3D structure and formed a cell layer within the construct without any disruption of the self-folded structure, as shown in the electrical stimulation section (figure 6(c)). Cells cultured on bilayers as well as on aligned PCL fiber mats showed a high viability above 90%, irrespective of whether the mats were treated with FNC or not. As shown in figures S6-S9, myoblast cells on non-coated (aligned) PCL fibers did not adhere with a homogeneous distribution and rather formed clusters. Moreover, after 7 days of culture, those clusters did not show alignment along the fibers and retained a round morphology. However, on FNC-coated fibers, cells were able to adhere and spread (figure S13). Therefore, bilayers were treated with FNC in all further experiments to promote cell adhesion. In addition, we observed that the thickness of the AA-MA and PCL layers affected cell growth and viability. On bilayer mats with 20 μm thick PCL and 60 μm thick AA-MA, cells tended to form clusters and spread weakly (figure S10). In fact, on such a bilayer with a thicker AA-MA layer and a thinner PCL layer, more cells are in contact with the swollen AA-MA fibers, which do not offer any chemical groups to adhere to (figures S7 and S11). Cell alignment on the fibrous mats, analyzed after staining of actin filaments and nuclei using DAPI and phalloidin, also confirmed the dependence of cell behavior on the thickness of the alginate layer and on the surface treatment with FNC. As mentioned above, the presence of an AA-MA layer in the bilayer mat resulted in a lower degree of cell alignment one day after cell seeding, irrespective of whether it was coated with FNC or not. However, this poor alignment was significantly improved after a week of culture (figures 4, S11-S14). Accordingly, we cultured the cells on PCL control samples (coated and non-coated) and observed that treatment of the PCL mats with FNC allowed a substantial increase in the degree of cell alignment even one day after seeding. After seven days, all samples showed comparable cell alignment: ∼30% of the cells were aligned with the fibers. This was also confirmed by SEM micrographs, where we could see the formation of a monolayer of muscle cells on the bilayer after seven days of culture (figure S15). As mentioned, the lower initial cell alignment on bilayer mats and the weak cell adhesion on thin bilayers can be explained by the influence of alginate. In fact, alginate is a polysaccharide that does not contain any chemical groups promoting cell adhesion. FNC was most probably not adsorbed by this hydrophilic hydrogel but provided a thin protein coating for the initial cell attachment. After 7 days of culture, the medium was exchanged for one containing a lower serum content (2% horse serum) to enhance the differentiation and fusion of myoblast cells into myotubes. To prove the formation of muscle tissue bundles, myogenesis was monitored using immunostaining and quantified within one week of differentiation on the fibrous mats. We observed the generation of small myotubes and fusion of myoblasts after 4 days of differentiation (150 μm length and fewer than 5 nuclei) (figures 5(h), (i), S16-S18). They continued to enlarge and mature within 7 days, reaching up to 350 μm in length with more than 8 nuclei on the FNC-coated PCL/AA-MA bilayer mat.
In the first 4 days of differentiation, we observed faster myotube formation on PCL mats compared to bilayers, and cells grew in width rather than length, resulting in a slight decrease of their aspect ratio. Myotube formation was likely delayed by the alginate, which does not promote cell adhesion. We observed the formation of a continuous layer of muscle fibers on the surface of the bilayer and PCL mats after 7 days of differentiation. The functionality and contractility of the muscle cell layer formed on the fibrous self-folding bilayer were tested after 7 days of differentiation by electrical stimulation (14 days total in culture). We observed that the mature myotubes, which are oriented along the fibers, contract synchronously with the applied pulses (figure 6, videos S19-S21). Moreover, we observed that a continuous muscle fiber layer, which was delaminated from a bilayer mat upon its manual unfolding, contracts as a whole under electrical stimulation. This implied the formation of a functional, aligned skeletal muscle microtissue (thickness ∼20 μm) inside the self-rolled fibrous PCL/AA-MA bilayer mat. Conclusions Using a 4D biofabrication approach, we were able to produce functional skeletal muscle microtissues. PCL/AA-MA electrospun bilayer mats with uniaxial alignment of the PCL fibers were able to undergo programmed shape-transformation and to form multilayer scroll-like tubular constructs, where the fibers were aligned in parallel with the tube's axis. These longitudinally aligned fibers were able to guide the alignment of myoblasts and to allow the fabrication of a continuous structure of aligned myotubes inside the self-rolled multilayer construct, which is able to contract in response to electrical stimulation. This new approach allows the fabrication of important building blocks for tissue engineering: aligned 3D skeletal muscle fiber bundles with a tubular structure. This hollow tubular construct can be further developed for the formation of vascularized tissue to deliver oxygen and nutrients to the cells inside the rolled construct, eventually maturing into a piece of tissue to be implanted into the body. Figure 6. Contractility of the muscle fiber layer under electrical stimulation (4-5 V, frequency: 1 Hz, duration: 1 ms): functional contracting myotubes observed by cyclical displacement of features inside the yellow circles (a); contracting cell monolayer, where solid and dashed yellow lines show the edge of the contracted and relaxed myotube layer, respectively (b). Time between images is 1 s. 3D projection of myoblast muscle cells on a self-folded bilayer (c). Actin filament and nuclei staining using DAPI (blue) and phalloidin (green) to evaluate the cell alignment on bilayer mats.
2019-10-12T13:01:58.740Z
2019-12-11T00:00:00.000
{ "year": 2019, "sha1": "0a3672b3c4c41fa7f2e415c2742c1e77c7292433", "oa_license": "CCBY", "oa_url": "https://doi.org/10.1088/1758-5090/ab4cc4", "oa_status": "HYBRID", "pdf_src": "IOP", "pdf_hash": "dd4d937c160484f55202cbcf6a2322f60f4d4324", "s2fieldsofstudy": [ "Engineering", "Biology" ], "extfieldsofstudy": [ "Physics", "Medicine", "Materials Science" ] }
247218056
pes2o/s2orc
v3-fos-license
Code Smells in Machine Learning Systems As Deep Learning (DL) systems continuously evolve and grow, assuring their quality becomes an important yet challenging task. Compared to non-DL systems, DL systems have more complex team compositions and heavier data dependency. These inherent characteristics would potentially cause DL systems to be more vulnerable to bugs and, in the long run, to maintenance issues. Code smells have been empirically shown to be efficient indicators of such issues in non-DL systems. Therefore, in this comprehensive study we took a step towards identifying code smells and understanding their impact on maintenance. This is the first study to investigate code smells in the context of DL software systems, and it helps researchers and practitioners get a first look at what kinds of maintenance modifications are made and what code smells developers have been dealing with. Our paper has three major contributions. First, we comprehensively investigated the maintenance modifications that have been made by DL developers by studying the evolution of DL systems, and we identified nine frequently occurring maintenance-related modification categories in DL systems. Second, we summarized five code smells in DL systems. Third, we validated the prevalence and the impact of our newly identified code smells through a mixture of qualitative and quantitative analysis. We found that our newly identified code smells are prevalent and impactful on the maintenance of DL systems from the developer's perspective. INTRODUCTION In the past few years, Deep Learning (DL) systems, a branch of machine learning (ML), have become an inseparable part of billions of people's lives worldwide, from personal banking to communication, from entertainment to transportation, and more [2,4]. Due to such ever-increasing dependence, ensuring DL system quality is of utmost importance. Failure to do so has already resulted in catastrophic consequences [1]. As DL systems evolve and grow in size and complexity, continuous maintenance in the form of performance improvement, mandatory upgrades, and bug fixing is necessary to ensure their correctness and continuous availability during their lifetime [44]. However, maintenance of DL systems, similar to non-ML systems, can be hindered by poor design and implementation choices. Compared to non-ML systems, DL systems are even more affected by maintenance issues since, being combinations of non-ML and DL components, they are prone to maintenance issues pertaining to both [42]. However, the majority of these studies focus on non-ML code smells, with only a few focusing on ML code smells [25] and none focusing on DL-specific code smells. Since DL and traditional software development differ significantly in terms of workflow and engineering practices [21], as well as in DL's data-dependent behavior [12,47], it is safe to assume that, along with previously known code smells, there are code smells unique to DL systems which have not yet been identified. A study conducted by Hadhemi et al. [25] is closest to our work; they studied code smells in DL systems. However, they investigated the prevalence of Python code smells and analyzed code smells that were designed for non-DL, general-purpose source code [18]. We posit that generic Python code smells provide only a partial picture, and there are DL-specific code smells that require further investigation. For example, Fig. 1 shows an example of the Jumbled Model Architecture (JMA) code smell, where a Variational Autoencoder (VAE) [30] is extracted into encoding, sampling, and decoding¹. Intuitively, a jumbled VAE impedes the understandability of the model architecture and makes future maintenance difficult; this refactoring helps to ameliorate that.
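To make the Figure 1 example concrete, the following is a hedged PyTorch sketch (illustrative code, not taken from the studied projects) of a VAE whose forward pass is factored into separate encode/sample/decode steps; in the jumbled variant, all of these stages would be inlined in a single monolithic forward().

import torch
import torch.nn as nn

class VAE(nn.Module):
    def __init__(self, in_dim=784, hidden=256, latent=32):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
        self.fc_mu = nn.Linear(hidden, latent)
        self.fc_logvar = nn.Linear(hidden, latent)
        self.dec = nn.Sequential(nn.Linear(latent, hidden), nn.ReLU(),
                                 nn.Linear(hidden, in_dim), nn.Sigmoid())

    # Refactored: each conceptual stage of the model is its own method
    def encode(self, x):
        h = self.enc(x)
        return self.fc_mu(h), self.fc_logvar(h)

    def sample(self, mu, logvar):
        std = torch.exp(0.5 * logvar)
        return mu + std * torch.randn_like(std)   # reparameterization trick

    def decode(self, z):
        return self.dec(z)

    def forward(self, x):
        mu, logvar = self.encode(x)
        z = self.sample(mu, logvar)
        return self.decode(z), mu, logvar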
Due to the already proven impact of code smells on various aspects of non-DL software, it is safe to assume that code smells will have a similar, if not more detrimental, effect on the long-term maintainability and overall quality of DL systems, making it of utmost importance to get a complete picture of the unique code smells in DL systems and to understand their impact. The first step towards achieving this goal is identifying DL-specific code smells derived from real-world modifications applied to DL projects by developers. In this study, we identify and analyze maintenance-related modifications made by developers in 59 open source DL projects that were previously investigated by other researchers [25]. By employing a combination of PythonChangeMiner [7], GitCProc [16] and manual analysis, we collected 426 maintenance-related code changes from these 59 projects, where each change has at least three similar occurrences among the projects. Next, using qualitative analysis, multiple coders independently coded the collected changes into nine groups and extracted five frequently occurring code smells. Next, we validated the prevalence and severity of the code smells by conducting a survey of 235 OSS DL developers. The survey analysis results show that our newly identified code smells are frequently encountered and have a significant impact on system maintenance activities. In this paper, we answer the following research questions: RQ1: What kinds of modifications do developers make frequently in DL systems? RQ2: How prevalent are code smells in DL systems? RQ3: How do practitioners perceive the identified code smells in DL systems? The remainder of the paper is structured as follows. Section 2 provides an overview of related work. Section 3 details our methodology, with Section 4 presenting our findings. Section 5 places our results in the broader context of work to date and outlines the implications for DL practitioners and researchers. Section 6 lists the threats to the validity of our results. Section 7 concludes with a summary of the key findings and an outlook on our future work. RELATED WORK Code smells do not make the software system behave incorrectly or crash but make it harder to understand and maintain [20]. Research communities have investigated the impact of code smells in non-ML software systems, such as how code smells impact fault-proneness and change-proneness [20,28,29], their impact on maintainability [22,34,43,48,49], when and why code smells are introduced [46], how they evolve over time [13,17,35,39,46], and how to detect code smells using different techniques [33,36,37,40]. However, whether these code smells can capture all code smells relevant to DL systems is still an open question, since existing research shows that there are significant differences between DL and traditional software systems. Wan et al. showed that the incorporation of DL into a software system significantly impacts the requirement analysis, system design, testing, and process management [47]. Sculley et al. presented a set of unique anti-patterns in DL system development and highlighted a number of areas where technical debts unique to DL systems exist [42].
Researchers also identified differences in the development process for DL systems due to team formation and dependence on data, which necessitates steps such as data understanding, data cleaning, model training, model deployment, and monitoring [3,9,12,21]. All these differences can potentially introduce unique poor designs or implementations in source code, also known as code smells. Despite the clear differences between DL and traditional software systems, only a few studies have investigated code smells in the context of DL systems. Hadhemi et al. [25] investigated the prevalence of Python code smells in DL systems, along with the differences in the distribution of code smells between DL and traditional systems. The code smells they investigated are: Long Parameter List (LPL) [23]: A method or a function that has a large number of parameters. Long Method (LM) [23]: A method or a function that is extremely long. Long Scope Chaining (LSC) [19]: A method or a function that has a deeply nested closure. Large Class (LC) [15]: A class that has a large number of source code lines. Long Message Chain (LMC) [15]: An expression for accessing an object using the dot operators through a long sequence of attributes or method calls. Long Base Class List (LBCL) [15]: When a class extends too many base classes, due to the multiple inheritance that the Python language supports, it makes the code hard to understand. Long Lambda Function (LLF) [15]: An anonymous function that is extremely long and complex in terms of conditions and parameters. Complex Container Comprehension (CCC) [15]: A one-line list, set, or dictionary comprehension that contains a large number of clauses and filter expressions. As can be seen from the definitions, these code smells were designed for traditional general-purpose Python code [18]. However, in a DL system there is general-purpose code along with model architecture, data preparation, and pipeline-related code. Hence, we posit that there are other code smells that are unique to DL-specific code (i.e., model architecture, data preparation and pipeline code, etc.). Prior research in the context of non-ML systems indicated that developers have differing opinions about code smells, their prevalence, and their effects [49]. However, existing research in DL did not investigate how developers perceive code smells in the context of DL systems. As a result, questions such as how prevalent these smells are and how developers perceive their impact remain unanswered. We aim to fill this gap and answer these questions in this work. METHODOLOGY We used a mixed-method approach consisting of mining software repositories and qualitative analysis. Figure 2 shows the process that we follow in this study. We start by code mining to gather recurring code change patterns, then apply open card coding to identify new code smells, and finally conduct a large-scale survey to validate the prevalence and impact of the newly identified code smells. Code Mining Our first step was collecting recurring code changes in 59 open source DL systems. These projects were investigated by Hadhemi et al. [25] in their study, and we wanted to investigate whether there are other code smells unique to DL in these systems besides generic Python code smells; thus we used the same dataset. 3.1.1 Data Collection. We started by obtaining 90,301 commits from the 59 DL open source projects downloaded on May 20, 2020. Next, we used PythonChangeMiner [7] to detect and group commits with similar change patterns.
PythonChangeMiner mines the history of a given repository using the PyDriller framework [6] and builds change graphs for matching functions in each changed file of a commit. To achieve this, both versions of the file (before and after the change) are parsed into Abstract Syntax Trees (ASTs), which are then traversed to create the structure of a fine-grained Program Dependence Graph (fgPDG). Then, the obtained fgPDGs are analyzed to find all node pairs before and after the change using GumTree [31], resulting in grouped change pattern categories. Figure 4 shows an example of a change pattern identified in several projects, in which developers switched from using built-in copying to creating a deep copy of an object using the copy module. To make sure that our analyzed patterns are common across multiple projects and not specific to a single project, we extracted code changes that happened at least three times within all commits across multiple projects. We identified 1,942 commits matching this criterion. Figure 4: An example of a change pattern identified in several projects on GitHub. The developers switched from using built-in copying to creating a deep copy of an object using the copy module of the standard library. Research shows that refactoring (non-bug-fixing and non-program-behavior-altering commits) is performed to remove code smells [24]. Since our identified 1,942 patterns contained both bug-fixing and non-bug-fixing commits, we removed the bug-fixing commits from our analysis as they alter program behavior. We used GitCProc [16] for this purpose, which identifies bug-fixing commits based on the presence of specific words in the commit message. Words such as error, bug, defect, and fix are considered by GitCProc while identifying bug-fix commits. After removing bug-fix related commits, 1,335 non-bug-fixing commits were left, which come from all 59 projects. Next, the first and second authors independently went through the commits to identify the commits related to maintenance. They relied on the commit message and compared the code before and after the update to decide whether the commit was maintenance related or not. They initially used 10% (134) of the commits and independently labeled them. After the initial labeling, the inter-rater agreement was 0.61, which according to Landis et al. [32] is considered a substantial level of agreement. After an initial disagreement on some of the commits, the authors discussed their approach and reached complete agreement regarding the labels of the commits they had initially disagreed on. Then the two authors labeled the remaining 1,201 commits together. This resulted in selecting 426 maintenance-related commits, where each commit had at least three occurrences across multiple projects.
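A minimal sketch, in the spirit of the keyword-based filtering described above, of walking a repository's history with PyDriller and dropping likely bug-fixing commits; the repository path is a placeholder, and this is not the actual GitCProc/PythonChangeMiner tooling used in the study.

from pydriller import Repository

BUGFIX_KEYWORDS = ("error", "bug", "defect", "fix")   # keywords as described above

maintenance_candidates = []
for commit in Repository("path/to/dl-project").traverse_commits():
    msg = commit.msg.lower()
    if any(kw in msg for kw in BUGFIX_KEYWORDS):
        continue                      # drop likely bug-fixing commits
    maintenance_candidates.append(commit.hash)

print(f"{len(maintenance_candidates)} non-bug-fixing commits kept for manual labeling")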
Modification Category Creation. Our next step was to group these commits based on the modification reasons. To do so, we followed descriptive coding [41], which is used for identifying topics from data. The result of descriptive coding is groups categorized based on the identified topics. Two authors jointly conducted the descriptive coding on the selected 426 commits. They relied on the commit message and compared the code before and after the update to identify the reasons for making the changes. This resulted in grouping the commits into nine modification categories. We selected the descriptive coding technique for the following reasons: (1) we can get an overview of recurring changes that are indicative of poor maintainability; (2) we can obtain the context of these modifications. Code Smell Categorization. Our primary goal was to extract code smells from the frequently occurring modifications. For this purpose, in the next step the first and second authors checked whether the modification reasons mentioned in Table 1 met the following criteria: (1) whether the modification reason is general (common to many DL systems), and (2) whether there is a general solution to the root cause that required the modification. If both criteria are met, they considered the modification reason a code smell. Figure 3 shows an example of the qualitative analysis to determine whether a modification is a code smell. Two rounds of descriptive coding were conducted. In the first round, the first and second authors independently investigated all modification reasons and created a list of code smell candidates based on the two criteria mentioned above. After discussion, they curated a list of 12 code smells and reached an inter-rater agreement of 83.2%. In the second round, these 12 code smells were presented to all authors, and after discussion, everyone agreed on five new code smells; the remaining seven were discarded as they did not fully meet the previously mentioned criteria. Since the collected commits consisted of both new code smells and pre-existing code smells identified by Hadhemi et al. [25] (listed in Section 2), we checked the prevalence of generic Python code smells among the 426 commits. Since PySmell [18] can identify these smells, we relied on PySmell for this purpose instead of further qualitative analysis. We ran PySmell before and after applying each of the 426 commits and calculated the number of fixed generic Python code smells. If the count of code smells decreased, we labeled that commit as a Python code smell fixing commit. Through this analysis, we identified eight Python code smells in our data. Survey We delivered a survey to gain an understanding of the prevalence and severity of the newly identified code smells and to gather developers' perspectives on them. 3.2.1 Protocol. We based our questions on the code smells identified from the code change pattern mining. Our questionnaire included questions about the following topics (the complete questionnaire is available as supplemental material 2): • Demographics: In this part of the survey, we asked questions about organizations, geographical locations, and ML-related working experience. • Self-perception: We let respondents self-identify their professional categories ("I think of myself as a/an..." like researchers, engineers, scientists, etc.). We used the answers to classify all respondents into four groups based on keywords in the responses: data scientist/engineer, Machine Learning (ML) engineer, software engineer, and project manager. ML engineers sit at the intersection of software engineering and data science; their job is applying ML techniques and developing DL models. Data scientists/engineers are the group of people who create and maintain optimal data pipeline architecture, study and understand the data, and clean data. All respondents who work on data-related jobs are grouped into this category. Software engineers are those who build the software system and deploy the DL models.
• Perception of code smells: We asked respondents whether they have encountered the code smells. To clarify any possible confusion, we provided a definition and a simple example for each code smell. If they responded "yes", we also asked them to what extent the code smell impacts their DL system maintenance (Very Serious, Serious, Moderate, Scarcely, and Not At All). We followed a pilot protocol [14] while designing the survey. We designed a pilot version and sent it to a small subset of developers (11 developers). Based on the feedback, we rephrased some questions to make them easier to understand. We simplified and merged some questions to ensure that participants could finish the survey in 7 minutes. The responses from the pilot survey were used solely for improving the survey questions and were not included in the final results. We also translated our original survey into a Chinese version to support respondents who read Chinese. ( 2 https://github.com/codesmell-material/codeSmell) Respondent Selection. We aimed to get a sufficient number of practitioners from diverse backgrounds working on open source DL development and maintenance. Thus, we collected active contributors' emails from the 59 DL projects using the GitHub REST APIs. In total, we collected 1,157 email addresses and successfully delivered the survey to 1,061 contributors. We kept the survey anonymous, but the respondents could choose to receive a summary of the study. In total, we received 265 responses. After excluding incomplete surveys, 235 responses were considered valid. The countries and the corresponding numbers of respondents are shown in Fig. 5. The survey respondents who met our criteria are distributed across 15 countries and six continents. The majority of our respondents currently work in North America, Asia, and Europe, with the United States and China being the top two countries. Respondents' software development experience varies from 1 to 23 years with an average of 5.25 years, and their DL development experience varies from 1 to 10 years with an average of 3.13 years. Survey Data Analysis. To analyze the responses, we used descriptive statistics. For the 235 valid responses to the question about whether they have encountered our identified code smells, we normalized the frequency of each code smell by computing the percentage of respondents who have encountered that code smell. If a high proportion of respondents reported that they have encountered a certain code smell, we consider that smell more common. We did the same for the question on the impact level of the code smells. We also analyzed the responses based on roles. We mainly analyzed the responses from the top three categories of respondents, which were software engineers, ML engineers and data scientists/engineers, since the number of project manager respondents was too small. To check if there is a significant difference between the newly identified code smells in terms of impact, we adopted the Scott-Knott test [26]. The Scott-Knott test divides the measurement averages into statistically distinct groups by hierarchical clustering analysis. However, the limitations of the Scott-Knott test are that it assumes the data are normally distributed and that it may create groups that are only trivially different from each other. Thus, we adopted its normality- and effect-size-aware variant, the Scott-Knott effect size difference (ESD) test [45]. The Scott-Knott ESD test (1) corrects for the non-normality of the input data and (2) merges any two statistically different groups with a negligible effect size. A detailed description of the Scott-Knott ESD test can be found in [45].
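The descriptive analysis above can be sketched with pandas as follows; the column names and the few example rows are hypothetical, not the real survey data.

import pandas as pd

# Hypothetical long-format survey table: one row per (respondent, code smell)
df = pd.DataFrame({
    "role":  ["ML engineer", "software engineer", "data scientist", "ML engineer"],
    "smell": ["Scattered Use of ML Library"] * 2 + ["Deep God File"] * 2,
    "encountered": [True, True, False, True],
    "impact": [4, 5, 0, 3],           # e.g. 0 = not encountered .. 5 = very serious
})

# Percentage of respondents who have encountered each smell (overall and per role)
overall = df.groupby("smell")["encountered"].mean() * 100
per_role = df.groupby(["smell", "role"])["encountered"].mean() * 100
print(overall)
print(per_role)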
RESULTS In this section, we report the answers to our targeted research questions and the findings that emerged from the data. Maintenance-Related Modifications in Deep Learning RQ 1: What kinds of modifications do developers make frequently in DL systems? To answer this question, we mined 59 open source DL project repositories and identified 426 maintenance-related modification commits. Using descriptive coding, we categorized the selected commits into nine modification categories (explained in Section 3). The modification categories, the identified modification reasons, and their corresponding distributions are shown in Table 1. Our manual analysis revealed that, as expected, some of the frequent modifications are specific to DL systems and others are not. For example, the most frequent (21%) modification category, Change function declaration, which involves renaming functions, changing lambda functions to normal functions, and modifying function signatures, is not specific to DL. The Extract class/function category, which includes changes pertaining to separating out a new class, isolating independent parts of code, and splitting long functions, is also not specific to DL. We also found that three of the modification categories are specific to DL. Update/replace ML library: This recurring modification is the second most frequent category of modification (19%). Similar to API updates/replacements in traditional systems, developers usually use third-party DL libraries and frameworks to implement DL functionalities. However, DL libraries are usually updated more frequently than traditional libraries [42], and DL developers need to fix either deprecated or outdated functions to keep up with the updates. For example, the code snippet in Figure 3 shows that developers had to replace API names to resolve a compatibility issue with a newer version of TensorFlow. Data preparation modification: This recurring modification is performed on the data preprocessing steps. We found that 8% of the overall modifications in our dataset belonged to this group. Since a substantial part of the code in DL systems is written for preparing data and feeding it to the DL model, any changes to the data source, the preparation steps, or the model architecture require this category of modification. Model architecture modification: This recurring modification is performed on DL model architecture-related code. In order to resolve model degradation problems, developers iteratively train models or deploy new model architectures. We also found that developers make modifications to improve the model architecture by untangling its components. This group accounts for 6% of our analyzed commits. Observation 1: One third (33%) of the maintenance-related modifications in DL systems are specific to DL systems and are related to the data, model, and library. Interestingly, our results highlight another category of modification that is not specific to DL but contains some DL-specific modification reasons: Replace hard-coded value: This is the recurring modification where developers replace hard-coded values with variables. As in traditional software, hard-coded values make it difficult to maintain software systems. We found developers frequently replace hard-coded model paths, hyper-parameters, and learning rates with variables. 13% of our identified commits fall into this category.
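An illustrative before/after for this category (hypothetical values and names, not code from the studied projects):

# Smelly: hard-coded model path and hyper-parameters buried in the training code
#   model = load_model("/home/alice/runs/exp7/best.h5"); lr = 0.0003; batch = 64

# After the "replace hard-coded value" modification: values come from one config object
from dataclasses import dataclass

@dataclass
class TrainConfig:
    model_path: str = "models/best.h5"
    learning_rate: float = 3e-4
    batch_size: int = 64

def train(cfg: TrainConfig):
    print(f"loading {cfg.model_path}, lr={cfg.learning_rate}, batch={cfg.batch_size}")

train(TrainConfig())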
13% of our identified commits fall into this category. Remove redundant debugging code: Developers frequently remove unnecessary debugging code in DL systems. The software engineering community has developed a number of tools, IDEs, and techniques to help catch bugs. Unfortunately, DL practitioners do not enjoy the same robust set of debugging tools available for traditional software while debugging DL models, due to the opaqueness of DL models and the strong coupling between model and software components [42]. Thus, many DL developers resort to using print statements for debugging. 16% of maintenance-related modifications were grouped into this category. Move code: In this category of recurring modification, developers move code between files and positions. Developers often put model training, testing, and validation related code in the same file. Later on, they end up moving the training, testing, and validation code to separate files. We found that 6% of the modification commits belong to this category. Remove dispensable dependency: This is the recurring modification where developers remove unused or unnecessary dependencies. Resolving dependency compatibility problems or versioning conflicts can be time consuming. As a result, developers are usually reluctant to remove dispensable dependencies until they have to. This kind of modification accounts for 2% of the modification commits in our dataset.
Code Smells in Deep Learning Systems
RQ 2: How prevalent are code smells in DL systems? Through manual analysis of the maintenance-related modifications done on real-world projects, we identified five code smells in DL systems (details in Section 3). Table 2 shows the five code smells along with their signs and symptoms, ordered by their frequency of occurrence in projects from high to low. Scattered Use of ML Library: This smell is about using third-party ML libraries/frameworks in a non-cohesive manner throughout the project. As a result, whenever these libraries/frameworks are updated, developers have to modify multiple positions in one or more files. Such scattered use of an ML library requires additional effort from the developer while maintaining the source code. 32 out of 59 (54%) projects have at least one commit showing this problem. Unwanted Debugging Code: This smell was derived from the recurring pattern of leaving unwanted or unnecessary debugging code in the DL system, and we found that 24 out of 59 DL projects have this code smell. DL systems tend to be more complicated than traditional systems, and developers use debugging code for inspecting data shapes or printing the current status to understand the code. However, if left uncleaned, this debugging code can impede maintainability. If many people are working on a project, individuals are more reluctant to remove code that they do not thoroughly understand, since no one wants to be responsible for errors. With this redundant code left in the system, the code becomes more difficult to understand, especially in DL systems. Deep God File: This smell was derived from the recurring pattern where developers kept separating DL parts into multiple files after they had initially put some or all of them into one big file. We found 22 projects (37%) with this code smell. Deep God Files usually start small, but over time they get bloated, as practitioners may find it mentally easier to place programs into existing files.
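To make the Unwanted Debugging Code smell described above more concrete, the snippet below sketches the kind of leftover shape-printing statements that clutter DL training code. The function, model, and variable names are invented for illustration and do not come from the studied projects.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def training_step(model: nn.Module, batch: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
    logits = model(batch)
    # Debugging statements left behind after a shape-mismatch bug was fixed:
    # they clutter the training log and obscure intentional output.
    print("batch shape:", batch.shape)
    print("logits shape:", logits.shape)
    # print(logits[0])  # commented-out debug code that nobody dares to delete
    return F.cross_entropy(logits, target)
```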
Jumbled Model Architecture: This smell was derived from the recurring pattern in which DL practitioners, when programming DL models, do not clearly divide the different functional parts of the model. Instead, all parts of the model are jumbled together, which makes the model code difficult to understand. We found 11 (19%) projects with this code smell. Dispensable Dependency: This smell was derived from the recurring pattern where redundant dependencies are left in DL systems, and we noticed that five out of 59 projects have modification commits to remove dispensable dependencies. Many DL libraries offer overlapping functions, so some practitioners might try similar functions in each library and use the one with the best performance. However, this process adds unnecessary dependencies to the entire system.
Prevalence of Python Smells
Figure 6: Prevalence of identified existing code smells
We used PySmell to analyze the Python code smells in our selected commits and identified eight Python code smells that were investigated by Hadhemi et al. [25] (shown in Figure 6). The most frequently fixed Python code smells in our dataset are LM, LPL, and CCC, with fixing commit percentages of 6.56%, 6.32%, and 6.09%, respectively. LTCE and LLF code smell fixes account for 3.98% and 3.04% of all selected commits, and the fixing commit percentages for MNC, LC, and LSC are 1.17%, 0.94%, and 0.70%. According to Table 2, the percentages of selected commits that fix the Scattered Use of ML Library and Unwanted Debugging Code smells are 13% and 17%, respectively. For the Deep God File and Jumbled Model Architecture code smells, the percentages of commits are 9% and 6%. The lowest percentage of commits belongs to the Dispensable Dependency code smell, at 2%. Comparing the percentage of commits containing our newly identified code smells with that of the existing Python code smells, we see that the newly identified code smells are more frequent than the generic Python code smells. Observation 2: Newly identified code smells occur more frequently in our sample than generic Python code smells.
Code Smells Validation
To validate the identified code smells and understand how practitioners perceive them, we analyzed the survey results. We asked respondents to what extent they have encountered these code smells and their perception of how much these code smells make DL systems difficult to maintain. The aggregated results are shown in Fig. 7. According to the aggregated results shown in Figure 7(a), respondents are familiar with the code smells we identified. 84% of the respondents expressed that they have seen the Scattered Use of ML Library code smell before, which matches our repository mining result that the most frequently occurring code smell is Scattered Use of ML Library. The respondents were also familiar with the other code smells. The ranking obtained through mining closely matched the survey's ranking, as Unwanted Debugging Code and Deep God File were among the top three code smells in both rankings. According to the combined result from all the participants in Figure 7(b), the most impactful code smell is Scattered Use of ML Library: more than 60% of survey respondents reported that this code smell seriously impacts their DL systems' maintenance. According to the developers, the other two most impactful code smells are Jumbled Model Architecture and Deep God File. Among them, Deep God File is also the second most frequent code smell. When we grouped the perceived frequency of code smells by respondents' roles, as shown in Fig. 9(a),
the most common code smells for ML engineer, software engineer, and data scientist/engineer respondents were Scattered Use of ML Library, Dispensable Dependency, and Unwanted Debugging Code. However, software engineers encountered Scattered Use of ML Library more often, whereas ML engineers encountered Dispensable Dependency and Unwanted Debugging Code more often. It is also reasonable that software engineers and data engineers encountered Jumbled Model Architecture less often, since they are not primarily maintaining models, while 62% of ML engineer respondents encountered the Jumbled Model Architecture code smell, as they primarily work with models. We looked into the impact of these code smells for each role, shown in Fig. 9(b), which only shows the percentage of respondents who identified the code smell as having a serious or very serious impact on system maintenance. The Scattered Use of ML Library code smell is considered the most severe by all three roles, especially by ML engineers, since 88% of ML engineer respondents think this code smell has a "serious impact" on their system maintenance. Similarly, Jumbled Model Architecture is considered a severe code smell by all three roles, even though it is not commonly encountered by software engineers and data scientists/engineers. In our analysis, we found that Unwanted Debugging Code is a common code smell, but most respondents do not think it is a severe issue. Observation 4: Different roles encounter code smells differently, and they also have varied opinions about the impact of each code smell. We conducted the Scott-Knott ESD test on the responses pertaining to the impact of code smells on DL maintenance to check whether there is a significant difference among the newly identified code smells. Figure 8 shows that the Scott-Knott ESD test categorized the five code smells into three different groups. Scattered Use of ML Library is categorized in the first group as the most impactful code smell; Jumbled Model Architecture, Deep God File, and Dispensable Dependency are categorized into the second group; and Unwanted Debugging Code is categorized into the third group.
DISCUSSION AND IMPLICATIONS
In this section, we discuss the results presented in the previous section and present mitigation strategies, probable root causes for code smells, and practical implications of our study for researchers, educators, tool builders, and developers.
Mitigation Strategies
Scattered Use of ML Library: DL practitioners should import modules under an alias to shorten ML API message chains (see the sketch below). That way, when an ML API is updated, maintainers no longer need to modify the usage of that API call throughout the whole project; instead, they only need to change the code where the module is imported. Jumbled Model Architecture: We suggest that DL practitioners clearly separate the parts of the DL model that serve different functions, so that the model code is easier to understand and maintain. Deep God File: We recommend that developers place each part of the code into a proper file, with clear boundaries, such as placing the model architecture, training, testing, and validation programs in separate files. If there is a Deep God File, DL practitioners can employ extract-class or move-function refactoring operations to separate the components in such files. Unwanted Debugging Code: We advocate that DL practitioners remove unused debugging code in a timely manner.
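The sketch below illustrates the alias-import mitigation for Scattered Use of ML Library suggested above. The TensorFlow/Keras calls are only an assumed example of a long API message chain; the point is that the library binding is established in one place.

```python
# Before: the full library path is spelled out at every call site, so a renamed
# or reorganized submodule forces edits scattered across the whole project.
import tensorflow
dense = tensorflow.keras.layers.Dense(10, activation=tensorflow.keras.activations.relu)

# After: bind the relevant modules once under short names; if the library's
# namespace changes in an update, only these import lines need to be touched.
from tensorflow.keras import layers, activations
dense = layers.Dense(10, activation=activations.relu)
```

A thin project-local wrapper module serves the same purpose when many files need the same calls, since it keeps all direct references to the third-party library in a single place.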
Dispensable Dependency: We encourage practitioners to remove unnecessary dependencies in DL systems, since it takes a lot of time and effort to resolve dependency and library version conflicts in DL systems.
Probable Root Causes
ML teams are composed of different roles with overlapping tasks [12]. We posit that code smells might be a product of such overlapping tasks, since the overlap in responsibility leads to unclear maintenance responsibility. A general thought is that this problem is not unique to DL systems but applies to regular systems as well. That is absolutely correct. Nevertheless, the problem is exacerbated in DL systems because of the significantly distinct roles of the various team members. Along with creating confusion and dissatisfaction, uncertain responsibilities can result in dropped or mishandled source code and catastrophic consequences down the line. Practitioners need to clearly define the boundaries of maintenance tasks for the different roles in DL systems. Differences in job responsibilities among team members can be another reason for accumulating code smells over time. For example, ML engineers mainly concentrate on model development rather than software deployment and maintenance. To obtain better model performance, they may try different ML libraries and add all of the tried libraries' dependencies to the system at the same time. Even though ML engineers end up requiring only a few of the imported libraries, the unused but imported DL libraries and their dependencies remain in the system. Such unnecessary dependencies introduce additional problems for the software engineers who try to build and maintain the DL system. Since the process used at the ML developer's end is opaque to the software engineers, it becomes difficult, even impossible in certain cases, for software engineers to remove any unused DL library dependencies. As a result, all the unused dependencies are left in the DL system and the quality of the system as a whole suffers [42]. Projects in industry have started investigating ways to overcome these challenges. One approach is hybrid teams that include ML engineers, data scientists, and DevOps engineers [8]. Further work is needed to help DL practitioners identify and remove unused dependencies; a rough sketch of such an aid is shown after the implications below. Improving cross-team communication, reducing the opaqueness of the development process used within the sub-groups, and ensuring documentation are some of the possible steps to mitigate this to some extent.
Implications
Implications for researchers, tool builders, and educators: Our results show that DL systems have a wide variety of code smells. However, when we looked for code smell related work for DL, we found only limited studies. We encourage researchers to investigate many more kinds of code smells in DL systems. Tool builders can focus on making code smell detection tools that integrate seamlessly into the existing DL development pipeline without causing major disruptions. This is important because research shows that if a workflow is disrupted, practitioners tend to stop using the tool [27]. The large variety of code smells in DL systems is also good news for educators. Educators can illustrate many design principles by showing both well-designed programs and those that exhibit code smells. Using DL systems as subject case studies is guaranteed to provide a variety of code smells. Moreover, students might also prefer examples from the DL domain given the rise and allure of DL programming.
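As a rough sketch of the kind of lightweight aid for dispensable dependencies mentioned above, the script below flags top-level imports in a Python file that are never referenced again. It is a simplified, hypothetical illustration rather than one of the tools used in this study; a real checker would also need to handle re-exports, dynamic imports, and modules imported only for their side effects.

```python
import ast
import sys

def possibly_unused_imports(path: str) -> list[str]:
    """List imported names that never appear as a name elsewhere in the file."""
    with open(path, encoding="utf-8") as handle:
        tree = ast.parse(handle.read())

    imported = {}  # local binding -> line number of the import
    for node in ast.walk(tree):
        if isinstance(node, ast.Import):
            for alias in node.names:
                imported[alias.asname or alias.name.split(".")[0]] = node.lineno
        elif isinstance(node, ast.ImportFrom):
            for alias in node.names:
                imported[alias.asname or alias.name] = node.lineno

    used = {node.id for node in ast.walk(tree) if isinstance(node, ast.Name)}
    return [f"{name} (line {lineno})" for name, lineno in imported.items() if name not in used]

if __name__ == "__main__":
    for finding in possibly_unused_imports(sys.argv[1]):
        print("possibly unused import:", finding)
```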
Implications for DL developers: As Table 2 shows, the identified code smells are present in a large percentage of DL systems. Thus, it is important that developers educate themselves about the kinds of code smells that occur in DL systems and how to mitigate them. Even better, developers can be conscious of code smells while programming in the first place and avoid them altogether.
THREATS TO VALIDITY
Our refactoring pattern mining was performed on 59 projects carefully selected by Hadhemi et al. [25]. However, these are open source projects, which means the results may not be generalizable to all DL projects, particularly closed-source projects. Nonetheless, the majority of DL projects use Python, so we believe our code mining on these Python projects still provides significant insights on code smells in DL systems. This is our first step towards building an empirical body of knowledge. With further replication across different contexts by different research teams, we can build a body of knowledge to generalize the results. The manual analysis applied throughout the study could have introduced unintentional bias. First, we manually identified the commits that were related to maintenance activities based on commit messages and by comparing the code before and after an update. Another manual analysis was conducted while grouping the frequently occurring change categories into code smells. This could have introduced bias or mistakes due to a lack of domain expertise. To address this concern, two researchers individually labeled a significant portion of the data. We established inter-rater agreements of 0.61 and 0.83, respectively, for the two manual analyses, which according to Landis et al. [32] is considered a substantial level of agreement, and we believe this minimizes the threat. We ran PythonChangeMiner to obtain frequently changed patterns, and then we used GitcProc to exclude bug fixes. Relying on these tools can be a threat to validity. However, these tools, or variants of them, have been validated in other studies. We also performed a manual investigation of any refactored code that had not been labeled by GitcProc to identify whether there was any systematic error. Through our manual analysis, we did not see any evidence of systematic error. There is a possibility that our participants misunderstood the survey questions. To mitigate this threat, we conducted a pilot study with 11 developers with different backgrounds and updated the survey based on their feedback. In order to clarify any confusion, we provided definitions for each of the smells. Additionally, we translated the original survey into simplified Chinese to help native Chinese readers and reduce any confusion. Our survey's language selection and translation process may be subject to bias: it might cause the group of respondents who can read Chinese and English to be overrepresented. However, it is important to mention that we chose to present our survey in English and Chinese because these are the top two most used languages in software development. Our survey could also have translation errors that cause the questions to deviate from their original meaning. To mitigate these risks, two of the authors (one a native English speaker and the other a native Chinese speaker) discussed the survey and performed the translation together.
CONCLUSIONS AND FUTURE WORK
In this work, we investigated frequently occurring modifications in open source DL software repositories and identified nine modification categories along with five code smells. We also validated the code smells with DL practitioners through a survey. Participants identified the most impactful smells; surprisingly, however, the most frequent code smells are not necessarily the most impactful ones. Our findings also open up new directions for future research. In addition to the future directions already presented in the discussion and implications sections, future research could explore the evolution of the identified code smells and their effect on DL systems' overall quality.
Most exposed: the endothelium in chronic kidney disease Abstract Accumulating evidence indicates that the pathological changes of the endothelium may contribute to the development of cardiovascular complications in chronic kidney disease (CKD). Non-traditional risk factors related to CKD are associated with the incidence of cardiovascular disease, but their role in uraemic endothelial dysfunction has often been disregarded. In this context, soluble α-Klotho and vitamin D are of importance to maintain endothelial integrity, but their concentrations decline in CKD, thereby contributing to the dysfunction of the endothelial lining. These hormonal disturbances are accompanied by an increment of circulating fibroblast growth factor-23 and phosphate, both exacerbating endothelial toxicities. Furthermore, impaired renal function leads to an increment of inflammatory mediators, reactive oxygen species and uraemic toxins that further aggravate the endothelial abnormalities and in turn also inhibit the regeneration of disrupted endothelial lining. Here, we highlight the distinct endothelial alterations mediated by the abovementioned non-traditional risk factors as demonstrated in experimental studies and connect these to pathological changes in CKD patients, which are driven by endothelial disturbances, other than atherosclerosis. In addition, we describe therapeutic strategies that may promote restoration of endothelial abnormalities by modulating imbalanced mineral homoeostasis and attenuate the impact of uraemic retention molecules, inflammatory mediators and reactive oxygen species. A clinical perspective on endothelial dysfunction in CKD may translate into reduced structural and functional abnormalities of the vessel wall in CKD, and ultimately improved cardiovascular disease. I N T R O D U C T I O N Cardiovascular complications are more frequent and severe in patients with chronic kidney disease (CKD) as compared with the general population [1]. This complex association cannot be fully explained by the presence of traditional risk factors such as hypertension, hyperlipidaemia and diabetes. Alternatively, non-traditional risk factors related to a reduced kidney function provide some insights into the mechanisms of increased risk of cardiovascular events in CKD [1,2]. These CKD-specific factors, besides proteinuria, include disturbed mineral metabolism and bone disease, inflammation, oxidative stress and the accumulation of uraemic toxins. Most of these factors are associated with reduced heart function, vascular stiffness and calcification, typically and most prominently of the medial layer. When compared with the role of the medial layer, attention to disturbed endothelial structure and function in CKD lags behind. The vascular endothelium constitutes a monolayer of endothelial cells, forming the inner lining of the entire circulatory system. The preservation of endothelial barrier function is crucial for the normal functioning of the vascular system and requires tightly regulated intercellular junctions and endothelial cell adhesion to the basement membrane. From this perspective, endothelial cell dysfunction (ECD) can be viewed as a compromised regulation of these vital properties and comprises structural changes in the actin cytoskeleton, reduced proliferative and migratory capacities, breakdown of endothelial cellcell contacts and impairment of the barrier function. 
This progressive structural remodelling dampens the proper communication between endothelial cells and vascular smooth muscle cells (VSMCs), fundamental for vascular function, resulting in the earliest detectable changes of atherosclerosis [3]. As mentioned, CKD can also drive VSMC dysfunction or vessel structural alterations without disturbing the endothelial function [4]. However, non-atherosclerotic endothelial disturbances most likely exist as well in CKD, and are the focus of this review. Despite strong suggestions that ECD may critically impact cardiovascular health [5], in the clinical setting of CKD, there is only limited information indicating whether ECD provides important prognostic information, or actually causes future cardiovascular complications [6]. However, the data that are available strongly suggest that, also in patients with CKD, ECD contributes to cardiovascular morbidity. In patients with CKD, impaired endothelial function has been associated with arterial thickness [6,7], abnormal left ventricular structure and function [8], and importantly, excess of cardiovascular mortality in CKD [9]. However, while these few studies highlight the importance of vascular dysfunction as a marker for cardiovascular risk, the potential impact of ECD in the progression of cardiovascular complications remains to be elucidated. As a result of these limitations, there is insufficient knowledge supporting the concept of whether targeting vascular dysfunction and in particular ECD in CKD may beneficially impact cardiovascular disease and clinical outcome. In view of these considerations, we aim to review available information on the morphological and functional abnormalities in the endothelial lining during CKD, and to evaluate how CKD-related, non-traditional risk factors critically impact endothelial integrity. Finally, we discuss some plausible therapeutic strategies aimed at targeting these CKD-associated disturbances, to possibly prevent progression of endothelial injury and thereby attenuate cardiovascular disease. Dysfunctional endothelium in patients with CKD has been demonstrated in both large and small arteries [10,11]. Patients with impaired renal function frequently display some common adverse endothelial characteristics that provide a better understanding of the impact of CKD on this cell type ( Figure 1). In particular, impaired flow-mediated dilation (FMD), reflecting abnormal endothelium-dependent vasodilatory function, has been frequently reported in CKD patients, and its impairment is associated with the severity of renal damage [12,13]. This non-invasive approach to assess endothelial function measures the ability of the artery to respond, by the release of the endothelium-derived relaxing factor nitric oxide (NO), to the 5min occlusion of the branchial artery with a blood pressure cuff (reactive hyperaemia). Reduced NO bioavailability [14], however, is a critical feature and characteristic for patients with CKD [5,15]. This abnormality is accompanied by a decreased expression or limited activation of the endothelial NO synthase due to the presence of renal disease-related toxins contributing to a reduced vasodilatory capacity [16]. Importantly, FMD provides crucial information about the vasodilatory status of the endothelium, but it is not a direct assessment of the production of vasoactive molecules. 
A more invasive approach can overcome this limitation through the infusion of acetylcholine, which dilates normal coronary arteries in the presence of intact endothelium by stimulating NO production. In the presence of ECD, however, acetylcholine may even induce vasoconstriction through a direct effect on the underlying VSMC. In this regard, the measurement of endothelium-dependent relaxation after acetylcholine stimulation in CKD animal models reflects a valuable approach to assess vascular function and test therapeutic strategies [17,18]. Given the difficulties of assessing the structural changes of the vascular endothelium, the analysis of soluble factors is sometimes used as a non-invasive approach to explore the CKD-induced pathological consequences. During CKD, the endothelium loses its quiescent phenotype and becomes activated [5], which is exemplified by elevated levels of circulating cell adhesion molecules such as soluble Intercellular Adhesion Molecule 1 (sICAM-1), Vascular Cell Adhesion Molecule 1 (sVCAM-1), sE-selectin and the platelet adhesion molecule von Willebrand factor (vWf) (as a first step of thrombus formation) in serum from patients with CKD [19,20]. Interestingly, the presence of these ECD biomarkers has been associated with a defective FMD in CKD, which suggests that these endothelial structural changes may co-exist with an impaired endothelial function [20]. In addition, the analysis of circulating endothelial microparticles (EMPs), released into the extracellular space after endothelial injury, provides further clinical information on the endothelial damage upon CKD [21]. Similar to the endothelial activation markers, EMP levels are associated with loss of FMD and increased pulse wave velocity in patients with end-stage renal failure, reinforcing the hypothesis that endothelial damage results in both morphological and functional alterations [21]. Alternatively, patients with different degrees of impaired renal function also display increased levels of circulating endothelial cells (CECs) themselves [22]. This subpopulation of cells, originating from the blood vessel wall, is detached due to endothelial damage and reflects ongoing injury. Finally, CKD reduces the number of circulating endothelial progenitor cells (EPCs), a bone marrow-derived CEC population that can be recruited to sites of endothelial injury and then mature, playing a major role in vascular repair to restore endothelial function [23,24]. In this regard, CKD not only dampens the availability of circulating EPCs but also impacts the normal functioning of EPCs, resulting in abnormal colony formation together with impaired adhesion and incorporation, further worsening the repair capacity of the vascular system [23,25]. Mechanistically, the enhanced transcription of the abovementioned adhesion molecules, vWf or EMPs is preceded by activation of the nuclear factor-κB (NF-κB) signalling pathway [5]. In experimental studies, the most prominent changes observed in endothelial cells exposed to human uraemic serum are suggested to be mediated by NF-κB signalling, substantiating its key role in the development of ECD during CKD [26,27]. However, the harmful effects of uraemic media are not limited to activation of the NF-κB pathway but extend to NF-κB-independent structural alterations such as lower expression of Vimentin and Annexin A2, which are both involved in cell-cell and cell-matrix interactions [28].
In line with this, it was shown that uraemia modulates the expression of matrix metalloproteinases in endothelial cells leading to a remodelling of the extracellular matrix, thereby promoting endothelial detachment from the basement membrane and its subsequent loss [29]. The importance of the loss of endothelial cell-cell interactions in CKD was also recently highlighted by our group where we confirmed that uraemic plasma from pre-dialysis CKD patients was impairing the stability of the endothelial barrier function by reducing the vascular endothelial (VE)-cadherin adherens junctions on the cell surface [30]. Here, exposure to uraemic media also resulted in the re-organization of the F-actin cytoskeleton towards increased stress fibers formation [30]. Similarly, Maciel et al. [31] recently confirmed that human renal arteries from CKD patients displayed reduced VE-cadherin and Zona occludens-1 (ZO-1) protein expression and that a uraemic environment downregulated VE-cadherin and Vinculin in vitro [31]. These structural alterations make the endothelial barrier more susceptible to disruption upon electric wound or following exposure to the pro-permeability factor thrombin [30]. Finally, prolonged exposure to a uraemic environment could affect the integrity of the vascular endothelium leading to enhanced permeability and endothelial cell detachment, as confirmed in a 3/4 nephrectomized rodent model [32]. Taken together, CKD-induced disturbances of the vascular endothelium are complex and involve a large number of mechanisms including impaired cell-cell and cell-matrix interaction, which contributes to detachment from the vessel wall, increased endothelial cell activation, lost vasodilating properties and limited repair capacity of damaged endothelial surfaces, all leading to loss of endothelial barrier function. T H E I N F L U E N C E O F S P E C I F I C C K D -R E L A T E D F A C T O R S O N E N D O T H E L I A L H E A L T H Recently, many CKD-specific factors such as disturbed mineral metabolism, accumulation of uraemic retention molecules, inflammation and oxidative stress have been identified as possibly being involved in ECD. Indeed, the vascular pathological features observed following exposure to these non-traditional risk factors in experimental uraemic animal models or cell cultures resemble many clinical manifestations described in CKD patients, thus reinforcing that these specific factors may actually contribute to the pathogenesis of human ECD. Disturbances in mineral metabolism Compelling evidence suggests that the unavoidable progressive derangement in mineral homoeostasis due to progressive kidney failure may trigger or accelerate cardiovascular disease, at the level of both the medial layer and the intimal layer. Already in early CKD, the plasma concentrations of the kidneyderived protein a-Klotho decrease, while fibroblast growth factor-23 (FGF23) levels increase. The latter is probably responsible for decreased plasma 25 hydroxyvitamin D [25(OH)D] and 1,25-dihydroxyvitamin D [1,25(OH) 2 D] concentrations and all these factors, along with phosphate exposure, contribute to secondary hyperparathyroidism. The imbalance of each component worsens with advancing CKD and numerous studies established associations of these disturbances with cardiovascular calcification and heart disease. 
Experimental evidence, described below, also demonstrates that a disturbed mineral homoeostasis contributes to the development of a dysfunctional endothelium; however, this association is not well established in CKD patients, possibly due to a lack of clinically available tools to assess the endothelial function or structure. α-Klotho. Originally identified as an anti-ageing protein, α-Klotho is now also recognized as a major player in mineral homoeostasis. Interestingly, clinical CKD shares many biochemical and histological features with the phenotype of α-Klotho-deficient mice, including its cardiovascular manifestations [33,34]. The vascular abnormalities in α-Klotho mutant mice, such as impaired angiogenesis, insufficient endothelium-derived NO formation and reduced levels of circulatory EPCs, may contribute to the development of ECD [34]. Membrane-bound α-Klotho is predominantly expressed in the distal tubule of the nephron. The mechanisms responsible for α-Klotho deficiency in CKD are not fully understood but are likely to be multifactorial [35]. Following tubular production and insertion in the plasma membrane, the ectodomain of membrane α-Klotho is cleaved from the cell surface by membrane-anchored proteases and released into the circulation, where it is suggested to be continuously required to maintain vascular health [36]. In this regard, one of the first vasculo-protective activities described for α-Klotho was its role in the maintenance of endothelial homoeostasis [33,37]. Exposure of human umbilical vein endothelial cells to α-Klotho increased NO production and induced eNOS phosphorylation and inducible NOS expression [38]. Along the same line, α-Klotho has been shown to suppress the expression of the adhesion molecules ICAM and VCAM by the attenuation of the NF-κB signalling pathway upon tumour necrosis factor-α (TNF-α) stimulation [39]. Another mechanism by which α-Klotho protects the endothelium was demonstrated by Kusaba et al., who showed that α-Klotho mediated the internalization of the transient receptor potential canonical 1 and vascular endothelial growth factor receptor 2 (VEGFR2) complex, thereby preventing hyperpermeability and endothelial apoptosis through an increase of calcium influx in endothelial cells incubated with VEGF [40]. Although extensive research has already provided much information on the beneficial effects of α-Klotho on endothelial damage, the relationship between α-Klotho and vascular dysfunction in patients with CKD remains poorly established. In CKD patients, lower α-Klotho levels were found to be an independent biomarker of arterial stiffness and defective FMD [41], and correlated with circulating von Willebrand factor levels [42]. However, while a deficiency of serum α-Klotho has been linked to cardiovascular complications in some studies [41,43], this issue is still debated as Seiler et al. [2] found no relationship between soluble α-Klotho and cardiovascular outcomes in a cohort of CKD Stages 2-4 patients. Taken together, while experimental studies strongly suggest that α-Klotho preserves endothelial integrity in many different ways, there currently is no strong clinical evidence for a role of α-Klotho deficiency in CKD-mediated endothelial injury. Vitamin D. Vitamin D deficiency, defined as serum 25(OH)D concentrations <20 ng/ml (50 nmol/L), is associated with both an increased prevalence and incidence of cardiovascular morbidity and mortality in CKD [44,45].
In the kidney, 25(OH)D is converted by 1a-hydroxylase to its active form 1, 25(OH) 2 D to exert its effects on distant target tissue [46]. By binding the vitamin D receptor, 1,25(OH) 2 D activates both genomic and non-genomic pathways related to cellular proliferation and differentiation, and also on the endocrine and immune system [46]. As a consequence of CKD, there generally is a deficiency of 25(OH)D and a reduced production of active vitamin D, both contributing to reduced vitamin D actions on target tissues, including the vascular endothelium [45,46]. Vitamin D deficiency is associated with decreased FMD in patients with CKD of different stages [47,48]. In experimental models of CKD, active vitamin D analogues restored abnormal expression of aortic genes and improved endothelial function in a 5/6 nephrectomy rat model [18,49]. Similarly, active vitamin D also protected against vascular leakage and endothelial cell detachment in in vivo models of CKD [32]. As a novel and potential protective mechanism, active vitamin D was shown to improve cell-cell interaction, disrupted after exposure to human uraemic plasma, leading to preservation of the endothelial barrier function [30]. In patients with CKD Stages 3 and 4, improvement of FMD by active vitamin D under low 25(OH)D circumstances has been reported [50]. Similar results were also observed in dialysis patients with vitamin D deficiency, where active vitamin D improved FMD of the brachial arteries [51][52][53]. In contrast, however, no effect of active vitamin D on brachial artery FMD or biomarkers of inflammation and oxidative stress was found in patients with advanced CKD and type 2 diabetes, and in the majority of clinical trials among diverse populations vitamin D administration has failed to show an improvement of endothelial function [54][55][56][57]. In addition, other clinical studies showed no significant effect of oral active vitamin D on left ventricular mass index in CKD patients (the PRIMO and OPERA trials) [58,59] and no reduction of cardiovascular events in haemodialysis patients without secondary hyperparathyroidism (J-DAVID) [60]. These contradicting findings urge the search for better positioning the potential role of vitamin D administration in patients with CKD, especially in relation to disturbances in endothelial function and structure. Phosphate and FGF23. CKD impairs phosphate balance, ultimately resulting in hyperphosphataemia [61]. In clinical studies, hyperphosphataemia and even high-normal serum phosphate concentrations represent one component of the increased risk of cardiovascular complications and mortality in both the general and CKD population [62,63]. Recently, a number of studies suggested that phosphate may exert direct toxic effects on endothelial cells [64]. Specifically, in vitro experiments with endothelial cells demonstrated that highphosphate concentration increases oxidative stress and decreases NO synthesis via inhibiting phosphorylation of eNOS [65]. This finding is in line with a clinical study in healthy subjects, which demonstrated that high dietary phosphate loading impaired flow-mediated vasodilation, indicating acute endothelial dysfunction [65]. In addition, exposure of endothelial cells to high-phosphate concentration also promoted the formation of EMPs with impaired capacity of angiogenesis [66,67] and downregulated VE-cadherin and reduced ZO-1 protein levels, which are similar effects as found in endothelial cells exposed to uraemic media [31]. 
Importantly, in both healthy and CKD mice, it was reported that a high-phosphate diet promoted endothelial inflammation and dysfunction, and increased endothelial cell detachment [68]. To compensate for the decreased glomerular filtration of phosphate in the setting of CKD, FGF23, synthesized by osteocytes/osteoblasts, inhibits tubular reabsorption of phosphate, thereby restoring its net excretion. Besides phosphate exposure, other factors such as hyperparathyroidism, exogenous 1,25(OH)2D, calcium loading and inflammation also contribute to the elevation of plasma FGF23 concentration in CKD [69]. Although FGF23 may contribute to cardiovascular disease by the disturbance of mineral metabolism, FGF23 itself is independently associated with cardiovascular complications in different stages of CKD [70], and also with impaired vasoreactivity and increased arterial stiffness in patients with impaired renal function [70]. Furthermore, experimental ex vivo data suggest that FGF23 can directly impair endothelium-dependent relaxation upon acetylcholine stimulation [71]. This effect appeared to be mediated by the reduction of NO bioavailability due to an accumulation of either asymmetric dimethylarginine (ADMA) [71] or superoxide [72]. Remarkably, the presence of a receptor for FGF23 on endothelial cells is not firmly established, and therefore the underlying molecular mechanisms have remained obscure so far. Overall, further clinical studies are warranted to delineate the pathological mechanisms linking phosphate and FGF23 with endothelial cell abnormalities in CKD patients. Uraemic toxins. Progression of CKD leads to the accumulation in blood and tissues of uraemic retention solutes [73]. As a result, the cardiovascular system is constantly exposed to the potentially toxic effects of a range of uraemic retention solutes inducing, among other complications, endothelial damage [74]. One well-characterized uraemic toxin is ADMA, which is known to exert a negative impact on endothelial cell stability in both in vivo and in vitro experimental models [74]. Indeed, ADMA is considered a circulating endogenous inhibitor of eNOS [75], and its accumulation has been associated with ECD in patients with CKD [75,76]. In CKD mice, an increased serum concentration of ADMA caused attenuated endothelium-dependent vasodilation of aortic rings by inhibiting eNOS phosphorylation, owing to its property of being a competitor of L-arginine (the precursor of NO) as substrate for eNOS [77]. Furthermore, ADMA induces stress fiber and focal adhesion formation in a RhoA and Rho kinase-dependent pathway, leading to limited endothelial repair [78]. Importantly, ADMA also impairs the regeneration of injured endothelium by reducing the differentiation, mobilization and function of EPCs [79]. Formed by complex pathways, the covalently protein-bound toxins advanced glycation end products (AGEs) are the result of non-enzymatic glycation and oxidation of proteins, lipids and nucleic acids, and they accumulate in CKD [80]. In various cell types, AGEs exert diverse cellular responses via the multiligand cell-surface receptor for AGEs (RAGE) [81]. The activation of RAGE in endothelial cells in vitro induced expression of adhesion molecules, increased endothelial permeability, impaired NO production and increased reactive oxygen species (ROS) formation [82]. Moreover, in patients with CKD, decreased endothelial reactivity has been correlated with increased circulating levels of AGEs [83].
Using an in vitro approach, this study demonstrated that AGEs isolated from serum of patients with CKD induced suppression of eNOS, and this effect was attenuated after RAGE blockade [83]. Recently, several studies demonstrated that other (non-covalently) protein-bound uraemic toxins such as p-cresyl sulphate (PCS) and indoxyl sulphate (IS) exert critical toxic effects on endothelial cells in CKD. In patients with CKD, PCS is the main circulating form of p-cresol and is independently associated with cardiovascular complications [84]. In addition, markers of endothelial damage such as EMPs are directly associated with free-serum p-cresol concentrations in haemodialysis patients [85]. The same study demonstrated in vitro that PCS induced a dose-dependent increase of shedding EMP, whereas this effect was prevented by inhibition of Rho kinase [85]. An in vitro study confirmed the role of the Rho-kinase pathway in PCS-mediated toxicity. Upon exposure to p-cresol, an increased endothelial permeability and barrier disruption were induced by alterations of VE-cadherin membrane distribution [86]. IS is another critical player in the development of vascular disease and is also associated independently with elevated mortality rate in patients with CKD [87]. IS is associated with worsened FMD and arterial stiffness in CKD patients [88]. This study also demonstrated that IS impaired the chemotactic motility and colony-forming ability of EPCs, suggesting that IS contributes to the pathogenesis of ECD by limiting the vascular repair capacity [88]. In addition, several in vitro studies showed that IS can directly disrupt the stability of the endothelial cells through other molecular pathways. Specifically, IS increased EMPs release and impaired endothelial wound healing capacity [89,90]. Moreover, it promotes endothelial activation by ROS-induced activation of NF-jB. Similar to p-cresol, cell culture exposure to IS resulted in endothelial gap formation by VE-cadherin disassembly and stress fiber formation [91]. Overall, uraemic toxins may impact the vasculature by disrupting the integrity of the endothelial cell barrier, promoting endothelial activation and weakening its recovery capacity by impairing the EPCs function. Interestingly, as highlighted in Figure 2, the deleterious effects induced by uraemic toxins in experimental research share many characteristics with the endothelial abnormalities present in CKD patients or cell-based assays with endothelial cells exposed to uraemic media, suggesting that they are important mediators in the development of CKD-induced ECD in patients. Oxidative stress and inflammation Numerous studies have demonstrated that CKD is associated with increased oxidative stress and inflammation [92,93]. Oxidative stress can be considered as accumulation of ROS in parallel with impaired or overwhelmed endogenous antioxidant mechanisms [94]. ROS are classically defined as partially reduced metabolites of oxygen that possess strong oxidizing capabilities [94]. The high production of ROS in CKD may contribute directly or indirectly to the pathogenesis of the cardiovascular disease by inducing endothelial injury [95]. Findings in animal models of chronic renal failure confirmed that enhanced generation of ROS leads to decreased NO bioavailability and impairment of the normal function of the endothelium [96]. Furthermore, increased levels of oxidative stress markers are associated with impaired endothelial function in CKD patients [97]. 
Moreover, chronic or prolonged ROS production is tightly connected to inflammatory processes [98], by activating transcription factors such as NF-κB, triggering a pro-inflammatory, pro-adhesion (of leucocytes) and pro-oxidant phenotype [98]. In addition, the activation of the NF-κB pathway in endothelial cells is also triggered by inflammatory cytokines such as interleukin-6 and TNF-α [99]. These pro-inflammatory molecules are known to be elevated in patients with CKD and cause ECD [8,100]. Taken together, the development of a pro-inflammatory and pro-oxidative state during renal dysfunction is associated with oxidative stress, vascular NF-κB activation and inflammation, thus forming a vicious cycle amplifying ECD.
FIGURE 2: Summary of the impact of different uraemic toxins in high concentrations. Effects of the uraemic toxins ADMA, AGEs, PCS and IS on endothelial function, circulatory markers, structural changes in the vascular endothelium and the endothelial repair capacity are highlighted as follows: patients (red), in vivo animal models (blue) and cell-based assays (green). Dark circles indicate that the study was performed in a CKD setting, while no circle indicates studies performed by the addition of exogenous uraemic toxin.
THERAPEUTIC STRATEGIES TO PROTECT THE ENDOTHELIUM IN CKD
Detailed knowledge of the factors in CKD that induce ECD can pave the way to endothelial-protective therapeutic strategies, aiming to ameliorate cardiovascular disease in CKD. Based on the above, several options emerge and their clinical and experimental evidence is summarized in Table 1. The overarching approach might be the restoration of the mineral metabolism network by correcting hormonal disturbances and counteracting the potential deleterious influence of uraemic toxins, inflammatory mediators and ROS. Recently, exogenous α-Klotho therapy has been shown to be effective in attenuating high-phosphate diet-induced renal and cardiac fibrosis and in accelerating renal recovery after acute kidney injury in mice [101,102]. Although the protective effects of exogenous α-Klotho administration in uraemia-mediated ECD in animal models remain to be investigated, in vitro data suggest that the endothelial-protective properties of α-Klotho are worthy of being tested in vivo. In this context, α-Klotho protein exerts protective effects by reducing NF-κB translocation in cultured endothelial cells upon exposure to serum of Stage 5 CKD patients [103]. Moreover, exogenous α-Klotho attenuates in vitro the endothelial damage induced by the uraemic toxin IS and modulates the FGF23-mediated impairment of NO synthesis and increase of oxidative stress [104,105]. As a potential option to restore impaired mineral balance and protect the endothelium, vitamin D replacement has raised great expectations to treat cardiovascular complications in CKD patients. However, as mentioned previously, data regarding the beneficial effects of vitamin D supplementation on cardiovascular disease, including endothelial function, are conflicting. In CKD animal models, active vitamin D treatment mitigates the impact of uraemia not only on endothelial function but also on structural alterations [18,32,49].
In randomized trials, several active vitamin D analogues lead to favourable changes on the vascular function in CKD patients of Stages 3-4 undergoing haemodialysis with or without vitamin D deficiency [50][51][52][53]; mean while, other studies reported no improvement in FMD with patients of advanced CKD [57,106]. Overall, active vitamin D may potentially play different roles in protecting the vascular endothelium during CKD, but further studies are needed in this area. Given its potential role in ECD, direct neutralization of the effects of phosphate and FGF23 may be another therapeutic option to protect the development of ECD. Options to accomplish the reduction of serum phosphate concentrations include treatment with phosphate binders. As an example, the phosphate-binder sevelamer hydrochloride was shown to ameliorate the phosphate-induced ECD in uraemic mice [68]. Furthermore, in hyperphosphataemic patients with Stage 4 CKD, sevelamer improved FMD, possibly mediated by parallel declines in FGF23 levels [107]. In vitro, sevelamer was effective also in protecting against endothelial activation upon uraemic media and AGEs exposure [108]. Thus, declining serum phosphorus concentrations might lead to better endothelial function and cardiovascular health in CKD patients. Alternatively, strategies to counteract high serum FGF23 concentrations such as the application of monoclonal antibodies has already been tested and shown to be effective for improving ex vivo vasodilator responses to acetylcholine in uraemic mice [71]. However, as demonstrated by Shalhoub et al. [109], the beneficial effects achieved by the neutralization of FGF23 signalling can be outbalanced by incrementing serum phosphate. Thus, inhibiting pathological FGF23mediated pathways and lowering phosphate serum concentrations simultaneously may be a potential therapeutic strategy to reduce endothelial damage in CKD. Because of their harmful effects on endothelial cells, reducing concentrations of uraemic toxins, ROS and inflammatory cytokines in CKD patients by dialysis may promote endothelial cell health [110,111]. Indeed, ECD induced in vitro by serum from CKD patients led to remodelling of the extracellular matrix and this effect was mitigated in cells treated with serum from the same patients after haemodialysis therapy [29]. Unfortunately, most protein-bound uraemic retention molecules cannot be removed by dialysis. To overcome this limitation, as an absorbent of the uraemic toxin IS, AST-120 has shown to be effective to improve vascular relaxation in uraemic mice [112] and ameliorating the microvascular dysfunction in haemodialysis patients [113]. Other therapeutic approaches require components that may counterbalance the deleterious effects of oxidative stress, inflammation or toxicity, possibly by using anti-oxidants or inflammatory mediators [74]. Finally, patients with dialysis-dependent CKD following renal transplantation have improved endothelial function [114,115]. These benefits also include the normalization of the functions of the EPCs, contributing to a better repair [114]. Overall, future studies should focus on the effective removal of these retention solutes in uraemic patients in order to attenuate ECD and promote endothelial repair. C O N C L U S I O N S In patients with CKD, ongoing endothelial damage in the vascular system exists and is frequently overlooked. However, endothelial damage is thought to be a central driver of progressive cardiovascular complications. 
The pathogenesis of ECD in patients with renal dysfunction results from an imbalance between increased endothelial damage and impaired regeneration. In addition, limited vasoreactivity, in particular vasodilatory properties, exists. These processes may result from the progressive loss of the vasculoprotective factors vitamin D and α-Klotho together with an increment of ECD mediators such as FGF23, uraemic toxins, ROS and inflammatory cytokines. Therapeutic strategies aiming at a better endothelial health should be based on correcting the derangements of the mineral homoeostasis, removing the retention solutes and limiting oxidative stress.
CONFLICT OF INTEREST STATEMENT
None declared.
Prenatal screening of DiGeorge (22q11.2 deletion) syndrome by abnormalities of the great arteries among Thai pregnant women
Objective 22q11.2DS (deletion syndrome) is one of the common serious anomalies resulting in a high perinatal morbidity and mortality rate. Nevertheless, prenatal diagnosis of 22q11.2DS in Southeast Asia has never been described and its prevalence in prenatal series has never been explored. The objective of this study was to describe the experience of prenatal diagnosis of 22q11.2DS in the Thai population and to determine its prevalence among fetuses prenatally diagnosed with abnormalities of the great arteries.
Methods A prospective study was conducted on pregnant Thai women prenatally diagnosed with abnormalities of the great arteries in the second trimester. The recruited cases were investigated for fetal 22q11.2 deletion by in situ hybridization with a probe specific to the DiGeorge/VCFS TUPLE 1 region located on chromosome 22 for the locus D22S75, and 22qter for a telomere-specific sequence clone as the control region.
Results Five out of the 42 (11.9%) fetuses with abnormalities of the great arteries meeting the inclusion criteria were proven to have 22q11.2DS. The most common abnormalities were tetralogy of Fallot (or variants) and right-sided aortic arch, followed by thymic hypoplasia.
Conclusion As observed in western countries, we have documented that, among pregnant Thai women, 22q11.2DS is highly prevalent in fetuses with abnormalities of the great arteries (approximately 12%). This information is important when counselling couples to undergo prenatal testing for 22q11.2DS, since it is vital in the patients' decision of termination or continuation of pregnancy and in a well-prepared management of the affected child.
Introduction
22q11.2 deletion syndrome (22q11.2DS) is one of the most common deletion syndromes in humans, with a prevalence of 1 in 2,000 to 6,000 live births [1]. It is caused by a developmental defect of the third and the fourth pharyngeal pouches and the fourth aortic arch. This leads to several presentations, including craniofacial anomalies such as cleft palate, micrognathia and eye anomalies, congenital heart defect, an absent or small thymus and parathyroid glands leading to hypocalcemia and immunodeficiency, musculoskeletal anomalies, developmental delay, seizure, and behavioral or psychiatric complications. Most cases (83.3-95%) that are diagnosed prenatally are noticed by an abnormal fetal cardiac structure [2][3][4]. Other prenatal findings that are rarely reported include renal anomalies, cerebral anomalies, and neural tube defects [3,5]. At Maharaj Nakorn Chiang Mai Hospital, the prevalence of the disease is unknown because FISH analysis for this disease is not a standard practice. In cases at this hospital, after evaluation of the fetuses by a maternal-fetal specialist, prognostication, and genetic counselling, most parents choose termination of pregnancy without further investigation (karyotyping, FISH, or microarray) due to economic limitations. Though prenatal diagnosis of 22q11.2DS has been reported several times, most cases have been reported by western countries and are rarely from other parts of the world. Little is known about the extent of this problem in our country or in Southeast Asia. Only a few reports of 22q11.2DS among the Thai population have been studied [6][7][8][9].
Noteworthy, Wichajam and Kampan [6] reported that there was a difference in clinical phenotypes and immunological features of 22q11.2DS in north-eastern Thai children compared to those in the western countries. Moreover, to the best of our knowledge, prenatal diagnosis of 22q11.2DS has rarely been described among the Asian population. However, one retrospective study of the Korean population was reported by Lee et al. [10], who demonstrated a strong association between a variety of prenatally-diagnosed conotruncal cardiac defects and 22q11.2DS. Accordingly, prenatal diagnosis of 22q11.2DS in other parts of the world including our country remains to be explored. Thus, we conducted this prospective study with an aim to describe the experience of prenatal diagnosis of 22q11.2DS in the Thai population and to determine its prevalence among fetuses prenatally diagnosed with abnormalities of the great arteries. Materials and methods A prospective descriptive study was conducted at the Maharaj Nakorn Chiang Mai Hospital between July 2015 and April 2018, with an ethical approval by the Institute Review Boards (Study code: OBG-2560-04711). The pregnant women meeting the inclusion criteria were invited to participate in the study with a written informed consent. The inclusion criteria were as follows: 1) women with fetuses prenatally diagnosed with abnormalities of the great arteries including conotruncal heart defects (TOF, tetralogy of Fallot; DORV, doubleoutlet of the right ventricle; TGA, transposition of the great arteries, the truncus arteriosus), aortic or pulmonary stenosis, coarctation or interrupted aortic arch, and right-sided aortic arch. 2) women undergoing fetal echocardiography in the second trimester, in which extra-cardiac anomalies and fetal thymus were included. The recruited cases were investigated for 22q11.2 deletion by in situ hybridization with a probe specific to the DiGeorge/VCFS TUPLE 1 region located on chromosome 22 for the locus D22S75 and 22qter for a telomere specific sequence clone as the control region using amniocentesis, cordocentesis, or postnatal peripheral blood. Demographic data of the pregnancies, prenatal ultrasound characteristics, and pregnancy outcomes were described and recorded in the study record form. All the recruited cases were followed until delivery for the final outcomes of pregnancy. In statistical analysis, descriptive statistics were used to express mean, standard deviation, and percentage, using SPSS version 21.0 (IBM Corp.; IBM SPSS Statistics for Windows, Armonk, NY, USA). right-sided arches only without any other conotruncal or cardiac anomalies, whereas 2 cases underwent a termination of pregnancy and one experienced preterm birth with death of the neonate. Two cases survived with a prolonged NICU admission. Fig. 1 Discussion Insights gained from this study demonstrate that among the Thai or probably the Asian population, the prevalence of 22q11.2DS is as high as nearly 12% of the fetuses prenatally diagnosed with abnormalities of the great arteries. It may be concluded that the prevalence is similar to that reported in western countries. Our evidence strongly suggests that pregnancies with a prenatal detection of abnormal great arteries should be encouraged to test for fetal 22q11.2DS. Owing to the fact that prenatal screening of 22q11.2DS has never been practiced in our country, our finding can probably lead to a change in our practice in Thailand and Southeast Asia, although analysis of its cost-effectiveness remains to be explored. 
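The headline estimate quoted above (5 of 42 fetuses, roughly 11.9%) rests on a small sample, a limitation the authors acknowledge below. As a minimal illustration, assuming SciPy is available and using an exact (Clopper-Pearson) binomial interval rather than the authors' SPSS descriptive statistics, the point prevalence could be reported together with a 95% confidence interval like this:

```python
from scipy.stats import beta

def prevalence_with_ci(k, n, alpha=0.05):
    """Point prevalence k/n with an exact (Clopper-Pearson) binomial CI."""
    p = k / n
    lower = 0.0 if k == 0 else beta.ppf(alpha / 2, k, n - k + 1)
    upper = 1.0 if k == n else beta.ppf(1 - alpha / 2, k + 1, n - k)
    return p, lower, upper

# The study's headline figure: 5 deletions among 42 fetuses with
# abnormalities of the great arteries.
p, lo, hi = prevalence_with_ci(5, 42)
print(f"prevalence = {p:.1%} (95% CI {lo:.1%}-{hi:.1%})")
```

The width of such an interval makes the small-sample caveat explicit whenever the approximately 12% figure is used for counselling.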
22q11.2DS has been documented far more in newborns than those in fetuses. The clue to a prenatal diagnosis is the presence of a congenital heart disease. The large prenatal series found that conotruncal heart defects were the most common fetal phenotype (92%) followed by a thymic hy-poplasia (86%) and a urinary tract abnormality (34%) [3]. Conotruncal malformations refer to the abnormality involving either the aortic or the pulmonary outflow tract, and is more relevant than the non-conotruncal malformations [2,11]. The most prevalent cardiac defect in literature is the TOF, accounting for 20-45%, followed by pulmonary atresia with VSD; 10-25% [12]. In our study, 4 out of the 5 cases were categorized as cases with a conotruncal defect and one case exhibited only a right-sided aortic arch. The specific cardiac defect is the main cause of neonatal death, at an average age of 3-4 months [13]. Also in adults, sudden cardiac death and heart failure are the most common causes of death, even in patients without a congenital heart disease [14]. Notably, in this study, the isolated right-sided aortic arch without other structural anomalies was found in 1 of the 5 cases with 22q11.2DS, consistent with the findings reported in western countries [15]. However, an abnormal laterality associated with 22q11.2DS may include a right-sided aortic arch, double aortic arch, cervical aortic arch, and an abnormal origin of the subclavian arteries [13]. A recent meta-analysis showed that the proportion of 22q11.2 deletion was 5.1 (95% CI, 2.4-8.6) in fetuses with a right-sided aortic arch and in the absence of other intra-cardiac or extra-cardiac abnormalities [15]. However, 5% of these had extra-cardiac abnormalities detected after birth such as a unilateral renal agenesis and a gastrointestinal malformation [15]. Thus, in cases of isolated abnormalities of the aortic arch, a follow-up study and an immediate postnatal echocardiography and an electrolyte study to detect other associated anomalies that may be overlooked in the first scan are highly recommended [1,16]. Early detection and management can improve fetal and neonatal outcomes. When Case 1 was diagnosed with 22q11.2DS at 20 weeks of gestation, the couple decided to continue pregnancy after comprehensive counselling. During prenatal follow-up, polyhydramnios developed at 31 weeks of gestation (amniotic fluid index [AFI], 32 cm) and amnioreduction was performed 1 week later due to maternal discomfort (AFI, 39 cm). The cause of polyhydramnios was unclear. Hypothetically, it was due to an airway compression by the right-sided aortic arch. However, it may be also caused by an airway abnormality [17,18], but the cause of the polyhydramnios in our case was not proved postnatally. Sacca et al. [19] reported that about 70% of the patients with a 22q11.2 deletion syndrome were identified to have airway anomalies including tracheomalacia (36%), subglottic stenosis (28%), and laryngomalacia (26%). Thus, the evaluation of airway structure and function should be considered and included in a systemic assessment among these patients since it can sometimes lead to a neonatal death. Our series also supported that absent or hypoplastic thymus (in 3 out of 5 cases), is an indication for determination of 22q11.2DS. Chaoui et al. [20] suggested that the thymusthoracic ratio of less than 0.25 was highly correlated with 22q11.2DS. Thymus size can be evaluated by measurement of the thymus diameter or thymus-thoracic ratio and comparison with reference ranges. 
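The practical rule sketched in this discussion, namely that an abnormality of the great arteries is itself an indication for 22q11.2 testing and that a small thymus (thymic-thoracic ratio below about 0.25, per Chaoui et al. [20]) is a further supporting sign, can be written as a small decision helper. This is an illustrative sketch only; the function name, the way the two criteria are combined, and the use of 0.25 as a hard cut-off are assumptions made for the example, not a validated clinical rule:

```python
TT_RATIO_CUTOFF = 0.25  # literature cut-off reported to correlate with 22q11.2DS

def suggest_22q11_testing(great_artery_abnormality, thymic_thoracic_ratio=None):
    """Illustrative helper: flag a fetus for 22q11.2 deletion testing.

    Any abnormality of the great arteries is treated as an indication on its
    own; a hypoplastic thymus (TT ratio < 0.25) is treated as an additional,
    stand-alone supporting indication.
    """
    if great_artery_abnormality:
        return True
    if thymic_thoracic_ratio is not None and thymic_thoracic_ratio < TT_RATIO_CUTOFF:
        return True
    return False

# Examples: a conotruncal defect alone, and an isolated small thymus.
print(suggest_22q11_testing(True))                               # True
print(suggest_22q11_testing(False, thymic_thoracic_ratio=0.21))  # True
```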
We encourage the use of thymus diameter because our previous study showed that thymus diameter was more reproducible and simpler than the thymus-thoracic ratio [21]. In the general population, due to multiple and non-specific prenatal presentations, it may be difficult to set the guideline for 22q11.2 analysis. Prompt investigation for 22q11.2DS when congenital heart disease is diagnosed by fetal echocardiography is widely accepted especially defects with a very high risk such as an interrupted aortic arch, type B (50-80% risk), truncus arteriosus (30-50%), pulmonary atresia with VSD with MAPCAs (30-45%) [23][24][25][26]. Based on emerging knowledge, prenatal 22q11.2 analysis is also recommended when ultrasounds show other congenital heart defects combined with other structural abnormalities (as mentioned above), and increased NT in the first trimester [2,4]. This study suggested that, among the Thai and probably the Asian populations, when abnormalities of the great arteries and/or thymus hypoplasia are detected on fetal echocardiography, 22q11.2DS should be taken into account for differential diagnoses and should be selectively tested. Though such a work-up is expensive in our country, the prevalence of the disease is relatively high in cases of the great artery abnormalities and testing in such cases may be worthwhile. The strengths of this study were: 1) A prospective nature of the study enabled us to determine the prevalence of 22q11.2DS among fetuses with conotruncal anomalies. 2) Pediatric echocardiography or fetal autopsy was performed to confirm prenatal ultrasound findings. The weaknesses of this study that could limit its generalizability were: 1) The sample size was too small to allow us to make conclusions with high confidence. 2) Only Thai pregnancies were recruited. Therefore, the study population could not perfectly represent other populations even in Asia. In conclusion, in an era of a high resolution of ultrasonography, many regions have standard guidelines for prenatal anatomical screening, leading to an increasing number of prenatal diagnoses of 22q11.2DS. The main finding of the disease is the presence of abnormal great arteries and thymus hypoplasia, but it can also be associated with a variety of other anomalies. Similar to the reports in the western countries, we have documented that among Thai pregnant women, 22q11.2DS is highly prevalent in fetuses with abnormalities of the great arteries (approximately 12%). This information is important when counseling couples to undergo a prenatal test for 22q11.2DS, since this information is helpful in patients' decisions of termination or continuation of pregnancy, or in a well-prepared management of the affected child.
Recent Progress on Genetically Modified Animal Models for Membrane Skeletal Proteins: The 4.1 and MPP Families The protein 4.1 and membrane palmitoylated protein (MPP) families were originally found as components in the erythrocyte membrane skeletal protein complex, which helps maintain the stability of erythrocyte membranes by linking intramembranous proteins and meshwork structures composed of actin and spectrin under the membranes. Recently, it has been recognized that cells and tissues ubiquitously use this membrane skeletal system. Various intramembranous proteins, including adhesion molecules, ion channels, and receptors, have been shown to interact with the 4.1 and MPP families, regulating cellular and tissue dynamics by binding to intracellular signal transduction proteins. In this review, we focus on our previous studies regarding genetically modified animal models, especially on 4.1G, MPP6, and MPP2, to describe their functional roles in the peripheral nervous system, the central nervous system, the testis, and bone formation. As the membrane skeletal proteins are located at sites that receive signals from outside the cell and transduce signals inside the cell, it is necessary to elucidate their molecular interrelationships, which may broaden the understanding of cell and tissue functions. Protein 4.1 Family 1.Protein 4.1 in the Membrane Skeleton Originally, membrane skeletal networks were found as a two-dimensional lattice structure beneath erythrocyte membranes, as schematically shown in Figure 1.Protein 4.1R-membrane palmitoylated protein 1 (MPP1)-glycophorin C is a basic molecular complex, in addition to ankyrin-band 3, attaching the actin-spectrin meshwork structures to form erythrocyte membrane skeletons, which support the erythrocyte membrane and provide stability, especially under blood flow [1].Protein 4.1R (red cell) has 4.1-ezrin-radixin-moesin (FERM) and spectrin-actin binding (SAB) domains, and there are three other family members, namely 4.1B (brain), 4.1G (general), and 4.1N (nerve) [2,3].In this review, we summarize recent studies on protein 4.1G in the peripheral nervous system (PNS) and bone development.[4]. Protein 4.1G in PNS Protein 4.1G was identified as FK506-binding protein 13 (FKBP13) [5].We found its localization at two specific regions in Schwann cells that form myelin in the PNS: Schmidt-Lanterman incisures (SLIs) and paranodes [6].Protein 4.1G assists in organizing internodes in the PNS [7], and is essential for the molecular targeting of MPP6 [8] and celladhesion molecule 4 (CADM4) [7] in SLIs.Thus, 4.1G-MPP6-CADM4, an analogous molecular complex to the erythrocyte membranes, exists in the PNS, likely functioning to resist external mechanical forces in SLIs [9].4.1G-deficient (-/-) mice showed motor impairment, especially with advancing age, and measurement of motor nerve velocity and the ultrastructure of myelin in the sciatic nerves demonstrated abnormalities under 4.1G-/- [10,11].Considering that impairment of motor function with the tail-suspension test became worse after overwork treatment [11], careful attention is required in the rehabilitation of Charcot-Marie-Tooth (CMT) disease patients, which has been a controversial matter [12,13].The SLI is thought to have function as a suspension structure against mechanical extension, similar to a spring [14], and in the case of 4.1G deficiency, the cell membrane may be destroyed. 
CADM4 is probably related to the myelin abnormality under 4.1G-/- because the localization of CADM4 in SLIs disappears in 4.1G-/- nerves [15]. Furthermore, CADM4-/- nerves exhibited similar structural changes to those observed in human CMT disease [15,16]. CADM4 depletion and subsequent disruption may be related to erbB2 because they interact with each other [17,18]. Recent reports have shown that CADM1 has a role in maintaining cell-cell interspaces to promote the proper function of gap junction proteins [19,20]. Other than CADM4, several proteins, such as AP3 complex, tubulin, heat shock cognate 71 kD protein, and 14-3-3 protein, have been found that relate to 4.1G, from immunoprecipitation studies in the retina [21,22]. Because various proteins are associated with 4.1 families [2,23], it is necessary to further elucidate the binding proteins and functions for 4.1G in the PNS. Additionally, it remains unclear how actin-spectrin components are connected to the 4.1G-MPP6-CADM4 complex in the PNS, considering that actin abundantly forms filaments in SLIs [24]. Notably, the SAB domain is spliced in the retina [22], and another actin-binding peptide sequence was found in 4.1R near the common SAB domain in epithelial cells [25]. Thus, the relationship between 4.1G and the actin filaments in SLIs has not been clarified. Protein 4.1G in Bone Formation Bone structure is controlled by the balance between bone formation by osteoblasts and bone resorption by osteoclasts. Osteoblasts are differentiated from mesenchymal stem cells and preosteoblasts (osteoblast differentiation). Many factors, including hedgehog,
The primary cilium is a hair-like immotile sensory organelle that possesses selectively distributed membrane receptors, such as G-protein-coupled receptors (GPCRs) and growth factor receptors, and ion channels on its surrounding membrane (ciliary membrane) [31]. The cilium is formed in various cell types during the G0 phase of the cell cycle. A hedgehog receptor (i.e., smoothened) is one of the typical ciliary GPCRs expressed in the stem/progenitor cells of various organs (e.g., blood vessels, bone, brain, breast, esophagus, gallbladder, heart, intestine, liver, lung, pancreas, and stomach) [32][33][34][35]. Smoothened participates in the proliferation and differentiation of the cells to control organogenesis and tissue homeostasis. Preosteoblasts form primary cilia on their surface. Deletion of the ciliary components, such as intraflagellar transport 80 (IFT80), IFT140, and kinesin 3a (Kif3a), disrupts preosteoblast ciliogenesis, ciliary hedgehog signaling, and femur or tibia formation [36][37][38]. Knockout of IFT20 in the cranial neural crest (CNC) disrupts ciliogenesis in CNC-derived osteogenic cells and leads to malformation of craniofacial bones [39]. These studies demonstrate the importance of primary cilia in bone formation. However, 4.1G is not recognized as a ciliary component, although it promotes ciliogenesis in preosteoblasts, as observed in the 4.1G-downregulated MC3T3-E1 preosteoblast cell line and 4.1G knockout preosteoblasts on trabecular bone in mouse new bone tibia [30]. In 4.1G-suppressed MC3T3-E1 cells, ciliary hedgehog signaling and subsequent osteoblast differentiation were attenuated, revealing a novel regulatory mechanism of bone formation by 4.1G. Teriparatide, PTH-(1-34), is the first anabolic agent approved by the U.S.
Food and Drug Administration for the treatment of osteoporosis [40].Intermittent treatment with teriparatide facilitates osteoblast differentiation and suppresses osteoblast apoptosis [41,42].Teriparatide activates PTHR, which is a GPCR.It strongly activates adenylyl cyclase (AC), produces cyclic AMP (cAMP) through G s protein, and increases intracellular Ca 2+ through G q protein.In addition, 4.1G has been identified as an interacting protein of the carboxy (C)-terminus of PTHR [27].Overexpression of 4.1G increases the amount of PTHR on the cell surface and PTHR-mediated intracellular Ca 2+ elevation, suggesting that 4.1G augments the PTHR/G q pathway by stabilizing the plasma membrane distribution of PTHR [27].In contrast, PTHR/G s -mediated cAMP production decreases with 4.1G overexpression and increases with 4.1G downregulation [28,29].Mechanistically, 4.1G binds to the N-terminus of AC type 6 and attenuates its activity [29].These studies suggest that 4.1G alters the signal balance of PTHR, with a high 4.1G expression, G q > G s , and with a low 4.1G expression, G q < G s .It is necessary to investigate whether the regulation of the PTHR signaling balance by 4.1G is one of the mechanisms in the intermittent treatment of teriparatide.Moreover, the ciliary distribution of PTHR and its role in bone formation have been identified; PTH-related protein treatment and shear stress stimuli promote translocation of PTHR to primary cilia, and the ciliary PTHR mediates cell survival and osteogenic gene expression in osteoblastic and osteoclastic cells [43][44][45].The role of 4.1G in ciliary PTHR signaling remains unclarified. MPP Family 2.1. MPP in Membrane Skeleton In erythrocytes, the 4.1R-MPP1 (a.k.a.p55)-glycophorin C (GPC) molecular complex stabilizes erythrocyte membranes [46].MPP1 belongs to the membrane-associated guanylate kinase homolog (MAGUK) family, which is characterized by the presence of the postsynaptic density protein 95 (PSD95)/Drosophila disc large tumor suppressor (Dlg)/zonula occludens 1 (ZO1) [PDZ] domain, Src-homology 3 (SH3) domain, and catalytic inactive guanylate kinase-like (GUK) domain [47].The PDZ and SH3 domains can interact with lipids and proteins.The SH3 domain also has intramolecular and intermolecular interactions with the GUK domain.The GUK domain is thought to have low enzymatic activity, although the binding site for ATP and GMP in MPPs is intact.Except for MPP1, there are two L27 (Lin2-and Lin7-) domains, in which MPPs are capable of interacting with each other.Additionally, MPPs have a HOOK/D5 domain that binds to protein 4.1 members, and there are seven family members [48].MPP1 binds to two distinct sites within the FERM domain of the 4.1 family, and the alternatively spliced exon 5 in 4.1R is necessary for the membrane targeting of 4.1R in epithelial cells [49].In addition to the protein-protein interaction, palmitoylation helps transport MPP family proteins to cell membranes, and enzymes known as zinc finger DHHC-domain-containing palmitoyl acyl transferase (zDHHC/PATs) have roles in palmitoylation [50].In this review, we summarize recent studies on MPP6 and MPP2 in the PNS, CNS, and testis. MPP6 in PNS As mentioned previously, 4.1G-/-mice showed that protein 4.1G is essential for the molecular targeting of MPP6 and CADM4 in SLIs in the PNS, as shown in Figure 2a [7][8][9]. 
We evaluated what would happen if MPP6 itself was deleted [51]. MPP6 deficiency also resulted in the hypermyelination of peripheral nerve fibers, although the phenotypes, such as structural changes and impairment of motor function, were weak compared with 4.1G deficiency. The reason for hypermyelination without MPP6 was unclear. One of the MAGUK proteins, Dlg1 (SAP97), regulates membrane homeostasis in Schwann cells by interacting with kinesin 13B, Sec8, and myotubularin-related protein 2 (Mtmr2) for vesicle transport and membrane tethering [52]. The binding of the phosphatase and tensin homolog deleted on chromosome 10 (PTEN) to the specific PDZ domain of Dlg1 inhibits axonal stimulation of myelination [53], and this Dlg1-PTEN complex is thought to limit myelin thickness to prevent overmyelination in the PNS [54]. The Src family of signal transduction proteins are also potentially related to the MPP family, because they interact with each other [66,67]. Additionally, there are various PDZ-containing proteins in the PNS, such as MAGUK proteins (e.g., Dlg1 and MPP6), multi-PDZ domain protein 1 (MUPP1), pals-associated tight junction protein (PATJ), claudins, zonula occludens 1 (ZO1), and Par3 [68], but the extent to which they are interdependent or have mutual redundancy remains unclear.
MPPs and Lin7 2.3.1.Lin7 in PNS (Figure 2a) Mammalian Lin7 (a.k.a.Veli/Mals) that contains L27 and PDZ domains was originally identified in a protein complex with the potential to couple synaptic vesicle exocytosis to cell adhesion in rat brains, and there are three family members [69].Localization of Lin7 was found in SLIs, and MPP6 mainly transported Lin7 to SLIs in the mouse PNS [51].Interactions between the Lin7 and MAGUK families have been reported in various tissues, including MPP4 recruitment of PSD95 and Lin7c (Veli3) in mouse photoreceptor synapses [70], MPP7 formation in a tripartite complex with Lin7 and Dlg1 in MDCK culture cells, which regulates the stability and localization of Dlg1 to cell junctions [71], and MPP4 and MPP5 association with Lin7c at distinct intercellular junctions of the mouse neurosensory retina [72].The L27 domain is a scaffold for the supramolecular assembly of proteins in the Lin7 and MAGUK families [73][74][75].Originally, both Pals family proteins, MPP5 (Pals1) and MPP6 (Pals2), were identified as proteins associated with Lin7 [76].Although MPP5 was also reported in the PNS [77,78], our finding indicates that Lin7 transport in the PNS is mostly dependent on MPP6. Lin7 in the CNS (Figure 2b) In the cerebellum, high-resolution microscopic examination by Airy-confocal laser scanning microscopy revealed that the ring pattern in synaptic membrane staining and dot/spot areas inside synapses exhibited by Lin7 staining inversely correlated between MPP2+/+ and MPP2-/-synapses [79].In MPP2-/-dendrites in cerebellar granular cells (GrCs), the Lin7-stained dot/spot areas did not overlap with the microtubule-associated protein 2 (MAP2)-stained dendritic shaft, indicating that MPP2 deficiency does not directly impair microtubule-based transport.In contrast, CADM1 exhibited a ring pattern in MPP2-/-synaptic membranes, and the number of Lin7-immunostained dot/spot areas localized inside the small CADM1-immunostained small rings was higher in MPP2-/synapses than in MPP2+/+ ones.These results indicate MPP2 transports Lin7 from the dendritic shaft to postsynaptic membranes in synapses.Additionally, Lin7 was originally coimmunoprecipitated with CASK and Mint1, which bind to the vesicular trafficking protein Munc18-1 and are considered to play a role in the exocytosis of synaptic vesicles in presynaptic regions [69], whereas our findings demonstrated that Lin7 was abundantly localized at postsynaptic sites with MPP2 in GrCs in the cerebellum.2.3.3.Lin7 in Testis (Figure 2c) By immunohistochemistry (IHC), Lin7a and Lin7c were localized in germ cells, and Lin7c had especially strong staining in spermatogonia and early spermatocytes, characterized by staging of seminiferous tubules [80].Lin7 staining became weaker in MPP6-/testis according to both IHC and Western blotting, indicating a function of MPP6 in Lin7 transport in germ cells despite the unchanged histology of seminiferous tubules in MPP6deficient mice compared with that of wild-type mice.In cultured spermatogonial stem cells maintained with glial-cell-line-derived neurotrophic factor, Lin7 was remarkably localized along cell membranes, especially at cell-cell junctions.Thus, Lin7 protein is localized in germ cells in relation to MPP6, which is a useful marker for spermatogenesis. 
Proteins Interact with Lin7 Because MPP and protein 4.1 families are strongly related to Lin7 families, we listed the proteins associated with Lin7 from previous studies (Table 1) and categorized them into five groups.The first group is MAGUK family proteins and their relating proteins at cell-cell attaching sites, as described above in Section 2.3.1.The second group is the catenin-cadherin complex, an adhesion molecule.Aquaporin (AQP) 1 interacts with the Lin7-β-catenin complex in human melanoma and endothelial cell lines [81].β-catenin and N-cadherin also interact with Lin7 in the rat brain [82], and the small GTPase Rho effector rhotekin interacts with the Lin7b-β-catenin complex in rat brain neurons [83].In the third group, signal transduction proteins, such as the insulin receptor-substrate protein of 53 kD (IRSp53), are transported to tight junctions by Lin7 in cultured MDCK cells [84].Signal transduction protein was detected at synapses in the rat cerebellum [85], and N-methyl-Daspartate (NMDA) receptors increased in the IRSp53-knockout mouse hippocampus [86].In the fourth group, synaptic proteins, such as GluN2B, bind to Lin7, and their complexes are carried by kinesin superfamily (KIF) 17 on microtubules in hippocampal neurons [87].Interactions between the complex and PSD95 were also revealed in rat hippocampal postsynaptic regions [88]. In the fifth group, Lin7 interacts with several growth factor receptors.LET23 epidermal growth factor (EGF) receptor in Caenorhabditis elegans larval development [89] and Grindelwald tumor necrosis factor (TNF) receptor in Drosophila [90] are interesting examples, because they are related to the integration of cell signaling.Further examination of the Lin7 interaction with such receptors is necessary. Concerning Lin7 knockout mice, although mice lacking Lin7a or Lin7c were viable and fertile, double knockout of mice for Lin7a and Lin7c was lethal before sexual maturation, suggesting that the functions of Lin7a and Lin7c likely compensated each other [91].Additionally, Lin7a-and Lin7b-deficient mice are fertile and Lin7c was upregulated in mouse brain [92], indicating redundancy among Lin7 family members.Considering Lin7 in humans, disruption of cerebral cortex development by Lin7a depletion [93] and involvement in autism spectrum disorders by genetic alteration of Lin7b [94] has been reported.Therefore, target-cell-specific conditional disruption of Lin7 family proteins is required to elucidate the function of the Lin7 family.Transport of Grindelwalt (homologous to TNFR) [90] Category: Lin7-associating proteins are categorized into five groups as described in the text.BC: biochemical binding assay, IHC: immunohistochemistry, IP: immunoprecipitation, PD: pull down, YTH: yeast two-hybrid system. In the CNS, scaffolding for CADMs is more complicated, because many MAGUKs are associated with CADM1 [103,104].Although the PDZ domain of MPP2 was reported to directly interact with the C-terminus of CADM1 in rat hippocampal neurons [105], and nearly 80% of MPP2 dots overlapped with CADM1 areas by IHC and cerebellar lysate of MPP2 included CADM1 by immunoprecipitation study in our recent study in cerebellum, MPP2-/-synapses did not show reduction of CADM1 in cerebellar GrCs, as shown in Figure 2b [79].Considering that CADM1-/-mice exhibited small cerebella with a decreased number of synapses compared with wild-type mice [106], the redundancy of MAGUK and 4.1 families to locate CADM family proteins has not been clarified. 
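Because Table 1 did not survive extraction intact, the five-way grouping of Lin7 partners described in this section can be restated compactly. The dictionary below is only an illustrative reorganisation of examples already named in the prose, not a reconstruction of the full table:

```python
# Examples quoted in the text for each of the five Lin7-partner categories.
LIN7_PARTNER_GROUPS = {
    "MAGUK-family scaffolds at cell-cell contacts": ["MPP4", "MPP5", "MPP6", "MPP7", "Dlg1", "PSD95", "CASK"],
    "catenin-cadherin adhesion complex": ["beta-catenin", "N-cadherin", "AQP1", "rhotekin"],
    "signal transduction proteins": ["IRSp53"],
    "synaptic proteins": ["GluN2B", "PSD95", "KIF17"],
    "growth factor receptors": ["LET23 EGF receptor", "Grindelwald TNF receptor"],
}

for group, examples in LIN7_PARTNER_GROUPS.items():
    print(f"{group}: {', '.join(examples)}")
```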
MPP and Neurotransmitters MPP2 specifically localizes to the cerebellar granular layer, particularly to dendritic terminals in GrCs facing the mossy fiber (MF) terminus at the cerebellar glomerulus, as schematically summarized with MPP2-interactive proteins in Figure 3a [79], because the MF-GrC synapses are the first place to transduce excitatory electrical signals into cerebellum [107].MAGUK family proteins, such as PSD95 (Dlg4, SAP90), SAP102 (Dlg3), and Chapsyn-110 (Dlg2, PSD93), localize to both the molecular and granular layers [108].To clarify the specific localization of MPP2, localizations of various MAGUKs are demonstrated in Figure 3b-k CADMs are Ca 2+ -independent adhesion molecules, and they have binding properties to both protein 4.1 and MPPs [102].In the PNS, deficiency of the MPP6-Lin7 complex had little effect on CADM4, and cadherin and tight-junction proteins were retained [51].However, scaffolding for CADM4 in SLI is mostly dependent on protein 4.1G, as shown in Figure 2a [15,16,51].In testes, the expression and localization of CADM1 were retained in 4.1G/4.1Bdouble-/-and MPP6-/-mice, as shown in Figure 2c [8,10,80]. In the CNS, scaffolding for CADMs is more complicated, because many MAGUKs are associated with CADM1 [103,104].Although the PDZ domain of MPP2 was reported to directly interact with the C-terminus of CADM1 in rat hippocampal neurons [105], and nearly 80% of MPP2 dots overlapped with CADM1 areas by IHC and cerebellar lysate of MPP2 included CADM1 by immunoprecipitation study in our recent study in cerebellum, MPP2-/-synapses did not show reduction of CADM1 in cerebellar GrCs, as shown in Figure 2b [79].Considering that CADM1-/-mice exhibited small cerebella with a decreased number of synapses compared with wild-type mice [106], the redundancy of MAGUK and 4.1 families to locate CADM family proteins has not been clarified. As MPP2 was reported to interact with several GABA A R subunits [115] and various subunits are present in the cerebellum [119], it is necessary to consider the interdependence of the GABA A R subunits.In the thalamus of the α4-knockout mouse, δ was decreased, whereas α1 and γ2 were increased in extrasynaptic regions, suggesting compensation among GABA A R subunits [120].In addition, in the α1-knockout mouse, increases in the α3, α4, and α6 subunits, reductions in the β2/3 and γ2 subunits, and maintenance of the α5 and δ subunits were reported [121].Further studies on the balance of these GABA A R subunits under MPP deficiency are necessary. Several membrane skeletal proteins have been reported to interact with GABA A R. A giant ankyrin-G controls endocytosis of GABA A R by interacting with GABA A R-associated protein (GABARAP) in the mouse-cultured hippocampus [122].GABA A Rα5 interacts with a membrane skeletal ezrin-radixin-moesin family protein, radixin, in mouse hippocampus [123].GABA A R also interacts with neuroligin1 and CASK in inhibitory neuromuscular junctions in C. elegans [124].MPP2 may be dependent on these membrane skeletal proteins to locate GABA A R. 
MPP Families in Synapses MAGUK proteins become oligomers because of PDZ-SH3-GUK tandem domains, function as a molecular complex in cell membranes specifically at cell-cell adhesion areas, and occur in various tissues and organs [125,126]. Particularly, there are many MAGUK family proteins in synapses, which function in postsynaptic density formation and signal transduction, and their impairment is related to some mental diseases [110,[127][128][129][130].
A recent genome-wide association study (GWAS) also demonstrated the relationship between MPP6 and various psychiatric disorders: the MPP6 gene was included in 64 genome loci for bipolar disorders compared among European ancestry [131], in 109 genome loci associated with at least two psychiatric disorders including anorexia nervosa, attention-deficit/hyperactivity disorder, major depression, obsessive-compulsive disorder, schizophrenia, and Tourette syndrome [132], and in 108 genome loci for schizophrenia patients [133]. MPP6 was also included in 57 hard sweep genes after the initial movement of the evolutionarily recent dispersal of anatomically modern humans out of Africa, among genes related to biological processes, including ciliopathies, metabolic syndrome, and neurodegenerative disorders [134]. In addition, a GWAS for sleep disorders demonstrated novel genome-wide loci on human chromosome 7 between NPY and MPP6, and disruption of an ortholog of MPP6 in Drosophila melanogaster was identified in sleep center neurons relating to decreased sleep duration [135]. In these respects, it is necessary to evaluate neurological and psychological impairments in genetically modified MPP-deficient mice, which may be related to human diseases that are caused by mutation in MPP genes. Conclusions The 4.1 and MPP families are not only membrane skeletal components but are also widely distributed in various organs to transport intramembranous and signal transduction proteins. Especially, 4.1G has an obvious function in myelin formation in the PNS. There may be some interdependence and redundancy among the 4.1 and MPP families, as well as related proteins in other organs such as the CNS and testis, which brings about future challenges to examining cross-breeds of several genetically modified model mice. Considering that the molecular evolution of vertebrate behaviors may be related to the diversity of MAGUK proteins including MPPs [136], further evaluation of a wide range of molecular complexes, by proteomic and transcriptome analyses combined with genetically modified animal models, may broaden the understanding of normal morphological and physiological functions as well as physical and mental impairment.
Figure 1. Schematic representation of an erythrocyte membrane skeleton. The spectrin-actin network structure is connected by protein 4.1R-membrane palmitoylated protein 1 (MPP1) and ankyrin to the intramembranous proteins glycophorin C (GPC) and band 3, respectively. The concept was obtained from previous research [4].
Figure 2. Schematic representation of the relationships among membrane skeletal proteins (4.1, MPP, and CADM) in the PNS (a), CNS (b), and testis (c). Note the different interdependences among those proteins in different organs, revealed by the genetic depletion of the proteins. The picture is partially modified from a previous paper [51].
Note that the gene loci of MPP2 (in mouse chromosome 11) and Dlg2 (in mouse chromosome 7) are different.
Figure 4. Comparative localization of GABAARα1 (a,f,k) with MPP2 (b,c), gephyrin (g,h) and GABAARα6 (l,m) in mouse cerebellar glomeruli. Examples of two-color overlapping regions are shown in (d,e), (i,j), and (n,o) from areas in pictures (c,h,m), respectively. Detailed count data regarding the overlap is described in the text. The right lane demonstrates a summarized schematic drawing of their localizations obtained by immunohistochemistry; it does not consider how to make GABAAR with five subunits.
Table 1. Associated proteins to Lin7 families.
The Reconstruction of Cycle-free Partial Orders from their Abstract Automorphism Groups II : Cone Transitive CFPOs In this triple of papers, we examine when two cycle-free partial orders can share an abstract automorphism group. This question was posed by M. Rubin in his memoir concerning the reconstruction of trees. In this middle paper, we adapt a method used by Shelah in \cite{ShelahPermutation} and \cite{ShelahPermutationErrata}, and by Shelah and Truss in \cite{ShelahTrussQuotients} to define a cone transitive CFPO inside its automorphism group using the language of group theory. Introduction This paper draws on the methods employed in [4], which is about reconstructing the quotients of symmetric groups as permutations groups from the quotients of symmetric groups as abstract groups. This paper uses A5, the alternating group on five elements, chosen because it's the smallest nonabelian simple group, to represent the set being permuted. This paper also uses A5 to represent the CFPO. We take the abstract automorphism group of a cone transitive CFPO and define the original CFPO. Section 2 is devoted to properly defining the CFPOs where we apply this method. Section 3 produces a long chain of first order formulae, starting with the 60-ary formula that states 'these automorphisms form a subgroup isomorphic to A5, the alternating group on five elements'. There then follows a series of formulas with the goals of: defining subgroups whose support is exactly some of the extended cones of a single point; and expressing when two of these subgroups have disjoint support. These two goals are, by far, the hardest part of this paper. Afterwards, we have the relatively simple task of representing the points of the CFPO with these subgroups, and recovering the betweenness relation. The final section examines how we can recover the order from the betweenness relation. In some circumstances, the order relation is first-order definable from the betweenness relation, but not always, and certainly not with the same formula in all circumstances. To over come this, we end this paper by giving an Lω 1 ,ω -formula that always defines the order. 1. For all φ ∈ Aut(M ), if φ preserves X set-wise then φ| X , the restriction of φ to X, is the map obtained by taking the union of the standard restriction, which is a partial automorphism, and the restriction of the identity to M \ X. Symbolically φ| X := φ|X ∪ id| M \X . This is only a total automorphism in certain circumstances which crop up often in this chapter. The Domain of the Interpretation Every element ofḡ has finite order so for all x we know thatḡ(x) is an antichain. Pick one x such that |ḡ(x)| = 1 (possible, as A5 is not the identity), so there are gi that act non-trivially. Let Path xi, xj − Since each Path xi, xj − is finite andḡ(x) is finite, S is also finite, and therefore must be a CFPOn for some n. In the previous paper we showed that there was a tree T such that Aut(S) ∼ =P Aut(T ). The root of T is fixed by every automorphism of S, and hence by every element ofḡ. 1. E contains a union of Ci and at most one element from M \ supp(f ), which we call e; 3. if e exists then E contains at least two connected components, C0 and C1, and {e} = Path C0, C1 ; and 4. if D satisfies conditions 1-3 and E ∩ D = ∅ then E ⊆ D. Proof. Condition 2 of Definition 3.4 shows that X is preserved setwise byf , so by Lemma 3.10 of Part 1. There are x and y such that Path X, M \ X = Path x, y . Both x and y are fixed byf , so x, y ∈ M \ X. 
Suppose one of the cones above x intersects X and one of the cones below x intersects X. Let U be the upwards extended cone and let D be the downwards extended cone.f (U ) ∩f (D) = ∅, asf fixes x, sof (U ) ∩ X satisfies Conditions 1-3, and does not contain X giving a contradiction. Therefore we may assume that X is contained in extended cones above x. Let y0 and y1 lie in different extended cones below x. The definition of extended cone guarantees that Path x, y0 ∩ Path x, y1 = {x}, so Path x, y = {x}. Lemma 3.6. Letf satisfy Alt5. If we partition supp(f ) into two collections of extended connected components, which we will call X and Y , then (fi| X ) and (fi| Y ) satisfy Alt5. Proof. First of all, we must show that this lemma makes sense, i.e.f preserves the extended connected components of supp(f ) set-wise and therefore fi| X and fi| Y are automorphisms. Since the supports of (fi| X ) and (fi| Y ) are disjoint, Comm((fi| X ), (fi| Y )) holds. We consider the positive statements of the formula A5 thatf satisfies, which are of the form fifj = f k . Since fi = fi| X fi| Y for all i we can deduce that and since (fα| X fα| Y )| X = fα| X we conclude that (fi| X ) and (fi| Y ) satisfy all the positive statements of Alt5. We now consider the negative statements, those of the form fifj = f k . Repeating the argument for the positive statements allows us to deduce is a normal subgroup off . We have just found distinct f k and f l such that f k | X = f l | X , so since A5 is simple, this means that fi| X = id for all fi ∈f , contradicting the fact that X∩supp(f ) = ∅. Proof. Suppose there is an x such that gj(x) = x for some j and fi(x) = x for all i. Therefore There are gj and g k such that gjg k (x) = g k gj(x) as A5 is non-abelian, and if we substitute h −1 j for gj we find that Lemma 3.8. Let X and Y be extended connected components of supp(f ) and supp(ḡ). If Comm(f ,ḡ) and |X ∩ Y | ≥ 1 then either X ⊆ Y or Y ⊆ X. Proof. Let {x} = Path X, M \ X and {y} = Path Y, M \ Y . These are singletons by Lemma 3.5. Suppose X Y and Y X. First suppose that x = y. This means that Path X, Y = {x}, and that X and Y are entirely contained in the upwards and downwards extended cones of x, as illustrated in X Y x = y Figure 1: If x = y in Lemma 3.8 Recall Definition 3.4, and note thatḡ(X ∩ Y ) satisfies conditions 1 and 3 because both X and Y do, and by definition it satisfies condition 2. Therefore Y ⊆ḡ(X ∩ Y ). Thus ifḡ(X ∩ Y ) ⊆ X then Y ⊆ X and we are done. Similarly iff (X ∩ Y ) ⊆ Y then X ⊆ Y and we are done. We now suppose that there is a z ∈ X ∩ Y such thatf (z) Y andḡ(z) X. Let Cz be the extended cone of x that contains z. We consider the action off andḡ on the setf (Cz) ∪ḡ(Cz). Let fi ∈f map Cz into X \ Y and let gj map Cz into Y \ X. Then figj(Cz) = gj(Cz) and gjfi(Cz) = fi(Cz) contradicting the assumption that Aut(M ) |= Comm(f ,ḡ). This is depicted in Figure 20. Figure 2: Images of C z Now suppose that x = y. Suppose x ∈ Y and y ∈ X, and let z ∈ Y . By definition, y ∈ Path z, x , and since x is an endpoint of that path, Path z, x ⊆ X, and so z ∈ X. This is depicted in Figure 21. x y z Figure 3: x ∈ Y and y ∈ X If both x ∈ Y and y ∈ X then Path x, y ⊆ M \ (X ∪ Y ). This is depicted in Figure 22. Let z ∈ Y . By definition y ∈ Path x, z and since Path x, y X, we know that z ∈ X. Similarly, if z ∈ X then z ∈ Y , contradicting the assumption that X ∩ Y = ∅. x y Path x, y X Y Figure 4: x ∈ Y and y ∈ X We therefore suppose that x ∈ Y and y ∈ X. 
x ∈ Path y, fi(y) for any fi, as otherwise X will not be an extended connected component. Path-betweenness is preserved by automorphisms, so gj(x) ∈ Path gj(y), gjfi(y) andf andḡ commute, and y is fixed byḡ, hence gj(x) ∈ Path y, fi(y) . By symmetry y ∈ Path x, gj(x) and fi(y) ∈ Path x, gj(x) . From these facts we can deduce the path-configuration of x, y, gj(x) and fi(y). x y Path x, y Figure 5: Path x, y Since y ∈ Path x, gj(x) and x ∈ Path y, fi(y) we may add to Figure 23 fi(y) and gj(x) to obtain Figure 24. x y g j (x) f i (y) Figure 6: Path x, y , f i (y) and g j (x) But we also know that fi(y) ∈ Path x, gj(x) , so we deduce that fi(y) = x. Similarly gj(x) ∈ Path y, fi(y) shows that gj(x) = y. This contradicts the fact thatf fixes x andḡ fixes y, so we conclude that either X ⊆ Y or Y ⊆ X. Iff * ḡ has an orbit of length 20 then it also has a non-trivial orbit of some length other than 20. Proof. Lemma 3.5 of [4] is: "Suppose thatf ,ḡ are subgroups of Sym(X ) isomorphic to A5 (in the specified listings) which centralize each other, and such that f ,ḡ is transitive on X . Thenf * ḡ has an orbit of length 20. Moreover, iff * ḡ has an orbit of length 20 then is also has an orbit of some other length greater than 1." Let {Ai : i ∈ I} be the ECC of supp(f ) and let {Bj : j ∈ J} be the ECC of supp(ḡ). Lemma 3.8 shows that if Ai ∩ Bj = ∅ then Ai ⊆ Bj or Bj ⊆ Ai. Pick one such A and B, and without loss of generality assume that A ⊆ B. Let X be a connected component of A. X := f ,ḡ (X) Each member of X is a translate of X. The 'specified listings' in Shelah and Truss' Lemma 3.5 refers to the fact that the formula A5(f ) will be different depending on how we enumerate A5. For example, we could insist that f0 is the identity, and this would give a different formula to if we insisted that f5 is the identity. Our formula A5 is fixed so we need not worry about this assumption. For each y ∈ḡ(z) there is a unique gi ∈ḡ such that gi(z) = y, so we may labelḡ(x) by elements ofḡ. In this way, we can view the action ofḡ onḡ(x) as left multiplication. We may therefore label each cone of X as (aG, bH). We defineḡ,h ∈ Aut(M ) as follows: Ifḡ andh are not well-defined then there is an fi such that fi((aG, bH)) = (aG, bH) but there is a z ∈ (aG, bH) such that fi(z) = z. Howeverf acts transitively on the (aG, bH), so |f (z)| = 60. Lemma 3.10 shows that no suchf exists. If Together, we now have (ḡ * h) =f , so the lemma is proved. If supp(ḡ) ∩ supp(h) = ∅ then Lemma 3.9 shows thatḡ * h has an orbit of length at least 20. If g * h has an orbit of length 20 then there is also another orbit of length other than 20. Since the length is other than 20, this other orbit cannot lie in the same ECC as the orbit of length 20. Therefore if supp(f ) has exactly one extended connected component and every orbit has less than 30 members then Aut(M ) |= Indec(f ). We now turn our attention to the other direction, which we also do by contradiction. Suppose supp(f ) has multiple extended connected components. We let X be one of these extended connected components and considerf | X andf | M \X . These two both satisfy Alt5 (by Lemma 3.6) and their supports are disjoint, so they satisfy Comm. Finallyf | X * f | M \X =f , showing thatf | X andf | M \X witness the fact thatf does not satisfy Indec. Lemma 3.10 shows thatf cannot have an orbit of length 60. Lemma 3.11 shows that iff has an orbit of length 30 then Aut(M ) |= ¬Indec(f ). Path supp(f ), M \ supp(f ) is a singleton, as Aut(M ) |= Indec(f ). 
Let Since paths are preserved by automorphisms, this translates to gi Thus if gi = id then gi(z) ∈ supp(f ), i.e. fjgi(z) = gi(z) for all j, but since z ∈ supp(f ) there is a k such that f k (z) = z. This is depicted in Figure 25. This is a contradiction. Therefore if supp(f ) ∩ supp(ḡ) = ∅ then supp(f ) = supp(ḡ). Bothf andḡ must act transitively on the same antichain of immediate successors or predecessors of x f , whichf * ḡ must also act on. Since Aut(M ) |= Indec(f ) and Aut(M ) |= Indec(ḡ), Proposition 3.12 shows that this antichain must have less than 30 members, but Lemma 3.9 showed thatf * ḡ must have an orbit of at least 20 members. Lemma 3.9 also showed that iff * ḡ has an orbit of length 20 then there was another orbit. Thereforef acts transitively on a set with strictly more than 20 elements, and hence at least 30, which contradicts Proposition 3.12. Lemma 3.14. Recall that [supp(f ) supp(ḡ)] is the formula Iff andḡ satisfy this formula then the support ofḡ is contained in the support off . Proof. The two sentences are tautologies, so the formula given here is equivalent to the one given in Definition 3.1. Suppose thatf andḡ are such that This means that supp(f ) and supp(ḡ) each have exactly one ECC, which have a non-empty intersection. We define If supp(f ) = supp(ḡ) then supp(f φ ) = supp(ḡ φ ) for all φ ∈ Aut(M ). Therefore for all φ ∈ Aut(M ) We now suppose that supp(f ) = supp(ḡ). In Case 1 we consider supp(ḡ) supp(f ). In Case 3 we consider supp(f ) supp(ḡ). If neither supp(ḡ) supp(f ) nor supp(f ) supp(ḡ) then we are either in Case 2, where x f = xg, or Case 4 where x f = xg. Case 1 Since x f is moved byḡ there is an x f such that x f and x f lie in the sameḡ-orbit and x f = x f . Let φ be an automorphism that switches x f and x f , but fixes anything that it does not have to move. If z ∈ supp(f ) then φ(z) ∈ supp(f ) and so disj(f φ ,f ). Since Path x f , x f ⊆ supp(ḡ) we know that supp(ḡ) = supp(ḡ φ ) and therefore ¬disj(ḡ φ ,ḡ) Thus φ witnesses the fact thatf andḡ do not satisfy [supp(ḡ) supp(f )]. Case 2 which cannot be empty, as x f x f . Since x f ∈ Path xg, x f and x f ∈ supp(ḡ) the support ofḡ must contain x f . However x f is clearly contained in supp(φ * ḡ), so ¬disj(φ * ḡ,ḡ). Thus φ witnesses the fact thatf andḡ do not satisfy [supp(ḡ) supp(f )]. Case 3 For a contradiction, assume that and let φ witness this. Since disj(f φ ,f ) holds, and supp(ḡ) is contained in supp(f ), we know that disj(ḡ φ ,ḡ), giving a contradiction. Now assume that Aut(M ) |= ∃φ(f φ =f ∧ḡ φ =ḡ) Let C0, C1 be two of the cones of x f that are contained in the support off and let fi ∈f map C0 to C1. Sinceḡ φ =ḡ, there is an x ∈ supp(ḡ) such that φ(x) = x. We suppose without loss of generality that x ∈ C0. If φ(x) ∈ C1 then f φ i will map x to fiφ(x) = fi(x) and sof φ =f . If φ(x) ∈ C1 then conjugation by φ will at least switch the roles C0 and C1, and sof φ =f . Case 4 In this case, x f = xg. Let C f 0 , . . . be the cones of f that are contained inf , and let C g 0 , . . . be the cones of xg that are contained inḡ. We may pick our indices such that C f i ∈ supp(f )∩supp(ḡ) if and only if C g i ∈ supp(f ) ∩ supp(ḡ). Assume that only one C f i is not in the intersection of the supports, and assume without loss of generality that this is C f 0 . Let φ ∈ Aut(M ) be such that supp(φ) C g 0 and. Then showing thatf andḡ do not satisfy [supp(f ) supp (ḡ)]. Now we assume that more that one C f i is not in the intersection of the supports, without loss of generality C f 0 and C f 1 . 
Let φ ∈ Aut(M ) be such that φ swaps C g 0 and C g 1 and fixes everything else point-wise. Since φ fixes supp(f ) point-wise, Aut(M ) |=f φ =f . Now consider a elements ofḡ which switches C g 0 and C G 2 . The corresponding elements ofḡ φ will switch C g 1 and C g 2 , and so Aut(M ) |=ḡ φ =ḡ. We say thatf andḡ have the same direction, or act in the same direction if ∃y ∈ supp(f ) (x f < y) ⇔ ∃z ∈ supp(ḡ) (xg < z) We say thatf andḡ have different directions, or act in different directions if then f = g andf andḡ have the same direction. Leth be a tuple such that: Then supp(h) ⊂ supp(ḡ) and supp(h) ∩ supp(f ) = ∅, giving a contradiction. Now suppose that xg ∈ supp(f ) and x f ∈ supp(ḡ). We consider two situations, where the point of Path x f , xg next to x f is in the same direction asf or in the other direction (depicted in Figure 27).f x f x 1 x 2 Figure 9: Path f, g and the Direction off This picture depicts both situations. By "the point of Path x f , xg immediate to f is in the same direction asf " we mean that x1 ∈ Path f, g , while x2 ∈ Path f, g is the other situation we need to consider. Suppose x1 ∈ Path f, g and let φ be an automorphism of M which fixes f and switches x1 with a member of supp(f ). Then φ * f witnesses the fact thatf andḡ cannot satisfy SamePD(f ,ḡ). If x2 ∈ Path f, g then any tuple that satisfies Indec, fixes f and moves x2 will do as a witness. We know that if Aut(M ) |= SamePD(f ,ḡ) then x f = xg. Iff andḡ act in different directions then we may pick any point in supp(ḡ) and any tuple that fixes that point and moves x f to find our counter-example. It remains to show that iff andḡ fix the same point and have the same direction then they satisfy SamePD. Assume without loss of generality thatf andḡ act on the successors of x f . Leth be any tuple such that [ We now have our formula that defines the domain of interpretation, however there will be a lot of pairs that satisfy RepPoint but fix the same point. Lemma 3.19. Recall that EqRepPoint(f0,f1;ḡ0,ḡ1) is the formula ) Proof. Clearly x f = xg if and only if SamePD(fi,ḡj) holds for some choice of indices. Interpreting Betweenness From now on we will adopt the convention that when a lower case letter, such as g, appears in one of our formulas, it is actually a pair (ḡ0,ḡ1) that satisfies RepPoint. We will refer to the point represented by g as xg. 1. Since the formula Temp1PB insists that x h ∈ supp(ḡ1) and x k ∈ supp(ḡ0), and since any path between something in supp(ḡ0) and something in supp(ḡ1) must pass through xg, we conclude that xg ∈ Path x h , x k . Additionally, sinceḡ0 andḡ1 point in different directions there must be both an immediate successor and an immediate predecessor of xg lying on Path x h , x k thus showing that if Temp1PB holds then the properties it was intended to describe hold. The other direction is immediate. 2. Since the formula Temp2PB holds both x h and x k are in supp(ḡ1), if xg ∈ Path x h , x k then it is either a local maximum or a local minimum, as supp (ḡ0) is an extended connected component originating at xg. If xg ∈ Path x h , x k then will prevent Temp2PB from holding. Again, the other direction is immediate. 3. If xg ∈ Path x h , x k then either xg is a local maximum or minimum, or xg lies on a chain of length at least 3, so Temp1PB and Temp2PB successfully cover every case. At this point we have recovered M up to order reversal. 
We may, if we wish, recover the full order using a variety of different methods, which I will detail later, but from here we can prove that the class is faithful by recovering the betweenness reduct of the CFPOs in question. Definition 3.24. B(h;f,g) is the formula 3.25. B(h; f, g) if and only if x h is between x f and xg. Proof. Let M, ≤ , N, ≤ ∈ KCone. Let Φ be the first-order interpretation comprising of: • RepPoint(x) as the formula that defines the domain of interpretation; • EqRepPoint(x, y) as the equivalence relation on the domain of interpretation; • B(z; x, y) as the betweenness relation. Reconstructing the Order It is impossible to reconstruct the order of all members of KCone with a first-order interpretation. Some members of KCone are isomorphic to their own reverse image, so the automorphism group has no way of knowing which way is 'up'. In those circumstances, it will be necessary to make an artificial choice over which way is 'up'. When reconstructing linear orders in [1], McCleary and Rubin use a parameter pair for this purpose, obtaining a formula φ(x1, x2; y1, y2), which interprets This approach is also possible in this context, but not in a first order way. Since all members of KCone embed the alternating chain, as the path between {x1, x2} and {y1, y2} grows, we require longer and longer formulas. We must use an Lω 1 ,ω formula to recover the order with this technique. Another approach would be to exploit the fact that we have insisted that Ramification order is definable when finite, so if ro ↓ (M ) < {ro ↑ (M ), ℵ0}, then we can find a first order formula that depends on ro ↓ that interprets the order. While first order, I find this far less satisfactory, as it gives lots of different formulas, each of which only work in limited circumstances. Even together they do not work everywhere. However, I will present both. Proof. By definition, x n y → Related(x, y), so if Aut(M ) |= x n y then M |= x ≤≥M y. Each of the xi are related to x, but {xi : i = 0, ..., n} forms an antichain. Suppose that none of the xi's lie above x. Since ro ↓ (M ) ≤ n this means that at least two of the xi's, say x0 and x1, are contained in the same downwards cone of x. Therefore x0 ∨ x1 < x, but the connecting set of the path from x0 to x1 must be which would imply that x ∈ Path x0, x1 , which contradicts the assumption that Aut(M ) |= x n y. Thus at least one of the xi's is above x. Suppose, without loss of generality, that x0 is above x. If any of the other xi's lie below x0 then they will be related to xi, giving a contradiction. By the above argument, all of the xi's lie in different cones. On the other hand, any n + 1 element antichain above x, where every element is contained in a different cone above x satisfies the all of the properties demanded of it, except ( i≤n ¬PathBetween(x; y, xi)) If x < y then we will be able to choose x0 such that x0 is contained in the same cone as y, so any such antichain will satisfy the formula. If y < x then any path from any of the xi's to y will pass through x, and so the formula cannot be satisfied. Abandoning First Order Logic Throughout this subsection, we assume that y1 and y2 satisfy Related. All the formulas mentioned will use y1 and y2 as parameters. We will use y1 and y2 to indicate the direction of the order, so we suppose that y1 < y2. Definition 4.4. (x1 <0 x2 ⇔ y1 < y2) is the formula that insists that x1, x2, y1 and y2 are all related and using B(z; x, y) insists that they lie in one of the configurations depicted below. 
Assume that Aut(M ) |= α2, so either x2 <M x1 <M y1 or y1 <M x1 <M x2, but the former contradicts our assertion that not both of x1 and x2 are related to both y1 and y2.
2015-03-12T11:50:45.000Z
2015-02-11T00:00:00.000
{ "year": 2015, "sha1": "111a57d635be0316e24384f0d5b835d8afeae874", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "111a57d635be0316e24384f0d5b835d8afeae874", "s2fieldsofstudy": [ "Mathematics" ], "extfieldsofstudy": [ "Mathematics" ] }
219724811
pes2o/s2orc
v3-fos-license
Participation in a Fruit and Vegetable Prescription Program for Pediatric Patients is Positively Associated with Farmers’ Market Shopping Objectives: The primary objective was to investigate the association between participation in a farmers’ market fruit and vegetable prescription program (FVPP) for pediatric patients and farmers’ market shopping. Methods: This survey-based cross-sectional study assessed data from a convenience sample of 157 caregivers at an urban pediatric clinic co-located with a farmers’ market. Prescription redemption was restricted to the farmers’ market. Data were examined using chi-square analysis and independent samples t-tests. Results: Approximately 65% of respondents participated in the FVPP. Those who received one or more prescriptions were significantly more likely to shop at the farmers’ market during the previous month when compared to those who never received a prescription (p = 0.005). Conclusions: This is the first study to demonstrate that participation in a FVPP for pediatric patients is positively associated with farmers’ market shopping. Introduction Diets rich in fruits and vegetables are necessary to support healthy growth and development [1][2][3], and prevent chronic disease [4][5][6]. Despite this, intake among USA children, particularly those from low-income households, fails to meet recommendations [7,8]. With childhood consistently identified as a critical period for the establishment of lifelong dietary patterns [8][9][10], public health efforts should address barriers to fruit and vegetable consumption among youth. Although knowledge deficits are certainly a concern [11], general nutrition education cannot be the sole consideration since many children face persistent struggles with food access and affordability [12,13]. To directly address these challenges, some health care practices have implemented farmers' market fruit and vegetable prescription programs (FVPPs) [12,14]. Much like traditional prescriptions, physicians write the prescription, which is then exchanged for fresh produce at a local farmers' market. Farmers' market shopping is directly related to the purchase and consumption of fruits and vegetables [15,16]. Therefore, programs that successfully draw children and families to local farmers' markets have the potential to positively influence dietary intake. Unlike food shopping with children at convenience and grocery stores-which can induce requests for nutrient-poor snack foods [17]-shopping at a farmers' market with a fruit and vegetable prescription intentionally directs children to fresh, high-nutrient foods. Although farmers' market monetary incentive programs for adults are associated with increased purchasing of fresh produce from local markets [18][19][20][21], it is unclear whether farmers' market FVPPs for children have the same effect. Previous research related to FVPPs has primarily focused on programs that target income-eligible adults with diet-related health conditions, such as diabetes or heart disease [14,22]. Although few studies have examined FVPPs directed at children, early results suggest that exposure to pediatric FVPPs is associated with improvements in perceived and measured household food security [12,23], access to fresh foods [12,23], and child dietary patterns [12,24,25]. The current study is the first to investigate the relationship between participation in a farmers' market FVPP for pediatric patients and farmers' market shopping. 
Study Population and Design Nearly 60% of children who reside in Flint, Michigan live in poverty [26] and the community has a limited number of full-service grocery stores operating within the city [27]. In August 2015, the Hurley Children's Center (HCC), a Michigan State University-affiliated residency training pediatric clinic with more than 11,000 visits per year, relocated to the downtown Flint Farmers' Market (FFM), a move that increased the percentage of people coming by bus from the city's poorest neighborhoods for general groceries [28]. The FFM is a year-round market with over 50 vendors who sell products inside and outside of the market building. Most vendors are local farmers who sell fresh produce, but the FFM also offers a meat and poultry market, breads and baked goods, cheeses, and several food stands. The co-location of one of the largest pediatric clinics in Flint with the downtown farmers' market was an intentional effort to actively address persistent challenges with child access to fresh, high-quality foods. The HCC's patient population is approximately half female (51%), majority (73%) are African American, and over 85% have Medicaid as their insurance. Shortly after the relocation, the HCC partnered with the FFM to implement a FVPP for pediatric patients [12]. The program included one $10 prescription that may be redeemed only for fresh fruits and vegetables at the FFM. When the $10 prescriptions were introduced at the HCC in May 2016, eligibility was limited to well-child visits. Approximately one month later, the FVPP was expanded to include both well-and sick-child visits to effectively increase the number of children served by the program. One prescription for fruits and vegetables was then provided to every child at each office visit. Because the FVPP was provided only during well-child visits when it was introduced at HCC, some pediatric patients had not received prescriptions prior to enrollment in the current study. This cross-sectional study enrolled a convenience sample of 157 caregivers of children presenting for care at the HCC. To be eligible for inclusion, participants had to be 18 years of age or older, English-speaking, and have one or more children who were active patients at the HCC. Trained clinic staff recruited participants from the HCC waiting room between June and August 2017, approximately one year after the implementation of the prescription program. Data and Instrumentation After reviewing the implied consent letter, study participants completed a 42-item survey. The survey took approximately 30 min to complete, and trained clinic staff were available to assist with survey completion. Survey items included questions from previously validated instruments related to food security and food access as well as questions related to caregiver and child characteristics, participation in food assistance programs, participation in the prescription program, and farmers' market shopping. Caregivers were also asked to report their address or nearest intersection, from which we defined residence in Flint or not. Household participation in the FVPP was measured with a single question that asked caregivers whether any of their children had received a fruit and vegetable prescription from the HCC. The primary outcome of interest was farmers' market shopping during the previous month. 
The survey question asked, "Have you ever shopped at the Flint Farmers' Market before?", and the answer choices were "Yes, in past week", "Yes, in past month", "Yes, in past year", "Yes, over a year ago", and "Never". Binary indicators were created to specify farmers' market shopping within the previous month and year. The USA. Household Food Security Module: Six Item Short Form was used to measure financially-based food insecurity and hunger [29]. The sum of affirmative responses served as the household's raw score. Food security status was assigned based on this calculated raw score (0-1 = high/marginal food security; 2-4 = low food security; 5-6 = very low food security). To evaluate specific access to fruits and vegetables, caregivers completed four questions from the Michigan Behavioral Risk Factor Surveillance Survey (MBRFSS) related to fruit and vegetable quality and access in neighborhood stores. Responses were answered on a 5-point Likert scale (1 = "always" to 5 = "never"). Because evidence suggests that the neighborhood food environment (NFE) influences dietary habits [30][31][32], this relationship was also considered among our sample. In a previous study of Flint's NFE, a modified Nutrition Environment Measures Survey in Stores was deployed at every store in and around the city. Each store was scored, representing a composite of the availability, quality, and variety of healthy foods (including versus less healthy options) in the store. These scores were linked to the geocoded site of each store, and a kernel density analysis was run to generate a continuous, interpolated surface. Effectively, areas with a greater density of stores having better availability, quality, and affordability of healthy foods had higher NFE scores. The minimum NFE score possible was 0 and the maximum NFE score possible was 1270. In the current study, we geocoded the home location of every pediatric patient involved in the FVPP and extracted the NFE score present at that point [33]. Statistical Analysis Within the study time frame, we estimated that there would be 700 caregivers who brought children to appointments. We calculated that a sample of at least 124 caregivers would be needed to have a 95% confidence level with a margin of error of 8% to estimate our outcome of shopping at the FFM. Frequencies and percentages were calculated from demographic data to describe characteristics of caregivers who completed the survey. When examining differences between prescription program participants and non-participants, subjects were excluded if data were missing for any variable involved in the analysis. Analyses, including NFE, were conducted only for our records with a home address or street intersection within Flint. Analyses included chi-square, independent samples t-tests, and logistic regression using Statistical Package for the Social Sciences (version 24, IBM Corp., Armonk, NY, USA, 2016) with significance set at p < 0.05. Researchers received approval for the study from Hurley Medical Center Institutional Review Board (1070530-1). The study was carried out in accordance with the Ethical Principles established by the Declaration of Helsinki. Results Surveys were collected from 157 caregivers of 278 pediatric patients who ranged from 0 to 19 years of age. The mean number of children per caregiver was 2.35 ± 1.03. The majority of respondents were female (93%) and residents of Flint (74%), with approximately half (48%) reporting a high school education or less (Table 1). 
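As an aside on the methods just described, both the sample-size target and the food-security scoring can be reproduced with a short calculation. The sketch below is a minimal Python illustration rather than the authors' SPSS workflow; it assumes a conservative proportion of 0.5 and a finite-population correction against the estimated pool of 700 caregivers, which together yield the reported minimum of 124 respondents, and it encodes the published 6-item raw-score cut-offs.

```python
import math

def required_sample_size(population: int, margin: float = 0.08,
                         z: float = 1.96, p: float = 0.5) -> int:
    """Sample size for estimating a proportion, with finite-population correction."""
    n0 = (z ** 2) * p * (1 - p) / margin ** 2        # infinite-population estimate
    n = n0 / (1 + (n0 - 1) / population)             # finite-population correction
    return math.ceil(n)

def food_security_category(raw_score: int) -> str:
    """Categorize the 6-item USDA short-form raw score (count of affirmative answers)."""
    if not 0 <= raw_score <= 6:
        raise ValueError("raw score must be between 0 and 6")
    if raw_score <= 1:
        return "high/marginal food security"
    if raw_score <= 4:
        return "low food security"
    return "very low food security"

print(required_sample_size(population=700))   # 124, matching the target reported above
print(food_security_category(3))              # "low food security"
```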
Most survey respondents (63%) were receiving benefits from the Supplemental Nutrition Assistance Program (SNAP). For respondents who reported receiving SNAP, 30% did not receive benefits from the Special Supplemental Nutrition Program for Women, Infants and Children (WIC) or Double-Up Food Bucks (DUFB), 20% received both WIC and DUFB, 15% received DUFB and not WIC, and 35% received WIC and not DUFB. Table 2 describes differences in caregiver and child characteristics based on participation in the FVPP. There were statistically significant differences (p < 0.05) between participants and non-participants with regard to caregiver gender, city of residence, and child race. Farmers' Market Shopping Approximately 65% of caregivers who completed the survey indicated that their child had received at least one fruit and vegetable prescription at the HCC. Participants were significantly more likely than non-participants to receive benefits through WIC (p < 0.001), but differences in SNAP participation were not significant. As shown in Table 3, caregivers who reported that their child had received a fruit and vegetable prescription were significantly more likely to report shopping at the farmers' market during the previous month when compared to caregivers whose child had never received a prescription (50.6% versus 26.8%, respectively; p = 0.005). Similarly, caregivers who reported that their child had received a fruit and vegetable prescription were significantly more likely to report shopping at the farmers' market during the previous year when compared with caregivers who reported that their child had never received a prescription (75.3% versus 53.6%, respectively; p = 0.007). A logistic regression analysis was done to examine what influences having shopped at the FFM in the last month; statistically significant characteristics (WIC participation, city of residence, caregiver gender, and child race) and having received at least one fruit and vegetable prescription were included as co-variates. The overall model fit the data (Hosmer-Lemoshow Goodness-of-fit statistic p = 0.965) and only having received at least one fruit and vegetable prescription was statistically significant (p = 0.003) when controlling for WIC (p = 0.817), city of residence (p = 0.740), caregiver gender (p = 0.374), and child race using the variables of African-American (p = 0.164), and Caucasian (p = 0.293). Food Security Nearly half of all caregivers (45%) who completed the survey indicated low or very low levels of household food security. As shown in Table 3, food security scores among caregivers who reported that their child had received a prescription (1.89 ± 2.06) were not significantly different from those who reported that their child had not received a prescription (1.75 ± 1.89). Neighborhood Food Environment The above characteristics were also cross-referenced with the neighborhood food environment (NFE). These scores were available only within the city limits of Flint, thus 102 families who had shared whether or not they had received a prescription met the criteria. Of the families included in the analysis, the average NFE score was 257 ± 238. 
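The association and regression analyses reported above can be outlined in code. The following Python sketch (scipy and statsmodels in place of the SPSS used in the study) shows the shape of a chi-square test on a participation-by-shopping table and a logistic regression with the same covariate structure; the counts and the simulated data are hypothetical placeholders, not the study data.

```python
import numpy as np
import pandas as pd
from scipy.stats import chi2_contingency
import statsmodels.api as sm

# 2 x 2 table: rows = received a prescription (yes/no),
# columns = shopped at the farmers' market in the past month (yes/no).
# Counts are hypothetical placeholders, not the study data.
table = np.array([[46, 45],
                  [15, 41]])
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi-square = {chi2:.2f}, p = {p:.3f}")

# Logistic regression: shopping ~ prescription + WIC + residence + gender + child race,
# fitted here on simulated data with an arbitrary data-generating process.
rng = np.random.default_rng(0)
n = 157
df = pd.DataFrame({
    "prescription": rng.integers(0, 2, n),
    "wic": rng.integers(0, 2, n),
    "flint_resident": rng.integers(0, 2, n),
    "caregiver_female": rng.integers(0, 2, n),
    "child_african_american": rng.integers(0, 2, n),
})
prob = 1 / (1 + np.exp(-(-1.0 + 1.1 * df["prescription"])))
df["shopped_past_month"] = rng.binomial(1, prob)

X = sm.add_constant(df.drop(columns="shopped_past_month"))
fit = sm.Logit(df["shopped_past_month"], X).fit(disp=False)
print(fit.summary2().tables[1][["Coef.", "P>|z|"]])
```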
Examining differences in NFE scores by participation in the program and redemption of prescriptions, there was no statistically significant difference in NFE scores with either, indicating that families in neighborhoods with poor food environment scores had no difference in use of the prescription program and the farmers' market as compared to families living in neighborhoods with better food environment scores. Please see Table 3. Additionally, there was no difference in NFE score by food security groups. Discussion The current study is the first to demonstrate a positive association between child participation in a farmers' market FVPP and farmers' market shopping. This relationship remained consistent when controlling for potential confounding variables, such as participation in WIC, caregiver gender, city of residence, and child race. Findings support previous evidence that monetary incentives for fresh produce from local farmers' markets are effective in increasing purchase and consumption of fresh fruits and vegetables [18][19][20][21]. Although seasonality of fruits and vegetables is also a determinant of intake [34], results of the current study indicate a significant association between participation in the year-round FVPP and farmers' market shopping in the past year. Interestingly, this suggests that seasonality of fresh produce likely did not influence participation in the FVPP. Farmers' markets, which provide easy access to fresh, high-quality foods [12,13,35], are a particularly important resource for minority children living in low-income households who are at an elevated risk for poor dietary behaviors [7,8,36]. In addition to providing early exposure to a wide variety of healthy foods, many farmers' markets also support exposure activities for children, such as cooking classes and food tastings, which show strong potential to improve diet quality [37,38]. Farmers' market-based nutrition education programs that focus on children have, in fact, been successful in increasing consumption of fruits and vegetables among participants [39]. Improved year-round access to fresh, high-nutrient foods as well as positive food experiences are notable benefits of pediatric FVPPs that necessitate a visit to a local farmers' market. Evidence suggests that higher fruit and vegetable consumption during childhood is associated with reductions in chronic diseases during adulthood [5,40], emphasizing the particular importance of programs that target children. Primary care physicians-who follow children from infancy to young adulthood-are well positioned to address food access and affordability challenges. This is crucial during childhood when dietary behaviors are established [9,10]. Uniquely different from current programs that focus largely on fruit and vegetable prescriptions as a disease-management approach for adults with diet-related chronic health conditions [14,22], the current study emphasized the critical role of fruits and vegetables in the prevention of chronic disease during formative childhood years [9,10,41]. This approach goes beyond traditional nutrition education to address persistent environmental challenges related to access and affordability of fresh produce. Previous literature has demonstrated important differences between families of low socioeconomic status (SES) and those of higher SES when addressing home food environment [42][43][44][45][46]. 
For example, research has shown that children of mothers at the lowest educational levels ate fewer fruits and vegetables when compared with children of mothers at the highest educational levels [44]. Furthermore, research has demonstrated that mealtime structures, including families eating together, television viewing while eating, and sources of meals (restaurants, schools, home), are important in relation to child eating patterns and that caregivers influence child eating behaviors through their own behaviors, attitudes, and feeding styles [47]. Although the current study did not specifically assess dietary patterns in relation the participation in the FVPP, previous evidence indicated that the current program was perceived as effective in improving dietary patterns of participating children [12]. Future research will examine measured changes in dietary behaviors of caregivers and children in relation to their exposure to the FVPP. With nearly half of caregivers in the current study reporting household food insecurity, results raise concerns about poor dietary patterns and food insecurity issues facing children in Flint. Furthermore, previous research in Flint has pointed to poor quality of produce available to residents who often struggle with additional challenges, such as limited transportation, that further compound access and affordability issues [12,13,48]. Because of these interconnections, we cross-referenced our data with NFE scores from previous work in Flint [33]. NFE scores were not significantly associated with program participation measures in our study, indicating that the quality of the food environment in one's home neighborhood was not a significant predictor of participation (that is, people participated regardless of the context of their neighborhood food environment). This is additionally noteworthy because the HCC is co-located with the FFM, providing easy access to the farmers' market after pediatric office visits. Future research will investigate this relationship among patients and families at a pediatric clinic that is located away from the downtown area and outside of the local farmers' market. Evidence suggests that fruit and vegetable intake is consistently and positively associated with income [8,41]; therefore, pediatric FVPPs are likely to disproportionately benefit low-income children and adolescents. Previous research in Flint has indicated that poor dietary patterns and food insecurity are pervasive issues among children living in this low-income, urban city [49]. Although the current study did not demonstrate a significant difference in food security scores between caregivers who reported that their child had received a prescription and those who did not, it is important to note that previous research has suggested that pediatric fruit and vegetable prescriptions may be an effective tool to improve dietary habits [12,25] as well as food security among low-income households [12,23]. Previous research demonstrating positive impacts of pediatric fruit and vegetable prescriptions on household food security differed from the current study in that eligibility was limited to children who were obese or overweight with distribution amounts based on household size [23]. Future research in Flint will investigate various FVPP models as well as caregiver-and child-reported changes in food security scores over time in relation to participation in prescription programs. 
We acknowledge study limitations, including the lack of randomization, self-reported data, and small sample size. Additionally, selection bias may exist, although our analysis showed the characteristics of the study population closely matched those of the source patient population at the HCC which consists primarily of low-income, minority children receiving public health insurance. Because we did not assess behavioral supports within the home, school, or community, we are unsure whether or how other nutrition support programs may have played a role in the FVPP. Finally, the cross-sectional study design did not allow researchers to investigate the impact of the prescription program over time and assessments related to purchase and consumption of fruits and vegetables, as well as child-report of food security were not included. Still, this was an important preliminary study to examine associations between participation in a pediatric farmers' market FVPP and farmers' market shopping. Conclusions Children, particularly those living in poverty, often fail to meet dietary recommendations related to fruit and vegetable intake [7,8]. Given the positive association between participation in a pediatric FVPP and farmers' market shopping, fruit and vegetable prescriptions written by primary care providers could have meaningful impacts on children's dietary patterns. Future research will investigate whether, and to what degree, participation in FVPPs for pediatric patients is associated with long-term changes in food security, food access, and dietary patterns of children.
2020-06-18T09:05:18.119Z
2020-06-01T00:00:00.000
{ "year": 2020, "sha1": "8118a4d9be3852b1cecec221f1a5702bbc816c09", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/1660-4601/17/12/4202/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "777e77494d21e3af253881b417d95d3d93816e52", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
262004870
pes2o/s2orc
v3-fos-license
A Compact Substrate Integrated Waveguide H-Plane Horn Antenna with Dielectric Arc Lens: An H-plane horn antenna constructed in SIW (substrate integrated waveguide) is proposed. It has a dielectric arc lens for better directivity and a simple microstrip transition as feed. The horn, the lens and the transition share the same substrate. The formula derived from optical principles shows that a suitable dielectric lens can improve the directivity of the antenna significantly. A prototype was fabricated; the antenna size is 39.175 × 14 × 2 mm³. The frequency band is from 25.5 to 28.5 GHz. The measured gain of this antenna is about 9 dB, and the bandwidth at 10 dB return loss is over 12%. I. INTRODUCTION Rectangular waveguide horn antennas have found many applications because of their excellent radiation properties, such as symmetric patterns, high gain, very wide bandwidth, and easy fabrication, but their implementation in planar form is difficult because of the bulky geometry and especially the 3D horn dimensions. These difficulties were resolved a few years ago by the substrate integrated rectangular waveguide (SIW) [1][2][3]. SIW structures largely preserve the well-known advantages of conventional rectangular waveguides, namely high Q and high power capacity, while gaining the added advantages of microstrip lines, such as low profile, small volume, and light weight. SIW structures are also well suited for the design of millimeter-wave circuits such as filters, resonators, and antennas [4][5][6][7][8][9][10][11]. In this article, an H-plane horn antenna is designed and studied as a new addition to the SIW family. Analytical analysis with optical principles [12][13] shows that the directivity of the horn antenna can be improved significantly by a dielectric lens. By shaping the dielectric substrate beyond the planar horn, a dielectric arc lens is conveniently added to improve the directivity of the antenna. To complete the integration, a simple transition to microstrip line is designed. The horn antenna, the microstrip transition and the arc lens, integrated on a single substrate, result in a very compact planar antenna. One Ka-band SIW horn antenna with a dielectric arc lens has been designed and implemented to experimentally demonstrate the improvement of the E-plane beamwidth. Figure 1 shows the geometrical structure of the proposed H-plane horn antenna constructed in SIW, which is composed of a uniform section of SIW with width W1 and length L1, and one horn section with length L2 - L1. The substrate height is h and the dielectric constant is εr. The E-plane and H-plane radiation patterns simulated with HFSS are illustrated in Figure 2. II. THEORETICAL ANALYSIS It can be seen that the 3 dB beamwidth of the E-plane radiation pattern in Figure 2(a) is about 160°, i.e., very wide, resulting in poor directivity. This can be explained as follows. An obvious discontinuity exists at the radiation plane (the end plane of the SIW horn antenna): the radiation plane is the interface between two different media, the substrate dielectric and the air. When the electromagnetic wave passes through the end plane of the substrate, outward refraction occurs, which widens the radiated beam. In addition, the simulated H-plane beamwidth is about 45°.
It is well known that an optical lens has a focusing effect, which can be used to improve the directivity of the SIW horn antenna. We can use the substrate dielectric itself as an arc lens integrated with the horn antenna. To demonstrate the focusing function of the dielectric arc lens, optical analyses are carried out below for two cases. A. Case 1 (R1 ≥ R2) Figure 3 is the schematic geometry of the H-plane horn antenna with an arc dielectric lens. B1B2 is the original radiation plane of the horn antenna, the slant length of the horn is R1 = O1B1, the radius of the arc lens is R2 = O2B2, 2φ0 is the horn angle, the incidence angle at the arc lens is θi and the refraction angle is θt. At the arc dielectric-air interface of the horn antenna, Snell's law of refraction holds for the electromagnetic waves (eq. (1)). When R1 ≥ R2, the geometry yields eq. (2); substituting eq. (1) into eq. (2) gives eq. (3), from which the angle φt between the refracted wave and the z-axis follows as eq. (4). In this case φt is smaller than the corresponding angle of the incident ray, which implies that the angle between the refracted wave and the z-axis is less than that of the incident wave at the interface of the arc lens. This indicates that inward refraction occurs at the arc surface of the dielectric lens; in other words, the dielectric arc lens produces a focusing effect and can thus enhance the directivity of the SIW horn antenna. B. Case 2 (R1 < R2) When R1 < R2, the analytical model of the horn antenna with the dielectric arc lens is illustrated in Figure 4. Following the same analysis procedure as above, the angle between the refracted wave and the z-axis is given by eq. (5). Obviously φt is now larger than the corresponding angle of the incident ray, which implies that the electromagnetic waves radiate and propagate away from the original direction. In this case a scattering phenomenon occurs at the arc surface of the dielectric lens, resulting in poor directivity of the antenna. The calculated angles between the propagation direction and the z-axis obtained from eqs. (4) and (5) show the following. When R1 = R2, φt = φ, indicating that the electromagnetic waves which pass through the arc lens propagate along the original direction and no refraction occurs. However, when R1 > R2, φt < φ, and inward refraction occurs at the interface of the arc lens. When R2 = 27.76 mm, φt = 0°: very pronounced inward refraction is observed, and the propagation direction of the refracted waves is along the z-axis. Furthermore, as the curvature of the arc lens increases further, φt < 0°. When R1 < R2, φt > φ: the angle between the scattered wave and the z-axis is bigger than that of the incident ray at the interface of the arc lens, so the electromagnetic waves propagate away from the z-axis of the horn antenna, resulting in poor directivity. Therefore, when designing the SIW horn antenna, it is necessary to set the slant length of the horn to be bigger than or equal to the radius of the dielectric arc lens. III. THE STRUCTURE OF THE SIW HORN ANTENNA WITH DIELECTRIC ARC LENS On the basis of the above analyses, a dielectric arc lens is fabricated in front of the horn and shares the same dielectric substrate with the horn antenna. In addition, the horn slant length is larger than the radius of the dielectric lens, so as to exploit the focusing effect of the dielectric lens and improve the directivity of the horn antenna. The simulated model of the antenna is illustrated in Figure 6. A microstrip line with a characteristic impedance of 50 Ω is used to feed the SIW horn antenna.
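The qualitative conclusion of the two cases above can be checked numerically with a simple two-dimensional ray trace. The sketch below is an illustration under stated assumptions, not the authors' equations (1)-(5): the substrate permittivity (here 2.2), the horn half-angle and the radii are placeholder values, and the arc is taken to pass through the edge of the original radiation plane. With these assumptions, a ray leaving the apex bends toward the z-axis when R1 > R2, is undeviated when R1 = R2, and bends away from the axis when R1 < R2.

```python
import numpy as np

EPS_R = 2.2  # assumed substrate permittivity (not stated in this excerpt)

def refract(d, n_hat, n1, n2):
    """Vector form of Snell's law. d: incident unit direction; n_hat: unit normal
    pointing against the incident ray; n1 -> n2: refractive indices."""
    r = n1 / n2
    cos_i = -np.dot(n_hat, d)
    sin2_t = r * r * (1.0 - cos_i * cos_i)
    if sin2_t > 1.0:
        return None  # total internal reflection
    return r * d + (r * cos_i - np.sqrt(1.0 - sin2_t)) * n_hat

def refracted_angle(phi_deg, R1, R2, phi0_deg, eps_r=EPS_R):
    """Angle from the z-axis (deg) of a ray leaving the horn apex at phi_deg,
    after refraction at an arc lens of radius R2 passing through the aperture edge."""
    phi, phi0 = np.radians(phi_deg), np.radians(phi0_deg)
    zc = R1 * np.cos(phi0) - np.sqrt(R2**2 - (R1 * np.sin(phi0))**2)  # arc centre on z-axis
    d = np.array([np.sin(phi), np.cos(phi)])                          # ray direction (x, z)
    t = zc * d[1] + np.sqrt((zc * d[1])**2 - zc**2 + R2**2)           # distance to the arc
    p = t * d
    outward = (p - np.array([0.0, zc])) / R2                          # outward surface normal
    d_out = refract(d, -outward, np.sqrt(eps_r), 1.0)                 # dielectric -> air
    return np.degrees(np.arctan2(d_out[0], d_out[1]))

for R1, R2 in [(10.0, 10.0), (10.0, 7.0), (7.0, 10.0)]:
    print(f"R1={R1}, R2={R2}: ray at 20 deg leaves at {refracted_angle(20.0, R1, R2, 30.0):.1f} deg")
# R1 = R2: 20.0 deg (normal incidence, no bending); R1 > R2: below 20 deg (focusing, Case 1);
# R1 < R2: above 20 deg (defocusing, Case 2).
```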
The tapered transition from microstrip to SIW has been optimized and the final dimensions of the transition are given below: the width of the microstrip line W2 = 3.8 mm, its length L2 = 2 mm, the length of the taper transition L3 = 10 mm, and the width of the broad end of the transition W3 = 5.32 mm. There are still many possible choices for the arc radius even when the precondition R1 ≥ R2 is guaranteed. The relation between the E-plane half-power beamwidth (HPBW) of the horn antenna and the height of the dielectric arc lens has been investigated numerically with HFSS and is listed in Table I. As we can see, the beamwidth of the radiation pattern depends on the arc height of the dielectric lens, i.e., a larger curvature of the arc lens results in a narrower beam. However, the arc height cannot exceed half of the horn width, beyond which the dielectric lens is no longer a proper arc and the radiation pattern deteriorates. For our hardware experiment, we have chosen the arc radius R = 7.175 mm and the arc height H = 6 mm. The final parameters used for simulation and experiments are given below. IV. NUMERICAL AND EXPERIMENTAL RESULTS An experimental prototype of the antenna was fabricated as shown in Figure 7. The simulated and measured H-plane and E-plane radiation patterns at 27 GHz, in the middle of the SIW passband, are illustrated in Figures 8 and 9. The measured gain is about 9 dB, close to the simulation result, and the front-to-back ratio is about -15 dB. The simulated and measured E-plane beamwidths are 75° and 65°, respectively. Compared with the E-plane beamwidth of the horn antenna without the dielectric arc lens shown in Figure 2 (the simulated E-plane beamwidth without the arc lens is about 160°), the directivity has been improved significantly. However, the E-plane pattern has high sidelobes; this is expected in view of the substrate thickness of h = 2 mm. Obviously, the sidelobes would be reduced if the substrate thickness h were increased. In addition, inaccuracies in fabricating the microstrip-to-SIW transition may also make a small contribution to the sidelobes. The simulated and measured H-plane beamwidths are 35° and 31°, respectively. Compared with the H-plane beamwidth in the case without the dielectric arc lens (about 45°), the change of the H-plane beamwidth resulting from the introduction of the arc lens is not large. The decrease of the H-plane beamwidth may result from the change of the phase difference of the electromagnetic waves when the arc lens is added; the change in the size of the radiation plane may also affect the H-plane beamwidth. As shown in Figure 10, a bandwidth of 11% is measured for the return loss S11 at the -10 dB limit. The measured return loss is a little better than the simulation; this probably owes to conductor and radiation losses due to fabrication inaccuracies of the transition from the microstrip to the SIW in Figure 7, as mentioned above.
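A quick consistency check on the chosen lens dimensions is possible from circle geometry. Treating the quoted overall width of about 14 mm as the horn aperture width (an assumption on my part), the chord spanned by an arc of radius 7.175 mm and height 6 mm comes out very close to that width, and the arc height respects the half-width limit noted above; the sketch below is only this geometric check, not a design formula from the paper.

```python
import math

def arc_chord(radius_mm: float, sagitta_mm: float) -> float:
    """Chord width of a circular arc with the given radius and height (sagitta)."""
    return 2.0 * math.sqrt(2.0 * radius_mm * sagitta_mm - sagitta_mm ** 2)

R, H = 7.175, 6.0                       # lens radius and arc height chosen above
print(round(arc_chord(R, H), 2))        # ~14.16 mm, close to the ~14 mm substrate width
print(H <= 14.0 / 2)                    # True: arc height within half the horn width
```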
V. CONCLUSION A novel H-plane SIW horn antenna with a dielectric lens has been designed and implemented to improve the directivity of the horn antenna. The design allows a completely integrated planar platform of the horn antenna and its feeding structure on the same substrate without any mechanical assembly or tuning. The analysis with optical principles has been carried out, demonstrating the focusing effect of the dielectric arc lens. For good directivity, it is necessary to have the precondition R1 ≥ R2 in the design of the horn antenna and the dielectric lens. The results have indicated that the introduction of the dielectric arc lens can narrow the E-plane beamwidth significantly, while having little effect on the H-plane beamwidth. Agreement between the measured results and the HFSS simulations has been observed for our fabricated sample of the integrated antenna. However, some obvious ripples have been observed in the measured E-plane radiation pattern. Increasing the thickness of the substrate and improving the fabrication may reduce these ripples. In addition, at the interface between the horn and the lens, the field guided between two metal plates is transformed into a field radiating in three dimensions; this may also contribute to the ripples of the E-plane radiation pattern, and further theoretical analysis will be carried out to investigate it. With its compact integration and small size, this new scheme is well suited for antenna design at microwave and millimeter-wave frequencies.
2019-04-12T13:58:07.218Z
2007-09-01T00:00:00.000
{ "year": 2007, "sha1": "a2a7ffcda423aec4817232cec2d587c8169c824e", "oa_license": null, "oa_url": "https://doi.org/10.1002/mmce.20237", "oa_status": "GOLD", "pdf_src": "ScienceParsePlus", "pdf_hash": "9a4a9a9a247e686bf09a52389368563d2cc1bf32", "s2fieldsofstudy": [ "Engineering", "Physics" ], "extfieldsofstudy": [ "Materials Science" ] }
216413269
pes2o/s2orc
v3-fos-license
Health Literacy in Younger Age Groups: Health Care Perceptions: Informed People Will Be More Prepared People Background: Young people and adolescents are increasingly using digital platforms for various purposes, including health-related ones, but this does not necessarily mean that they consider health information important or understand it. Objectives: Exploratory study with 51 individuals aged 17 to 25 years to ascertain their perception of health issues. Methodology: For this study, a 10-question questionnaire survey was developed and distributed online via the Facebook social network to 51 male and female adolescents, aged 17 to 25, living in the Greater Lisbon area, all college students. Conclusions and Relevance: Young people want to know about their health, but feel that they should do this research by themselves. On the other hand, their health information search and use skills show shortcomings in both access to reliable information and its processing and understanding. Introduction Low literacy and illiteracy in health are very frequent and affect health significantly, in a "silent epidemic" of current societies [1]. Quenzel et al. [2] demonstrate the intrinsic link of literacy to health behaviors, with increasing inequalities among youth with lower literacy [2]. Methods This is an exploratory cross-sectional convenience study applied to 51 university students between the 7th of May and the 5th of June 2018. The convenience sample (N = 51) includes individuals aged between 17 and 25 years old, male and female, living in the Greater Lisbon area. The inclusion criterion was being a university student; the exclusion criterion was being a non-university student. A total of 74.5% of the young people are aged 17 to 20 and 25.5% are aged 21 to 25 years. The questionnaire survey was disseminated online on social networks (via Facebook) and consisted of 10 closed questions (Table 1 and Table 2). Because this is a very small convenience sample, there are limitations on the replicability of the results to similar populations. However, the answers to the questionnaire given by this population bring some interesting records to develop later. The language used in the questionnaire was intentionally clear, simple and assertive [13] [14], familiar, with first-person address, straightforward questions and simple wording, in order to meet plain-writing guidelines, which call for clear, concise, well-organized writing that follows best practices appropriate for the subject or field and the public, and which advise language at the 8th-grade level [15]. The health-related questions focus on: 1) how the respondents consider and care about their health; 2) what they know about health, focusing on areas of health literacy associated with knowledge (P7), understanding (P9) and use of information (P7). The method applied was based on a questionnaire survey. The survey is a quantitative technique that aims to achieve specific objectives by collecting data more extensively than in depth and quantitatively. The validity of the results will be verified if the questionnaire reflects the concepts (Bryman, 2012, p. 47). The objective was to evaluate the students' perception of their health and the way they take care of it. It was also intended to ascertain the existence or not of information search and information gathering behaviors, as well as to know whether there are relational vectors, such as family, in these young people's health information seeking.
Results Young people want to know about their health, but feel that they should do this research by themselves. On the other hand, health information research and use skills demonstrate a failure in both access to reliable information and processing and understanding. We obtained 51 responses from 17 -25 years old (Figure 1), male and female, resident in Greater Lisbon. When asked what it meant to them to be healthy, "84.28% responded according to the official WHO definition, i.e. they choose the option" to be healthy is to live in a good physical and mental mood, "while 19.6% replied that it is to have no complaints, such as discomfort and pain", 5.88% said that "it is the opposite of being sick". A total of 84.28% of the sample consider that to be healthy is to live with good physical and mental disposition and not only have no diseases. The "how often do you go to the doctor" question showed that approximately 53% go to the doctor only when they have complaints, although 11.76% reported going every 6 months. It would be important in a later study to check the reasons for this visit to the doctor twice a year. When dealing with adolescents and young people beyond any vaccination plan, it is important to check what type of health care these young people looks for twice a year. In the evaluation made to understand which health institutions or organizations they knew, information and knowledge was dispersed by a set of institutions that, according to the order represented that a total of 25.48% knew the INEM (National Institute of Medical Emergency), 17.64% knew the WHO (World Health Organization), the General Health System (DGS) with 15.68%, "other" with a total of 13.72%, and with 11.76%, the portal of the national health system. When asked if they feel the need to have a doctor to advise them on their health, 41.16% answered "yes" although 21.56% answered that they do not usually have questions about their own health and 21.56% chose the internet option to clarify the doubts. To know more precisely how young people are informed about their health and reading habits, the question was: Do you read about health? Approximately 47% selected "yes" and almost 43.50% selected "rarely" and about 10% answered "no never". In an exploratory study of 5 dozen young students, it represents almost a split in half between those who have reading habits and those who do not. It is necessary to further investigate the social determinants that may influence these reading and information research habits, one of the ways to improve the health literacy of the population. Interestingly, although 43.50% do not have health reading habits and therefore probably use other means (digital, audiovisual) to know the medical information available to young people, the vast majority chose favorably, selecting the option that there is a lot of medical information directed to young people (43.12%) and also feel that they have the information they need and when they need it (49%). Question 9 (Table 3) focused on personal concerns: To the following areas, which one deserves your most attention? Overall well-being came first with 33.32%, followed by stress (19.60%) and then healthy eating (15.68%) and regular sport (11.76%). Living outdoors and being informed about your health had the lowest rating, with only 3.92% choosing these two items. Finally, the 10th question focuses on whether the young student has direct family members who are health professionals? 
If so, do you feel that this person or persons have an influence on your general health knowledge? A total of 70.56% say ... Discussion and Conclusion Since the target population is a group of adolescents and young people, it is understandable that they do not have a "scientific" understanding of health, i.e. that they understand health only as the absence of symptoms, ignoring that there may be silent or slowly developing diseases which, although without symptoms, are latent and may affect health. There was some disagreement between the question "how often do you go to the doctor?" (P4) and the question "do you need a doctor to advise you on your health?" (P6), because while over 50% only go to the doctor when they have complaints, 41.16% say they feel the need to have a doctor to advise them. Further study will explore whether this doctor who advises them is a family doctor, a private doctor or possibly a doctor from an emergency department. Adolescence is a stage of human development marked by countless changes that occur both in the physical aspect and in the psychosocial sphere [16]. Health information is one of the most searched topics online: 8 out of 10 Internet users report having searched for health information online at least once, making it the third most popular activity on the web, after reading e-mail and using search engines [17]. Access to e-Health information is available to all who have Internet access; however, access to e-Health does not by itself guarantee the ability to discern accurate, truthful, good-quality health information [18]. On the other hand, the health professional who assists adolescents must know various aspects, such as attitudes and beliefs about adolescence in the family and social dynamics, as well as understand the role that he or she will play in the care of adolescents and their families [16] [19]. It is imperative to stimulate the search for correct and appropriate information by young people through school programs, in partnership with health institutions, which aim to promote health literacy skills, namely the search for online health information [20]. Health information is available through various internet platforms, such as the Health Portal or the General Health System, and there are ways to obtain it by other means (books, television, magazines, newspapers). Coleman [21] states that the health needs of younger people are associated with "obstacles faced by young people in their search for health counseling and treatment" (p. 532): there are many reasons inhibiting young people from visiting a doctor, or factors that generate anxiety among those who do, including concerns related to confidentiality, difficulty in getting a medical consultation, and a general feeling that only a few family doctors are interested in adolescents' problems [21]. We also find it useful in this communicative health process to make use of the ACP health communication model [14], not only in contact with young adolescents, but also in the communicative approach to the contents and forms of information provided: these digital, interactive platforms
should preferably have human and digital channels of contact based on language, attitudes and assertive behaviors by health professionals and those providing this digital health information, as well as the need for clarity of language, simplifying and explaining the technical jargon by language easily understood by these young people and by the positive action that is intended to be achieved, which unfolds in clear, understandable and stimulating steps. On the other hand, information and communication technologies (ICTs) play an increasingly important role in health systems because of their potential benefits for citizens and are therefore a good way to interact with some audiences, particularly young people, who are the largest users of digital platforms [22]. Assuming that schooling is a good indicator of health literacy within the Eu- If literacy generally refers to the basic skills needed to operate in society, health literacy is more complex and requires additional skills, including those needed to find, evaluate and integrate health information in a variety of contexts, and requires health-related vocabulary knowledge and a health system culture [23]. It implies that the patient accesses, understands, evaluates, interprets, uses and manages a series of elements that are part of the health context: labels, prescriptions, forms, signs, medication, package inserts, including the system navigability itself (Health literacy: Report of the Council of Scientific Affairs, 1999). WHO (1998) understands health literacy as the cognitive skills that define an individual's ability and motivation to access, understand and use information to promote and maintain good health. Sorensen et al. [24] and, after the results of the European Health Literacy Questionnaire (2012), both underline that health literacy relates to the development of knowledge, skills and motivations of individuals to better access, evaluate, interpret, understand and use the health system in order to make informed decisions to maintain your health throughout your life cycle. Therefore, in these definitions, there is a consensus that is the ability of the individual to act on in- [11]. Rudd et al. [12] state that health literacy may be a contributing factor to the large imparity in the quality of health care that many receive (p. 1). With the digital and audiovisual media that are available today and entering In the qualitative study by Gray et al. [20], with twenty-six Focus group with 157 adolescent students, 11 -19 years old, conducted in a convenience sample in high schools in various geographic and socioeconomic contexts in the United Kingdom and the United States of America between May From 2001 to May 2002, the results pointed to students' difficulties in accessing health information online. Health literacy challenges included at the functional level [29], for example, the correct spelling of medical terms and the ability to ask questions that accurately describe symptoms. Challenges at an interactive level included the appropriate use of health information to address personal health issues within their knowledge networks. Critical literacy challenges already included discerning the relevance of information that was obtained through search engines and knowing which sites to trust. Among the results obtained, there was difficulty for young people in the level of health literacy, whether functional, interactive or critical. 
Even as part of the health curriculum, the Internet may offer opportunities to identify these deficiencies and help build better health skills literacy among adolescents. Before the Internet, it was difficult for non-specialist audiences to access health information because it was found mostly in medical books and magazines [17]. Since the emergence of the World Wide Web (www), referred to as the Internet, the number of health information searches online has grown remarkably. Today, the Internet represents an important source of health-related information. International studies show that up to 72% in the US [30] and up to 71% in Europe [31] of Internet users do health-related research. According to most studies, the main reasons for researching health information on the Internet are specific illnesses or health problems [30]. The literature suggests that due to the increasing dissemination and use of online health information, patients are more empowered and the doctor-patient relationship has become increasingly participatory [32]. Empirical studies have shown that informed patients are more compliant, which contributes to better health outcomes [33] [34]. In addition, health costs can be reduced because informed citizens use health services more efficiently [34]. In adolescence, doubts arise regarding a large number of situations [35] including aspects related directly or indirectly to health. Doubts include diet [36], sport [37], sexually transmitted diseases [19], hormonal changes [38], consumptions and additions [39], sleep [40], among others. Norman and Skinner [41] point out that with the huge amount of health information on the Internet, this task requires much more ability to interpret and demonstrate than simply being able to enter a disease name or medical term into any browser, like Google or Bing. The authors [41] stress that when using the Internet as a medical education resource, consumers must reach a point of critical analysis and discriminating between primary and secondary sources of health information. In order to transmit information to young people effectively, clearly and productively, it is important to use the media that is most used by all in these younger population segments where digital information reigns [28]. A study by Garcia and Hansen [28] found that online access to a credible source of health information is associated with higher levels of health literacy. The inclusion of sites with credible health information in school curricula is a promising approach to promoting health literacy in young people. The simplicity of language with the simplification of desired content and associated with what effectively young adolescents consider appealing in learning, and which necessarily relates to digital platforms, may be the keys to a better understanding and adherence to these health contents. In a wider range that bridges the gap between information research and the need for problem solving, evidence and literature show that people do not rationally process all the information that is available [42], and therefore they sometimes make "shortcuts, forms of inference that require little effort" [43]. Motivational and affective aspects must intervene as much as cognitive processes. Alvaro & Garrido [43] highlight the interdependent nature of behavioral processes and, following Mead, Vygostsky and Bartlett, [43] (p. 
261), state that "the contents of the mind are not the product of information processing, but the result of interpretative processes that have a cultural origin and that we learn in the course of social interaction". Young people, in their various roles as students, children, friends, athletes and patients, therefore stand to gain from a vision of health systems that is better suited to their needs, attitudes and understanding of the universe around them. Bandura [44], from his social cognitive perspective, highlights the important role of modeling and environmental influences, whereby the individual, within their environment and context, becomes an agent of their own behavior. The right path seems to involve the following steps:

1) Incorporating a credible online health information resource into school health education curricula, which is important for promoting youth health literacy [28].

2) Making sources of health literacy available through various means and in a step-by-step manner, to enable greater understanding and self-efficacy not only in young people but throughout society. Health-related activities take place in a wide variety of environments (home, work, community health institutions) and may involve a wide range of activities related to family, community, economic, leisure and safety issues. A parent measuring a child's temperature, a worker reading about correct material handling procedures, a consumer calculating the difference in salt content on the labels of two canned vegetable brands, a patient reading about dental options and an older adult filling out a health insurance form are all health-related tasks in different settings, for different purposes and with different types of materials (National Academies Press, 2004).

3) Ensuring the credibility of sources: useful eHealth information should be defined and disseminated from trusted medical sources, such as government organizations (DGS, National Institutes of Health, Centers for Disease Control and Prevention), medical institutions and experts promoting credible health information, and clearly distinguished from opinion or unreliable claims, given what is at stake for the health of individuals and the population, especially the young.

4) Making consumers more critical in their internet research, through better health education.

5) Including health literacy subjects in school programs. Nowadays, some activities related to health promotion in children and young people are already present in basic and secondary level schools [45].

6) Developing the skills of parents, teachers, librarians and health professionals to better access, understand and use health information, beyond the often anonymous responsibility of "the system".

Teenagers and young people stand out in terms of their use of digital tools (computer use, and the web through their social networks), as they make the most use of digital platforms ([22], p. 8), but they may have difficulties in various other areas, such as accessing, understanding and using the information contained in these platforms, and thus show deficits in health literacy. The actual results are still far from being evaluated. Much more human investment will still be needed, together with a better understanding of how adolescents and young people can access, understand and use health information for their own benefit and that of their peers, and with family and community involvement.
Ethical Clearance
Participants' data privacy and confidentiality were maintained at all times, since the questionnaire included no items requesting personal data such as date of birth, telephone number, email address, home address, name, affiliation, family history or any other kind of sensitive data.
Smokeless and combustible tobacco use among 148,944 South Asian adults: a cross-sectional study of South Asia Biobank

Introduction: Tobacco use, in both smoking and smokeless forms, is highly prevalent among South Asian adults. The aims of the study were twofold: (1) describe patterns of smokeless tobacco (SLT) and combustible tobacco product use in four South Asian countries stratified by country and sex, and (2) assess the relationships between SLT and smoking intensity, smoking quit attempts, and smoking cessation among South Asian men.

Methods: Data were obtained from the South Asia Biobank Study, collected between 2018 and 2022 from 148,944 men and women aged 18 years and above, living in Bangladesh, India, Pakistan, or Sri Lanka. Mixed effects multivariable logistic and linear regression were used to quantify the associations of SLT use with quit attempts, cessation, and intensity.

Results: Among the four South Asian countries, Bangladesh has the highest rates of current smoking (39.9% for men, 0.4% for women) and current SLT use (24.7% for men and 23.4% for women). Among male adults, ever SLT use was associated with higher odds of smoking cessation in Bangladesh (OR, 2.88; 95% CI, 2.65, 3.13), India (OR, 2.02; 95% CI, 1.63, 2.50), and Sri Lanka (OR, 1.36; 95% CI, 1.14, 1.62). Ever SLT use and current SLT use were associated with lower smoking intensity in all countries.

Conclusions: In this large population-based study of South Asian adults, rates of smoking and SLT use vary widely by country and gender. Men who use SLT products are more likely to abstain from smoking compared with those who do not.

Supplementary Information: The online version contains supplementary material available at 10.1186/s12889-023-17394-w.

Introduction
Tobacco use accounts for 8.7 million deaths globally and remains one of the leading risk factors for chronic diseases [1]. In South Asia, a large proportion of male adults are current smokers; smoking prevalence is reported to be 46.2% in Bangladesh, 19.1% in India, 21.5% in Pakistan, and 29.4% in Sri Lanka [2, 3]. In contrast, smoking among South Asian females is far less prevalent [3]. Smokeless tobacco (SLT) is used in over 140 countries globally [4] but is particularly popular in South Asia. India alone is home to 66.6% of the world's 356 million SLT users [4]. SLT use is common among women in South Asia, especially in Bangladesh (33%) and India (18.4%) [2, 5]. A substantial proportion of South Asian male adults use smokeless and combustible tobacco products concurrently (i.e., dual use) [5, 6]. A recent study found that 9.5% of Indian men and 12.5% of Bangladeshi men were dual users based on data from the Global Adult Tobacco Survey (GATS) 2009 [5]. More recent data from nationally representative surveys on patterns of SLT use and smoking are lacking from South Asia.
Current evidence from developed countries, mostly Sweden, suggests that SLT use is less harmful than smoking [7, 8] and that a comprehensive substitution of smoking with smokeless tobacco might result in reduced harm from tobacco use [8]. Thus, at the population level, a net public health benefit could be expected if SLT products promote smoking cessation [9]. However, these assumptions have been questioned in the South Asian context [5, 10, 11]. SLT use has been linked to several types of cancer, [12, 13] cardiovascular disease, [14-17] and adverse birth outcomes when used during pregnancy; [18-21] and the effects vary by geographical region due to the wide variety in SLT products and ways of consumption [22]. Moreover, SLT products are known to contain similar or higher nicotine levels than cigarettes, [23] and they may be used in situations where smoking is not allowed, possibly prolonging nicotine addiction and making quitting smoking more difficult. The toxicity profiles and health effects of SLT products found in South Asia (often homemade or manufactured by small businesses) are less clear compared with Swedish snus [8].

Evidence is mixed on whether SLT use facilitates smoking cessation. Studies from Sweden revealed that SLT may be as effective as or even more effective than medicinal nicotine formulations in helping smokers quit [24-26]. A declining smoking prevalence coinciding with an increased use of snus was documented in Sweden [27, 28]. However, studies from the US reported that SLT use does not promote smoking cessation, [29, 30] indicating that the relationship may depend on specific cultural and historical factors [29]. Limited data are available on SLT use and smoking cessation in lower- and middle-income countries (LMICs), particularly South Asia, where the vast majority of the world's SLT users reside [31].

Monitoring patterns of smokeless and combustible tobacco use and understanding the association of SLT use with smoking intensity, quit attempts, and cessation are important for evaluating the implications of SLT products for population health in this region. Using large-scale population data from the South Asia Biobank (SAB) Study, comprising 148,944 male and female adults from Bangladesh, India, Pakistan, and Sri Lanka, we aimed to (1) describe patterns of SLT and combustible tobacco product use stratified by country and sex, and (2) assess the associations of SLT with attempts to quit smoking, smoking cessation, and smoking intensity among men.

Data source
The SAB is a large comprehensive biobank of South Asian individuals in Bangladesh, India, Pakistan, and Sri Lanka that was established to examine risk factors of diabetes, cardiovascular disease, and other chronic diseases. A detailed description of the study design and data collection was previously published [32]. The data from 149,051 adults aged 18 years and above used for the present study were collected between 2018 and 2022. Participants were recruited from 244 surveillance sites that were centered on local primary community health care units in four South Asian countries. To identify the resident population, government census data and available household listings were used, together with house-to-house visits by research teams and local primary care workers.

• Contemporaneous data on smokeless and combustible tobacco use patterns in South Asia are limited. Little is known about whether SLT use is associated with smoking cessation in South Asia, where the vast majority of the world's SLT users reside.
• This study provides information on patterns of SLT use and smoking in four South Asian countries using a large population-based sample of 148,944 adults collected between 2018 and 2022. We observed that SLT use was associated with higher smoking cessation and lower smoking intensity among men. Strengthening SLT product regulation may have important population health implications given the changing tobacco use landscape in South Asia.

Keywords: Smokeless tobacco, Smoking behavior, South Asia

The Bangladesh and Sri Lanka samples were drawn from nearly all divisions/provinces/states, the India samples were drawn from New Delhi and Chennai, and the Pakistan samples were drawn from Punjab. Residents who were pregnant, temporary residents (resident for less than 12 months), those planning to leave the surveillance site within the next 12 months, and those with a terminal illness were excluded. The overall response rate across all surveillance sites was 67.9% in the early stage of data collection [32]. A rich set of demographic, lifestyle (derived from the WHO STEP questionnaire), clinical, and environmental data was collected, along with biological samples. After excluding respondents with missing data on age (n = 2) and smoking (n = 12), and those with a sexual identity of "other" (n = 93), the analytic sample for the present study was 148,944 (Supplementary Figure). The study was approved by the Imperial College London Research Ethics Committee and also the relevant ethics committees of partner institutes in Bangladesh, India, Pakistan and Sri Lanka. Informed written consent was obtained from all participants.

Variables
Current SLT use was defined by a yes response to the question, "Do you currently use any snuff, chewing tobacco or betel daily?". Similar to smoking, ever SLT use was determined by a yes response to the question "In the past, did you ever use snuff, chewing tobacco, or betel daily?" or a yes response to the previous question regarding current use status. Respondents who reported using betel without tobacco were considered non-users. The intensity of SLT use among current users was determined by the question "On average, how many of the following products do you use each day?". In cases of multiple product use (8.4% of all SLT users), the numbers of each product were added up. The participants were then divided into three categories (0-3, 4-5, 6+ sessions/day).

Current smoking was defined by a yes response to the question "Do you currently smoke any tobacco products daily, such as cigarettes, cigars or pipes?" Non-current smokers were asked whether they had ever smoked any tobacco products daily in the past. Those who answered yes to the follow-up question, along with those who currently smoked, were considered ever smokers. An attempt to quit smoking among current smokers was recorded by the question "During the past 12 months, have you tried to stop smoking?". Smoking cessation was determined if a participant had ever smoked any tobacco products daily, currently does not smoke (i.e., a former smoker), and had quit smoking for at least one year. Smoking intensity among current smokers was determined by the question, "On average, how many of the following products do you smoke each day?". In cases of multiple product use (3.7% of all smokers), the numbers of each product were added up. Based on reported current vs.
non-current use of smokeless and combustible tobacco products, respondents were also classified into one of four use patterns: nonuse, exclusive SLT use, exclusive smoking, and dual use. Covariates included age (in years), gender (men, women), and education (no formal schooling, primary school, secondary school, college and above).

Statistical analysis
We began by conducting a descriptive analysis of sociodemographic characteristics and of smoking and SLT use patterns stratified by gender and country. All estimates were weighted by poststratification sample weights, which were calculated for each country based on the age and sex distribution as estimated by the United Nations Population Prospects 2022 Revision, [33] using the STATA IPF-WEIGHT procedure [34]. Using an iterative proportional fitting algorithm, first proposed by Deming and Stephan, this approach performs a stepwise adjustment of survey sampling weights to achieve known population margins [34, 35].

Random effects multiple variable-adjusted logistic regression models were used to obtain odds ratios and 95% confidence intervals, quantifying the associations of SLT use with attempting to quit and with smoking cessation, adjusting for covariates. Linear mixed models were used to assess the associations between SLT use and smoking intensity among current smokers. Surveillance site was incorporated as a random effect to account for clustering and correlation within geographical locations [36]. All models were adjusted for age, sex, education, and study site, by being incorporated as a covariate, by stratification, or as a random effect. We opted for a conservative covariate adjustment strategy to avoid conditioning on potential mediators, consistent with previous studies [30, 37, 38]. P-values for trends of SLT use intensity were calculated treating SLT use intensity as a continuous variable. Country-specific results were presented given significant heterogeneity by country.

Due to the small number of smokers in the female sample, the analyses assessing the association between SLT use and smoking behavior were restricted to males only. In addition, participants from Pakistan were excluded from the analysis due to the small number of current SLT users and unstable estimations (for example, among male current smokers, only 24 adults currently used SLT products). While smoking cessation applies to ever smokers, making a quit attempt was applicable to current smokers only.

Exploratory analyses were conducted to assess potential differences by SLT product type (i.e., betel, chew, snuff, and multiple types) by assessing improvement in model fit and similar procedures for testing heterogeneity by country. Analyses were conducted using Stata 17 (STATA Inc.), and statistical significance was defined as p < 0.05 (two-tailed).
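The post-stratification weighting step described above can be illustrated with a small worked example. The sketch below is a minimal, generic implementation of iterative proportional fitting (raking) in Python rather than the Stata IPF-WEIGHT procedure used in the study; the survey cell counts and population margins are hypothetical.

```python
import numpy as np

def rake_weights(sample_counts, row_margins, col_margins, tol=1e-8, max_iter=100):
    """Iterative proportional fitting: scale a table of sample counts so its
    row and column totals match known population margins."""
    w = sample_counts.astype(float).copy()
    for _ in range(max_iter):
        # Adjust rows to match the row (e.g. age-group) population totals
        w *= (row_margins / w.sum(axis=1))[:, None]
        # Adjust columns to match the column (e.g. sex) population totals
        w *= col_margins / w.sum(axis=0)
        if np.allclose(w.sum(axis=1), row_margins, rtol=tol) and \
           np.allclose(w.sum(axis=0), col_margins, rtol=tol):
            break
    # Per-respondent weight = calibrated cell total / observed cell count
    return w / sample_counts

# Hypothetical survey cell counts (rows: ages 18-39, 40-59, 60+; columns: men, women)
sample = np.array([[500, 700], [400, 600], [150, 250]])
# Hypothetical census margins for the same age groups and sexes
age_totals = np.array([3.0e6, 2.2e6, 0.8e6])
sex_totals = np.array([3.1e6, 2.9e6])

weights = rake_weights(sample, age_totals, sex_totals)
print(weights)  # weight assigned to each respondent in the corresponding cell
```

The stepwise row and column adjustment mirrors the Deming and Stephan algorithm referenced above; real applications typically add trimming and convergence diagnostics.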
Results
Sample characteristics and tobacco use behaviors, stratified by sex and country, are reported in Table 1. On average, participants from Sri Lanka appeared to be older compared with the other countries, and participants from India had the highest proportion of adults with a college education or above. Among men, participants from Bangladesh had the highest prevalence of current smoking (39.9%), the highest prevalence of current SLT use (24.7%), the youngest age of smoking initiation (18.9 years), and the highest proportion attempting to quit smoking in the previous 12 months (55.6%). Bangladeshi women had the highest prevalence of SLT use (23.4%). The mean age of first SLT use tended to be higher than that of smoking. The predominant SLT product type used by current SLT users varied by country. While the most commonly used SLT product in Bangladesh and Sri Lanka was betel, snuff was used by about 70% of Indian adults.

The difference by sex in current smoking was substantial: while the proportion of men who were currently smoking ranged from 11.5% in Pakistan to 39.9% in Bangladesh, only 1% or less of women were current smokers in the region. In addition, women tended to have an older age of tobacco use initiation.

Table 2 shows patterns of smoking and SLT use, including concurrent use, stratified by sex and country. Substantial differences in patterns of tobacco use across countries were observed. Among men, the prevalence of exclusive smoking (ranging from 11.3% in Pakistan to 31.6% in Bangladesh) tended to be higher than that of exclusive SLT use (ranging from 1.1% in Pakistan to 16.5% in Bangladesh). Bangladeshi men had the highest rate of dual use (8.3%); only 0.2% of Pakistani men were dual users. Consistent with the low rates of smoking, exclusive smoking and dual use among women were rare.

Among adult men who had ever smoked, the majority were current exclusive smokers, and the proportions of male adults who may have transitioned from smoking to exclusive SLT use were 13.1%, 10.2%, 6.6%, and 0.6% in Bangladesh, Sri Lanka, India, and Pakistan, respectively. On the other hand, among those who had ever used SLT products, the majority were current exclusive SLT users. Exclusive smoking among ever SLT users, which may indicate transitioning from SLT use to exclusive smoking, seems uncommon in both the male and female samples. The highest proportion of exclusive smoking among ever SLT users was found in Indian men (5.4%). It was rare among adults in Pakistan and Sri Lanka.

Turning to SLT use and smoking intensity among male current smokers (Table 4), ever and current SLT use was associated with lower smoking intensity in all three countries. Higher SLT use intensity was associated with lower intensity of smoking in Bangladesh (p-value for trend < 0.001) but this was not observed in India and Sri Lanka. Compared with non-current SLT use, smoking intensity among current SLT users was 1.38 units lower (95% CI, -1.74, -1.03) in Bangladesh, 1.62 units lower (95% CI, -2.33, -0.92) in India, and 0.47 units lower (95% CI, -0.90, -0.55) in Sri Lanka.
In a secondary analysis testing differences by SLT product types, we found a significant improvement in model fit.

Discussion and conclusion
In our analysis of contemporary, large, population-based data on 148,944 South Asian adults, rates of smoking and SLT use varied widely by country and sex. Bangladeshi men and women had the highest rates of current smoking and SLT use. Among adult males, ever SLT use was associated with a higher likelihood of abstaining from smoking. Higher SLT use intensity was associated with higher smoking cessation. Smoking intensity appeared to be lower among SLT users compared with nonusers. Tobacco use, in both smoking and smokeless forms, is highly prevalent in South Asia and remains a major risk factor for chronic diseases [1]. Dual use of both smoking and SLT represents an important public health challenge, because dual use may be motivated by intentions to reduce or quit smoking or to circumvent smoking prohibition laws, and the health effects of dual use could be synergistic [16, 39]. Our results suggest that dual use is rare among female adults and varies widely among male adults across countries, ranging from 0.2% in Pakistan to 8.3% in Bangladesh. Few studies have reported the concurrent use patterns of smoking and SLT in South Asia. A study based on the GATS 2016-2017 reported a dual use prevalence of 6.3% among male Indian adults [6].

Another study based on the 2009 GATS data reported a dual use prevalence of 12.5% among male Bangladeshi adults [5]. Our estimates appeared to be lower compared with these numbers, which may be explained by a secular decline (as our data were collected between 2018 and 2022), a different definition of current use (current use in the GATS studies included both current daily and non-daily use, while our study included current daily use only), and differences in sample composition. The continuous monitoring of these trends is important. Our results show that SLT use is associated with smoking cessation. While little evidence is available from this region, our findings are consistent with several published studies [39]. However, previous studies also indicated that dual users may be less likely to abstain from all tobacco products due to continuing SLT product use [39]. Consistent with this observation, our descriptive results showed that very small proportions of adults were former SLT users, indicating low rates of SLT product cessation among ever SLT users.
A closely related question regarding the public health implications of SLT use is whether SLT facilitates a pathway into smoking (i.e., a "gateway effect") [40]. While this question is best examined with longitudinal studies, our cross-sectional data provided some important insights. First, the average age of first SLT use was greater than that of first smoking, and 85% of our sample started smoking at the same time as or prior to SLT use. Second, current exclusive smoking among former SLT users was much less common than current exclusive SLT use among former smokers. These findings favor a general tendency of switching from smoking to SLT use rather than the other way around in this sample of South Asian adults.

Taken together, these findings are consistent with previous population-based studies showing that SLT use is replacing smoking in many South Asian countries [5, 41]. In Bangladesh and India, for example, exclusive SLT use has increased along with declining smoking rates, while dual use has remained relatively stable or declined slightly [26]. Despite the findings showing a positive association between SLT use and smoking cessation in the present study, using SLT as a harm reduction approach to tobacco control in South Asia remains uncertain in this context. Several studies have cautioned against considering SLT a safer alternative to smoking in the South Asian context [5, 10, 11]. SLT products found in South Asia are often homemade or manufactured by small businesses, a virtually unregulated market [22]. These SLT products may contain higher levels of harmful and potentially harmful constituents relative to Swedish snus, [22, 42] and the relative risks of SLT use tend to be higher in South Asia compared with Europe and North America [43]. Coupled with high prevalence, SLT use accounts for a substantial burden of disease in South Asia [44]. Out of a global disease burden of 348,798 deaths and 8,691,827 disability-adjusted life years (DALYs) attributable to SLT use, India alone accounts for 70%, Pakistan for 7% and Bangladesh for 5% [44]. In addition, SLT as a replacement for smoking is irrelevant for South Asian women, given that SLT is already the predominant tobacco product among them [45].
Tobacco smoking is among the leading causes of premature death globally [1]. The negative health effects of smoking are numerous, affecting smokers as well as those who are exposed to secondhand smoke [13, 46]. The toxicity profiles and health effects of SLT products used in South Asia are less clear compared with Swedish snus, which some reports estimate to be approximately 5% as harmful as cigarettes [7]. The population health implications of the changing tobacco use landscape in South Asia, with increased SLT use along with declining smoking prevalence, will at least partly depend on how SLT products are regulated. Careful profiling of harmful and potentially harmful constituents of SLT products and establishing standards for allowable levels of harmful ingredients are pertinent regulatory actions to protect public health, especially for the socioeconomically disadvantaged groups which are disproportionately affected by SLT use [4, 47]. More research is needed to quantify the levels of acute and long-term exposure to tobacco harmful and potentially harmful constituents and the health effects associated with SLT use in this region.

The study provided an important update on SLT and smoking patterns in South Asia and, among the first, assessed the association between SLT use and smoking intensity and cessation in South Asia. A major strength of the study is the large sample size collected between 2018 and 2022 with standardized measurements across all four South Asian countries. The number of respondents with missing data on study variables was small. Nevertheless, the study has several limitations. First, the SAB used a modified version of the WHO STEP questionnaire to measure tobacco use behavior. The original questionnaire does not allow identification of nondaily tobacco users, which could have several implications for the interpretation of the results. To begin with, current tobacco use was defined as "daily use", which may have resulted in lower estimates of smoking and SLT prevalence, including dual use, by excluding non-daily users. Nevertheless, previous studies (e.g., Sreeramareddy & Aye 2021 [48]; Mutti et al. 2016 [49]) have shown that among current tobacco users, current daily use is the predominant pattern in South Asia. For example, among male current smokers in Bangladesh about 91% were daily users in the 2017 GATS survey; similarly, among current smokeless tobacco users, about 94% were daily users in India and Bangladesh. Moreover, nondaily SLT users, albeit a minority among current users, may have different characteristics and patterns of tobacco use. A previous study from Bangladesh showed that daily dual users were more likely than nondaily dual users to report past attempts and future intentions to quit [50]. Second, tobacco use behaviors were self-reported, so some misclassification was likely unavoidable, which may have led to attenuated associations in the regression analysis. Third, as a cross-sectional study with a conservative covariate adjustment strategy, the associations between SLT use and smoking behaviors reported here are unlikely to represent causal relationships. Further longitudinal studies are needed to confirm and elaborate on these findings. Fourth, the SAB samples were not nationally representative; in particular, the India and Pakistan samples were drawn from a limited number of locations, so the findings may not apply to all adults in these countries.
In this large population-based study of South Asian adults, rates of smoking and SLT use vary widely by country and gender. Men who use SLT products were more likely to attempt quitting smoking and to abstain from smoking compared with those who do not, and cessation from smoking was positively associated with SLT use intensity. Continuous monitoring of patterns of SLT use and tobacco smoking is necessary. Given the potential health risks associated with SLT use, strengthening SLT product regulation and promoting SLT cessation are important for protecting public health in South Asia.

Table 1 Sample characteristics and tobacco use behaviours, SAB 2018-2022. Abbreviations: SLT, smokeless tobacco. (a) Numbers show percentages, unless indicated otherwise. All estimations, other than sample size, were weighted by post-stratification sample weights, calculated for each country based on the age and sex distribution as estimated by the United Nations Population Prospects 2022 Revision. (b) Estimated for ever smokeless tobacco users. (c) Estimates for current users; smokeless tobacco product type was suppressed for female adults in Pakistan due to small numbers.

Table 2 Current smoking and smokeless tobacco use patterns in South Asia, SAB 2018-2022. Abbreviations: SLT, smokeless tobacco. (a) Numbers show percentages. All estimations, other than sample size, were weighted by post-stratification sample weights, calculated for each country based on the age and sex distribution as estimated by the United Nations Population Prospects 2022 Revision. (b) Reported smoking currently or in the past. (c) Reported using smokeless tobacco currently or in the past.

Table 3 Association of current smokeless tobacco use and smoking behaviour among male participants who formerly or currently smoked, SAB 2018-2022. Abbreviations: SLT, smokeless tobacco; OR, odds ratio. (a) Responded "yes" to the question "During the past 12 months, have you tried to stop smoking?"; the question is applicable only to current smokers. (b) Reported ever smoking a tobacco product, but do not currently smoke, and abstained from smoking for 1 or more years. (c) Unweighted number of individuals who made a quitting attempt (smoking cessation) and weighted row percentages. (d) Reported using smokeless tobacco currently or in the past. (e) Reported using smokeless tobacco currently. Smokeless tobacco use intensity is derived from the answer to the question "On average, how many of the following products do you use each day?". If a respondent currently uses multiple smokeless tobacco products, the numbers are added up. The reference group is "never/former SLT use". (f) All estimations adjusted for age and educational attainment; study site was incorporated as a random effect to account for clustering and dependence within geographical locations. The Pakistan sample was excluded from the analysis due to unstable estimation resulting from the small number of current smokeless tobacco users.

Table 4 Association of smokeless tobacco use and smoking intensity among adult male participants who currently smoke, SAB 2018-2022. Abbreviations: SLT, smokeless tobacco. (a) Smoking intensity is defined by the average number of combustible tobacco products smoked per day among current smokers. (b) Reported using smokeless tobacco currently or in the past. (c) Reported using smokeless tobacco currently. Smokeless tobacco use intensity is derived from the answer to the question "On average, how many of the following products do you use each day?". If a respondent currently uses multiple smokeless tobacco products, the numbers are added up. The reference group is "never/former SLT use". (d) All estimations adjusted for age and educational attainment; study site was incorporated as a random effect to account for clustering and dependence within geographical locations. The Pakistan sample was excluded from the analysis due to unstable estimation resulting from the small number of current smokeless tobacco users. (e) Predicted means (95% CI) are computed from predictions of the respective fitted models.
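To make the modeling approach behind Tables 3 and 4 concrete, here is a minimal sketch of a country-specific logistic regression of smoking cessation on ever SLT use, adjusted for age, education and study site, run on a synthetic data frame. It simplifies the paper's approach in one respect worth flagging: study site enters as a fixed effect (dummy variables) rather than as a random effect, and every variable name and value in the snippet is hypothetical.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 2000

# Synthetic sample of male ever smokers (all values invented for illustration)
df = pd.DataFrame({
    "ever_slt": rng.integers(0, 2, n),
    "age": rng.integers(18, 80, n),
    "education": rng.choice(["none", "primary", "secondary", "college"], n),
    "site": rng.choice([f"site_{i}" for i in range(10)], n),
})
# Simulate cessation with a positive association with ever SLT use
logit_p = -1.0 + 0.8 * df["ever_slt"] + 0.01 * (df["age"] - 40)
df["cessation"] = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))

# Site as a fixed effect here; the study used a site-level random effect instead
model = smf.logit("cessation ~ ever_slt + age + C(education) + C(site)", data=df).fit(disp=0)
params = model.params
conf = model.conf_int()
print("OR for ever SLT use: %.2f (95%% CI %.2f, %.2f)"
      % tuple(np.exp([params["ever_slt"], conf.loc["ever_slt", 0], conf.loc["ever_slt", 1]])))
```

Exponentiating the coefficient and its confidence bounds gives an odds ratio comparable in form to the ORs reported in Table 3.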
“Metal-Free” Fluorescent Supramolecular Assemblies for Distinct Detection of Organophosphate/Organochlorine Pesticides

The “metal-free”, easy-to-prepare fluorescent supramolecular assemblies based on anthracene/perylene bisamide (PBI) derivatives have been developed for the distinct detection of organophosphate (CPF) and organochlorine (DCN) pesticides in aqueous media. The supramolecular assemblies of the anthracene derivative show a rapid and highly selective “on–on” response toward the organophosphate (CPF), which is attributed to the CPF-induced formation of “closely packed” assemblies. A detection limit in the nanomolar range is observed for CPF. On the other hand, the inner filter effect is proposed as the mechanism for the “on–off” detection of DCN using supramolecular assemblies of the anthracene derivative. This is the first report on the development of fluorescent materials having the potential to differentiate between organophosphate and organochlorine pesticides. The assemblies of anthracene derivative 2 also act as an “enzyme mimic”, as the organophosphate pesticide shows a preferential affinity for assemblies of derivative 2 over the acetylcholinesterase enzyme. Further, the real-time applications of the supramolecular assemblies have also been explored for the detection of CPF and DCN in spiked water and in agricultural products such as grapes and apples.

Supporting information contents:
S12: Fluorescence spectra of derivative 2 in THF and water, and concentration-dependent 1H NMR of derivative 2.
S13: UV-vis spectra of derivative 2 in the presence of Cu2+ ions, XPS spectra of derivative 2, and fluorescence spectra of derivative 2 in the presence of Cu2+ ions.
S14: Bar diagram of derivative 2 with different biomolecules and metal ions and with different amines.
S15: UV-vis spectra of derivative 2 with CPF, CIE coordinates of derivative 2 alone and with CPF, and detection limit of derivative 2 with CPF.
S16: Detection limit of the copper ensemble of derivative 2 with CPF, and fluorescence spectra of the copper ensemble of derivative 2 with CPF.
S17: UV-vis spectra of derivative 2 with DCN, fluorescence spectra of derivative 2 at different excitation wavelengths, and detection limit of derivative 2 with DCN.
S18: Bar diagram of derivative 2 with different pesticides, and fluorescence lifetime spectra of derivative 2 alone and with CPF in water.
S19: Overlay 1H NMR of derivative 2 with CPF, and XRD diffraction pattern of derivative 2 alone and with CPF.
S23: Fluorescence spectra of derivative 3 with CPF and detection limit of derivative 3 with CPF.
S24: Fluorescence spectra of derivative 4 with CPF, detection limit of derivative 4 with CPF, and fluorescence spectra of derivative 4 with DCN.
S25: Fluorescence spectra of derivative 5 with CPF, detection limit of derivative 5 with CPF, and fluorescence spectra of derivative 5 with DCN.
S26: Detection limit of derivative 5 with DCN, plot of F/F0 of derivative 2 with CPF in water and apple and with DCN in water and grapes, and bar graph of F/F0 values for DCN residues in grapes.
Table S1: Comparison table of the pesticide sensing with other literature reports.

General Experimental Methods and Instrumentation
1.1 Physical Measurements
UV-vis spectra were recorded on a SHIMADZU UV-2450 spectrophotometer with a quartz cuvette (path length: 1 cm). The cell holder was thermostated at 25 °C. The fluorescence spectra were recorded with a HORIBA Scientific Fluoromax-4 spectrofluorometer, and one of the fluorescence spectra was recorded with a SHIMADZU-5301 PC spectrofluorometer.
TEM images were recorded with an HR-TEM JEOL 2100 transmission electron microscope. The time-resolved fluorescence spectra were recorded with a HORIBA time-resolved fluorescence spectrometer. 1H and 13C NMR spectra were recorded on JEOL FT NMR-AL 400 MHz and BRUKER AVANCE-II FT-NMR-AL 500 MHz spectrometers using CDCl3, DMSO and D2O as solvents and tetramethylsilane (SiMe4) as internal standard.

Biomolecules such as spermine, spermidine, glutathione, cysteine, homocysteine, hydrazine, H2O2, ClO⁻ and amines were freshly prepared in distilled water. In each titration experiment, 3 mL of a 1 μM solution of derivative 2 was placed in a quartz cuvette (path length, 1 cm) and the biomolecules were added into the quartz cuvette using a micro-pipette.

Calculation of detection limit [1]
The calculation of the detection limit was based on the fluorescence titrations. To determine the S/N ratio, the emission intensity of each of the derivatives (2, 3, 4 and 5) without addition of pesticides (CPF and DCN) was measured 10 times and the standard deviation of the blank measurements was determined. The detection limit was calculated using the following equation: detection limit = 3 × SD/S, where SD is the standard deviation of the blank solution measured 10 times and S is the slope of the calibration curve.

Apples and grapes were chosen for evaluating the potential detection of CPF and DCN in real samples [2]. After washing with water, these were chopped and crushed to make a homogenate. Then 10 g of homogenate was mixed with 10 mL of methanol and filtered twice using fine paper to remove the insoluble particles. Different volumes (0, 10, 30, 50, 70 and 100 µl) of CPF and DCN solution were then mixed with 1 mL of the above homogenate of apple and grape juice, respectively, and their fluorescence spectra were recorded.

Grapes were used to measure the residue level of DCN over time. For this, a solution of DCN (10⁻² M) was spiked on the skin of grapes and stored overnight at room temperature. The samples were then prepared using the method outlined above. Samples were prepared each day for four consecutive days and their fluorescence spectra were recorded every day.

Synthesis of derivative 2
9,10-Dibromoanthracene 6 and 4-formylphenylboronic acid 7 in dioxane were added to a two-neck round-bottom flask, followed by addition of K2CO3 in distilled water (1 mL) and Pd(0), and the reaction was refluxed under nitrogen overnight. After evaporating the solvent under vacuum, the residue was extracted using CHCl3/water and dried over anhydrous Na2SO4. After removing the organic solvent under reduced pressure, the residue was purified by column chromatography using hexane/CHCl3 (1:9) to furnish derivative 2 as a yellow solid in 50% yield.

Synthesis of derivative 8
Derivative 8 was synthesized according to a previously reported method [3].

Synthesis of derivative 3
Derivative 8 and phenylboronic acid 7 in dioxane were added to a two-neck round-bottom flask, followed by addition of K2CO3 in distilled water (1 mL) and Pd(0), and the reaction was refluxed under nitrogen overnight. After evaporating the solvent under vacuum, the residue was extracted using CHCl3/water and dried over anhydrous Na2SO4. After removing the organic solvent under reduced pressure, the residue was purified by column chromatography using hexane/CHCl3 (1:8) to furnish derivative 3 as a dark reddish solid in 52% yield.

Synthesis of derivatives 4 and 5
Derivatives 4 and 5 were also synthesized according to a previously reported method [3].

Scheme S2: Synthesis of derivative 3.
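To make the detection-limit procedure described above concrete, the short snippet below applies the same 3 × SD/S rule to made-up numbers; the blank readings and calibration slope are placeholders rather than data from this work.

```python
import statistics

# Ten hypothetical blank fluorescence readings of the probe without pesticide
blank_intensities = [1002, 998, 1005, 1001, 997, 1003, 999, 1004, 1000, 996]

# Hypothetical slope of the calibration curve (intensity change per nM of analyte)
slope = 12.5  # a.u. per nM

sd_blank = statistics.stdev(blank_intensities)   # standard deviation of the blank
detection_limit = 3 * sd_blank / slope           # detection limit = 3 x SD / S

print(f"SD of blank: {sd_blank:.2f} a.u.")
print(f"Detection limit: {detection_limit:.3f} nM")
```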
A barcoding pipeline for mosquito surveillance in Nepal, a biodiverse dengue-endemic country Background Vector-borne diseases are on the rise on a global scale, which is anticipated to further accelerate because of anthropogenic climate change. Resource-limited regions are especially hard hit by this increment with the currently implemented surveillance programs being inadequate for the observed expansion of potential vector species. Cost-effective methods that can be easily implemented in resource-limited settings, e.g. under field conditions, are thus urgently needed to function as an early warning system for vector-borne disease epidemics. Our aim was to enhance entomological capacity in Nepal, a country with endemicity of numerous vector-borne diseases and with frequent outbreaks of dengue fever. Methods We used a field barcoding pipeline based on DNA nanopore sequencing (Oxford Nanopore Technologies) and verified its use for different mosquito life stages and storage methods. We furthermore hosted an online workshop to facilitate knowledge transfer to Nepalese scientific experts from different disciplines. Results The use of the barcoding pipeline could be verified for adult mosquitos and eggs, as well as for homogenized samples, dried specimens, samples that were stored in ethanol and frozen tissue. The transfer of knowledge was successful, as reflected by feedback from the participants and their wish to implement the method. Conclusions Cost effective strategies are urgently needed to assess the likelihood of disease outbreaks. We were able to show that field sequencing provides a solution that is cost-effective, undemanding in its implementation and easy to learn. The knowledge transfer to Nepalese scientific experts from different disciplines provides an opportunity for sustainable implementation of low-cost portable sequencing solutions in Nepal. Graphical Abstract Supplementary Information The online version contains supplementary material available at 10.1186/s13071-022-05255-1. During the past 2 decades, the world witnessed a surge in dengue fever (DF) cases following the spread of dengue virus (DENV) vectors as a consequence of globalization, trade and travel, land use change and deforestation [5][6][7]. While historically DF epidemics were limited in number and occurred in only few countries, DF is now endemic in > 100 countries [8], with the number and frequency of epidemics dramatically increasing [9]. This trend will most likely continue, as global warming is enhancing the suitability of previously unoccupied habitats for vector species [10]. Arbovirus vector species have already established in the regions deemed too cold for overwintering in Europe, the Americas and Asia [11][12][13][14][15], including the highlands of Nepal. The first DF case in Nepal was reported in 2004 [16]. The number of infections has increased steadily since then, and Nepal witnessed its largest DF epidemic so far in 2019 with > 14,000 confirmed cases [17], although underreporting is likely [18]. The distribution of DF is negatively influenced by increasing elevation with the highest risk of infection at < 500 m above sea level (asl) [17]. Alarmingly, during the 2019 outbreak, the capital city Kathmandu, with 1.4 million inhabitants, at an elevation of 1400 m asl, was especially hard hit [19], while only sporadic cases were reported earlier [20]. 
Cases have also been reported from even higher elevations (2100 m asl), and the most likely driving factor for the distribution of vector species in the regions of higher elevation is increasing temperature associated with anthropogenic climate change [4,17]. The most important vector species of DENV are the yellow fever mosquito Aedes aegypti (Linnaeus, 1762) (Diptera: Culicidae) and the Asian tiger mosquito Ae. albopictus Skuse, 1894. Both species are distributed throughout the tropical and subtropical regions, although Ae. albopictus has a markedly wider distribution range that extends into temperate regions because of their higher ecological plasticity and cold tolerance [21][22][23]. The increasing spread of both species in the temperate regions and subalpine zones of Nepal is probably driving the escalation of DF epidemics [24][25][26]. DF, however, is not the only VBD in this region, and the increasingly alarming situation regarding its spread must not influence the financial and human resources allocated to control other vector-borne diseases such as malaria, lymphatic filariasis, visceral and cutaneous leishmaniasis, chikungunya and Japanese encephalitis [25,27]. With the exception of leishmaniasis, the disease agents are transmitted by mosquito species belonging to the genera Aedes Meigen, 1818, Culex Linnaeus, 1758, and Anopheles Meigen, 1818, with oftentimes several pathogens sharing the same vector species [28]. For all discussed diseases, entomological data on occurrence and distribution ranges of vectors are paramount to assess the risk of outbreaks and inform early warning systems, which will provide sufficient time to prepare medical health care professionals and generate awareness in the potentially afflicted populations. Classically, species identification is done via distinct morphological traits. However, this requires extensive entomological training and expertise and is rather time consuming [29]. Alternatively, next-generation sequencing techniques can be relatively cheap and less timeconsuming and offer simultaneous identification of numerous mosquito individuals [30]. NGS sequencing thus can aid classical morphological species identification provided that a reference sequence database exists [31]. Recent studies show that with a portable MinION sequencer (Oxford Nanopore Technologies, UK), barcoding can be conducted under field conditions [32], while simultaneously sequencing many individuals [33,34]. Thus, field sequencing provides a fast, accurate and cost-effective alternative for morphological species identification. This technique offers accessibility of sequencing in resource-limited settings, such as in developing countries or in remote areas [32,[35][36][37]. Access to classical sequencing approaches in those settings can be limited by a lack of funding, a lack of infrastructure or logistical issues. These limitations apply to the situation in Nepal [38], especially to the survey of mosquitoes in regions that are oftentimes difficult to reach and make timely analysis of samples impossible [39,40]. Therefore, our aim was to establish a barcoding pipeline ( Fig. 1) for mosquitoes that is applicable in the field and supports current entomological efforts in reliably identifying vector species. As a secondary objective, we provided training to health care professionals and researchers in Nepal on the implementation of the pipeline. 
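The core idea behind barcode-based identification can be shown in miniature: the consensus COI sequence obtained for a specimen is assigned to the reference species it matches best. The sketch below does this with a naive per-position identity over invented toy fragments purely for illustration; in the pipeline described in the Methods, identification relies on BLAST and BOLD searches against curated reference databases.

```python
def identity(seq_a: str, seq_b: str) -> float:
    """Naive per-position identity over the shared length (no real alignment)."""
    n = min(len(seq_a), len(seq_b))
    matches = sum(a == b for a, b in zip(seq_a[:n], seq_b[:n]))
    return matches / n

def assign_species(query: str, references: dict) -> tuple:
    """Return the best-matching reference species and its identity score."""
    best = max(references, key=lambda name: identity(query, references[name]))
    return best, identity(query, references[best])

# Toy reference 'database' with invented fragments (not real COI sequences)
references = {
    "Aedes aegypti":          "ATGGCATTTACGATCGGATCCTTA",
    "Aedes albopictus":       "ATGGCATTAACGATTGGGTCCTTA",
    "Culex quinquefasciatus": "ATGACTTTAACGGTCGGATCATTA",
}

consensus = "ATGGCATTAACGATTGGATCCTTA"  # hypothetical consensus barcode of one specimen
species, score = assign_species(consensus, references)
print(f"Best match: {species} ({score:.0%} identity)")
```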
Mosquito samples used for barcoding
The testing of the barcoding pipeline was conducted with four batches of mosquito samples that were obtained from different sampling campaigns and geographic areas, with the aim to encompass different mosquito life stages (adults and eggs) and histories of sample storage (Table 1). As the pipeline only works on individuals or pools of individuals of the same species, pools of ten morphologically pre-sorted eggs each were used to test the barcoding pipeline. The samples NP1 and BEL had been morphologically identified to species level, whereas the species identity of the samples NP2 and GER was unknown. All the morphologically identified samples were analyzed blindly during the PCR amplification and sequencing steps, and the species status was verified afterwards. The species identities of samples NP2 and GER as identified through the Oxford Nanopore barcoding pipeline were verified by Sanger sequencing.

Oxford Nanopore sequencing workflow
For DNA extraction from undamaged adult mosquitoes (NP2, BEL), two legs of each mosquito were used, leaving the remaining individual intact for morphological analysis or further Sanger sequencing to verify Oxford Nanopore results. The legs were placed in 20 µl QuickExtract solution (Lucigen, Middleton, WI, USA) and heated to 65 °C for 15 min and 98 °C for 2 min. DNA from homogenized adults (NP1) was extracted with the DNeasy Blood and Tissue kit (QIAGEN). We adapted the protocol to fit the smaller volume of homogenate used by using 50 µl of the homogenate, adding 10 µl proteinase K, 100 µl buffer AL and 100 µl ethanol. Elution was done in 50 µl Buffer AE to increase DNA concentration. DNA extraction from egg samples (GER) was similarly conducted with the DNeasy Blood and Tissue kit (QIAGEN) with the following modifications. (i) The eggs were manually cracked using a pipette tip or a toothpick. (ii) Samples were incubated in proteinase K solution overnight for at least 12 h. (iii) The elution step was done with either 30 µl or 50 µl Buffer AE, which was pre-warmed to 56 °C. We chose cytochrome c oxidase subunit I [41] as a marker for barcoding, as it represents the most commonly used locus with the most extensive database available. To allow for a cost-effective protocol, individuals were multiplexed during library preparation and sequencing. To be able to obtain individual-based sequences, PCRs were done separately for each mosquito, using primers with individual marker sequences (hereafter tags). We used a dual-indexing approach after Srivathsan et al. [33] for tagging forward and reverse primers, which allowed for marking individuals with unique tag combinations. The PCR reaction contained 5 µl GoTaq G2 Colorless Mastermix (Promega, Mannheim, Germany), 0.3 µl each of tagged forward and reverse primers (10 pmol/µl), 2.4 µl nuclease-free water and 2 µl isolated DNA. PCR conditions were as follows: initial denaturation for 5 min at 94 °C, followed by 35 cycles of denaturation at 94 °C for 30 s, annealing for 60 s at 45 °C and extension for 60 s at 72 °C, followed by a final extension step for 5 min at 72 °C. PCRs were additionally tested with the Bento Lab DNA workstation to assess their usefulness, especially regarding field conditions. All sequencing runs were conducted with the MinION Mk1B sequencer (Oxford Nanopore) using R9 flow cells. We used the Ligation Sequencing Kit (SQK-LSK109) according to the corresponding protocol.
However, we omitted DNA fragmentation and adjusted the magnetic bead-washing step so that the volume of added beads always matched the volume of the DNA solution (1:1) to avoid size selection against short reads. The PCR products were end-repaired by using the NEBNext Ultra II End-Repair/dA-tailing Module (New England Biolabs, Ipswich, MA, USA) and incubated for 5 min at 20 °C and for 5 min at 65 °C. This was followed by a clean-up step with AMPure XP magnetic beads (Beckman Coulter, Brea, CA, USA). Adapter ligation was conducted using NEBNext Quick Ligation Module (New England Biolabs) and the AMX adapter mix included in the Ligation Sequencing Kit. For the last AMPure bead clean-up, the volume of used beads was adjusted to 100 µl to match the reaction mix. The library was loaded onto the R9 flow cell, and sequencing was conducted and monitored using the MinKNOW software (Oxford Nanopore). Basecalling was conducted in parallel using the integrated Min-KNOW basecaller with the fast option. Bioinformatic pipeline For the analysis of the data resulting from Oxford Nanopore sequencing (ONS), we used the miniBarcoder pipeline by Srivathsan et al. [33,34]. Briefly, sequences were curated using minibarcoder.py, a script that encompasses the identification of primers, demultiplexing of sequences, alignment of sequences and subsequent majority consensus building. The demultiplexing step identifies matching sequences by their combination of forward and reverse tags. Following this, consensus sequences were aligned back to the original read set and error corrected using graphmap [42] and racon [43] with the script racon_consensus.sh. Resulting barcodes were further treated by an amino acid correction that specifically targets frame shifts, using the script aacorrection.py. The resulting consensus barcodes were used for species identification. We used two different approaches to identify mosquito species: a first step was to compare sequences to the GenBank database with BLAST or, when we suspected mismatched entries (i.e. when different species matched our sequences equally well), against the BOLD database. As an alternative identification approach, sequences (preferably those that were verified by morphological identification [44][45][46][47][48][49]) of species, that are common to the region from which the samples originated were downloaded from the NCBI database (for accession numbers, refer to Fig. 2, Additional file 1: Fig. S1) and aligned to the sequences obtained by ONS. Libraries from NP2 and GER samples were consecutively run on the same flow cell. Even though we followed the recommended washing protocol in between runs, significant carryover of the NP2 library into the GER run took place. As we partly used the same identifying tags in both runs, the resulting demultiplexed datasets for those samples of the GER run contained two different species and no reliable consensus could be built with the pipeline described above as the samples contained too much variability. We thus used the output from the demultiplexing step of the pipeline (minibarcoder.py), i.e. a set of sequences that contain sequences from one NP2 sample and one GER sample that were marked with the same combination of identifying tags, to build alignments for each of the identified samples. Alignments were visually assessed and split into multiple alignments based on sequence similarity. 
Of those separated alignments, consensus sequences were used to determine which sequences belonged to the NP2 run, which was performed first following the above described bioinformatic pipeline. The remaining sequences then had to belong to the GER sample, and the consensus sequence was used to determine species identity via BLAST and for a combined phylogeny with Sanger sequences to verify the results. Verification of accuracy of mosquito barcoding The accuracy of obtained sequences of the samples NP2 and GER was verified with the more accurate Sanger sequencing technique and compared to the sequences generated by ONS. Phylogenies with both resulting sequencing types were used to analyze the congruence of respective sequences. For NP2 samples, three legs of all individuals that were already sequenced with ONS were used for DNA isolations for Sanger sequencing. The PCR was conducted with untagged primers, and the reaction contained 5 µl GoTaq G2 Colorless Mastermix (Promega, Mannheim, Germany), 0.4 µl of each primer, 3.2 µl nuclease free water and 1 µl DNA. Cycler conditions were the same as for the PCR used for ONS. Sanger sequencing was conducted by BaseClear (Leiden, The Netherlands). Resulting forward and reverse sequences were aligned, and their consensus was aligned to the Oxford Nanopore sequences using Geneious (v. 10.1.3; Biomatters, New Zealand) with the default MUSCLE alignment algorithm. Reference sequences from common Nepalese mosquito species were added to the alignment, and a phylogenetic tree was built using the PhyML online tool with default settings and 100 bootstraps. The resulting tree was visualized using iTOL [50]. For GER samples, the same DNA extracts were used for Sanger sequencing as for ONS. The PCR reaction mix consisted of 1 µl 10 × reaction buffer (Projodis, Butzbach, Germany), 1 µl MgCl2, 1 µl dNTP mix (20 µm of each; Projodis), 0.1 µl MOLPol DNA Polymerase (Projodis), 0.2 µl of each primer, 5.5 µl ddH 2 O and 1 µl DNA. PCR conditions were 94 °C for 2 min followed by 35 cycles of 95 °C for 30 s, 48 °C for 1 min, 72 °C for 1.5 min and a final elongation at 72 °C for 110 min. The sequencing reaction conditions were 95 °C for 1 min, followed by 30 cycles consisting of 96 °C for 10 s, 50 °C for 10 s and 60 °C for 2 min. Capillary sequencing was performed on a 3730xl DNA Analyzer (Applied Biosystems, Waltham, MA, USA) at the SBiK-F laboratory centre. Resulting sequences were aligned to their respective Oxford nanopore sequences using Geneious Prime alignment with standard settings, and a phylogenetic tree was built, as described for samples NP2. Low-quality Sanger sequences (NP-A1, NP-A2, NP-A4, NP-C4, NP-D4, NP-E3, NP-G2,) were excluded from the alignment before the construction of the phylogenetic tree. Research capacity building for mosquito barcoding The objective of the transfer of knowledge was to equip the participants with the methodology to perform molecular surveys of mosquitoes for species identification in field and low resource settings (see Fig. 1). Due to the COVID-19 pandemic, previously planned in-person training was adapted to an online course format with an accompanying handbook (Additional file 2). Six Nepalese specialists from different health research-related fields (microbiology, molecular medicine, health sciences, molecular parasitology) participated in the webinar. All the participants had prior experience of the required laboratory techniques. 
However, none of them had experience in working with the MinION nanopore sequencer or the Unix command line. Four webinar sessions were conducted. The first session covered laboratory techniques from DNA isolation to library preparation and included an exercise on tagged primer design. The second session covered the theory behind the bioinformatics pipeline. After the second session the participants were provided with an installation manual of the bioinformatical programs (Additional file 3) to be installed in their computers. During the third session, questions about the software installation and general Unix commands were discussed. In the second part of the third session and the first part of the fourth session, the participants were able to try out the pipeline with a mock dataset. The second part of the fourth session was again used to discuss questions regarding the complete pipeline (Additional file 4). A successful transfer of knowledge to the Nepalese participants of the Webinar was assessed by a questionnaire (for detailed questions see Additional file 1: Material S1). Specifically, the participants were asked to rate how well they were able to follow and participate in the different parts of the course. We furthermore asked the participants to rate how confidently they could apply what they learned with or without additional help. Lastly, they were asked to rate the helpfulness of the learned methodology to increase entomological knowledge and to support the vector-borne disease control efforts. Sequencing output Each of the sequencing runs was stopped after enough data (amounting to a mean 20 × coverage per sample) had been produced to ensure reliable species identification (after 4-5 h). The obtained coverage varies between samples and sequencing runs (see Table 2). The samples from NP1 show the lowest coverage, which is in line with an observed low yield after DNA isolation and PCR (not shown). However, even from those samples, enough coverage was obtained for species identification. Furthermore, dry stored adults proved to yield enough DNA for analyses, similar to the samples stored in ethanol at room temperature for several years. Accuracy of species identification The species identification based on the Oxford Nanopore sequences was highly reliable. The accuracy of Oxford Nanopore sequences from the NP2 (Fig. 2) samples proved to be 100% in line with the less error-prone Sanger sequences. Regarding the GER egg pools, our adapted Oxford Nanopore barcoding pipeline mostly yielded the same results as the Sanger sequencing approach. A notable exception is the sample GER-G2, which was identified as Ae. japonicus (Theobald, 1901) with Sanger sequencing, while the Oxford Nanopore barcoding pipeline yielded two distinct alignments, which were identified as Ae. japonicus and Ae. geniculatus (Olivier, 1791; Additional file 1: Fig. S1). Morphological inspection of eggs prior to sequencing identified some of the eggs as Ae. geniculatus. The accuracy testing of species identification showed contrasting results for the samples NP1 and BEL, which were morphologically identified prior to ONS. For BEL samples, we found that the species identification based on sequencing and a subsequent BLAST step perfectly matched the morphology-based results (Additional file 1: Table S1). For NP1 samples, the congruence was much lower: of 15 sequenced samples, only 6 matched the morphologically identified species (40%; Table 3). 
Furthermore, the exact species identity of the samples NP1-2 and NP1-14 could not be resolved conclusively, as the entries in GenBank and BOLD are ambiguous (very high matches for both An. subpictus Grassi, 1899, and An. jamesii Theobald, 1901); however, neither was identified as its originally assigned species (see Table 3). Transfer of knowledge for mosquito barcoding in Nepal All the participants stated they were able to follow the lecture on DNA isolation, PCR protocols and the sequencing part of the pipeline. For the bioinformatics analysis, two thirds of the participants indicated that they were able to follow almost everything and one third that they were able to follow most parts. The question on whether participants were able to take part in the exercises was rated similarly. None of them reported trouble with the contents of the webinar or with participating. The most time-consuming part of the webinar was the exercise on the analysis pipeline. Here, two thirds of the participants indicated that they were able to comprehend everything, while one third said they were able to comprehend most parts. One third of the participants were confident about applying the methods they learned without any additional help, and two thirds were somewhat confident. One third was likewise confident about applying what they learned with only the help of the provided handbook, while two thirds were mostly confident. When the participants could rely on help from the other participants, two thirds were confident that they could apply what they learned, while one third was somewhat confident. All the participants stated that the methodology would be helpful to increase entomological knowledge and support vector-borne disease control. In general, the feedback on the parts of the webinar concerning laboratory techniques was more positive than the feedback on the bioinformatics part. Personal feedback from participants showed that this was due to their previous experience with laboratory techniques and little to no experience with bioinformatic analyses. Discussion Portable field sequencing has been shown by other studies to be reliable for the identification of species [32,35,36]. Here we show that this technique is suited to identify mosquitoes at different stages of their life cycle and from different storage techniques. It is promising that even the pooled egg samples yielded enough DNA for reliable identification, which is useful especially when oviposition traps are used for monitoring. Our main aim was to aid in building entomological capacity in a country with several endemic vector-borne diseases and some on the rise. By hosting a webinar on the sequencing technique, hands-on protocols and the ensuing bioinformatic analysis for Nepalese specialists with backgrounds in medical and biological sciences, we succeeded in the first important step to establish a field pipeline for next-generation barcoding in this country. Species identification The accuracy of Oxford Nanopore-based barcoding can be seen from both the correct identification and a high congruence with Sanger sequencing on the sequence level (Fig. 2). Indeed, given the higher rate of sequencing failures for the egg samples with the Sanger technique, the Oxford Nanopore approach might prove more robust, despite labor-intensive post-processing steps. Regarding the ambiguous results for the sample GER-G2, we assume that this egg pool consisted of a mixture of Ae. japonicus and Ae. geniculatus eggs.
Since Sanger sequencing only results in a single output sequence, it is not possible to identify multiple species within a single sample. With Oxford Nanopore sequencing, on the other hand, the output reflects the amplicons within the library and thus allows for the identification of mixed samples. This was, however, not possible using the pipeline described by Srivathsan et al. [33], which would result in a single sequence output. (Table 3: Overview of identified species (NP1) using either classical morphological identification or ONS followed by a BLAST against the GenBank database or by a BOLD search. The percentage of matching bases from the BLAST is given for the first shown result. Matching success of the two methods for species identification is 40% [44].) Instead, we aligned a subset of sequences, visually split the alignment based on sequence similarity and thus were able to identify two major subgroups of sequences that were used to identify both species. While this was not within the scope of this study, this example shows the potential of using next-generation sequencing for non-targeted species identification, e.g. for the identification of endosymbionts as also exemplified in Sonet et al. [51]. Especially in a potential VBD outbreak setting it would be highly advantageous to be able to not only identify mosquito species but also simultaneously detect a range of potentially harmful pathogens. Similar pipelines exist already to identify hosts, their ectoparasites and pathogens [52] or to identify different host species from the blood meals of mosquitoes [53] and triatomine bugs [54], but those need to be adapted to the specific vectors, pathogens and sequencing techniques. Due to our experiences with substantial contamination from one sequencing run into the next, despite using the recommended flow cell washing steps, we advise against reusing a flow cell with different samples that are tagged with the same identifier sequences. In those cases, the described pipeline will yield empty results, as there will be too much sequence variability for the consensus calling step to work. However, since the pipeline worked without problems for samples that were tagged with unique identifiers not present during the first run, we do not see an issue with reusing a flow cell, given that there is no overlap in identifier combinations. One, however, needs to account for the reduced sequencing output for the second set of samples, since sequences from the first run that are still present on the membrane will compete for available nanopores. Given the high accuracy and correct identification of other sequences that were identified with the Oxford Nanopore pipeline and the fact that we compared results to verified barcodes (Table 3), we assume that the individuals of the NP1 samples were not correctly identified by morphological assessment prior to homogenization. This again shows how genetic barcoding can aid in the correct identification of vector species. Especially in regions with high biodiversity, such as Nepal [55], the correct morphological identification of species can be difficult and needs extensive training. Most of the mismatches that occurred are known to be notoriously hard to discriminate morphologically because they belong to the same complex or group [56][57][58][59]. Morphological identification of similar specimens is even more challenging when samples and their discriminating features are damaged during trapping or transport.
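The splitting of a mixed sample such as GER-G2 was done here by visually partitioning the alignment; a crude automated variant of the same idea is sketched below, grouping reads by k-mer similarity into putative species clusters. The similarity measure, threshold and example reads are illustrative assumptions, not part of the published pipeline.

```python
def kmer_set(seq, k=8):
    seq = seq.upper().replace("-", "")
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}

def similarity(a, b, k=8):
    """Jaccard similarity of k-mer sets; a rough stand-in for alignment identity."""
    ka, kb = kmer_set(a, k), kmer_set(b, k)
    return len(ka & kb) / len(ka | kb)

def split_reads(reads, threshold=0.5):
    """Greedy single-linkage grouping of reads into putative species clusters."""
    clusters = []
    for name, seq in reads.items():
        for cluster in clusters:
            if any(similarity(seq, reads[other]) >= threshold for other in cluster):
                cluster.append(name)
                break
        else:
            clusters.append([name])
    return clusters

if __name__ == "__main__":
    reads = {  # hypothetical reads from one mixed egg-pool sample
        "read1": "ACTATTTTCTACAGTAGGAATAGACGTAGACACCCGAGCATAC",
        "read2": "ACTATTTTCTACAGTAGGAATAGACGTAGACACACGAGCATAC",
        "read3": "GGAGCTTGAGCAGGAATAGTGGGAACTTCTTTAAGAATTTTAA",
    }
    for i, cluster in enumerate(split_reads(reads), start=1):
        print(f"putative species group {i}: {cluster}")
```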
We were largely able to rely on morphologically verified entries of the barcode of life project [31] or GenBank to identify the sequencing results. However, it needs to be stressed that reliable reference databases are crucial to identify species correctly in the same way that trained and experienced entomologists are necessary to identify species morphologically [29] (Table 4). Application of barcoding pipeline for mosquitoes in field settings The barcoding pipeline provides an opportunity to sequence large amounts of arthropods on a single flow cell of the Oxford Nanopore MinION sequencer [33,34]. We optimized the barcoding pipeline for mosquito species and were able to show that high quality sequences can be obtained from different life stages of mosquito species and differently stored samples. Given the very limited need for equipment, we are confident that this pipeline can be conducted in lowresource settings, provided access to electricity, as has been shown by projects that sequenced in remote rain forests [32,35,60], the desert [61] or even the International Space Station [37]. Especially when using adult [60,62]). Moreover, the relative simplicity of the pipeline provides an opportunity for easy and quick access to new users who have never previously worked with sequencers, as demonstrated by Watsa et al. [62]. Research capacity building for entomological surveillance The aim of this study is to enhance research and surveillance capacity in the framework of VBDs in the biodiverse and dengue-endemic country Nepal. However, a barcoding pipeline for mosquito surveillance can only be sustainably applied if in-country research capacity meets the basic requirements. The current development of scientific infrastructure (increase of R&D budget, implementation of high-tech equipment) and expert knowledge in Nepal is encouraging [38]. However, with the present resources, especially in light of the additional burden of the ongoing pandemic, it remains a challenge to adequately tackle rapidly expanding VBDs such as dengue [63]. In addition, entomological expertise, which is urgently needed for vector control programs, is lacking in Nepal [64]. These challenges are augmented by Nepal's topography, with remote and poorly accessible regions [39]. All of this calls for easy to establish, cost-effective and mobile solutions to enable scientists to collect data onsite. NGS barcoding is currently the best solution for this, as it is able to handle large sample sizes [33], while being mobile and applicable in even the remotest locations [32,35,37,60,61], and provides comparably cheap sequencing costs of < 0.57 USD per sample for DNA isolation, PCR, library preparation and sequencing, when pooling ~ 3500 samples per flow cell [33]. The only challenge when pooling this number of samples is the laborintensive PCR step, which leads to a trade-off between field-applicability and upscaling ability. After pooling the PCR products, the barcoding pipeline will yield results within a few hours, allowing for rapid identification, for example during outbreaks. Conclusion While the identification of mosquito species is a crucial part in assessing the risk of outbreaks of several VBDs and quality control of interventions, the implementation of the barcoding pipeline has the potential for more large-scale and sustainable impact and capacity building. There is an enormous potential for upscaling of the barcoding pipeline and simultaneous sequencing of 4000 individuals, as shown by Srivathsan et al. [33]. 
The barcoding pipeline thus provides a cost-effective solution to aid classical morphological species identification and can be applied on-site. The training of medical professionals and researchers from different fields provides an opportunity for a long-term implementation of portable sequencing techniques in Nepal and for the application of sequencing techniques in several related research fields outside of the scope of this study.
Model of the Reticular Formation of the Brainstem Based on Glial–Neuronal Interactions A new model of the reticular formation of the brainstem is proposed. It refers to the neuronal and glial cell systems. Thus, it is biomimetically founded. The reticular formation generates modes of behavior (sleeping, eating, etc.) and commands all behavior according to the most appropriate environmental information. The reticular formation works on an abductive logic and is dominated by a redundancy of potential command. Formally, a special mode of behavior is represented by a comprehensive cycle (Hamilton loop) located in the glial network (syncytium) and embodied in gap junctional plaques. Whereas for the neuronal network of the reticular formation, a computer simulation has already been presented; here, the necessary devices for computation in the whole network are outlined. Introduction and Hypothesis A model of synaptic information processing based on glial-neuronal interactions has already been published in this journal [1]. Here, I attempt to elaborate this model for the glial-neuronal interactions in the reticular formation of the brainstem. Whereas the anatomical structure of the neuronal system in the reticular formation has already been identified [2,3], the glial network is as yet unknown. It is only certain that astrocytes do occur in this system [4]. My hypothetical model is as follows: Since astrocytes determine the function of the neuronal system in the reticular formation, astrocytes must be interconnected via gap junctions building a network, called syncytium. As already hypothesized [5], the glial syncytium may generate intentional programs whose realization is dependent on information from the neuronal system computed from the inner and outer environment. In the case of the reticular formation, the neuronal system computes so-called modes of behavior (eating, sleeping, working, etc.) which must be rapidly generated dependent on the environmental situation. These may guarantee the maintenance of the elementary organization of a living system. The applied formalism uses exchange relations between neighbored values in the sense of permutations in an nvalued system. This allows the generation of integrative circles that comprise all values once, so-called Hamilton loops. The neuronal system in the reticular formation may be comparable to a stack of poker chips, each embodying a Hamilton circle. The glial syncytium builds plaques of gap junctions. Each plaque may embody all necessary gap junctional channels for generating Hamilton loops. These genetically or environmentally determined intentional programs command the neuronal system as to which Hamilton loop is to be selected in correspondence to the behavioral mode. In a robot brain, these double functions can be implemented as a command and an executive computer. Jellema and coworkers [6] have proposed a perception system working according to an abductive logic. This system can be implemented in a robot brain. With concern to our graph-theoretical formal approach to a simulation of the reticular formation, Humphries et al. [7] have developed a formal model of the reticular formation that is comparable to our model (permutographs), but it does not refer to the glial system. The Concept of the Modes of Behavior According to Iberall and McCulloch [8], a living system like man is highly dynamic. In order to produce an integrated behavior, it must be capable of generating stable system states, the so-called modes of behavior. 
This concept has been somewhat neglected in Brain and Behavioral Sciences, whereas it adopts a pivotal role in the brain model presented here. We do not normally think of human behavior as modal, though most people would agree that their quality of consciousness is unitary and they can only do one thing well at a time [9]. This may be identified as a dynamic action mode of the system, such as ''the system sleeps''. In Table 1, the essential modes of behavior or action modes are listed which will have a time constant of the order of the female menstrual period. Although the list itself could be questioned, we would like to focus on the exploratory power of this scientific approach. McCulloch [9] has associated the ability of the brain to integrate its functions with the reticular formation in the brain stem, in the sense of an ''integrative matrix'' [2,3]. Over time, however, the reticular formation seems to have attracted the interest of scientists in its role as an activating or arousal system. In the 1980s, we further elaborated McCulloch's theory of reticular formation [10]. The actual molecular enlightenment of the circadian and ultradian oscillators (rhythms) as well as the undeniable influence with which the glial system acts on the neuronal system is a challenge to reconsider the integrative decision function of the reticular formation using the principles of musical composition as a paradigm. The reticular formation operates by an abductive logic [10][11][12][13]. Abduction is the selection of the appropriate program from a repertoire in accordance with a rule for analyzing program requests. These programs are general in the sense that all are principally adapted for the processing of environment information; however, at the same time, they are highly specialized for the processing of specific environment information. When specific environment information acts on the system, the system can decide or select to which program the information belongs, that means, which program is best suited for information processing. The repertoire of these programs represents a heterarchic system (circular system) which is equipped with a ''redundancy of potential command'' [14], because every program in itself is capable of ruling the whole system for a certain time. When this abductive selection and commanding system are transferred to our brain model, a glial-neuronal compartment corresponds to one respective program structure. These program structures are genetically determined, and the activity of the programs alters with different timescales. Therefore, the brain permanently operates in different system states which correspond not only genetically but also in relation to the environment and to intentions [15]. These program structures or compartments may also be regarded as hypotheses or intentions which are tested in the environment. Since conditions in the environment can quickly change or remain unchanged, the brain must either change its multicompartmental program structure or ''freeze'' the biorhythm on a determined program structure. In any case, the program structure that best suits the environment information will command. Compartments in which the environment information does not fit will be ''switched off'' or rejected temporarily. As it seems to be not only a question of the synchronization of the functions of the total system but also of a spatiotemporal structuring in relation to the environment, the term harmonization could be justified. 
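A minimal sketch of this abductive selection rule is given below: every mode of behavior is a potential commander, and the one whose program profile best fits the current environmental information takes command. The feature names and weights are invented purely for illustration.

```python
# Minimal sketch of the abductive selection rule described above: every mode
# of behavior can in principle take command ("redundancy of potential
# command"), and the one whose program profile best matches the current
# environmental information is selected. Feature names and weights are
# invented for illustration only.
MODES = {
    "sleep": {"light": -1.0, "hunger": -0.5, "threat": -1.0},
    "eat":   {"light":  0.2, "hunger":  1.0, "threat": -0.5},
    "work":  {"light":  0.8, "hunger": -0.2, "threat": -0.3},
    "fight_flight": {"light": 0.0, "hunger": 0.0, "threat": 1.0},
}

def select_mode(environment):
    """Abductive step: pick the program that best accounts for the input."""
    def score(profile):
        return sum(weight * environment.get(feature, 0.0)
                   for feature, weight in profile.items())
    return max(MODES, key=lambda mode: score(MODES[mode]))

print(select_mode({"light": 0.1, "hunger": 0.9, "threat": 0.0}))  # -> eat
print(select_mode({"light": 0.2, "hunger": 0.1, "threat": 0.9}))  # -> fight_flight
```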
Generation of Intentional Programs Within the Glial Syncytium First of all, if one speaks of intentional programs, one has to define the formalism on which these programs are based. The Formalism of Negative Language According to Guenther [16], a negative language can be formalized in an n-valent permutation system. Generally, a permutation of n things is defined as an ordered arrangement of all the members of the set taken all at a time, the number of such arrangements being n! (! means factorial). Table 2 shows a quadrivalent permutation system in lexicographic order. It consists of the integers 1, 2, 3, 4. The number of permutations is 24 (4! = 1 × 2 × 3 × 4 = 24). The permutations of the elements can be generated with three different NOT operators N1, N2, N3 that exchange two adjacent (neighbored) integers (values); for example, N1 exchanges the values 1 and 2 wherever they occur. Generally, the number of negation operators (NOT) is the valuedness of the permutation system minus 1. For example, in a pentavalent permutation system four negation operators (N1, N2, N3, N4) (n − 1 = 5 − 1 = 4) are at work. Glial Gap Junctions Could Embody Negation Operators In situ morphological studies have shown that astrocyte gap junctions are localized between cell bodies, between processes and cell bodies, and between astrocytic end-feet that surround brain blood vessels. In vitro junctional coupling between astrocytes has also been observed (Fig. 2). Moreover, astrocyte-to-oligodendrocyte gap junctions have been identified between cell bodies, between cell bodies and processes, and between astrocyte processes and the outer myelin sheath. Thus, the astrocytic syncytium extends to oligodendrocytes, allowing glial cells to form a generalized glial syncytium, also called "panglial syncytium", a large glial network that extends radially from the spinal cord and brain ventricles, across gray and white matter regions, to the glia limitans and to the capillary epithelium. Ependymal cells are also part of the panglial syncytium. Additionally, activated microglia may also be interconnected with astrocytes via gap junctions. However, the astrocyte is the linchpin of the panglial syncytium. It is the only cell that interconnects to all other glia. Furthermore, it is the only one with perisynaptic processes. Gap junctions are channels that link the cytoplasm of adjacent cells and permit the intercellular exchange of small molecules with a molecular mass below about 1-1.4 kDa, including ions, metabolites, and second messengers. IP3 is the most important, since it initiates the calcium wave in the attached cell after it traverses the gap junction channel [19]. In addition to homologous coupling between cells of the same general class, heterologous coupling has been observed between astrocytes and oligodendrocytes. Newman [20] has demonstrated that gap junctions interconnect Müller cell to Müller cell and Müller cell to regular astrocytes in the retina. Homologous and heterologous coupling could serve to synchronize the activities of neighboring cells that serve the same functions. Such coupling could extend the size of a functional compartment from a single cell to a multi-cellular syncytium, acting as a functional network.
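To make the formalism concrete, the sketch below enumerates the quadrivalent permutation system and implements the negation operators, reading N_i as the exchange of the neighboring values i and i+1 wherever they occur in a permutation (one natural reading of the description above).

```python
from itertools import permutations

def N(i, perm):
    """Negation operator N_i: exchange the neighboring values i and i+1
    wherever they occur in the permutation (exchange of values, not positions)."""
    swap = {i: i + 1, i + 1: i}
    return tuple(swap.get(v, v) for v in perm)

# The quadrivalent system: 4! = 24 permutations of the values 1..4,
# in lexicographic order as in Table 2.
system = sorted(permutations((1, 2, 3, 4)))
assert len(system) == 24

p = (1, 2, 3, 4)
print(N(1, p))              # (2, 1, 3, 4): values 1 and 2 exchanged
print(N(3, p))              # (1, 2, 4, 3): values 3 and 4 exchanged
print(N(1, N(1, p)) == p)   # True: each negation operator is an involution

# In an n-valued system there are n-1 negation operators (here N_1, N_2, N_3),
# so each permutation has exactly n-1 neighbors in the permutograph.
neighbors = {perm: [N(i, perm) for i in (1, 2, 3)] for perm in system}
assert all(len(set(v)) == 3 for v in neighbors.values())
```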
Gap junctions are now recognized as a diverse group of channels that vary in their permeability, voltage sensitivities, and potential for modulation by intracellular factors; thus, heterotypic coupling may also serve to coordinate the activities of the coupled cells by providing a pathway for the selective exchange of molecules below a certain size. In addition, some gap junctions are chemically rectifying, favoring the transfer of certain molecules in one direction versus the opposite direction. The main gap junction protein of astrocytes is connexin (Cx) 43, whereas Cx32 is expressed in oligodendrocytes in the CNS, as well as another type of connexin, Cx45. Heterologous astro-oligodendrocyte gap junctions may be composed of Cx43/Cx32, if these connexins form functional junctions [21]. Recent experimental results suggest roles of glial gap junction-mediated anchoring of signaling molecules in a wide variety of glial homeostatic processes [22]. Gap junctions show properties that differ significantly from chemical synapses [23][24][25]. The following enumeration of gap junctional properties in glial syncytia may support the hypothesis that gap junctions could embody negation operators in the sense of a generation of negative language in glial syncytia: First, gap junctions communicate through ion currents in a bidirectional manner, comparable to negation operators defined as exchange relations. Bidirectional information transfer occurs between astrocytes and neurons at the synapse. This is primarily chemical and based on neurotransmitters. It is not certain that all glial gap junction communications are bidirectional, due to rectification. This is a poorly understood area because of extremely severe technical difficulties, especially in vivo [26]. Second, differential levels of connexin expression reflect region-to-region differences in functional requirements for different astrocytic gap junctional coupling states. The presence of several connexins enables different permeabilities to ions and molecules and different conductance regulation. Such differences of gap junctional functions could correspond to the different types of negation operators. Third, neuronal gap junctions do not form syncytia and are generally restricted to one synapse. Fourth, processing within a syncytium is driven by neuronal input and depends on normal neuronal functioning. The two systems are indivisible. It is important to emphasize that neuronal activity-dependent gap junctional communication in the astrocytic syncytium is long-term potentiated. This is indicative of a memory system, as proposed for neuronal synaptic activity by Hebb over six decades ago [27]. Fifth, the diversity of astrocytic gap junctions results in complex forms of intercellular communication because of the complex rectification between such numerous combinatorial possibilities. Sixth, the astrocytic system may normally function to induce precise efferent (e.g., behaviorally intentional or appropriate motor) neuronal responses. Admittedly, the testing of this conjecture is also faced with experimental difficulties. Since gap junctional plaques play a central role in glial networks, let me describe some further details. Electrophysiological analysis of the rate at which functional gap junctional channels accumulate at cell-cell interfaces indicates that plaque formation is a cooperative self-assembly process [28]. Connexin protein has a half-life of only 1.5 to 3.5 h.
Because gap junction assembly appears to be a cooperative self-assembly process, reducing the rate of connexin degradation would lead to a large increase in gap junction formation and intercellular communication [29]. Most importantly, it has been hypothesized that a high turnover rate in combination with a low percentage of functional channels (about 10 % in a plaque) coupling [29] may enable this relative number of cells to compute circles serving as intentional programs. Now, let us tie gap junctional functions and negative language together. Negation operators represent exchange relations between adjacent values or numbers. So they operate like gap junctions bidirectionally. Dependent on the number of values (n) that constitute a permutation system, the operation of different negation operators (n-1) is necessary for the generation of a negative language. With concern to gap junctions, they also show functional differences basically influenced by the connexins. Therefore, different types of gap junctions could embody different types of negation operators. Furthermore, a permutation system represents-like the glial syncytium-a closed network generating a negative language. So we have a biomimetic interpretation of the negative language. But what makes that language so intentional? Glial Generation of Cyclic Pathways in Neuronal Networks Now we are confronted with the question what part of the permutation system proposed could be embodied by the neuronal network. It is hypothesized that the neuronal network could embody the permutations of a permutographic system. For example, a quadrivalent permutation system may be interpreted as a neuronal network. In Table 3, only the 24 permutations (1234,…,4321) are shown. Each permutation formalizes a neuron with a specific computational quality. In parallel, the permutations determine how neurons can be interconnected according to the rule of manyvalent negation operators (N 1 , N 2 , N 3 ) building a neuronal network that embodies a permutation system. Figure 3 shows an example of a pentavalent permutograph [18]. The numbers in circles designate the permutations (n = 5! = 120). The interconnecting lines represent negation operators (1, 2, 3, 4). As already supposed, the glial syncytium could compute various sequences of negation operators in order to test their feasibility in the neuronal permutographic network. This is similar to a kind of intentional pathfinding in neuronal networks. From a biocybernetic point of view, living systems are self-referring systems [30]. On the highest level, they are capable of self-reflection or self-observation. Formally speaking, our brain is permanently generating such reflection cycles. A cycle is not hierarchically ordered, but follows the rule of heterarchy (A-B-C-D-A) [31]. Therefore, the pathfinding of glial intentional programs in neuronal networks is only successful if it results in a closed pathway in form of a cycle. In the case of a cycle that passes all neurons once in the network, we speak of a Hamilton loop. Such loops may occur in the neuronal system associated with gap junctions of the glial syncytium. With concern to the realization of glial intentional programs, there are several possibilities. First, a sequence of negation operators is erroneous, since it is unable to find a cycle. Second, a successful finding of a cycle is not reinforced by appropriate sensory information, so that the intentional program is unfeasible with regard to the environment. 
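Building on the operators defined above, the following sketch searches the quadrivalent permutograph for a Hamilton loop, i.e. a cyclic sequence of negation operators that visits every permutation exactly once before returning to the start; it is a brute-force backtracking illustration, not the procedure used in the original simulation.

```python
def N(i, perm):
    """Negation operator N_i: exchange the values i and i+1 in the permutation."""
    swap = {i: i + 1, i + 1: i}
    return tuple(swap.get(v, v) for v in perm)

def hamilton_loop(start=(1, 2, 3, 4), n_values=4):
    """Backtracking search for a Hamilton loop in the permutograph:
    a cyclic sequence of negation operators visiting every permutation once."""
    total = 1
    for k in range(2, n_values + 1):
        total *= k                       # n! nodes to visit
    operators = range(1, n_values)       # N_1 .. N_{n-1}

    def extend(path, visited, ops):
        if len(path) == total:
            # the loop closes only if one more negation returns to the start
            for i in operators:
                if N(i, path[-1]) == start:
                    return ops + [i]
            return None
        for i in operators:
            nxt = N(i, path[-1])
            if nxt not in visited:
                found = extend(path + [nxt], visited | {nxt}, ops + [i])
                if found:
                    return found
        return None

    return extend([start], {start}, [])

loop = hamilton_loop()
print(loop)        # a sequence of 24 operator indices (one per edge of the loop)
print(len(loop))   # 24: the loop passes all 24 permutations and returns to the start
```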
Third, a cycle generated by a glial intentional program corresponds to a neuronal network that is activated by sensory information. Fourth, humans are able to reject a feasible intentional program, since another program has priority for a period of time. Here, one can see a parallel to Edelman's ''Neural Darwinism'' [32]. He proposed a multi-draft hypothesis where several intentional possibilities are generated, but only the one with the best response is actually generated. Fifth, the possible cyclic pathways in superastronomic complex neuronal networks offer glial intentional programs the chance to find new cyclic pathways in the sense of creativity. In other words, the neuronal system is interpreting the intentional possibilities generated in the astrocytic syncytium. Sixth, supposing that the glial syncytium also has a memory function similar to the neuronal system [33], it could ''self-imprint'' already successful intentional programs in the syncytium, which implies a form of learning. This has been experimentally verified by Pasti et al. [34] who showed that calcium waves in the glial syncytium undergo a form of long-term potentiation based on neuronal activation. Experimentally verified knowledge of glial-neuronal interaction may -at least partly -support this hypothetical model of intentional glial-neuronal interaction. First of all, the communication between astrocytes and neurons occurs bidirectionally [35]. Additionally, a bidirectional feedback between astrocytes and neurons at each synapse results in the coding and integration of calcium waves, as they travel through the glial syncytium. Therefore, each perisynaptic astrocytic filopodal process (several may be present at each synapse) is a member of the syncytium. This gives a huge global distribution form of information processing throughout the brain [26]. Most important to the proposed model of glial-neuronal interaction are experimental findings concerning synaptic activation of astrocytes evoking feedback neuronal synchronization [36]. These researchers observed in hippocampal slices how two or more slow inward currents recorded in the same neuron can have strikingly different kinetics suggesting the presence of multiple release sites from either one or many astrocytes impinging onto an individual neuron. By cooperating with the excitatory synaptic inputs to recruit specific subsets of neurons in the neuronal network, the activation of extrasynaptic NMDA receptors by astrocytic glutamate may represent a flexible mechanism that favors the formation of dynamically associated assemblies of neurons. In fact, glial intentional programs could operate in neuronal networks based on such mechanisms. In other words, successful glial pathfindings in neuronal networks could be interpreted as the formation of dynamically associated assemblies of neurons. Additionally, the glial syncytium is self-organized [37]. Most importantly, one astrocyte can establish through its filopodal processes contact with approximately 145.000 synapses, each of which acts as a subcellular microdomain for information processing via calcium signaling and bidirectional feedback [38]. Additionally, each microdomain independently responds to various combinations of neurotransmitter signals. This occurs at low neuronal activation. Intracellular calcium signals with associated intercellular syncytial transfer of information occur with increasing neuronal synaptic activation [39]. 
But the possible memory-based learning effect in glial syncytia is extremely difficult to study. However, the role of gap junctions in memory formation can be interpreted as follows: Gap junctions could register already generated cyclic pathways in the syncytium (formalized as a sequence of negation operators). Depending on positive feedback from the neuronal network to the glial syncytium based on feasible intentions with regard to environmental information, gap junctions could strengthen their structure, embodying a memory mechanism. If that were the case, then we would have a double memory function of gap junctions: a local embodiment of memories, on the one hand, and a pathway memory determined by gap junctions, on the other hand. This has already been experimentally verified [40]. At this point, one could argue that neuronal mechanisms per se may compute intentional behavior, so that it is not necessary to refer to the glial syncytium. For example, mirror neurons are premotor neurons that fire when the subject performs an object-directed action, and they also fire when the subject observes someone else performing the same class of actions. Because action implies a goal, it has been proposed that mirror neurons provide a neural mechanism for understanding the intentions of others [41]. However, here we deal with the neural computation of the intentions of others, and not with how intentions may be generated in the brain per se. Note that only the latter problem is the topic of the present paper, which hypothesizes that the glial syncytium may play a decisive role. (Table 4: Günther matrix consisting of 24 Hamilton loops. The recoverable portion of the table lists the 24 permutations of the values 1-4 column-wise in lexicographic order, from 1234 to 4321.) Embodiment of Hamilton Loops in Glial Gap Junctional Plaques The underlying formalism has already been described. It is assumed that each glial gap junctional plaque embodies a Hamilton loop whose value is excited in the neuronal system (n plaque chips) dependent on the environmental information computed by the perception systems. In Table 4, a so-called Günther matrix is computed, consisting of 24 Hamilton loops. Formally, it can be shown that it is possible to start at any location of a 4-valued permutation system to generate a Hamilton loop [42]. Figure 4 depicts a plaque which embodies all Hamilton loops (drawn as squares; for the sake of clarity, only 4 squares are shown). Note that McCulloch interpreted the modes of behavior as pairs of opposites, for example wakefulness-sleeping or eating-voiding (urinate-defecate). Formally speaking, this affords a glial gap junctional network of 88 Hamilton loops, consisting of 44 loops in one direction and 44 in the opposite direction. From the biology of the neuronal system, we know that gap junctional plaques decay within a time span of hours (about 4 h) and then reorganize again. It may be important that the embodiment of Hamilton loops is redundant. We assume that modes of behavior necessary for the maintenance of the living organism (like eating) are manifoldly recorded, such that a plaque structure consists not only of about 24 Hamilton loops but of 44, as formally computed. The same may hold for the Hamilton loop with a reverse run. In this manner, each Hamilton loop embodies a structure of 88 Hamilton loops.
Outline of the Implementation of the Reticular Formation in a Robot Brain I have already simulated a computer system for the neuronal networks of the reticular formation of the brainstem [10]. Here, the glial networks of the reticular formation are additionally outlined. Accordingly, a system for the simulation of the whole reticular formation is described. The system comprises a central processing unit and a command computer structured on the basis of a permutograph with a plurality of storage modules [10], with the storage modules corresponding to the elements and the connections between the storage modules to the edges of the permutograph (not shown in Fig. 5). The connections establish internal circuits which correspond to the negation sequences of the permutograph in the form of Hamilton loops, each of which is associated with a behavior pattern of the reticular formation. The command computer is controlled by input computers in which a preprogrammed intended action is related to environmental information. The relation computer integrates the different types of perception systems [43]. Originally, the command computer was positioned in the neuronal network, but this seems not to be necessary if one attributes the generation of intentional programs to the glial network or to glial gap junctional plaques. Hence, in the neuronal network, only an executive computer is at work to execute a mode of behavior (Fig. 5). The Integrative Function of the Reticular Formation Since the reticular formation is interconnected with all other brain regions, especially the limbic system and the cerebral cortex, it is able to integrate its generated action programs with the actual information of the perception and motor systems [45]. Let me give the example of the action modes "look," "forward," "stop," and "retreat." This program sequence is established by a storage module associated with Hamilton loops HL1 to HL4. This run is monitored by the timing control unit. During the program run, the perception computer and the relation computer (Fig. 5) constantly provide new information which is compared with the intended actions, in the following manner: Suppose that during the execution of the intended action program, the perception system detects an obstacle. An object stands in the way (program 3), so it is necessary to retreat (program 4). Then look for a new path (program 1) and move forward (program 2). If, following weighting in the relation computer, this new program sequence is identified as having priority, the relation computer interrupts the command execution in the command computer and switches the latter to the new program sequence (for technical details, see [10]). (Fig. 4: Intentional programs embodied by gap junctions building a plaque. Four Hamilton loops (HL1…HL4) building a gap junctional plaque consisting of n Hamilton loops (HLn) are depicted as described in the text. Each Hamilton loop represents an intentional program. Geometrically, a gap junctional plaque is drawn in squares.) Conclusion A model is proposed based on glial-neuronal interactions in the reticular formation of the brainstem. Formally, a new logic of relations called permutograph is applied. This graph-theoretical formalism uses exchange relations between neighboring values. This model may enable the implementation in a robot brain, as outlined above. The original simulation of the neuronal system in the reticular formation is further elaborated for glial networks.
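The control flow described in this example can be summarized in a toy sketch: a command computer steps through the program sequence of a Hamilton loop while a relation computer weighs incoming percepts and, if a new sequence has priority, interrupts and switches the command computer to it. The weighting rule and the obstacle event below are invented for illustration.

```python
LOOK, FORWARD, STOP, RETREAT = "look", "forward", "stop", "retreat"

class RelationComputer:
    def reprioritize(self, percept):
        """Return a new program sequence if the percept outweighs the current one."""
        if percept == "obstacle":
            # object in the way -> retreat, then look for a new path, then forward
            return [RETREAT, LOOK, FORWARD]
        return None

class CommandComputer:
    def __init__(self, relation, sequence):
        self.relation = relation
        self.sequence = list(sequence)

    def run(self, percepts):
        executed = []
        while self.sequence:
            program = self.sequence.pop(0)
            executed.append(program)            # the executive computer acts here
            percept = percepts.pop(0) if percepts else None
            new_sequence = self.relation.reprioritize(percept)
            if new_sequence:                    # interrupt and switch sequences
                self.sequence = new_sequence
        return executed

commander = CommandComputer(RelationComputer(), [LOOK, FORWARD, STOP, RETREAT])
print(commander.run([None, "obstacle", None, None, None]))
# -> ['look', 'forward', 'retreat', 'look', 'forward']
```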
The networks build gap junctional plaques that may embody n Hamilton circles, each of which represents a mode of behavior generated in the glial system and executed in the neuronal system of the reticular formation. In this way, the whole body could execute various integrative behaviors. Admittedly, the glial network of the reticular formation has not yet been experimentally identified in brain research, although pertinent technical progress is promising. However, robotics may offer a real alternative. If we implement the model proposed here in a robot brain, it should be able to produce different modes of behavior. In this way, we could learn whether we are right or wrong. Since intentional programming is an essential feature of living systems, such robots may also show a "touch of subjectivity." (Fig. 5: Computer system for simulating the reticular formation based on glial-neuronal interactions. The neuronal system essentially consists of the perception computer, integrated by a relation computer [44], and an executive computer for the motor system executing the result of the neuronal and glial systems in the environment. The glial network is implemented as an intended-action computer system and a timing control unit. The computed intentional programs are transferred to a command computer, which determines which intentional program is selected for the executive computer.)
Multi-messenger tests of cosmic-ray acceleration in radiatively inefficient accretion flows The cores of active galactic nuclei (AGNs) have been suggested as the sources of IceCube neutrinos, and recent numerical simulations have indicated that hot AGN coronae of Seyfert galaxies and radiatively inefficient accretion flows (RIAFs) of low-luminosity AGNs (LLAGNs) may be promising sites of ion acceleration. We present detailed studies on detection prospects of high-energy multi-messenger emissions from RIAFs in nearby LLAGNs. We construct a model of RIAFs that can reproduce the observational features of the current X-ray observations of nearby LLAGNs. We then calculate the high-energy particle emissions from nearby individual LLAGNs, including MeV gamma rays from thermal electrons, TeV--PeV neutrinos produced by non-thermal protons, and sub-GeV to sub-TeV gamma rays from proton-induced electromagnetic cascades. We find that, although these are beyond the reach of current facilities, proposed future experiments such as e-ASTROGAM and IceCube-Gen2 should be able to detect the MeV gamma rays and the neutrinos, respectively, or else they can place meaningful constraints on the parameter space of the model. On the other hand, the detection of high-energy gamma rays due to the electromagnetic cascades will be challenging with the current and near-future experiments, such as Fermi and Cherenkov Telescope Array. In an accompanying paper, we demonstrate that LLAGNs can be a source of the diffuse soft gamma-ray and TeV--PeV neutrino backgrounds, whereas in the present paper, we focus on the prospects for multi-messenger tests which can be applied to reveal the nature of the high-energy neutrinos and photons from LLAGNs. Blazars are also believed to be capable of strong neutrino emission [30][31][32]. Recently, IceCube reported the detection of a high-energy neutrino coincident with a flaring activity of a blazar, TXS 0506+056 [33]. Thanks to the ensuing multi-messenger followup campaign (see [34]), the broad-band spectral energy distribution during the flaring period has been determined, which enables one to model the neutrino emission in detail [35][36][37][38][39]. The Ice-Cube Collaboration also found a neutrino flare from this object during 2014 -2015, by re-analyzing their archival data [40]. However, this neutrino flare is not accompa-nied by a corresponding GeV gamma-ray flaring activity [41], which challenges the theoretical modeling of the neutrino emission [36,[42][43][44]. Note that the coincident detection and the archival neutrino flare do not, however, mean that the blazars are the dominant source of the diffuse neutrinos. The stacking analyses of the blazars detected by Fermi result in a non-detection [45][46][47][48], which implies that their contribution is less than ∼ 10 − 30 % of the total astrophysical neutrinos. Also, the absence of event clustering in the arrival distribution of neutrinos indicates that the contributions from flaring blazars should be less than ∼ 10 − 50% [36,49]. Another constraint is provided by the extragalactic gamma-ray background detected by Fermi [50]. When astrophysical neutrinos are produced through pion decay, gamma rays are also produced simultaneously. The generated gamma-ray luminosity is comparable to the neutrino luminosity, and the TeV-PeV gamma rays are cascaded down to the GeV-TeV energy range during their propagation towards Earth. 
In order to avoid overproducing the observed extragalactic gamma-ray background, the neutrino spectral index should be smaller than 2.1 − 2.2 [51], which is in tension with the best-fit spectrum of the observed neutrinos in the shower analyses [4,5,52,53]. Also, the neutrino flux at 1-100 TeV is higher than that above 100 TeV [4,5], although this might be due to the strong atmospheric background [54]. If such a "medium-energy excess" is real, the serious ten-sion with the gamma-ray background is unavoidable, suggesting that the main sources are opaque and hidden in high-energy gamma rays [55]. This argument disfavors many astrophysical scenarios as the origin of these neutrinos, including starburst galaxies [51,[56][57][58][59][60][61][62][63][64], galaxy clusters [51,[65][66][67][68][69], and radio-galaxies [70,71]. We consider high-energy neutrino emission from the vicinity of supermassive black holes (SMBHs) in active galactic nuclei (AGNs) [72][73][74][75][76][77][78]. A luminous AGN hosts a geometrically thin, optically thick accretion disk that produces copious UV photons [79][80][81], and the ratio of the observed UV to X-ray luminosity is very high [82][83][84]. Such target photon fields lead to a hard neutrino spectrum at PeV energies [85,86]. The accretion shock has been considered, but the existence of such a shock has not been supported by numerical simulations so far. On the other hand, recent studies on magnetorotational instabilities suggest that particle acceleration via magnetic reconnections and turbulence is promising in AGN coronae, and Ref. [87] showed that the mysterious 10 -100 TeV component in the diffuse neutrino flux can be explained by the AGN core model of radio-quiet AGNs. It was found that the Bethe-Heitler process is critically important, which led to robust predictions of MeV gamma rays via proton-induced cascades. Low-luminosity AGNs (LLAGNs), however, have different spectral energy distributions, in which an UV bump is absent [88]. This indicates that there is an optically thin, hot accretion flow instead of an optically thick disk. Remarkably, plasma properties of hot AGN coronae and radiatively inefficient accretion flows (RIAF; [89,90]) in LLAGNs seem similar in the sense that the plasmas are expected to be collisionless for ions. It is natural to consider the same type of proton acceleration in both Seyfert galaxies and LLAGNs. Ref. [91] considered the stochastic acceleration expected in such RIAFs of LLAGNs, and showed that the neutrinos produced by the accelerated protons can account for the diffuse astrophysical neutrino background (see also Refs. [92,93] for neutrino emissions from LLAGNs). The LLAGN model can avoid the gamma-ray and the point-source constraints, thanks to its compact emission region and high number density, although Ref. [91] did not provide details of the resulting gamma-ray spectra. In this paper, we describe a refined LLAGN model, and show how multi-messenger information on neutrinos and gamma rays can be used as a test of the proposed LLAGN model. We estimate the physical quantities in the RIAFs of several nearby LLAGNs including the photons from the thermal electrons in Section II. We then estimate the high-energy proton spectra in Section III, and calculate the high-energy neutrino spectra and their detectability in Section IV. We calculate the gamma rays from proton-induced electromagnetic cascades in Section V. Finally, we summarize the results and discuss their implications in Section VI. 
We note that our refined model can reproduce the diffuse MeV gamma-ray and the TeV-PeV neutrino backgrounds simultaneously without overshooting the Fermi data, which is shown in an accompanying paper. In this paper, we focus on the detection prospects of individual nearby LLAGNs. II. PHYSICAL QUANTITIES IN RIAFS We consider a RIAF of size R and mass accretion rate $\dot{M}$ around a SMBH of mass $M_{\rm BH}$. We use the notation $Q_x = Q/10^x$ in cgs units, unless otherwise noted. To represent the physical quantities in the RIAF, it is convenient to normalize R by the Schwarzschild radius, $\mathcal{R} = R/R_S$, where $R_S = 2GM_{\rm BH}/c^2$ is the Schwarzschild radius, G is the gravitational constant, and c is the speed of light. The mass accretion rate is normalized by the Eddington accretion rate, $\dot{m} = \dot{M}/\dot{M}_{\rm Edd}$. According to recent magnetohydrodynamic (MHD) simulations (see e.g., Refs. [94][95][96][97][98][99]), the radial velocity $V_R$, the sound velocity $C_s$, the scale height H, the number density $n_p$, the magnetic field B, and the Alfvén velocity in the RIAF are estimated in terms of the Keplerian velocity $V_K = \sqrt{GM_{\rm BH}/R}$, the viscous parameter $\alpha$ [79], the proton mass $m_p$, the plasma beta $\beta = 8\pi P_g/B^2$, and the gas pressure $P_g = m_p n_p C_s^2$. We assume pure proton composition for simplicity. The magnetic field strength in hot accretion flows depends on the configuration of the magnetic field: $\beta \sim 10-100$ for standard and normal evolution (SANE) flows, whereas $\beta \sim 1-10$ for magnetically arrested disks (e.g., [95,97,100,101]). We use $\beta \sim 3.2$ as a reference value because lower-$\beta$ plasmas are suitable for producing non-thermal particles [102]. For the viscous parameter $\alpha$, SANE models tend to give a lower value, $\alpha \simeq 0.03$ [98,99], while observations of X-ray binaries and dwarf novae suggest $\alpha \simeq 0.1-1$ (see Ref. [103] and references therein). Here, we set $\alpha = 0.1$ as a reference value. Although cooling processes have little influence on the dynamical structure of the RIAF, the thermal electrons supply target photons for photohadronic interactions and $\gamma\gamma$ two-photon annihilation. We calculate the characteristics of the target photons in the RIAF using a method similar to Ref. [91]. We consider synchrotron, bremsstrahlung, and inverse Compton emission processes. The calculation method of the emission spectrum due to each process was discussed in the Appendix of Ref. [91]. Note that this treatment is valid only for flows with Thomson optical depths $\tau_T \approx n_p \sigma_T R < 1$, where $\sigma_T$ is the Thomson cross section. As long as $\dot{m} \gtrsim 10^{-2}\alpha^2 \sim 10^{-4}\alpha_{-1}^2$, the balance between the cooling rate and heating rate of the thermal electrons determines the electron temperature, $\Theta_e = k_B T_e/(m_e c^2)$, where $m_e$ is the electron mass and $k_B$ is the Boltzmann constant [90,104]. Then, the electron heating rate is equal to the bolometric luminosity from the thermal electrons. If the Coulomb collisions with the thermal protons are the dominant heating process, the heating rate is proportional to $n_p^2$, which leads to $L_{\rm bol} \propto \dot{m}^2$. The bolometric luminosity is then phenomenologically given by $L_{\rm bol} \approx \epsilon_{\rm rad,sd}\,(\dot{m}/\dot{m}_{\rm crit})\,\dot{M}c^2$ (Equation (1); see, e.g., Refs. [105,106]), where $\dot{m}_{\rm crit}$ is the normalized critical accretion rate above which the RIAF solution no longer exists [107,108] and $\epsilon_{\rm rad,sd} \sim 0.1$ is the radiation efficiency of the standard thin disk. The critical accretion rate can be expressed as a function of $\alpha$ [104,109]. Following Ref. [109], we represent $\dot{m}_{\rm crit} \sim 3\alpha^2 \simeq 3\times10^{-2}\alpha_{-1}^2$. Note that the dissipation processes in collisionless accretion flows are still controversial.
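For orientation, the sketch below evaluates order-of-magnitude RIAF quantities from textbook self-similar scalings; the numerical prefactors and the Eddington-rate convention are common approximations and are not necessarily the exact coefficients adopted in this model, so the output should be read as indicative only.

```python
import math

# Order-of-magnitude RIAF plasma quantities from textbook self-similar
# scalings (advection-dominated flow approximations). The prefactors below
# are common choices, not necessarily those of the paper.
G     = 6.674e-8          # cm^3 g^-1 s^-2
c     = 2.998e10          # cm s^-1
m_p   = 1.673e-24         # g
M_sun = 1.989e33          # g

def riaf_quantities(M_bh_solar, mdot, R_norm=10.0, alpha=0.1, beta=3.2):
    M_bh = M_bh_solar * M_sun
    R_s  = 2 * G * M_bh / c**2                 # Schwarzschild radius
    R    = R_norm * R_s
    V_K  = math.sqrt(G * M_bh / R)             # Keplerian velocity
    V_R  = 0.5 * alpha * V_K                   # assumed radial-velocity scaling
    C_s  = 0.5 * V_K                           # assumed sound-speed scaling
    H    = 0.5 * R                             # scale height ~ C_s / Omega_K
    Mdot_edd = 1.26e38 * M_bh_solar / c**2     # assumes Mdot_Edd = L_Edd / c^2
    Mdot = mdot * Mdot_edd
    n_p  = Mdot / (4 * math.pi * R * H * m_p * V_R)   # mass conservation
    P_g  = m_p * n_p * C_s**2
    B    = math.sqrt(8 * math.pi * P_g / beta)
    return {"R_s_cm": R_s, "V_K_cm_s": V_K, "n_p_cm3": n_p, "B_G": B}

# Example: a 1e8 solar-mass SMBH accreting at mdot = 1e-3 (well below mdot_crit)
for key, val in riaf_quantities(1e8, 1e-3).items():
    print(f"{key}: {val:.3e}")
```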
If the electrons are directly heated by plasma dissipation processes induced by kinetic instabilities [110][111][112][113][114][115][116][117][118], the electron heating rate may be proportional toṁ, leading to L bol ∝ṁ as assumed in Ref. [91]. In reality, the scaling relation may be located between the two regimes. In this paper, we use Equation (1) for simplicity. Observations give us the X-ray luminosity, L X , which is connected toṁ in our model. Using the bolometric correction factor, κ bol/X , the X-ray luminosity is related to the bolometric luminosity as According to the X-ray surveys, κ bol/X is higher for a higher L bol or λ Edd , where λ Edd = L bol /L Edd is the Eddington ratio. At the low-luminosity end, κ bol/X becomes almost constant, κ bol/X ∼ 5 − 20 [84,119,120]. Using Equations (1) and (2) with a constant κ bol/X , we can writeṁ as a function of observables: where we use κ bol/X = 15 and ǫ rad,sd = 0.1. Thisṁ is less thanṁ crit . Hence, typical LLAGNs with L X 10 42 erg s −1 can host RIAFs. We calculate spectral energy distributions of nearby LLAGNs listed in Table A.3 of Ref. [121], which pro-vides M BH , L X , luminosity distance (d L ), and declination angle (δ) for 70 LLAGNs. The mass accretion rate of the listed LLAGNs is estimated using Equation (3) with κ bol/X = 15. We find that 7 of them have standard disks, i.e.,ṁ >ṁ crit , while the others host RIAFs. Figure 1 shows the target photon spectra from 4 LLAGNs whose parameters and resulting physical quantities are tabulated in Table I and II, respectively. The values of the other parameters are tabulated in Table III. The four LLAGNs differ in M BH andṁ. NGC 3516, NGC 4203, and NGC 5866 have M BH close to 10 8 M ⊙ , while NGC 3998 hosts a SMBH of M BH ∼ 10 9 M ⊙ .ṁ is close to the critical accretion rate for NGC 3516,ṁ ∼ 0.1ṁ crit for NGC 4203 and NGC 3998, andṁ ∼ 0.01ṁ crit for NGC 5866. For all the LLAGNs, the synchrotron emission peaks in the radio band. For LLAGNs witḣ m 10 −2ṁ crit , the inverse Compton emission of the synchrotron photons produces infrared to MeV photons. For lowerṁ cases, the bremsstrahlung emits MeV photons due to inefficient Comptonization. The inverse Compton emission spectrum is hard and smooth for higherṁ, while it is soft and bumpy for lowerṁ due to a high value of electron temperature and a low value of Compton-Y parameter, y ≈ τ T (4Θ e + 16Θ 2 e ) (see Table I for the values ofṁ, Θ e , and τ T ). A high value of M BH with fixeḋ m lowers the peak frequency of the synchrotron emission due to the weak magnetic field, and increases the entire luminosity because of a high net accretion luminosity, Next, we compare the X-ray luminosities obtained by our calculations and observations. Figure 2 shows the relation between the observed 2 − 10 keV X-ray luminosity, L X,obs , and the X-ray luminosity calculated by our model, L X,calc in the same band. Intriguingly, our simple model is in a good agreement with the observations forṁ > 10 −3 . The two luminosities match within a factor of 1.7 in this sample. We stress that we do not adjust the X-ray luminosity but we calculate photon spectra with the one-zone model usingṁ estimated by Equations (2) and (3). For a lower value ofṁ < 10 −3 , the synchrotron emission is more efficient than the inverse Compton emission. This causes a higher value of κ bol/X , resulting in L X,calc < L X,obs as seen in the figure. For nearby low-ionization nuclear emission-like regions (LINERs), the bolometric correction factor is estimated to be κ bol/X ∼ 50 [122]. 
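The inversion from an observed X-ray luminosity to an accretion rate can be sketched as follows, using the quoted values κ_bol/X = 15, ε_rad,sd = 0.1 and ṁ_crit ≃ 3α², together with the assumption Ṁ_Edd = L_Edd/c²; the exact prefactor of Equation (3) may differ, so this only illustrates the square-root dependence of ṁ on L_X.

```python
import math

# Hedged sketch: estimate mdot from an observed X-ray luminosity using
# L_bol = kappa_bol_X * L_X (Eq. 2) and L_bol ~ eps_rad_sd*(mdot/mdot_crit)*Mdot*c^2
# (Eq. 1), under the assumption Mdot_Edd = L_Edd / c^2. Illustrative only.
def mdot_from_Lx(L_x_erg_s, M_bh_solar, kappa_bol_x=15.0,
                 eps_rad_sd=0.1, alpha=0.1):
    L_edd = 1.26e38 * M_bh_solar          # Eddington luminosity, erg/s
    mdot_crit = 3.0 * alpha**2
    L_bol = kappa_bol_x * L_x_erg_s
    mdot = math.sqrt(mdot_crit * L_bol / (eps_rad_sd * L_edd))
    return mdot, mdot_crit

if __name__ == "__main__":
    # Illustrative numbers only (roughly an NGC 4258-like luminosity and mass).
    mdot, mdot_crit = mdot_from_Lx(L_x_erg_s=1e41, M_bh_solar=4e7)
    regime = "RIAF" if mdot < mdot_crit else "standard disk"
    print(f"mdot ~ {mdot:.2e} (mdot_crit = {mdot_crit:.2e}) -> {regime}")
```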
L X,calc is higher with such a higher value of κ bol/X , since it leads to a higher value ofṁ. Hence, a higher value of κ bol/X is more consistent with our model withṁ < 10 −3 . Nevertheless, we use κ bol/X = 15 because LLAGNs withṁ < 10 −3 do not affect the detectability of high-energy neutrinos as shown in Section IV. The bright LLAGNs are detected by the Swift BAT, most of which show hard X-ray spectra. Thus, very interestingly, our model is consistent with the BAT data in terms of luminosity. In addition, RIAF models generally predict that a higherṁ object has a harder photon spectrum in the X-ray band owing to a higher value of the Compton-Y parameter, which is consistent with the observed anti-correlation between the X-ray spectral index and the Eddington ratio [123,124]. Recently, Ref. [125] estimated the cutoff energy in Xray spectrum in NGC 3998 to be around 100 keV using the NuSTAR and XMM-Newton data. However, they just measured a slight softening of the spectrum, which can be reconciled by our RIAF model. The Compton scattering makes a few bumps in the broad-band spectrum, which causes a softening in the X-ray band for NGC 3998 as seen in Figure 1. Here, we do not compare our model to observations in detail, because they are beyond the scope of this paper. In order to obtain the electron temperature more concretely, we need to detect a clear cutoff feature above 100 keV. We plot the photon spectra due to thermal electrons above 10 keV with the sensitivity curve of the proposed future satellite, e-ASTROGAM [126] in Figure 3. The MeV gamma rays will be easily detected for NGC 3516 and NGC 4258, although it is not expected for NGC 3031. The other proposed MeV gamma-ray satellites, AMEGO [127] and GRAMS [128], have similar or better sensitivity in this range. The MeV observations of nearby LLAGNs will provide not only the electron temperature in RIAFs for the first time, but also the crucial test for the LLAGN contribution to the MeV gamma-ray background (see the accompanying paper). 133-135] , or electric potential gaps in the black hole magnetosphere [136,137]. We examine three cases of nonthermal proton spectra. One is the stochastic acceleration model (model A), in which we solve the diffusion equation in momentum space. The others are the powerlaw injection models (models B and C) in which we consider an injection term with a single power-law with an exponential cutoff. Such a power-law model mimics a generic acceleration process. A. Plasma condition For stochastic acceleration via turbulence to work, the relaxation time in the RIAF needs to be longer than the dissipation time, i.e., the plasma is collisionless. The relaxation time due to Coulomb collisions is estimated to be (e.g. Refs. [138,139]) where ln Λ ∼ 20 is the Coulomb logarithm. Interestingly, the relaxation time is independent of the normalized radius, R. The dissipation time in the accretion flow is represented as t diss ∼ α −1 R/V K [106,140]. In the RIAF, this timescale is of the order of the infall time: Equating these two timescales, we obtain the critical radius within which the flow becomes collisionless (see also Ref. [105]): As long asṁ ṁ crit with a fixed value of α 0.1, the RIAF consists of collisionless plasma at R 10R S . Hence, one may naturally expect non-thermal particle production there. 
On the other hand, another accretion regime with a higher luminosity, such as the standard disk [79] and the slim disk [141], are made up of collisional plasma because the density and temperature there are orders of magnitude higher and lower than that in the RIAF, respectively. Therefore, particle acceleration is not guaranteed due to the thermalization via Coulomb collisions. B. Stochastic acceleration model (A) In the stochastic acceleration model, protons are accelerated through scatterings with the MHD turbulence. The proton spectrum is obtained by solving the diffusion equation in momentum space (e.g., Refs. [142,143]): FIG. 2. Relationship between the observed X-ray luminosity, L X,obs , and the X-ray luminosity obtained by the model calculation, L X,calc . The green squares are LLAGNs witḣ m > 10 −3 , while the blue circles are those withṁ < 10 −3 . The dotted line represents L X,obs = L X,calc , and cyan band indicates L X,obs /1.7 < L X,calc < 1.7L X,obs , in which all the green squares are located. where F p is the momentum distribution function (dN/dε p = 4πp 2 F p /c), D εp is the diffusion coefficient, t cool is the cooling time, t esc is the escape time, andḞ p,inj is the injection term to the stochastic acceleration. Considering resonant scatterings with Alfven waves, the diffusion coefficient is represented as [144][145][146] where r L = ε p /(eB) is the Larmor radius, ζ ≈ 8π P k dk/B 2 is the turbulent strength parameter, and q is the power-law index of the turbulence power spectrum. The acceleration time is given by t acc ≈ ε 2 p /D εp . We use a delta-function injection:Ḟ p,inj =Ḟ 0 δ(ε p − ε inj ), wherė F 0 is normalization factor. We normalize the luminosity of the non-thermal protons so that the proton luminosity is a constant fraction of the accretion luminosity: where L εp = ε p t −1 loss dN/dε p is the differential proton luminosity (t −1 loss = t −1 cool + t −1 esc is the total loss rate) and ǫ p is the non-thermal proton production efficiency. We use the Chang & Cooper method to solve the equation [147,148], and calculate the time evolution until steady state is achieved. Note that the normalization is different from that used in Ref. [87], where we normalized the injection such thatḞ 0 = f inj L X,obs /(4π 2 ε 3 inj R 3 ). Here, f inj is the efficiency of the injection to the stochastic acceleration, and f inj needs to be much smaller than ǫ p . C. Power-law injection models (B and C) For models B and C, we consider a generic acceleration mechanism, and the steady-state proton spectrum, N εp = dN/dε p , is obtained by solving the transport equation: whereṄ εp,inj is the injection function. We consider a power-law injection with an exponential cutoff: whereṄ 0 is the normalization factor, s inj is the injection spectral index, and ε p,cut is the cutoff energy. We normalize the injection by We can get an analytic solution of the transport equation (cf., Ref. [149]): This solution includes exponential term, so we need to carefully treat the numerical integration. In the rest of this paper, we show the results using Simpson's rule and 115 grid points per energy decade. We computed the numerical integration with the trapezoidal rule and/or with 50-200 grid points per decade, and confirmed that the error is reduced to less than 30% using Simpson's rule with 100 grid points per energy decade. The maximum achievable energy of protons is determined by the balance between acceleration and loss. 
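For the power-law injection models B and C, the text later notes that the steady-state spectrum reduces to N_εp ≈ Ṅ_εp,inj t_fall when infall dominates the losses. The sketch below implements that shortcut with Simpson's rule on roughly 100 grid points per energy decade, as recommended above; the normalization of the injected power to a fixed proton luminosity L_p is an assumption standing in for the paper's (not reproduced) normalization condition, the units are left schematic, and all numbers in the example call are placeholders.

```python
import numpy as np
from scipy.integrate import simpson

def injection_spectrum(eps_p, s_inj, eps_cut):
    """Power-law injection with an exponential cutoff (unnormalized),
    dN/(d eps_p dt) ~ eps_p**(-s_inj) * exp(-eps_p / eps_cut)."""
    return eps_p**(-s_inj) * np.exp(-eps_p / eps_cut)

def steady_state_spectrum(eps_p, s_inj, eps_cut, t_fall, L_p):
    """Steady-state N_{eps_p} ~ (injection rate) x (infall time), with the injected
    power normalized to a total proton luminosity L_p (an assumed normalization)."""
    q = injection_spectrum(eps_p, s_inj, eps_cut)
    norm = L_p / simpson(eps_p * q, x=eps_p)       # Simpson's rule, as in the text
    return norm * q * t_fall

# ~100 grid points per energy decade, as recommended above (placeholder values)
eps_p = np.logspace(0, 8, 801)                     # proton energy [GeV]
N_eps = steady_state_spectrum(eps_p, s_inj=2.0, eps_cut=1e6, t_fall=1e5, L_p=1e41)
```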
We phenomenologically parametrize the acceleration time with a parameter η_acc for the acceleration timescale. Since the infall is the most efficient loss process for the majority of the LLAGNs, we estimate the cutoff energy by t_acc = t_fall. This treatment approximates the cutoff energy within an error of a factor of a few. D. Escape and cooling timescales High-energy protons escape from the RIAF via advection or diffusion. The advective escape time is equal to the infall time given by Equation (6). The diffusive escape time depends on the magnetic field configuration. According to MHD simulations, the magnetic fields in RIAFs are stretched in the azimuthal direction. The non-thermal protons' mean free path perpendicular to the magnetic field is much shorter than that along the field line (e.g., Refs. [99,133]). In turbulence with a power spectrum of P_k ∝ k^-q, the parallel mean free path and the perpendicular diffusion coefficient can be estimated accordingly (e.g., Refs. [145,146,150,151]). The Larmor radius in the RIAF is estimated with our fiducial parameter set (see Table III), where ε_p,15 = ε_p/PeV. Then, we obtain λ_∥/r_L ≃ 2.3 × 10^4, leading to D_⊥/D_∥ ≃ 1.9 × 10^-9. Hence, we ignore the diffusive escape process in this paper, i.e., we use t_esc = t_fall. The value of D_⊥ could be larger due to possible cross-field diffusion. To understand the behavior of high-energy protons in configuration space, much more elaborate calculations would be required, which are beyond the scope of this paper (see Ref. [99] for related discussion). As the proton cooling processes, we take into account pp inelastic collisions, photomeson production, proton synchrotron emission, and the Bethe-Heitler process. The pp cooling rate is t_pp^-1 = n_p σ_pp κ_pp c, where σ_pp and κ_pp are the cross section and inelasticity for pp interactions, respectively. σ_pp was given in Ref. [152], and κ_pp is set to 0.5. The photomeson production rate is obtained from the usual integral over the target photon spectrum, where γ_p = ε_p/(m_p c^2), ε_p,th ≃ 145 MeV is the threshold energy for photomeson production, ε_γ is the photon energy in the proton rest frame, and σ_pγ and κ_pγ are the cross section and inelasticity for photomeson production, respectively. We use fitting formulas based on GEANT4 for σ_pγ and κ_pγ (see Ref. [11]). The Bethe-Heitler cooling rate is also estimated by Equation (21) using σ_BH and κ_BH instead of σ_pγ and κ_pγ, respectively. We use the fitting formulas given in Refs. [153] and [154] for σ_BH and κ_BH, respectively. The proton synchrotron cooling rate is estimated in the standard way, and the total cooling rate is given by the sum of all the cooling rates. Figure 4 shows the loss and acceleration rates as a function of proton energy for NGC 3516, NGC 4258, and NGC 3031, which have ṁ ∼ 0.9 ṁ_crit, ṁ ∼ 0.3 ṁ_crit, and ṁ ∼ 0.04 ṁ_crit, respectively. For NGC 3516, t_fall and t_pp are comparable in the entire energy range. The photomeson production is effective above ε_p ≳ 30 PeV. The synchrotron and Bethe-Heitler losses are always subdominant in the range of our interest. On the other hand, for NGC 4258 and NGC 3031, the infall timescale is always dominant below the cutoff energy due to the lower ṁ. Note that the critical energy at which t_acc = t_loss is very low for model A compared to the other models. Such a lower critical energy is required to achieve a cutoff energy similar to the other models (see Figure 3) because the stochastic acceleration results in a hard spectrum with a gradual cutoff (cf. Refs. [91,155]).
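A compact way to reproduce the timescale comparison of Figure 4 is sketched below. The pp rate t_pp^-1 = n_p σ_pp κ_pp c with κ_pp = 0.5 follows the text (σ_pp is frozen at ~60 mb instead of the energy-dependent fit of Ref. [152]); the proton synchrotron time uses the standard expression t_syn = 6π m_p^4 c^3/(m_e^2 σ_T ε_p B^2); the infall time uses t_fall ∼ α^-1 R/V_K quoted earlier; and the acceleration time adopts an assumed gyro-factor form t_acc = η_acc r_L/c, since the paper's phenomenological expression is not reproduced here. All input values are illustrative.

```python
import numpy as np

# cgs constants
C, SIGMA_T = 2.998e10, 6.652e-25
M_E, M_P, G_N, E_CHG = 9.109e-28, 1.673e-24, 6.674e-8, 4.803e-10
GEV = 1.602e-3                                   # erg per GeV

def t_pp_inv(n_p, sigma_pp=6.0e-26, kappa_pp=0.5):
    """pp cooling rate t_pp^-1 = n_p sigma_pp kappa_pp c (sigma_pp frozen at ~60 mb)."""
    return n_p * sigma_pp * kappa_pp * C

def t_syn_inv(eps_gev, B):
    """Standard proton synchrotron cooling rate, the inverse of 6 pi m_p^4 c^3 / (m_e^2 sigma_T eps B^2)."""
    return M_E**2 * SIGMA_T * (eps_gev * GEV) * B**2 / (6.0 * np.pi * M_P**4 * C**3)

def t_fall(R, M_bh, alpha=0.1):
    """Infall time ~ alpha^-1 R / V_K, with the Keplerian velocity V_K = sqrt(G M / R)."""
    return R / (alpha * np.sqrt(G_N * M_bh / R))

def t_acc(eps_gev, B, eta_acc=1.0e4):
    """Assumed gyro-factor parametrization t_acc = eta_acc r_L / c (illustrative only)."""
    return eta_acc * (eps_gev * GEV) / (E_CHG * B) / C

# locate the cutoff energy where acceleration stops beating the total loss rate
eps = np.logspace(2, 10, 400)                    # proton energy [GeV]
loss = t_pp_inv(1e8) + t_syn_inv(eps, 10.0) + 1.0 / t_fall(1e14, 2e41)
i_cut = np.searchsorted(t_acc(eps, 10.0) * loss, 1.0)
print("cutoff ~ %.2e GeV" % eps[min(i_cut, len(eps) - 1)])
```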
To understand the parameter dependences of each timescale, we write t −1 pγ ∼ n εγ κ pγ σ pγ c, where n εγ ≈ L εγ /(2πR 2 cε γ ) is the differential photon number density and L εγ is the differential photon luminosity. Then, if we fix the parameters in Table III, the parameter dependence of the loss rates are t −1 Interestingly, all the loss rates are proportional to M −1 BH , while they have a differentṁ dependence. For the case withṁ ∼ṁ crit as in NGC 3516, t −1 pγ t −1 pp and t −1 pp ∼ t fall below the cutoff energy. Since a lower value ofṁ makes t fall shorter and t pγ longer relative to t pp , we can approximately use t fall as the energy loss timescale, and pp collisions are the main channel of neutrino production forṁ ṁ crit . We describe analytic estimates with this approximation in Section IV. Figure 3 shows the resulting proton spectrum, , and the injection proton spectrum, E p F Ep,inj = ε 2 pṄεp,inj /(4πd 2 L ), where E p is the energy in the observer's frame. Since we focus on the very nearby objects, we ignore the effect of redshift, i.e., E p ≈ ε p . The parameter sets are tabulated in Tables I and III. We choose these parameter sets so that our model can reproduce the diffuse MeV gamma-ray and TeV-PeV neutrino intensities (see the accompanying paper). We also tabulate the total proton luminosity, L p = L εp dε p , and pressure ratio of the non-thermal to thermal components, P CR /P g = ε p N εp dε p /(6πR 2 Hm p n p C 2 s ). To achieve the observed diffuse neutrino intensity, we need P CR /P g ∼ 0.1 for models A and B, while P CR /P g ∼ 0.5 for model C. In model A, the stochastic acceleration model leads to a hard spectrum below the critical energy, which is Above the critical energy, the spectrum gradually becomes softer. For NGC 3516, the photomeson production is efficient above ε p ≃ 10 6 GeV, which makes a sharp cutoff. For NGC 4258 and NGC 3031, the cooling processes are inefficient. This leads to a more gradual cutoff, resulting in a higher peak energy than that for NGC 3516. In models B and C, the resulting spectra are very similar to the injection spectra, because the infall is the dominant loss process. In this case, the proton number spectrum in the RIAF is written as N εp ≈Ṅ εp,inj t fall , leading to L εp ≈ ε pṄεp,inj . For NGC 3516, we can see a slight difference between the two spectra due to the pp cooling. Note that we cannot observe this flux of protons on Earth because of the energy loss processes and deflection by interstellar and intergalactic magnetic fields. A. Meson cooling We numerically calculate the neutrino production through both photomeson and hadronuclear interactions. The neutrinos are produced by decay of pions and muons. In general the high-energy neutrinos can be suppressed by meson and muon cooling, when their lifetimes are longer than the cooling time. Here, we estimate the hadronic cooling time for pions and synchrotron cooling for pions and muons. The hadronic cooling rate for pions is estimated to be t −1 πp ∼ n p σ πp κ πp c, where σ πp ∼ 50 mb and κ πp ∼ 0.8 are the pion-proton interaction cross section and inelasticity, respectively. The critical energy above which the pion hadronic cooling is efficient is where m π and τ π0 are the mass and decay time of pions, respectively. Thus, we can safely ignore the pion hadronic cooling. The synchrotron cooling time for a particle i is written as t i,syn ≈ 6πm 5 i c 5 /(m 2 e σ T cε 2 i B 2 ), where m i and ε i are the mass and energy of the particle. 
Equating the lifetime and synchrotron cooling time, we can estimate the critical energies above which the synchrotron cooling is effective to be ε ν,πsyn = 3πm 5 π c 5 /(8m 2 e σ T B 2 τ π ) ≃ 1.0 × 10 17 R Here, m µ and τ µ0 are the mass and decay time of muons, respectively. Since we are interested in TeV -PeV neutrinos, we will ignore the cooling effect by mesons and muons. B. Neutrino spectrum To calculate high-energy neutrino spectra from pp interactions, we use the method given by Ref. [156], where the pp-neutrino spectrum, L pp,εν = ε ν t −1 pp dN/dε ν , is given by where H ν (ε ν /ε p , ε p ) is the spectral shape of the neutrinos from mono-energetic protons of ε p (see Ref. [156] for details). This method is valid only for ε ν > 100 GeV. Since our scope is to discuss the detection prospects by IceCube-like detectors, we focus on neutrinos above 100 GeV. For pγ neutrinos, we approximately calculate the spectrum using the semi-analytic formalism of Refs. [17,29], including the physical processes described in the previous section. Ignoring the effects of the meson cooling, the pγ-neutrino spectrum is given by where ε ν ≈ 0.05ε p and f pγ ≈ t −1 pγ /t −1 loss . The neutrino flavor ratio at the sources is (ν e , ν µ , ν τ ) = (1, 2, 0) owing to the inefficient muon and pion cooling. The neutrinos change their flavors to (ν e , ν µ , ν τ ) = (1, 1, 1) during the propagation to the Earth through neutrino oscillation, and thus, the muon neutrino flux is a factor of 3 lower than the total neutrino flux. Figure 3 shows the resulting muon neutrino fluxes, where L εν = L pp,εν + L pγ,εν . Since the pp neutrino decay spectrum is softer than the parent proton spectrum for models A and B, these two models give similar neutrino spectral shapes. The neutrinos produced by pp interaction are dominant for the low energy range, but the photomeson production gives a comparable contribution around the cutoff energy for the cases withṁ 0.01 (NGC 3516 and NGC 4258). For NGC 3031,ṁ is too low to effectively create neutrinos via photomeson production. C. Analytic estimate We can approximately derive analytic estimates of the neutrino flux from LLAGNs for the power-law injection cases. When infall is the dominant loss process, we can write N εp ≈ t fallṄp,inj , as discussed in the previous section. Then, the proton luminosity is approximated to be and the normalization is determined by L εp dε p = ǫ pṁ L Edd ∝ ǫ pṁ M BH . The neutrino production efficiency is given by where we use σ pp ∼ 60 mb and κ pp ∼ 0.5 for the estimate, which corresponds to the values for ε p ∼ 1 − 10 PeV. f pp becomes unity around the saturation accretion rate, With our reference parameters, this accretion rate is very close to the critical accretion rate,ṁ crit . The all-flavor differential neutrino luminosity is approximated to be where ε ν ≈ 0.04ε p . Interestingly, the neutrino luminosity is proportional to L X and ǫ p , and independent of the other parameters. The differential muon neutrino energy flux is computed using Equations (12), (25), (26), (27), and (29). This method approximates the peak ppneutrino flux within an error of factors of 2 and 1.3 for s inj = 1.0 and 2.0, respectively. D. Detectability of neutrinos from nearby LLAGNs We evaluate the number of through-going muon track events following Refs. [49,157]. 
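Before turning to the event-rate calculation, the analytic estimate above can be sketched numerically. The pp efficiency f_pp = n_p σ_pp κ_pp c t_fall (capped at unity) and the values σ_pp ∼ 60 mb, κ_pp ∼ 0.5 follow the text, as does the factor 1/3 for the muon-neutrino share after oscillations; the assumption that roughly half of the pp energy loss goes into neutrinos is an illustrative prefactor rather than the paper's exact expression.

```python
import numpy as np

C = 2.998e10

def f_pp(n_p, t_fall, sigma_pp=6.0e-26, kappa_pp=0.5):
    """pp production efficiency f_pp = n_p sigma_pp kappa_pp c t_fall, capped at 1
    (sigma_pp ~ 60 mb and kappa_pp ~ 0.5 as quoted in the text)."""
    return min(1.0, n_p * sigma_pp * kappa_pp * C * t_fall)

def nu_mu_energy_flux(eps_L_eps, n_p, t_fall, d_L):
    """Per-flavor (muon) neutrino energy flux at Earth from a differential proton
    luminosity eps_p * L_{eps_p} [erg/s].  Assumes ~1/2 of the pp energy loss goes
    to neutrinos (illustrative prefactor) and the post-oscillation 1:1:1 flavor
    ratio, i.e. a muon-neutrino share of 1/3."""
    return 0.5 * f_pp(n_p, t_fall) * eps_L_eps / 3.0 / (4.0 * np.pi * d_L**2)

# example with placeholder numbers: a source at 10 Mpc
flux = nu_mu_energy_flux(1e41, n_p=1e8, t_fall=1e5, d_L=10 * 3.086e24)
print(f"E^2 Phi(nu_mu) ~ {flux:.2e} erg cm^-2 s^-1 at E_nu ~ 0.04 E_p")
```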
We estimate the differential detection rate of through-going tracks, where E_ν is the incoming neutrino energy, E_µ is the muon energy, N_A is the Avogadro number, A_det is the muon effective area, σ_CC is the charged-current cross section, τ_νN is the optical depth to neutrino-nucleon scatterings in the Earth, and the denominator on the right-hand side is the muon energy-loss rate (see Ref. [49] and references therein). This method can reproduce the effective area reported by Ref. [158]. We evaluate the background including both the conventional and the prompt atmospheric muon neutrinos. Figure 5 shows the expected number of through-going track events, N_µ(> E_µ), for a ten-year operation with IceCube and IceCube-Gen2 for NGC 3516, NGC 4258, and NGC 3031. IceCube cannot detect signals from individual objects due to its lower effective area. IceCube-Gen2 can detect the signals from NGC 4258, while it is challenging to detect NGC 3516. Although NGC 3516 has a neutrino flux comparable to that of NGC 4258, its higher declination leads to a lower N_µ(> E_µ) due to the Earth attenuation, especially in model B. The neutrino emission from NGC 3031 is too faint to be detected even with IceCube-Gen2. Since the neutrino flux is roughly proportional to the X-ray flux, we place the LLAGNs listed in Ref. [121] in order of the X-ray flux, as shown in Table I, and estimate the number of track events above E_µ by stacking them. Figure 6 shows the resulting event number for a 10-year operation with IceCube-Gen2 and IceCube by stacking 10 LLAGNs and 30 LLAGNs. With IceCube-Gen2, we expect 3 - 7 events above 30 TeV, where the background is negligible. Interestingly, the neutrinos from the ten brightest LLAGNs will be sufficient for the detection, because stacking more LLAGNs leads to an increase of the atmospheric background. With the current IceCube experiment, the effective area and angular resolution are 10^{2/3} times smaller and 3 - 5 times larger than those of IceCube-Gen2, respectively. Then, the event number is about 4 - 5 times lower and the background rate is 10 - 30 times higher, making the detection of neutrinos more challenging, as seen in the figures. V. CASCADE GAMMA-RAY EMISSION Hadronuclear and photohadronic processes produce very-high-energy (VHE) gamma rays through neutral pion decay and high-energy electron/positron pairs through charged pion decay and the Bethe-Heitler process. The VHE gamma rays are absorbed by soft photons through the γγ → e^+e^- process in the RIAF, and produce additional high-energy electron/positron pairs. The high-energy e^+e^- pairs also emit gamma rays through synchrotron, inverse Compton scattering, and bremsstrahlung, leading to electromagnetic cascades. We calculate the cascade emission by solving the kinetic equations of photons and electron/positron pairs (see Refs. [87,159,160]), where n^i_{ε_i} is the differential number density (i = e or γ), ṅ^{(xx)}_{ε_i} is the particle source term from process xx (xx = IC (inverse Compton scattering), γγ (γγ pair production), syn (synchrotron), or ff (bremsstrahlung)), Ṅ^{inj}_{ε_i} is the injection term from the hadronic interactions, and P_{yy} is the energy-loss rate for the electrons from process yy (yy = IC, syn, ff, or Cou (Coulomb collisions)). We calculate the cascade spectra using spherical coordinates, while the other calculations are made in cylindrical coordinates. The effect of geometry has little influence on our results. Here, we approximately treat the injection terms of photons and pairs from hadronic interactions.
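As a rough companion to the detection-rate discussion above, the stacked event count can be approximated by folding the per-source ν_µ flux with an effective area and the Earth attenuation. The sketch below uses a hypothetical effective-area function and toy fluxes; the full calculation additionally tracks the charged-current cross section and muon energy losses as in Refs. [49,157].

```python
import numpy as np

def expected_events(e_nu, phi_nu, a_eff, t_obs, tau_nuN):
    """Crude expectation N ~ T * integral dE phi(E) A_eff(E) exp(-tau_nuN).
    phi_nu is the differential nu_mu flux [GeV^-1 cm^-2 s^-1]; a_eff is a
    user-supplied, hypothetical effective area [cm^2]."""
    integrand = phi_nu * a_eff(e_nu) * np.exp(-tau_nuN)
    return t_obs * np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(e_nu))

def stacked_events(sources, a_eff, t_obs):
    """Sum the expectation over per-source (energy grid, flux, Earth opacity) tuples."""
    return sum(expected_events(e, f, a_eff, t_obs, tau) for e, f, tau in sources)

# toy example: two identical sources with an E^-2-like flux above ~30 TeV, 10 years
e_nu = np.logspace(4.5, 8, 200)                        # GeV
phi = 1e-7 / e_nu**2                                   # hypothetical flux normalization
sources = [(e_nu, phi, 0.1), (e_nu, phi, 0.3)]
a_eff = lambda e: 1e4 * (e / 1e5) ** 0.4               # hypothetical effective area
print(f"expected tracks ~ {stacked_events(sources, a_eff, 10 * 3.156e7):.1f}")
```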
The injection terms for photons and pairs consist of the sum of the relevant processes:ṅ inj εγ =ṅ εe . We approximate the terms due to Bethe-Heitler and pγ processes to be where ε γ ≈ 0.1ε p and ε e ≈ 0.05ε p for photomeson production, and ε e ≈ (m e /m p )ε p for Bethe-Heitler process. For the injection terms from pp interactions, see Ref. [160]. We plot proton-induced cascade gamma-ray spectra in Figure 3. A sufficiently developed cascade emission generates a flat spectrum below the critical energy at which γγ attenuation becomes ineffective. The optical depth to the electron-positron pair production is estimated to be where ε γ is the gamma-ray energy, , and H(x) is the Heaviside step function [161]. We tabulate the values of the critical energy, ε γγ , at which τ γγ = 1 in Table II. We can see flat spectra below the critical energy. Note that the tabulated values are approximately calculated using a fitting formula, while the cascade calculations are performed with the exact cross section. We overplot the Fermi LAT sensitivity curve in the high galactic latitude region with a 10-year exposure obtained from Ref. [126]. The predicted fluxes are lower than the sensitivity curve for all the cases. The Cherenkov Telescope Array (CTA) has a better sensitivity above 30 GeV than LAT, but the cascade gamma-ray flux is considerably suppressed in the VHE range due to the γγ attenuation. For a loweṙ m object that has a higher value of ε γγ , such as NGC 5866, the cascade flux is too low to be detected by CTA. Therefore, it would be challenging to detect the cascade gamma rays with current and near-future instruments, except for Sgr A*. Sgr A* has two distinct emission phases: the quiescent and flaring states (see Ref. [162] for review). The X-ray emission from the quiescent state of Sgr A* is spatially extended to ∼ 1", which corresponds to 10 5 R S for a black hole of 4 × 10 6 M ⊙ [163]. Hence, our model is not applicable to the quiescent state. On the other hand, the flaring state of Sgr A* shows 10 − 300 times higher flux than the quiescent state with the time variability of ∼ 1 h [164]. This variability timescale implies that the emission region should be 10 2 R S . However, the value ofṁ for the brightest flare estimated by Equation (3) is less than 10 −4 . Since our model is not applicable to such a low-accretion-rate system (see Section II), we avoid discussing it in detail. The detailed estimate should be made in the future (see Ref. [165] for related discussion). VI. SUMMARY We have investigated high-energy multi-messenger emissions, including the MeV gamma-rays, high-energy gamma-rays, and neutrinos, from nearby individual LLAGNs, focusing on their multi-messenger detection prospects. We have refined the RIAF model of LLAGNs, referring to recent simulation results. Our one-zone model is roughly consistent with the observed X-ray features, such as an anti-correlation between the Eddington ratio and the spectral index. RIAFs withṁ 0.01 emit strong MeV gamma rays through Comptonization, which will be detected by the future MeV satellites such as e-ASTROGAM, AMEGO, and GRAMS. We have also calculated the neutrino and cascade gamma-ray spectra from accelerated protons. We considered three models for the proton spectrum. In model A, we considered stochastic acceleration by turbulence and solve the diffusion equation in momentum space. In models B and C, we do not specify the acceleration mechanism and assumed an injection term with a power-law and an exponential cutoff. 
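The location of the flat-spectrum break can be estimated from the condition τ_γγ = 1 discussed above. The sketch below uses the common delta-function approximation in which the opacity is dominated by target photons near the pair-production threshold with an effective cross section ≈ 0.2 σ_T, together with the text's own estimate n_εγ ≈ L_εγ/(2π R^2 c ε_γ) for the target photon density; this is an approximation, whereas the paper uses the exact cross section, and the numbers below are toy values.

```python
import numpy as np

SIGMA_T, C = 6.652e-25, 2.998e10
M_E_C2 = 8.187e-7                     # electron rest energy [erg]

def n_target(eps, nuLnu, R):
    """Differential target photon density, n_eps ~ L_eps / (2 pi R^2 c eps),
    for a flat nu-L-nu target field with eps * L_eps = nuLnu."""
    return nuLnu / eps / (2.0 * np.pi * R**2 * C * eps)

def tau_gg(eps_gamma, R, nuLnu):
    """Delta-approximation opacity: tau ~ 0.2 sigma_T R eps_t n(eps_t),
    with target photons at the threshold energy eps_t ~ (m_e c^2)^2 / eps_gamma."""
    eps_t = M_E_C2**2 / eps_gamma
    return 0.2 * SIGMA_T * R * eps_t * n_target(eps_t, nuLnu, R)

# break energy where tau_gg = 1 (toy numbers for a faint target field)
grid = np.logspace(-4, 6, 400)        # gamma-ray energy [erg]
taus = tau_gg(grid, R=1e14, nuLnu=1e40)
idx = np.searchsorted(taus, 1.0)
print("eps_gammagamma ~ %.2e erg" % grid[min(idx, len(grid) - 1)])
```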
Using such proton spectra, we have numerically calculated the neutrino spectra, taking account of the relevant cooling processes and the decay spectra. Since pp inelastic collisions provide the main channel for high-energy neutrino production, the neutrino spectrum follows the proton spectrum. Close to the cutoff energy, ε ν ∼ 100 TeV, the photomeson production is as efficient as pp interactions, leading to a comparable contribution to the neutrino flux. With a few to 10 LLAGNs stacked, a 10-year operation of IceCube-Gen2 will enable us to detect a few to several neutrinos from LLAGNs, otherwise they will put meaningful constraints on the parameter space. On the other hand, the cascade emission is difficult to detect with Fermi or CTA. Bright objects have a lower γγ cutoff energy, while objects with a higher value of the cutoff energy are too dim to produce a detectable signal. AGN coronae and RIAFs are thought to be promising sites of particle acceleration, and accompanying papers suggest the AGN cores as the main origin of the mysterious 10 -100 TeV component in the diffuse neutrino flux observed in IceCube [87]. The model predicts that both Seyfert galaxies and LLAGNs are promising sources of high-energy neutrinos and MeV gamma rays. Our studies suggest the relevance of multi-messenger searches for LLAGNs whether the 10 -100 TeV neutrinos mainly come from Seyfert galaxies or LLAGNs.
A COMPARATIVE STUDY ON EVALUATING THE OUTCOME OF DISPLACED ISOLATED MEDIAL MALLEOLUS FRACTURE MANAGED WITH TENSION BAND WIRING (TBW) VERSUS MALLEOLAR SCREWS FIXATION Objective: The Ankle fractures are becoming more prevalent as a result of increased road traffic accidents and sports injuries. There are various modalities of treatment available for Medial Malleolus fractures. Undisplaced fractures are managed conservatively with slab or cast and displaced fractures are fixed with screws, k wires, anchors, tension wiring and plates. The main objective of the study is to compare the clinical outcomes of Tension band wiring versus Malleolar screws in managing Displaced Isolated Medial Malleolus fractures. Methods: This is a cross-sectional study conducted in the Department of Orthopaedics in Kurnool Medical College with 35 patients from November 2022 to November 2023 over one year with displaced isolated Medial Malleolus fractures. Postoperatively the patients are evaluated based on clinical and radiological examinations at one, three, and six months, respectively. Results: The patients are evaluated with Baird and Jackson scoring system postoperatively, where Excellent score: 8(47%) in group 1 and 7(38.8%) in group 2; Good score: 8(47%) in group 1 and 8(44.4 %) in group 2; Fair score: 1(5.8%) in group 1 and 2(11.1%) in group 2; Poor score: 0 in group 1 and 1(5.5%) in group 2. Hence excellent and good results are obtained in 16(94%) patients in group 1(TBW) and 15(82.2) patients in group 2(Malleolar Screws). Conclusion: Tension band wiring can be a better option than Malleolar screws in fixation of Displaced Isolated Medial Malleolus fractures. INTRODUCTION The Ankle joint is a modified hinge synovial joint, which transmits the body weight to the ground and helps in Ambulation [1,2].It is known as the mortise joint [3], which is formed by the Tibial plafond (mortise) and the Dome of Talus (tenon) and supporting medial and lateral collateral ligaments and distal tibiofibular syndesmosis.The Ankle joint is a complex joint which has bony and ligamentous parts which are more prone to injuries following trivial or low-impact injuries [1].Ankle injuries show a bimodal distribution of age [2].Among all those structures, in this, we have concentrated on isolated medial malleolus fractures.The management of medial malleolus fracture is based on the type or pattern of fracture and socioeconomic status.The Denis-Weber classification system is more practical system of classifying ankle fractures [4,5].The Lauge-Hansen classification is mechanistic classification for Ankle fractures [5,6].The treatment varies for both un-displaced and displaced fractures, where un-displaced fractures are managed conservatively with slab or cast.Displaced fractures will have more chances of soft tissue interposition may result in non-union; hence it needs surgical fixation. The main aims of surgical management are accurate reduction of fracture, maintenance of medial joint space, anatomical reduction of Talus beneath the Tibial plafond and addressing soft tissues. 
Various surgical modalities for treating Medial Malleolus fractures are available, which include Tension Band Wiring (TBW), screws/plate fixation, k wires and suture anchors.TBW is a clinically accepted method for displaced Medial Malleolus fractures if the distal fragment is too small.Malleolar Screws fixation is done for stabilization of vertical shear fracture of the Medial Malleolus.In this study, the clinical and radiological outcome of Malleolar fractures is evaluated by comparing the management of the TBW and Malleolus screws. Surgical techniques Under spinal anaesthesia, the patient is placed in the supine position; surgical parts are scrubbed and draped.An anteromedial incision of 6 cm was given over the fracture site which is slightly curved anteriorly at the distal end.The main advantages of this incision are 1.Articular surfaces are completely visualized.2. Tibialis posterior tendon and overlying sheath are preserved.3. The Saphenous vein and nerve are protected.Then, the fracture site is exposed.Saline wash is given to remove debris and hematoma and interposing soft tissue is released.Fracture ends are reduced with the help of reduction clamps and checked under the C-arm. Tension band wiring (TBW) Two 1.8 mm Kirschner wires are inserted perpendicular to the fracture line, anterior and posterior to the clamp.A 3.5 mm drill hole is made 3 cm proximal to the fracture site and an unicortical screw is passed.An 18 Gauge, stainless steel wire is applied between the Kirschner wires and cortical screw, in fig. of eight fashion and tightened.Fracture compression and ankle movements are checked. Malleolar screw fixation Two 1.8 mm Kirschner wires are inserted perpendicular to the fracture line, anterior and posterior to the clamp.After drilling with two 3.2 mm drill bit, two 4.5 mm malleolar screws were passed and tightened for compression of the fracture.Ankle movements are checked. A Wound wash was given and closed in layers and dressing done. Post-operatively intravenous antibiotics should be given for 5 d followed by oral antibiotics until suture removal supported by limb elevation.Post-op X-rays were taken (AP, Lateral and Mortise views).Below Knee slab was kept until swelling subsides.Nonweight bearing for 6 w followed by physiotherapy was done to get functional improvement in movements. Follow-up All the patients are reviewed at one, three, six-month intervals.Clinical and radiological examinations of the patients were done and evaluated based on Baird and Jackson scoring. RESULTS On clinical examination of the patients, there is no significant difference between the groups in terms of Age (mean age-37 y), Gender (Male predominance), Side involved (Right side), Etiology or Mode of injury (RTA). On Radiological examination of both groups confirms that there is an anatomical reduction with stable fixation in all the 35 patients who are treated with TBW and Malleolar screws fixation.A series of radiographs, which are taken at 4-8 w, 12-16 w and 20-24 w, the average time for fracture union is 12.3 w in group 1(TBW) and 14.5 w in group 2(Malleolar Screws) patients. No patients have non-union or loss of reduction in the groups.The chances of wound infection over the operated site are slightly higher in patients managed with Malleolar Screws fixation compared to patients managed with TBW. 
DISCUSSION Medial Malleolus fracture is an intra-articular fracture; hence, accurate reduction is indicated to avoid complications. In our study, we found that the mean age of incidence of Medial Malleolus fractures is 37 years; similar observations are seen in the Mohammed et al. study [7], with male predominance, more frequent involvement of the right side, and RTA as the most common mode of injury. Postoperatively, the mean time for fracture union in both TBW and Malleolar screws fixation was evaluated, and we found the average time for fracture union is 12.3 w in TBW and 14.5 w in Malleolar screws fixation. Previous studies [12,13] also observed a higher percentage of favourable outcomes in patients managed with TBW compared to patients managed with Malleolar Screws fixation. In our present study, even after proper post-operative care, 2 patients from group 1 (TBW) and 3 patients from group 2 (Malleolar Screws fixation) developed wound infections. Similarly, in the Shenkeshi et al. study [11], wound infections were observed in 3 cases among patients managed with TBW and 2 cases among patients managed with Malleolar Screws fixation. The study limitations are loss of follow-up of patients, selection bias and the short study period; the major limitation is the loss of patients to follow-up during the study period. CONCLUSION Based on the results of our present study, Medial Malleolus fractures occur at a mean age of 37 y with slight male predominance, and the most common mode of injury is RTA. As far as surgical management is concerned, patients managed with TBW have better outcomes with minimal complications compared to Malleolar Screws fixation. This is a cross-sectional study conducted in the Department of Orthopaedics, Government General Hospital, Kurnool on 35 displaced isolated Medial Malleolus fracture patients from November 2022 to November 2023. The study was approved by the Institutional Ethical Review Committee (IEC-KMC-GGH, dated 27/08/2018), and written informed consent was obtained from all the participants. Inclusion criteria of the study are age > 18 y, displaced isolated Medial Malleolus fractures, patients fit for surgery, and patients willing to participate in the study and follow up. Exclusion criteria of the study are age < 18 y and > 60 y, pathological fractures, fractures associated with compound grade 2 and 3 injuries, tri-malleolar fractures, patients not willing for surgery, and patients unfit for surgery. Table 2: Mode of injury. After receiving the patient, basic information about the patient is recorded (name, age, sex, time, place and mode of injury). The general condition of the patient is assessed and routine investigations are done. A plain radiograph of the ankle joint is taken (AP, lateral and mortise views), a below-knee slab is applied, and limb elevation is maintained until surgery. Pre-op preparations: after keeping the patient nil per oral for 8 h, informed written consent was taken. Prophylactic I.V. antibiotics were given 30 min before the surgery. A Xylocaine test dose and TT injection were given.
Numerical method to solve impulse control problems for partially observed piecewise deterministic Markov processes Designing efficient and rigorous numerical methods for sequential decision-making under uncertainty is a difficult problem that arises in many applications frameworks. In this paper we focus on the numerical solution of a subclass of impulse control problem for piecewise deterministic Markov process (PDMP) when the jump times are hidden. We first state the problem as a partially observed Markov decision process (POMDP) on a continuous state space and with controlled transition kernels corresponding to some specific skeleton chains of the PDMP. Then we proceed to build a numerically tractable approximation of the POMDP by tailor-made discretizations of the state spaces. The main difficulty in evaluating the discretization error come from the possible random or boundary jumps of the PDMP between consecutive epochs of the POMDP and requires special care. Finally we extensively discuss the practical construction of discretization grids and illustrate our method on simulations. Introduction A large number of problems in science, including resource management, financial portfolio management, medical treatment design to name just a few, can be characterized as sequential decisionmaking problems under uncertainty. In such problems, an agent interacts with a dynamic, stochastic, and incompletely known process, with the goal of finding an action-selection strategy that optimises some performance measure over several time steps. In optimal stopping problems, for instance when the agent has to decide when to replace some component of a production chain before full deterioration, the policy typically does not influence the underlying process until replacement when the process starts again with the same dynamics. However in many decision-making problems an important aspect is the effect of the agent's policy on the data collection; different policies naturally yielding different behaviors of the process at hand. In this paper we focus on the numerical solution of a subclass of impulse control problem for Piecewise Deterministic Markov Process (PDMP) when the process is partially observed, in the hard case when jump times are hidden. PDMPs are continuous time processes with hybrid state space: they can have both discrete and Euclidean variables. The dynamics of a PDMP is determined by three local characteristics: the flow, the jump intensity and the Markov jump kernel, see [10]. Between jumps, trajectories follow the deterministic flow. The frequency of jumps is determined by the intensity or by reaching boundaries of the state space, and the post-jump location is selected by the Markov kernel. General impulse control for PDMPs allows the controller to act on the process dynamics by choosing intervention dates and new states to restart from at each intervention. This family of problems was first studied by Costa and Davis in [9]. It has received a lot of attention since, see e.g. [1,8,14,15], and was further extended for instance in [16,17]. In these papers, the authors define a rigorous mathematical framework to state such control problem and establish some optimality equations such as dynamic programming equations for the value function. Numerical methods to compute an approximation of the value function and an -optimal policy are also briefly or extensively described for instance in [9,12,13]. 
They rely on discretization of the state space, either with direct cartesian grids or with dynamic grids obtained from simulations of the interjump-time-post-jump-location discrete time Markov chain embedded in the PDMP. In all the papers cited above, the process is supposed to be perfectly observed at all times. However in most real-life applications continuous measurements are not available, and measures may be corrupted by noise. Designing efficient and mathematically sound approximation methods to solve continuous time and continuous state space impulse control problems under partial observation is very challenging, especially for processes with jumps such as PDMPs. Relevant literature is scarce. One can mention [2] or [4] (in the special easier case of optimal stopping) where the position of the process is observed through noise, but the jump times are perfectly observed, so that the properties of the inter-jump-time-post-jump-location chain can still be fully exploited. Another related recent work is [5] where the author studies an optimal control problem for pure jump processes (corresponding to PDMPs with contant flow) under partial observations. However, they consider continuous control instead of impulse control and the observations are not corrupted by noise. A first step toward solving the impulse control problem for PDMPs under partial and noisy observations and hidden jump times was made by the authors in [6] in the special easier case of optimal stopping. They have shown that when trajectories of the process can be simulated, a double discretization allows to approximate the value function with general error bounds, and provides a candidate policy with excellent performance. In this paper, we address a more general class of impulse control problems under partial observations where decisions do influence both the data collection and the dynamics of the process. More specifically, we focus on a sub class of impulse control problems with three specificities. First, the lapse between interventions can only take a finite number of values. Second, observations are collected at intervention times and corrupted by noise. Third, interventions only act on the discrete variables. In this setting, the impulse control problem can be stated as a partially observed Markov decision process (POMDP) and this is the first main contribution of this paper. This POMDP is in discrete time, with epochs corresponding to intervention dates, however the state space is not discrete, and becomes infinite dimensional when turned into the corresponding fully observed Markov decision process (MDP) for the filter on the belief space, see e.g. [3]. While such MDPs have theoretical exact optimal solutions, they are numerically untractable. Our second main contribution is to propose and prove the convergence of an algorithm to approximate the value fonction and explicitly build a policy close to optimality. Our approach is based on tailor-made discretizations of the state spaces taking into account two major difficulties. First, the state space may have numerous active boundaries that trigger jumps when reached and make the operators under study only locally regular. Second, the combinatorics associated to the possible decisions is too large to explore the whole belief space through simulations. Our third main contribution is to extensively discuss the practical construction of the discretization grids, which is no easy problem. 
Our results are illustrated by simulations on an example of medical treatment optimization for cancer patients. The paper is organized as follows. In section 2 we state our optimization problem and turn it into a POMDP. In section 3 we give our resolution strategy and our main assumptions. The construction of the approximate value function and policy is detailed in section 4. Experimental results and a discussion of the practical construction of discretization grids are provided in section 5. Finally section 6 provides a short conclusion. The main proofs are postponed to the appendix, as well as details and specifications of the model used for the simulation study. Problem statement We start with specifying the special class of impulse control problems for PDMPs we are focusing on, then we show how our control problem can be expressed as a POMDP. Impulse control for hidden PDMPs Let us first define the class of controlled PDMPs we consider. Let M = L × M be a two-dimensional finite set. We will call regimes or modes its elements. We use a product set to distinguish between modes in L that can be controller chosen and modes in M that cannot. For all regimes (ℓ, m) in M, let E_m be an open subset of R^d endowed with a norm ‖·‖. Set E = {(ℓ, m, x), ℓ ∈ L, m ∈ M, x ∈ E_m}, and E_ℓ = {(m, x), m ∈ M, x ∈ E_m} for all ℓ ∈ L. A PDMP on the state space E is determined by three local characteristics: • the flow, which takes values in R^d, is continuous and satisfies a semi-group property, Φ_m(·, t + s) = Φ_m(Φ_m(·, t), s), for all t, s ∈ R_+; it describes the deterministic trajectory between jumps; let t*(x) = t*_m(x) be the deterministic time the flow takes to reach the boundary of E when it starts from x = (ℓ, m, x); • the jump intensity λ, which characterizes the frequency of spontaneous jumps; • the Markov kernel Q on (B(Ē), Ē) represents the transition measure of the process and allows to select the new location after each jump. It satisfies for all x ∈ Ē, Q({x} ∪ ∂E|x) = 0. We also write Q_m(·|x) = Q(·|x) for all x = (ℓ, m, x) ∈ E and add the additional constraint that Q cannot change the value of ℓ, as ℓ is intended to be controller-chosen, i.e. Q_m sends E_ℓ onto itself. [Algorithm 1: Simulation of a trajectory of a controlled PDMP between interventions n and n + 1 from state x_n = (ℓ_n, m_n, x_n); the pseudocode did not survive extraction, and an illustrative reconstruction is sketched at the end of section 2.2.] The formal probabilistic apparatus necessary to precisely define controlled trajectories and to formally state the impulse control problem is rather cumbersome, and will not be used in the sequel. Therefore, for the sake of simplicity, we only present an informal description of the construction of controlled trajectories. The interested reader is referred to [9], [11, section 54] or [16] for a formal setting. Our optimization problem will be rigorously stated as a POMDP in section 2.2. We consider a finite horizon problem. Let H > 0 be the optimization horizon, δ > 0 a fixed minimal lapse such that N = H/δ is an integer. Note that δ is not supposed to be small. A general impulse strategy S = (ℓ_n, r_n)_{0≤n≤N−1} is a sequence of non-anticipative E-valued random variables on a measurable space (Ω, F) and of non-anticipative intervention lapses. In this work, we only consider a subclass of strategies where ℓ_n takes values in L and r_n is a multiple of δ: r_n ∈ T, where T is a subset of δ_{1:N} = {δ, 2δ, . . . , N δ} that contains δ. This means that on the one hand, the controller can only act on the process by changing its regime, i.e.
by selecting the local characteristics to be applied until the next intervention, and on the other hand the lapse between consecutive interventions belongs to the finite set T. The trajectories of the PDMP controlled by strategy S are constructed recursively between intervention dates τ n (defined recursively by τ 0 = 0 and τ n+1 = τ n + r n ) as described in algorithm 1. In Line 5 of algorithm 1, S ∼ λ m (x) means that S has the survival function As a boundary jump can also occur, the distribution of the next jump time T , starting from x is given by the survival function The strategy S induces a family of probability measures P S x , x ∈ E, on a suitable probability space (Ω, F). Associated to strategy S, we define the following expected total cost for a process starting at where E S x is the expectation with respect to P S x , c r is some running cost, c i some intervention cost and c t some terminal cost. The last ingredient needed to state the optimisation problem is to define admissible strategies. Again, the rigorous definition will be given in section 2.2 in the framework of POMDPs. Informally, decisions can only be taken in view of some discrete-time noisy observations of the process, instead of the exact value of the process at all times. More specifically, we assume that • observations are only available at decision times τ n , • the controller-chosen regimes ∈ L are observed, the uncontrolled regimes m ∈ M are hidden, except for some top event m =m; • the Euclidean variable is observed through noise: at time τ n , if the state of the process is x = ( , m, x), controller receives observation y n = F (x) + n where ( n ) are real-valued independent and identically distributed random variables with density f independent from the controlled PDMP. We further assume that the random variables y n take values in a compact interval I of the real line. Denote by S the set of all admissible strategies. Our aim is to compute an approximation of the value function and explicitly construct a strategy close to optimality. Partially observed Markov decision process Finding a suitable rigorous way to state an impulse control problem for PDMPs with hidden jumps and under noisy observation is by no means straightforward, especially as regards defining admissible strategies, see e.g. [1] or [7, sec. 1.1]. This is our first main contribution in this paper. As decision dates are discrete, we use the framework of POMPDs to rigorously state our control problem. In the sequel, we drop the regime ∈ L from the state x and include it in the action instead to better fit the standard POMDP notation. Let (X, A, K, R, c, C) be the POMPD with the following characteristics. • The state space is , 1} × δ 1:N } the observation space. It gathers the values X n of the hidden PDMP at dates τ n as well as the observation processes Y n , and two additional observed variables Z n and W n . Here, Z n is an indicator of being in modem (hence at the top event) and W n is the time elapsed since the beginning. State ∆ is a cemetery state where the process is sent after the horizon H or modē m is reached. • The action space is A = (L × T) ∪ {ď}, whereď is an empty decision that is taken when the horizon H or modem is reached. This purely technical decision sends the process to the cemetery state ∆. 
• The constraints set K ⊂ X × A is such that its sections K(ξ) = {d ∈ A; (ξ, d) ∈ K} satisfy K(∆) = {ď} and K(x, y, z, w) = K(z, w) for (x, y, z, w) ∈ X − {∆} as decisions are taken in view of the information from the observations only. In addition, one has , to force the last intervention to occur exactly at the horizon time H, unless modem (z = 1) has been reached, -K(0, N δ) = K(1, w) =ď: no intervention is possible after the horizon or the top event has been reached. • The controlled transition kernels R are defined as follows: for any bounded measurable function g on X, any ξ ∈ X and d ∈ K(ξ), one has Rg(ξ, d) = I E<m g(x , y , 0, w + r)f (y − F (x ))P (dx |x, d)dy where P (·|x, d) is the distribution of X r conditionally to X 0 = x under regime , if d = ( , r). Its explicit analytical form is given in appendix A. • The terminal cost function C : X → R + satisfies C(∆) = 0 and C(x, y, z, w) = c t (x) for (x, y, z, w) ∈ X − {∆} with c t (x) = cm if x ∈ Em,where cm is a penalty for reaching the top value. • The optimisation horizon is finite and equals N corresponding to date N δ = H. Classically, the sets of observable histories are defined recursively by A decision rule at time n is a measurable mapping g n : H n → A such that g n (h n ) ∈ K(γ n ) for all histories h n = (γ 0 , d 0 , γ 1 , d 1 , . . . , γ n ). A sequence π = (g n ) 0:N −1 = (g 0 , . . . , g n , . . . , g N −1 ) where g k is a decision rule at time k is called an admissible policy. Let Π N denote the set of all admissible policies. The controlled trajectory of the POMDP following policy π = (g n ) 0:N −1 ∈ Π N is defined recursively by Ξ 0 ∈ X and for 0 ≤ n ≤ N − 1, Note that the cemetery state ∆ ensures that all trajectories have the same length N , even if they do not have the same number of actual decisions (d =ď). Then one can define the total expected cost of policy π ∈ Π N starting at ξ 0 ∈ X as and our control problem corresponds to the optimisation problem Now the problem is rigorously stated, our next aim is now to compute an approximation of the value function V and explicitly construct a strategy close to optimality. Resolution strategy and assumptions Our second main contribution is to propose a numerical approach to (approximately) solve our POMDP. The first difficulty to solve our POMDP comes from the fact that it is partially observed. Thus our first step is to convert it into an equivalent fully observed MDP on a suitable belief space X by introducing a filter process. This is done in section 3.3. While dynamic programming equations hold true for the fully observed MDP and in theory provide the exact optimal strategy to the optimisation problem, there are two main difficulties to its practical resolution. On the one hand, the state space X is continuous and infinite-dimensional. And on the other hand the filter process is not simulatable, even for fixed policies, preventing the use of direct simulation-based discretization procedures. Our approach to (approximately) solve the belief MDP is to construct a numerically tractable approximation of its value function based on discretizations of the dynamic programming equations. To obtain a finite-dimensional process and a simulatable approximation of the filter process, we first discretize the Euclidean part of the state space E of the PDMP (X t ). Then we discretize the state space of the approximated filtered process in order to obtain a numerically tractable approximation. 
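Because the pseudocode of Algorithm 1 did not survive extraction, the following is an illustrative reconstruction of the simulation loop between two interventions, based on the verbal description and the survival functions of section 2.1. The flow, intensity, boundary time and kernel are user-supplied callables; inverting the cumulative hazard on a fixed time grid is one possible implementation choice, not necessarily the authors', and the toy model at the end is purely for demonstration.

```python
import numpy as np

def draw_jump_time(x, ell, lam, flow, t_star, rng, dt=1e-2, t_max=1e6):
    """Draw the next spontaneous jump lapse S, whose survival function is
    P(S > t) = exp(-int_0^t lam(flow(x, s), ell) ds), by accumulating the
    cumulative hazard on a fixed time grid (one simple implementation)."""
    target = rng.exponential(1.0)
    hazard, t = 0.0, 0.0
    horizon = min(t_star(x, ell), t_max)
    while hazard < target and t < horizon:
        hazard += lam(flow(x, ell, t), ell) * dt
        t += dt
    return t

def simulate_between_interventions(x, ell, r, flow, lam, t_star, Q, rng):
    """Reconstruction of the loop of Algorithm 1: evolve the controlled PDMP from
    state x under regime ell for a lapse r, alternating deterministic flow,
    spontaneous jumps and boundary jumps (whichever comes first)."""
    s = 0.0
    while s < r:
        S = draw_jump_time(x, ell, lam, flow, t_star, rng)   # candidate spontaneous jump
        t_jump = min(S, t_star(x, ell))                      # the boundary may be hit first
        if s + t_jump >= r:                                  # no jump before the next intervention
            return flow(x, ell, r - s)
        x = Q(flow(x, ell, t_jump), ell, rng)                # draw the post-jump location
        s += t_jump
    return x

# toy model (placeholders): exponential growth toward a boundary at D = 10,
# constant jump intensity, and a kernel that resets the state below its current value
rng = np.random.default_rng(0)
flow = lambda x, ell, t: x * np.exp(0.01 * t)
lam = lambda x, ell: 0.05
t_star = lambda x, ell: np.log(10.0 / x) / 0.01
Q = lambda x, ell, rng: max(1.0, x * rng.uniform(0.5, 1.0))
print(simulate_between_interventions(1.0, "a", 30.0, flow, lam, t_star, Q, rng))
```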
Finally, we obtain an explicit candidate policy by solving the resulting discretized approximation of the dynamic programming equations. This procedure is detailed in section 4. In order to keep track of the discretization errors throughout the different steps described above, we need our transition kernels to be regular enough. Hence we start this section with stating the assumptions required on the parameters of the PDMP. The first set of assumptions stated in section 3.1 is not strictly necessary but is here to limit the combinatorics of possible jumps of the PDMP between consecutive epochs of the POMDP. The second set of assumptions in section 3.2 deals with regularity requirements and is necessary for our approach. Simplifying assumptions on the dynamics of the PDMP We make the following assumptions on the PDMP. Their aim is to limit the combinatorics of possible natural jumps between interventions, deal more easily with the boundary jumps, and obtain explicit forms for the transition kernel between two consecutive interventions. Although they are not strictly necessary for our approach to work, we believe they keep the exposition as simple as possible. First, we restrict the number of modes to create non trivial dynamics with limited enough combinatorics. We setm = 3 and in the sequel we write E 0:2 instead of E <m . Second, we consider a bounded one-dimensional Euclidean variable and add a counter of time since the last jump in order to encompass semi-Markov dynamics where the jump intensity is time dependent, instead of only state-dependent. where 0 < ζ 0 < D and u is the time since the last jump. Here we consider that ζ 0 is some nominal value the process should stay close to, and D is some non-return top value the process is trapped at when in modem = 3. With a slight abuse of notation we will write Φ m ((ζ, u), t) = (Φ m (ζ, t), u + t). For x = (m, ζ, u) ∈ E M , we will also denote its coordinates x 1 = m, x 2 = ζ and x 3 = u. Third, we specify monotonicity assumptions on the flow to be able to deal with boundaries easily. Φ 0 (ζ 0 , t) = ζ 0 for all t ≥ 0; • in modes m = 1 and m = 2, for any controller-specified ∈ L, and any ζ ∈ (ζ 0 , D), the • in mode m = 3 the flow is constant at the top value for any controller-specified ∈ L: Basically, this means that in mode m = 0, the process stays at the nominal value. In modes m ∈ {1, 2}, if = ∅, the Euclidean variable increases and may reach the top value. In both these modes, one of the controller-chosen modes is beneficial, in the sense that the process decreases and may reach the nominal value (a for m = 1 and b for m = 2) and the other one is neutral in the sense that the process keeps increasing (b for m = 1 and a for m = 2). We believe that this covers most interesting cases. In mode m = 3 the process is trapped at the top value whatever the choice of the controller. Next, we restrict the number of natural jumps between interventions. This is the strongest assumption. The first one prevents back-and-forth jumps between modes 1 and 2. The second one makes the top event absorbing. Finally, we consider that the Euclidean variable ζ is continuous and u is set to 0 by a natural jump as it represents the time since the last natural jump. In addition one can jump to mode m = 0 only by reaching the bottom boundary ζ = ζ 0 , and one can jump to mode m = 3 only by reaching the top boundary ζ = D. for all ( , m) ∈ L × M and m ∈ M . Regularity assumptions We start with regularity assumptions on the local characteristics of the PDMP. 
such that for all x and x ∈ E, ∈ L, and t ∈ R + one has The time to reach the boundary t * is Lipschitz continuous: there exists a positive constant [t * ] such that for all m ∈ {1, 2}, x, x ∈ E M and ∈ L, one has The Lipschitz-continuity assumptions on λ, Φ and t * are classical. The additional requirement on the mapping t → t * m (Φ m (ζ, t))+t is needed to obtain (local) Lipschitz regularity of the controlled kernels P , see the proof of Proposition C.1. This is one of the technical difficulties encountered when dealing with possible random or boundary jumps of the continuous process between epochs of the POMDP. In practice, it is easy to verify as soon as one specifies an explicit form for the flow Φ, see appendix E.2. We also need regularity assumptions on the observation process. Assumption 3.8 There exist non negative real constants L Y , f and f such that for all (ζ, ζ ) ∈ [ζ 0 , D] 2 and y ∈ I one has We also set L f = L Y |I| and B f = f |I|, where |I| = I dy is the length of interval I. Finally, we need regularity assumptions on the cost functions. Assumption 3.9 There exist non negative real constants L c , L C , B c and B C such that for all x, x , x in E M and d ∈ L × T, one has Equivalent MDP on the belief state To convert a POMDP into an equivalent fully observed MDP is classical therefore details are omitted. The interested reader may consult e.g. [3,4,6] for similar derivations. For n ≤ N , set , denote the filter or belief process for the unobserved part of the process. The standard predictioncorrection approach yields a recursive construction for the filter. Proposition 3.10 For any n ≥ 0, one has: Let P(A) denote the set of probability measures on set A, The equivalent fully observed MDP is defined as follows. • The state space is a subset X of (P(E M ) × O) ∪ {∆} satisfying the following constraints: all ξ = (θ, y, z, w) ∈ X satisfy -θ(E 0:2 ) = 1 or θ(E 3 ) = 1 and if θ(E 3 ) = 1, then y = 0 and z = 1; It is necessary to restrict the state space to ensure the regularity of our operators, see appendix C.1.1. The first constraint comes from the fact that reaching the top event is observed. As the filter (Θ n ) is adapted to the filtration (F O n ), it charges E 3 accordingly. The last constraint is more technical, and is guaranteed as soon as the process starts in mode 0. • The action space is still • The controlled transition kernels R are defined as follows: for any bounded measurable function g on X , any ξ ∈ X and d ∈ K (ξ), one has R g(ξ, d) = g(∆) if d =ď, and if ξ = (θ, y, 0, w), d = ( , r), x = (m, ζ, u) and x = (m , ζ , u ). See appendix B.1 for the proof that R maps X onto itself. • The non-negative cost-per-stage function c : K → R + and the terminal cost function C : X → R + are defined by C (∆) = c (∆,ď, ·) = 0 and for ξ = (θ, γ) ∈ P(E M ) × O and d ∈ K (ξ), • The optimisation horizon is still N . Denote (Ξ n ) a trajectory of the fully observed MDP. The cost of strategy π ∈ Π N is and the value function of the fully observed problem is, , so that solving the partially observed MDP is equivalent to solving the fully observed one. In addition, the value function V satisfies the well known dynamic programming equations, see e.g. [2]. . Approximation of the value function and candidate policy Our approximation of the value function is based on discretizations of the underlying state spaces. 
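Once the hidden state is restricted to a finite grid, as the first discretization below achieves, the prediction-correction recursion of Proposition 3.10 (whose displayed formula was lost in extraction) takes a simple matrix form, with the correction weight given by the observation density f(y − F(x)) introduced in section 2.1. A minimal sketch, where the transition matrix, the observation map F and the noise density f are supplied as inputs:

```python
import numpy as np

def filter_update(theta, P_d, y, F_vals, f_noise):
    """One prediction-correction step for the belief over a finite grid.

    theta   : current belief, shape (n,)
    P_d     : transition matrix under the chosen decision, P_d[i, j] = P(omega_j | omega_i, d)
    y       : new noisy observation
    F_vals  : observation map F evaluated at the grid points, shape (n,)
    f_noise : noise density, so the likelihood of y at grid point j is f_noise(y - F_vals[j])
    """
    predicted = theta @ P_d                 # prediction: push the belief through the kernel
    likelihood = f_noise(y - F_vals)        # correction: weight by the observation density
    posterior = predicted * likelihood
    return posterior / posterior.sum()      # renormalize (Bayes rule)

# toy example with 3 grid points and Gaussian observation noise
f_gauss = lambda e: np.exp(-0.5 * (e / 0.2) ** 2)
P_d = np.array([[0.9, 0.1, 0.0], [0.0, 0.8, 0.2], [0.0, 0.0, 1.0]])
theta = filter_update(np.array([1.0, 0.0, 0.0]), P_d, y=1.3,
                      F_vals=np.array([1.0, 1.5, 2.0]), f_noise=f_gauss)
print(theta)
```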
However, because deterministic jumps at the boundaries of the state space may occur between two epochs of the MDP, one must be especially careful to preserve regularity in selecting finitely many states. To do so, we introduce a partition of the state space that must be preserved by the discretization. Partition of the state space Because of the boundaries of the state space at ζ 0 and D, the transition kernels of our PDMP are not regular on the whole state space, but only locally regular in some sub-areas. More specifically, we consider the sets m its T = 3|T| + 2 ordered elements. We then consider (F j ) 1≤j≤2T the following partition of E M : The splitting points satisfy t * m (ζ) = r for m ∈ {1, 2} and d = ( , r) ∈ A−{ď}. They separate values of ζ for which the probability of reaching the top value until the next epoch is strictly positive from those with null probability. First discretization Let Ω = {ω 1 , . . . , ω nΩ } be a finite grid on E M containing at least one point in each mode m ∈ M . Let p Ω denote the nearest-neighbor projection from E M onto Ω for the distance defined in section 2.2 with (ζ, u) = |ζ| + |u|. In particular, p Ω preserves the mode. Let (C i ) 1≤i≤K be a Voronoi tessellation of E M associated to Ω. Namely, (C i ) 1≤i≤nΩ is a partition of E M such that for all We will denote D i the diameter of cell C i : Note that with our assumptions all D i are finite and bounded by |D − ζ 0 | + 2H. In addition, the Voronoi cells must be compatible with the partition of the state space. Assumption 4.1 The grid Ω and its Voronoi cells (C k ) 1≤k≤nΩ are such that each hyperplane with equation t * m (ζ) = r for m ∈ {1, 2} and ( , r) ∈ L × T is included in the boundary of some cell. In other words, for all 1 ≤ k ≤ n Ω , there exists some 1 ≤ j ≤ 2T such that C k ⊂ F j . In practice, this assumption means that the points closest to the hyperplanes in the grid are symmetric with respect to these hyperplanes, see section 5.2 for practical details about the grid construction. We define the controlled kernelsP from E M × (L × T) onto Ω as for all x ∈ E, d ∈ L × T and 1 ≤ j ≤ n Ω . In particular, the restriction ofP to Ω is a controlled Markov kernel on Ω. We now replace kernel P by kernelP in the dynamic programming equations of Theorem 3.11 in order to obtain our first approximation. LetΨ be the approximate filter operator defined by replacing the integrals w.r.t. P in the filter operator Ψ defined in Proposition 3.10 by integrals w.r.t.P , see . The approximated controlled transition kernelsR are defined as follows: for any bounded measurable function g onX , any ξ ∈X and d ∈ K (ξ), one hasR g(ξ and ω j = (m , ζ , u , w ). Note that this kernel does not depend on y, and see appendix B.2 for the proof thatR sendsX onto itself. The approximated cost-per-stage function is defined byc (∆,ď) = 0 and for all ξ = (θ, γ) ∈X and d ∈ K (ξ). Finally, for all ξ ∈X , setv N (ξ) = C (ξ), and for 0 ≤ n ≤ N − 1, define by backwards induction v n (ξ) = min If the grid Ω is precise enough,P should be a good approximation of kernel P and thus one can expect that functionsv n are good approximations of our value functions v n . Indeed, we have the following result. where C v n depends only on n, N , δ and the regularity constants of the parameters. Its proof is given in appendix C.5 and is based on the explicit analytic form of the kernels, their regularity and the dynamic programming equation. 
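The two ingredients used repeatedly above, projecting onto a grid through its Voronoi cells and solving the dynamic programming equations by backward induction, can be sketched generically as follows. The transition probabilities P̂(ω_j | ω_i, d) = P(C_j | ω_i, d) are estimated here by Monte Carlo with a simulator of the inter-intervention dynamics (for instance the sketch given in section 2), which is one practical option rather than the paper's analytic kernels; the backward-induction routine applies equally to the belief grid Γ once its controlled transition matrix is available.

```python
import numpy as np

def project(x, grid):
    """Nearest-neighbour projection p_Omega onto a one-dimensional grid (sketch)."""
    return int(np.argmin(np.abs(grid - x)))

def estimate_P_hat(grid, decisions, simulate, n_mc=200, rng=None):
    """Monte Carlo estimate of P_hat(omega_j | omega_i, d) = P(C_j | omega_i, d),
    obtained by simulating the dynamics between interventions and projecting the
    end point onto the grid (the paper instead uses the analytic kernel)."""
    rng = rng or np.random.default_rng(0)
    P_hat = np.zeros((len(decisions), len(grid), len(grid)))
    for k, d in enumerate(decisions):
        for i, x in enumerate(grid):
            for _ in range(n_mc):
                P_hat[k, i, project(simulate(x, d, rng), grid)] += 1.0 / n_mc
    return P_hat

def backward_induction(P_hat, cost, terminal, N):
    """Dynamic programming on the grid: v_N = terminal cost, then
    v_n(i) = min_d [ cost[d, i] + sum_j P_hat[d, i, j] v_{n+1}(j) ]."""
    v, policy = terminal.copy(), []
    for _ in range(N):
        q = cost + P_hat @ v                 # shape (n_decisions, n_grid)
        policy.insert(0, q.argmin(axis=0))
        v = q.min(axis=0)
    return v, policy

# toy: a random walk on [0, 1] whose drift is controlled by the decision
toy_sim = lambda x, d, rng: float(np.clip(x + d * 0.1 + rng.normal(0, 0.05), 0, 1))
grid = np.linspace(0, 1, 11)
P_hat = estimate_P_hat(grid, decisions=[-1, +1], simulate=toy_sim)
cost = np.array([[0.1] * 11, [0.2] * 11])    # cost[d, i], placeholder values
v, pol = backward_induction(P_hat, cost, terminal=grid.copy(), N=3)
```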
The main gain with this first approximation is that the filter operatorΨ now involves only finite weighted sums and therefore the corresponding approximate filter process is now simulatable. However, functionsv n still cannot be computed because of the continuous integration in y and because the state spaceX is still continuous. We now proceed to a second discretization in order to obtain numerically tractable approximations of our value functions. In particular, the restriction ofR to Γ is a family of controlled Markov kernels on Γ. Again, if the grid Γ is precise enough,R should be a good approximation of kernelR and thus one can expect that functionsv n are good approximations of functionsv n and thus of our value functions v n . More precisely, we have the following result. where Cv n depends only on n, N , δ and the regularity constants of the parameters. Its proof is given in appendix D.3. The main gain with this second approximation is that integration against kernelR boils down to computing finite weighted sum, and functionsv n are defined on a finite state space which makes the dynamic programming equations fully tractable numerically. Candidate strategy We can now construct a computable strategy using the fully discretized value function. The idea is to first build an approximate filter using the operatorΨ and project the resulting filter together with the current observation onto grid Γ. Then one selects the next decision using the dynamic programming equation on the grid Γ. More precisely, suppose that the process starts from point ξ 0 = (0, ζ 0 , 0, y 0 , 0, 0), such that the initial observation is γ 0 = (y 0 , 0, 0). One can recursively compute an approximate filter (θ n ) and the corresponding decisions (d n ) as follows. First, set Suppose one has constructed the sequence (θ n , d n ) up to stage k − 1. Then after receiving the k-th observation γ k , the next approximate filter and decisions arē with the convention that the last (not required) decision is d N =ď. Note that it should be better to use operatorΨ onθ k−1 then project the result onto grid Γ than using operatorΨ (obtained by replacing integration wrt θ by integration wrt p Γ (θ) in the definition ofΨ) as it should generate a smaller error. Although it is reasonable to think that this strategy should be close to optimality, it is an open problem to actually prove it as the sequence (θ n , γ n ) is not generated by the kernel R . Indeed, we use here the observations generated by the original sequence X n with kernel P and not that generated by kernelP , as in practice only the original observations are available. Its performance is assessed in the next section. Simulation study We consider an example of patient follow-up. The mode m corresponds to the overall state of the patient (m = 0: sound, m = 1: disease 1, m = 2: disease 2, m = 3: death of the patient). The variable ζ correspond to some marker of the disease that can be measured, ζ 0 being the nominal value for a sound patient and D the death level. The control correspond to the medical treatment ( = ∅: no treatment, = a: efficient for disease 1 and slows the progression of disease 2, = b: efficient for disease 2 and slows the progression of disease 1). Decision dates correspond to visits to the medical center when the marker is measured and a new treatment is selected and applied until the next visit. Horizon H is 2400 days with possible visits every 15, 30 or 60 days (δ = 15). 
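Before specifying the dynamics of this example in each mode, the on-line use of the candidate strategy described above can be summarised in a short sketch; psi_hat (the approximate filter update), project_on_Gamma and the tabulated policy are hypothetical stand-ins for the objects constructed in the previous sections, so this is an illustration rather than the actual implementation.

```python
# Sketch of the on-line candidate strategy: propagate the approximate filter,
# project it together with the current observation onto the grid Gamma, and
# read the next decision from the tabulated dynamic-programming policy.
def run_candidate_strategy(theta0, gamma0, observe, psi_hat, project_on_Gamma,
                           policy, N):
    theta, gamma = theta0, gamma0
    decisions = []
    for k in range(N):
        idx = project_on_Gamma(theta, gamma)   # nearest point on grid Gamma
        d = policy[k][idx]                     # decision = (treatment, next visit delay)
        decisions.append(d)
        gamma = observe(d)                     # observation received at the next visit
        theta = psi_hat(theta, gamma, d)       # approximate filter update
    return decisions
```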
If treatment ∅ is applied, the patient may randomly jump from m = 0 to any of the two disease states m ∈ {1, 2}. In the disease states (m = 1 or m = 2), the marker level grows exponentially and can reach the death level D in finite time, no other change of state is possible. If treatment a is applied, the patient may only randomly jump from m = 0 to the other disease state m = 2. In the disease state m = 1, the marker level decreases exponentially and can reach the nominal level ζ 0 in finite time or randomly jump to the other disease state m = 2. In the disease state m = 2, the marker is level grows exponentially and can reach the death level D in finite time, no other change of state is possible. Effects of treatment b is similar: exponential decrease of the marker in disease m = 2, exponential increase in disease m = 1. The specific values of the local characteristics of the PDMP can be found in the appendix E.1. Choice of the cost functions The candidate policy depends on the cost functions, therefore the latter has to be carefully chosen. We denote c i (x, d) = C V a fixed cost per visit that takes into account an emotional burden for the patient and health care expenses for the check-up. For the counterpart of the integral of the running cost, we choose en expression of the form Parameter β ∅ > 0 represent a lateness penalty for not applying the right treatment on time, β > 0, = ∅ represent the penalty for using an inappropriate treatment, κ > 0 is a scale parameter and |ζ −ζ 0 |r is a (crude) proxy of the integral of the process. Our cost function is then c( Construction of the grids Because decisions influence the dynamics of the process, grids cannot be constructed with techniques based on simulations, such as quantization. For instance, on a horizon of 2400 days with possible visits every 15, 30 or 60 days, this leads to approximately 10 152 possible strategies. Grids therefore have to be chosen by expert knowledge, and transition probabilities computed accordingly. In this section, we extensively discuss grid constructions which we consider as our last main contribution. To alleviate combinatorial burden, we propose a hybrid strategy that relies on extending iteratively an initial fixed grid through simulations with optimal policies computed from the current grid. This is computationally intensive, relies on the choice of an initial grid and of a cost function, but significantly improves performances compared to fixed grids. Construction of the first grid The first discretization only concerns the process X t on space E M . Recall that X t has three components (m t , ζ t , u t ). As the first component m t is discrete, we will want to project X t on a grid that preserves the mode, hence we will consider one grid per possible mode. In mode m = 0, ζ t = ζ 0 and we only need to discretize component u t . We consider all points {0, δ, . . . , N 2 δ} to allow increments of component u t at each iteration up to N/2, no matter the decision. After N/2, we anticipate that the computational cost associated to adding new points is not worth the additional information provided. In modes m = 1 and m = 2, we do not need to discretize component u t as the cost functions do not depend on u t , and the time since the last jump does not influence the trajectory, regardless of the decision. 
Indeed, either no or the wrong treatment is given, and the process will increase until reaching D, or the appropriate treatment is given and the process might jump to the other mode with probability depending only on the value of ζ t . Therefore we arbitrarily fix u = δ for all points of the grids of modes 1 and 2. The main difficulties in modes 1 and 2 are the constraints related to the frontiers D and ζ 0 that must be respected in order to satisfy Assumption 4.1. Figure 1 gives an example of grids in modes 1 and 2: black points indicate the minimal grid, while grey points indicate additional refinement points. Blue lines indicate boundaries with respect to frontier ζ 0 while orange lines are boundaries with respect to D. For mode 3 we only consider one point (3, D) as the cost functions do not depend on the time since the death of the patient. Finally, we add one point for the cemetery ∆. Once the grid is fixed, kernel P can be computed through Monte Carlo approximations for each point ω i of the grid and each decision d = (ℓ, r).

Construction of the second grid As the process (and its filter) are not simulatable (at least not simulatable under all sets of possible strategies in sufficiently short time), the second discretization grid cannot be constructed based on simulation strategies such as quantization. Here the space to discretize is X , whose dimension is that of P(Ω) × O, that is (n Ω − 1) + 3. Choosing a relevant grid, in this context, is not trivial. In particular, even if we are given a fixed grid, the kernels R cannot be estimated directly by Monte Carlo simulation and projection as we are not able to simulate a (θ, y, z, w) from a given point ρ i . We therefore need to compute directly the R(ρ j |ρ i , d) using the approximate filters and kernels Ψ and P from the previous discretization. The first thing we can notice is that it is not necessary to discretize the last dimension of O, corresponding to W, the time spent since the beginning, as it is already discrete along trajectories, W being a multiple of δ. The second dimension of O corresponds to Z, the observation of the death of the patient, which is also already discrete. Finally, the first dimension of O does not need to be discretized either. Indeed, the transition kernels R do not depend on y. The integral on y over I can be estimated by Monte Carlo approximation on the noise variable ε once and for all. Given these remarks, we need to discretize the (n Ω − 1) simplex of R nΩ . After extensive simulation studies, we recommend starting from a minimal fixed grid that we enrich through iterative simulations. More precisely, our strategy is the following.

(Figure 1: example grids in modes 1 and 2, showing the discretization on x and on u.)

Compute an initial grid with n Ω points: for each element ω i of Ω we define a probability vector that charges ω i with probability 0.95, the rest of the mass being distributed randomly through a Dirichlet distribution (with α = 1) on all other elements of Ω; estimate R by Monte Carlo simulations. Compute optimal strategies according to section 4.4 on this initial grid.
Until a stopping criterion is reached,
• simulate trajectories with the optimal strategy from the current grid;
• for each trajectory, at each time-point, compute the distances between the estimated filter and its projection on the current grid, and add all estimated filters whose distances are larger than a threshold s to form the next grid;
• estimate R by Monte Carlo simulations, and run dynamic programming to compute optimal strategies.
Different stopping criteria can be used, among which: a minimal number of points in the grid, a maximal proportion of distances larger than s, or a minimal decrease of the value function between successive grids. See section E.3 for further discussion and numerical experiments. Note that these new grids significantly reduce the distances between estimated filters and their projections. However, these improved grids are cost-dependent, as dynamic programming optimizes the strategies based on the cost values. This implies that new grids have to be computed when costs change.

Strategies in competition and performance criteria To evaluate the performance of our approach (OS in the comparison tables) we compare our work with several other strategies. The (unachievable) gold standard strategy is the See All (SA) strategy, where decisions are taken while observing the process X tn by choosing the optimal treatment for the current mode. In this SA strategy, we do not allow choice of the next visit date. The Filter strategy corresponds to choosing the optimal treatment for the estimated mode: at each new observation y n the approximate filter θ n is computed based on the first discretization. Then the most probable mode m n is obtained as the mode carrying the largest filter mass. Note that to set up this strategy, only the first discretization is needed. As this discretization is the least computationally expensive, it might be worth designing larger grids Ω to obtain better mode estimates. However, this strategy does not take into account knowledge of the dynamics of the process, hence we expect it to have higher costs. It is not able to select the next decision date either. The Standard strategy is classically used in hospitals and is based on thresholds s rel for relapse and s rem for remission. While the observations remain below s rel , the patient does not receive treatment and visits are scheduled every 2 months. When s rel is reached, the practitioner gives treatment b (corresponding to the most frequent relapse type 2) and the next visit is scheduled in 15 days. If at the next visit the observed marker level has decreased, treatment b is maintained with visits every 15 days until s rem is reached. Otherwise the practitioner switches to treatment a (with visits every 15 days). Once s rem is reached, treatment is stopped, visits are scheduled every two months and the strategy is repeated; a code sketch of this rule is given below. The fixed dates optimal strategies are the strategies based on our discretization and dynamic programming approach where the choice of the next visit date is not allowed. We investigate fixed dates every 15 and 60 days (FD-15 and FD-60 in the comparison tables). Performance criteria include the real cost of each strategy evaluated on the real process X (averaged over 500 simulations), the estimated cost evaluated using the estimated filter and the cost functions c and C , and their average number of visits (recall that this number may vary due to the possible choice of the next visit date for our strategy, but also because patients may die before the horizon H is reached).
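As an illustration, a minimal sketch of the Standard decision rule is given below; the thresholds s_rel and s_rem are left as parameters since their numerical values are not fixed here, and the return value is the pair (next treatment, days until the next visit).

```python
# Sketch of the threshold-based Standard strategy described above.  'y' is the
# current marker observation, 'y_prev' the one from the previous visit (None at
# the first visit) and 'treatment' the treatment currently applied (None, "a"
# or "b"); s_rel and s_rem are the relapse and remission thresholds.
def standard_strategy(y, y_prev, treatment, s_rel, s_rem):
    if treatment is None:                        # untreated follow-up
        return (None, 60) if y < s_rel else ("b", 15)
    if y <= s_rem:                               # remission reached: stop treatment
        return None, 60
    if treatment == "b" and y_prev is not None and y >= y_prev:
        return "a", 15                           # marker not decreasing: switch to a
    return treatment, 15                         # otherwise keep the current treatment
```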
When comparing only our strategy with the Fixed dates, we also compare the value ofv 0 computed by dynamic programming. Results First, note that all tables presented in this section have been created using Sweave, and all bits of codes are available online at https://github.com/acleynen/PDMP-control. All results presented here are based on 500 simulations. The exact specifications and further numerical investigations can be found in the section E. We first compare the use of two different distances on Γ: the L 2 distance, and distance L m , which brings closer filters with same amount of mass on each modes: . We compare these on the initial grid (of size 184, same grid for both distances) and on two grids of size approximately 1000 calibrated by simulation using each distance. Results are summarized in table 1. When the grid is too sparse (n Γ = 184) changing the distance does not improve the results, as the projected filters seldom differ. Calibrating larger grids through simulations significantly improves the performance of our approach, in particular for the L m distance. The gain is significant from the first iteration, and reach convergence really quickly (grids of size ∼ 700 points achieve similar performance as those presented here). As the exponential flows at first increase very slowly with respect to the noise of the observations, the FD-60 strategy benefits from the decision delay, thus decreasing the number of wrong decisions. On the contrary, the marker-dependent cost penalizes high values of the marker, hence might prefer wrong decisions to prevent jumps. It favors small return dates r when the marker increases (area under curve (AUC) approximated by histograms from above) and large values of r when the marker decreases (AUC approximated by histograms from below), hence a real effect of choosing visit dates. As a consequence, the average length of trajectories are longer with the marker-dependent cost, rising from 57 visits with the L 2 distance and time-dependent cost (resp 54 with the L m distance) to 65 (resp 63). For comparison, the length of FD-15 strategies are 161, and 41 for FD-60 strategies. The most significant gain comes from the use of the L m distance, which favors projected filters with same mass on each mode as the estimated one. As a consequence,θ n+1 under the L m distance is expected to be closer toθ n+1 thanθ n+1 under the L 2 distance, hence the decisions match the reality of the process better. Comparing this strategy with the Sea All, Filter and Standard strategies shows the added value of taking into account the knowledge on the dynamics of the process: it performs significantly better. This is particularly true when using the L m distance (results shown in table 2). Conclusion We have proposed a numerically feasible scheme to approximate the value function of an impulse control problem for a class of hidden PDMPs where control actions do influence the dynamic of the process. Approximations rely on discretization of the observed and unobserved state spaces, for which we propose strategies to explore the large-dimensional belief spaces that depend on the process characteristics while maintaining error bounds for this approximation explicitly depending on the parameters of the problem. Codes are made freely available on Github. This provides a mathematical setting to study realistic processes for instance in a disease-control framework, which we illustrate on simulations. 
This is a promising start for the study of even more realistic disease-control frameworks. Finally, the important open question concerns the optimality of our candidate strategy. It cannot be directly linked to our various op-erators, but we are hopeful that further work will enable us to prove theoretically that it is close to optimality. A Skeleton kernels of the controlled continuous-time PDMP The generic form of the transition kernels P of the skeleton chains with time span r is formally given below. As there are at most 2 jumps (at most one random jump and 1 boundary jump), the generic form of P for all x = (m, ζ, u) ∈ E M , d = ( , r) ∈ L × T and any bounded measurable where Φ (x, t) = (m, Φ m (x, t)), λ (x) = λ m (x), Λ (x, t) = Λ m (x, t), t * (x) = inf{t > 0 : Φ m (x, t) ∈ ∂E m }, and Q (·|x) = Q m (·|x) is detailed in table 3. Each line corresponds to a specific behavior. • Line (2) corresponds to the events where no jump has occurred and the process just followed the deterministic flow. • Lines (3) and (4) correspond to the events where a single jump occurs. This jump can be either a random jump (Line (3)) or a boundary jump at either ζ 0 or D (Line (4)). • Lines (5) and (6) correspond to the events where two jumps occur, which can happen in either of two ways: a boundary jump at ζ 0 followed by a random jump (Line (5)). a random jump followed by a boundary jump at D (Line (6)). Depending on the values of x and r, some terms may have zero value in the formula above. By definition of kernel P , if x ∈ E 0 and d = ( , r), one has thanks to Assumption 3.6. Assumption 3.8 then yields e −(w+r) λ , as f f ≤ 1 and r ≥ δ. C Error bounds for the first discretization We introduce some function spaces compatible with the partition defined in section 4.1. Let BL(E) be the set of Borel functions from X onto R for which there exist finite constants ϕ E and [ϕ] E such that for all (x, γ) and (x , γ) in X − {∆}, one has x and x belong to the same subset F j , Denote also the unit ball of BL(E) by For θ andθ two probability measures in P(E M ), define the distance d E (θ,θ) by Let BLP (E) be the set of Borel functions from X onto R for which there exist finite constants ϕ E,P and [ϕ] E,P such that for all (θ, γ) and (θ, γ) in X such that θ(E 0:2 ) =θ(E 0:2 ) = 1 or Denote also the unit ball of BLP (E) by Finally, for any function h ∈ BL(E), ξ = (x, γ) ∈ X − {∆} and d ∈ K(ξ), denote with a slight abuse of notation C.1 Regularity of operator P The first key step is to obtain the local regularity of P on our partition. Proposition C.1 Under Assumption 3.6 and 3.7, there exists a positive constant C P such that for all h ∈ BL 1 (E), for all x 1 and x 2 belonging to the same subset F j of E M , for all (y, z, w) ∈ O and d ∈ K(z, w) − {ď}, one has P h(x 1 , y, z, w, d) − P h(x 2 , y, z, w, d) ≤ C P x 1 − x 2 , in addition, P maps BL(E) onto itself and for g ∈ BL(E), one has P g E ≤ g E and [P g] E ≤ C P ( g E + [g] E ). Proof First note that if x 1 and x 2 have different modes, the result holds true as the right-hand side equals infinity. In addition, note that P maps E 3 onto itself, as no jump is possible in mode 3. Therefore, if x 1 and x 2 belong to E 3 , then P h(x 1 , y, z, w, d) = P h(x 2 , y, z, w, d) as h is constant on Second, note that if z = 1, then K(z, w) = {ď} and there is nothing to prove. Suppose now that x 1 and x 2 share same mode x 1 1 = x 2 1 = m. 
Select (y, z, w) ∈ O, d = ( , r) ∈ K(z, w) and a function h ∈ BL 1 (E) and denote γ = (y, z, w) and γ = (y, 0, w + r)1 x 1 =3 + (0, 1, w + r)1 x 1 =3 . As operator P involves indicator functions, we split the computation of the difference P h(x 1 , y, z, w, d) − P h(x 2 , y, z, w, d) into 3 cases depending on the values of t * (x 1 ) and t * ( , x 2 ) compared to r. • First case: t * (x 1 ) > r and t * (x 2 ) > r. In this case, the explicit formula in appendix A becomes We split the expression into a sum of 3 terms that we study separately. Term A 1 : This term corresponds to no jump occurring. As h is in BL 1 (E) on obtains from Assumptions 3.6 and 3.7 that Term B 1 : This term corresponds to one random jump, which can only lead to modes 1 or 2. We therefore split this term in 2 parts that can be controlled identically: Moreover, this jump can only happen if coming initially from mode m = 0, in which case Φ m (x j , s) is constant, or from mode 1 with control b or mode 2 with control a, in which case Φ m (x j , s) is non-increasing, according to Assumption 3.3. In all cases, for the mode i after the jump, t * i (x ) is non-increasing. To deal with the indicator functions in terms B 11 and B 12 , we now consider 3 different subcases: first sub-case: second sub-case: t * i (x 1 2 ) < r and t * i (x 2 2 ) < r. Hence there exists 0 < s 1 i < r and 0 < s 2 i < r such that for all s ≤ s j i , one has t * i (Φ m (x j 2 , s)) ≤ r − s, and for all s ≤ s j i , one has t * i (Φ m (x j 2 , s)) > r − s. Under Assumption 3.7, one also has t * i (Φ m (x j 2 , s j i )) = r − s j i . Suppose, without loss of generality that s 1 i ≤ s 2 i . Similar computations as in the previous sub-case lead to other sub-cases: If t * i (x 1 2 ) and t * i (x 2 2 ) are on different sides of r, then the difference P h(x 1 , γ, d)− P h(x 2 , γ, d) cannot be made arbitrarily small as x 1 − x 2 is small. However, this case is impossible as x 1 and x 2 are in the same subset F j . Term C 1 : Recall that this term comes from a first random jump followed by a boundary jump. Under our assumptions, this is only possible if (m, ) ∈ {(1, a), (2, b)}, the first random jump yields to mode i ∈ {1, 2} = m, thus Φ m is non increasing by Assumption 3.3, and the boundary jump is at D. In a similar manner as for B 1 we split the term as Note that only one term in the sum above is non-zero as i = m. second sub-case: t * i (x 1 2 ) < r and t * i (x 2 2 ) < r. Then as above, there exists 0 < s 1 i < r and 0 < s 2 i < r such that for all s ≤ s j i , one has t * i (Φ m (x j 2 , s)) ≤ r − s, and for all s ≤ s j i , one has t * i (Φ m (x j 2 , s)) > r − s. Under Assumption 3.7, one also has t * i (Φ m (x j 2 , s j i )) = r − s j i . Suppose, without loss of generality that s 1 i ≤ s 2 i . One has other sub-cases: We once again exclude other sub-cases by choosing x 1 and x 2 in the same subset F j of E. The first term corresponds to a single random jump. This is only possible if i ∈ {1, 2} = m and Φ m is non increasing. One has Once again this term is studied by separating cases based on the position of t * i (x 1 ) and t * i (x 2 ) with respect to r. Then, similarly to term B 1 above, if x 1 and x 2 are in the same subset F j of E, we obtain The second term corresponds to a single boundary jump at ζ 0 or at D : Recall that since x 1 and x 2 have the same mode and the same decision d is taken, thus this jump occurs at the same boundary for each term. Moreover, in either case we have t * (x ) = +∞. 
Therefore similar computations as in the first case lead to The third term corresponds to a boundary jump at ζ 0 followed by a random jump to mode i ∈ {1, 2} = m. It is only possible when the flow Φ m is non increasing. with x 1 = (0, ζ 0 ) and x 2 = (0, ζ 0 , 0) hence t * (x 1 ) = t * (x 2 ) = +∞. Moreover, Assumption 3.4 prevents more than two jumps from happening between two decision dates, and therefore t * (x ) > r − t * (x j ). Hence one has The last term corresponds to a random jump to i ∈ {1, 2} = m followed by a boundary jump at D. It is treated similarly to term C 1 . One has • Other cases: We exclude all other cases by considering points x 1 and x 2 in the same subset F j of E. C.1.1 Regularity of operator R We first need to investigate the regularity of the filter operator. The proof is based on the explicit form of the filter operator. The lower bounds on θ(E 0 ) are required to bound the terms in the denominators. Lemma C.2 Under Assumptions 3.6, 3.7 and 3.8, there exist some positive constant Proof We split the proof into two sub-cases according to the definition of Ψ. For z = 0: let g ∈ BL 1 (E) and y ∈ I, w ∈ [0, H], and γ ∈ O, we have On the other hand, one has By definition of kernel P , if x ∈ E 0 and d = ( , r), one has P (E 0 |x, d) = e −Λ (x,r) ≥ e −r λ ≥ e −H λ , thanks to theorem 3.6. Hence, one has Similarly, the second term can be bounded by As I is a bounded interval, the result follows. Now we can turn to the regularity of operator R . In particular, operator R maps BLP (E) onto itself and for g ∈ BLP (E), one has R g E,P ≤ g E,P and [R g] E,P ≤ ( g E,P + [g] E,P )C R . Note in particular that (θ, γ) ∈ X , (θ, γ) ∈ X and the other assumptions on θ andθ guarantee that the assumptions of Lemma C.2 are satisfied. Hence we conclude using Lemma C.2 and the minoration of θ(E 0 ) andθ(E 0 ). If z = 0, we have . Using a similar splitting and similar arguments as in the proof of Lemma C.2 together with Proposition C.4 yield the expected result. C.4 Regularity and approximation error for the cost functions The last preliminary result we need in order to prove Theorem 4.2 is to ensure that the cost functions c , C belong to the appropriate function spaces and the error between c andc is controlled. Lemma C.7 Under Assumption 3.9, the cost function C is in BLP (E). Proof Under Assumption 3.9, C is clearly in . Thus, C is still bounded by B C and for all (θ, γ) and (θ, γ) in X one has by applying Proposition C.1 to x → c(x 2 , d, x) that is clearly in BL(E) under Assumption 3.9. Hence application P c is still in BL(E). Then, for all (θ, γ) and (θ, γ) in X one obtains Hence the result. Lemma C.9 Under Assumptions 3.9 and 4.1, for all ξ ∈X , d ∈ K (ξ), one has Proof The result the follows directly form Proposition C.4 and the fact that x → c(ω i , d, x) is in BL(E). C.5 Proof of Theorem 4.2 We establish the result by (backward) induction on n, with the additional statement that v n is in BLP (E) for all n. The dynamic programming equations yield • The fist term A 1 is bounded thanks to Lemma C.9 by (B c + L c ) sup j∈{1,..., } D j . • The second term A 2 is bounded by Proposition C.6 as v n is inBLP (E): • The last term A 3 is bounded by the induction hypothesis and using the fact thatR is a Markov kernel hence the result. D.1 Regularity of operatorR We first need to investigate the regularity of the approximate filter operator. d Ω (θ,θ). D.2 Regularity of the cost functions We last need to check that the cost functions c , C also belong to BLP (Ω). 
Lemma D.4 Let g be a function from X onto R belonging to BLP (E), then the restriction of g toX is in BLP (Ω). Proof First, if g is bounded by g E,P on X , it is also bounded by g E,P onX . Second, if g is constant on E 3 it is also constant on Ω 3 . Last, for all (θ, γ) and (θ, γ) inX one has hence the result. In particular, the restrictions of C andc toX belong to BLP (Ω). D.3 Proof of Theorem 4.3 We establish the result by (backward) induction on k, with the additional statement thatv k is in BLP (Ω) for all k. by definition. In addition,v = C is in BLP (Ω). • Suppose the result holds true for some k + 1 ≤ N . By induction,v n+1 is in BLP (Ω), thus R v n+1 is also in BLP (E) by Proposition D.2. For all d ∈ A,c d is in BLP (Ω). As BLP (Ω) is clearly stable by finite maximum, it follows thatv n is also in BLP (Ω). Now, the dynamic programming equations yield The last term on the right-hand side is smaller than max 1≤j≤nΓ |v k+1 (ρ j ) −v k+1 (ρ j )| asR is a Markov kernel. The first term on the right-hand side is bounded by Proposition D.3 asv n is in BLP (Ω): which concludes the proof. E Specifications for the numerical example The simulation study presented in this section 5 has been constructed based on real data obtained from the Centre de Recherche en Cancérologie de Toulouse (CRCT). Multiple myeloma (MM) is the second most common haematological malignancy in the world and is characterized by the accumulation of malignant plasma cells in the bone marrow. Classical treatments are based on chemotherapies, which, if appropriate, act fast and efficiently bringing MM patients to remission in a few weeks. However almost all patients eventually relapse more than once and the five-year survival rate is about 50%. We have obtained data from the Intergroupe Francophone du Myélome 2009 clinical trial which has followed 748 French MM patients from diagnosis to their first relapse on a standardized protocol for up to six years. At each visit a blood sample has been obtained to evaluate the amount of monoclonal immunoglobulin protein in the blood, a marker for the disease progression. Based on these data, we chose to use exponential flows for Φ, piece-wise constant linear functions for jump intensities, and three possible visit values: T = {δ, 2δ, 4δ} with δ = 15 days. Explicit forms are given in appendix E.1, assumptions are verified in appendix E.2. E.1 Special form of the local characteristics in the simulation study We now detail the special form used in our numerical examples to fit with our medical decision problem. We choose ζ 0 = 1, D = 40, H = 2400 days and T = {15, 30, 60} days. The values of Φ m are given in table 4, where v ∅ 1 = 0.02, v ∅ 2 = 0.006, v 1 = 0.077, v 2 = 0.025, v 1 = 0.01 and v 2 = 0.003. The link function F is chosen to be the identity, and the noise ε corresponds to a truncated centred Gaussian noise with variance parameter σ 2 and truncation parameter s. The explicit form of t * d m is given in table 5. The values of λ m are given in table 6. For the standard relapse intensities (µ i ), we choose piece-wise increasing linear functions calibrated such that the risk of relapsing increases until τ 1 (average of standard relapses occurrences), then remains constant, and further increases between τ 2 and τ 3 years (to model late or non-relapsing patients): We set τ 1 1 = 750, τ 2 1 = 500 (days), τ 2 = 5, τ 3 = 6 (years), ν i 1 was selected so that 20% of patients relapse before τ i 1 , and ν i 2 such that 10% of patients have not relapsed at horizon time H. 
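As an aside, the exponential flows of table 4 and the boundary-hitting times of table 5 can be illustrated by the small sketch below. The rate v is a generic signed exponential rate standing in for the mode/treatment-specific rates, and the boundary values ζ0 = 1 and D = 40 are those chosen above; this is an illustration, not the simulation code of the paper.

```python
import math

# Sketch of an exponential flow Phi(zeta, t) = zeta * exp(v * t) and of the
# corresponding boundary-hitting time t*: time to reach D when the marker grows
# (v > 0) or to return to zeta0 when it decays (v < 0).
def flow(zeta, t, v):
    return zeta * math.exp(v * t)

def hitting_time(zeta, v, zeta0=1.0, D=40.0):
    if v > 0:
        return math.log(D / zeta) / v        # time to reach the death level D
    if v < 0:
        return math.log(zeta0 / zeta) / v    # time to return to the nominal level
    return math.inf                          # constant marker never hits a boundary
```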
For the therapeutic escape relapses (patients who relapse while treated for a current relapse), we chose to fit a Weibull survival distribution of the form with −1 < α i < 0 to account for a higher relapse risk when the marker decreases. We arbitrarily chose β i = −0.8 and calibrated b i = 1000 such that only about 5% of patients experience a therapeutic escape. Finally, the Markov kernels are given in table 7. Cases for m = 3 are omitted as no jump is allowed when the patient has died. The possible transitions between modes are illustrated in fig. 2, and an example of(continuous-time) controlled trajectory is given in fig. 3. E.2 Technical specifications in our examples De −v 2 r ζ 0 e v 2 r theorem 3.8 is valid for our truncated Gaussian noise and identity link function with L f = 2sD(D + s)(pσ 3 √ 2π) −1 ,f ≤ (pσ √ 2π) −1 , f ≥ (pσ √ 2π) −1 e −D 2 /2σ 2 and B f = 2(D + s)f where p = P(−s ≤ Z ≤ s) for a centred Gaussian random variable Z with variance σ 2 . theorem 3.9 is valid for both the time-dependent and marker-dependent cost functions with parameters given in table 9. Table 9: Upper bounds for the regularity parameter for the cost functions. time-dependent cost marker-dependent cost E.3 Grids construction As explained in the main manuscript, starting from an initial grid (with 184 = n Ω points chosen to emphasize strong beliefs on each atom of grid Ω), grids were extended iteratively by simulating a number n sim of trajectories using optimal strategies obtained by dynamic programming on the previous grids, and including all estimated filters with distance to their projection larger than a fixed threshold s. We varied the couples ((n sim , s) for both distances (L 2 and L m ) at each iteration. Extensive simulations (work not shown) indicates that for a similar number of points in the resulting grids, the results are better when using a small number of simulations with a stringent threshold than a larger number of simulations with a larger threshold. At each iteration, we also removed from the current grid all points whose density did not reach a given threshold. To do so, we simulated 10000 trajectories using the current optimal strategy keeping track of all visited points in the grid. Then we removed all points with density smaller than 0.001/n Γ (note that if all points were used equally, the density would be 1/n Γ ). Due to the large amount of points removed at the first iteration, the second grid was computed without additional points. The final process is illustrated in fig. 4. E.4 Distance impact on trajectory Setting the same seed, we illustrate the impact of the grid and distance choice on a trajectory in figs. 5 and 6. The first row shows the (true) value of X 2 , with X 1 indicated by circles for X 1 = 0, triangles for X 2 = 1 and pluses for X 2 = 3. The second row shows the observed process Y , with colors indicating treatments: black for ∅, green for a and red for b. The third row shows the mass probability of each mode of the estimated filter, and the fourth for its projected counterpart. Finally, the fifth row shows the distance between estimated and projected filters, in black for the L 2 distance, and in blue for the L m distance. fig. 5 shows that though the distance between the filter and its projection significantly decreases between the two grids, the projection does not preserve the mode, hence leads to decision which do not always seem appropriate seeing the data, in particular regarding the next visit date. On the contrary, fig. 
6 shows that distance L m decreases the distance meanwhile maintaining the mass distribution between modes. E.5 Choice ofΨ orΨ in practice The discretization strategy lead to a Markov kernelR onX to which can be associated a Markov chainΨ. In theory, the dynamic programming algorithm operates on this Markov chain, and error bounds from the main theorem are computed accordingly. This should imply that at iteration k, filterΨ k is computed from the new observation and the current filterΨ k−1 , then projected on Γ to identify the optimal decision. Then this projectionΨ k = p |Γ (Ψ k ) is saved as current filter for the next observation. In practice, this implies that the projection errorΨ k −Ψ k is propagated through the dynamic programming recursion. We propose instead to saveΨ k as current filter for the next iteration, i.e. the filter at iteration k + 1 will be computed using Y k+1 andΨ k . We do not propose any error bound on this practical strategy, and there is no guarantee that this should lead to better results, as projection errors on Γ may sometimes compensate previous errors. However in our simulation studies we have observed a significant difference between the strategies, as illustrated in table 10 (grid 1021 with Lm distance). One can note that the estimated cost is identical in the visit choice framework between filterΨ andΨ as they correspond to how grids were calibrated, but in practice the real cost is lower with Ψ as the latter is closer to the true data.
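In code, the two propagation options can be contrasted as in the sketch below; psi_hat (the approximate filter update) and project_on_Gamma are hypothetical stand-ins for the operators defined earlier, and no claim is made beyond the empirical observation reported above about which variant performs better.

```python
# Two ways of carrying the filter from one visit to the next.  In both cases
# the decision is read off the projection onto the grid Gamma; they differ in
# which object is kept as the state for the next update.
def step_projected(theta_bar, y, d, psi_hat, project_on_Gamma):
    # variant matching the error analysis: the projected filter is propagated,
    # so projection errors enter the next update
    theta_next = project_on_Gamma(psi_hat(theta_bar, y, d))
    return theta_next, theta_next

def step_unprojected(theta_hat, y, d, psi_hat, project_on_Gamma):
    # practical variant: the unprojected filter is propagated and its
    # projection is used only to select the decision on the grid
    theta_next = psi_hat(theta_hat, y, d)
    return theta_next, project_on_Gamma(theta_next)
```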
2021-12-20T02:15:10.170Z
2021-12-17T00:00:00.000
{ "year": 2021, "sha1": "8c91479af1861b13ba0b5c9423817ae2814ad0a5", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "6617aaa1a59c1afae32fb4c13164206a5cbd58cb", "s2fieldsofstudy": [ "Computer Science", "Mathematics" ], "extfieldsofstudy": [ "Mathematics" ] }
80017197
pes2o/s2orc
v3-fos-license
Common behavioral problems among patients with dementia attending tertiary care hospitals in Dhaka city

Elderly people are increasing day by day in both developing and developed countries due to the development of new treatments, increased awareness of people and improved health facilities. The present study was conducted with the aim of identifying behavioral problems according to the severity of dementia. This descriptive cross-sectional study was conducted in the Department of Psychiatry and the Department of Neuro-medicine of Bangabandhu Sheikh Mujib Medical University (BSMMU), Dhaka, Bangladesh and in the National Institute of Mental Health (NIMH), Sher-E-Bangla Nagar, Bangladesh from September 2013 to March 2015. A total of 150 patients were selected purposively; severity of dementia was graded according to the Mini Mental State Examination (MMSE) and another questionnaire was applied to detect behavioral problems of the patients. In this study mild dementia was found to be the most frequent (38%), followed by severe dementia (35.3%) and moderate dementia (26.7%). The results indicated that behavioral problems were more common in severe dementia. Behavioral problems were more common in severe dementia than in mild and moderate dementia. Among the behavioral problems, sleep disturbance and sexual disturbance were statistically significant. This study provides information about the pattern of behavioral problems among patients with dementia. A liaison approach with other disciplines may improve the quality of life of these patients.

Introduction Elderly people are increasing day by day in both developing and developed countries due to the development of new treatments, increased awareness and improved health facilities. With the rapid increase in the number of elderly people and under conditions of socio-economic transformation, elderly persons are experiencing a difficult time. Most of them are suffering from different types of psychiatric disorders [1]. Patients with dementia need to adjust to a new lifestyle, which is stressful for them. Dementia reduces the ability to learn, reason, retain or recall past experience, and there is also loss of memory, patterns of thoughts, feelings and activities. Behavioral problems include restlessness, agitation, sleep disturbances, eating difficulty, disinhibition and resistance to care, with some problems, such as poor appetite, reported in as few as 2.3% of patients [7]. Behaviors such as aggression, screaming, restlessness, agitation and wandering are frequent reasons for referral to specialist mental health services for older people [8]. The lives of patients with dementia are severely disrupted because of the loss of memories of person, place, time, and circumstances and how to handle them. Some patients often report recent telephone conversations with people who died some time ago. This sort of problem is likely to be the result of a misunderstanding of time [9]. Considering this fact, the present study was conducted with the aim of identifying behavioral problems according to the severity of dementia. The findings of this study will provide baseline information to stimulate further studies as well as being helpful for the development of awareness and the improvement of quality of life of individuals in an ageing society. Data were analyzed using the Statistical Package for Social Sciences (SPSS), version 15; results were assessed at the 5% level of significance with 95% confidence intervals.

Results According to the MMSE, among the participants in this study mild dementia was found to be the most frequent (38%), followed by severe dementia (35.3%) and moderate dementia (26.7%) (Figure 1).
Behavioral problems were more common in severe dementia than in mild and moderate dementia. Among the behavioral problems, sleep disturbance and sexual disturbance were statistically significant (p < 0.05) (Table 1). Another study showed that anxiety was present in up to 94.5% of patients with severe vascular dementia. On the other hand, this study showed that in severe dementia agitation was present in up to 55% of patients [12]. So, this might be the cause of the inconsistency. Anxiety (54%), eating problems (28%) and aberrant motor behavior (47%) were found in a study of 125 people with dementia, most of whom were already taking anti-dementia drugs and other psychotropic medication. In the present study most patients had not taken any medication at the time of data collection; it was their first visit. So, this result was inconsistent with that study.

Conclusion Despite a number of limitations (such as the small sample size, short duration of the study, lack of data from caregivers, purposive sampling, and lack of a validated tool), this study provides information about the pattern of behavioral problems among patients with dementia. The findings of this study emphasize that more awareness is required regarding the management of patients with dementia. Patients with dementia need specific treatment and management. A liaison approach with other disciplines is needed for these patients. To increase quality of life and to avoid treatment complications, it is necessary to provide comprehensive management to this elderly group.

Methods The study was a descriptive cross-sectional study conducted among patients with dementia from September 2013 to March 2015 in the Department of Psychiatry and the Department of Neuro-medicine of Bangabandhu Sheikh Mujib Medical University, Shahbag, Dhaka and the National Institute of Mental Health, Sher-E-Bangla Nagar, Dhaka. The researcher recruited patients from the dementia clinic of BSMMU and the geriatric clinic of NIMH on their respective days, and also from the inpatient and outpatient departments of the respective institutes. A total of 150 patients aged 60 years and above, irrespective of sex, were selected purposively, and informed consent was taken from the patients and their caregivers or legal guardians. The severity of dementia in each case was graded according to the Mini Mental State Examination (MMSE) by the consultants of the respective institutes. Another questionnaire was applied to detect behavioral problems of the patients. Due to the lack of a validated tool in Bangladesh, the researcher herself developed this questionnaire to diagnose behavioral problems in patients with dementia. It included wandering, aggression, sleep disturbances, restlessness, eating disorder and lack of social behavior.
2018-12-05T23:37:14.722Z
2017-06-07T00:00:00.000
{ "year": 2017, "sha1": "cae253ff83e4eb7b4dd74b18f0dbcc3c9c5793d1", "oa_license": "CCBYNC", "oa_url": "https://www.banglajol.info/index.php/bjpsy/article/download/32736/22116", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "cae253ff83e4eb7b4dd74b18f0dbcc3c9c5793d1", "s2fieldsofstudy": [ "Medicine", "Psychology" ], "extfieldsofstudy": [ "Medicine" ] }
9264762
pes2o/s2orc
v3-fos-license
Preparation and Properties of Electrospun Poly (Vinyl Pyrrolidone)/Cellulose Nanocrystal/Silver Nanoparticle Composite Fibers Poly (vinyl pyrrolidone) (PVP)/cellulose nanocrystal (CNC)/silver nanoparticle composite fibers were prepared via electrospinning using N,N′-dimethylformamide (DMF) as a solvent. Rheology, morphology, thermal properties, mechanical properties, and antimicrobial activity of nanocomposites were characterized as a function of material composition. The PVP/CNC/Ag electrospun suspensions exhibited higher conductivity and better rheological properties compared with those of the pure PVP solution. The average diameter of the PVP electrospun fibers decreased with the increase in the amount of CNCs and Ag nanoparticles. Thermal stability of electrospun composite fibers was decreased with the addition of CNCs. The CNCs help increase the composite tensile strength, while the elongation at break decreased. The composite fibers included Ag nanoparticles showed improved antimicrobial activity against both the Gram-negative bacterium Escherichia coli (E. coli) and the Gram-positive bacterium Staphylococcus aureus (S. aureus). The enhanced strength and antimicrobial performances of PVP/CNC/Ag electrospun composite fibers make the mat material an attractive candidate for application in the biomedical field. Introduction Electrospun technology is widely used to form high-quality and well-defined fibers with submicron or nanoscale diameters. The resultant fibers have unique properties, i.e., high surface area-to-volume ratio, small pore sizes, high porosity, and the potential for controlled release of active materials [1,2]. Moreover, the electrospinning process provides a significant compromise, considering throughput and the control of size and shape that could be tuned by proper control of electrostatic forces [3]. The critical factors affecting the electrospinning process and the nanofiber morphology can be divided into four kinds: structural properties of polymer (molecular weight and tacticity), polymer solution parameters (concentration, electrical conductivity, viscosity, and surface tension), processing conditions (voltage, spinning distance, feed rate, and nozzle geometry), and ambient (lactide-co-glycolide) (PLGA) [35], PEO [36,37], and PAN [38,39] exhibited the improved antimicrobial properties in composites. The Tar/PAN/Ag nanofibers showed higher antimicrobial activities (up to 39%) against Gram-positive Staphylococcus aureus (S. aureus) and Gram-negative Escherichia coli (E. coli) in comparison with the neat PAN nanofibers [39]. The main objectives of this work were to use CNCs as reinforcing agents and Ag nanoparticles as antimicrobial agents in electrospun PVP composite fibers to improve the mechanical properties and antimicrobial properties, respectively. The electrospun precursor suspensions and fibers were characterized in terms of rheology, morphology, thermal properties, and mechanical properties. The influence of CNCs and Ag nanoparticles on morphology and size of the electrospun fibers were characterized. E. coli and S. aureus were chosen to evaluate the antibacterial activity of electrospun PVP/CNC/Ag composite fibers. Materials Poly (vinyl pyrrolidone) (PVP, Mw 40,000 and Mw 360,000), N,N 1 -dimethylformamide (DMF), and silver nitrate were purchased from Sigma-Aldrich (St. Louis, MO, USA). All reagents were of analytical grade and were used without further purification. 
Preparation of CNCs The CNCs were isolated from corn stalk using 60 wt % sulfuric acid hydrolysis and mechanical treatments [40]. The prepared CNCs had a needle-like morphology with an average width of 6.4 ± 3.1 nm and length of 120.2 ± 61.3 nm from the transmission electron microscopy (TEM) analysis (see Figure S1 in Supplementary Materials). The aspect ratio was about 18.94. Samples from the prepared PVP solution or other suspensions were added to a plastic syringe with a 20 gauge needle (internal diameter = 0.64 mm), which was connected to a high voltage power supply (Gamma High Voltage Research, Ormond Beach, FL, USA). The feeding rate of the polymer was controlled at 1 mL/h by a syringe pump (Chemyx Inc., Stafford, TX, USA). A piece of grounded aluminum foil was used as the collector. The distance between the spinneret and collector was 20 cm and the applied voltage was 18 kV. The obtained PVP composites were stored at 22 ± 2 °C and relative humidity (40% ± 2%) before further testing.

Characterization of Electrospinning Suspensions The electrical conductivity of the PVP/CNC, PVP/AgNO 3 , and PVP/CNC/AgNO 3 suspensions was measured using a Jenway Model 4330 conductivity and pH meter (OAKION, Bath, UK) at room temperature. The viscosity of the prepared suspensions was determined using a rheometer (AR 2000ex, TA Instruments, Inc., New Castle, DE, USA) with a cone and plate geometry (cone angle = 2°; diameter = 40 mm; truncation = 56 µm) at 25 °C. Steady-state viscosity was measured in a shear rate range from 1 to 100 s −1 . For non-Newtonian fluids, various mathematical models can be used to fit the relationship between shear stress and shear rate. Among them, the power law and Bingham plastic models are most commonly used to describe this behavior [41,42]. The power law model is widely used for its simplicity:

τ = Kγ^n (1)

where τ is the shear stress, K is the flow consistency coefficient, γ is the shear rate, and n is the flow behavior index. With the power law model, the flow consistency coefficient and flow behavior index can be obtained. However, due to the lack of a yield point, the power law model may not accurately fit the rheological curves, especially at low shear rates. To overcome this inconvenience, the Bingham plastic model was considered, as expressed by Equation (2):

τ = τ_0 + µ_p γ (2)

where τ_0 is the yield stress and µ_p is the plastic viscosity. With the Bingham plastic model, the yield point and plastic viscosity can be calculated.

Field Emission-Scanning Electron Microscopy (FE-SEM) Analysis FE-SEM (FEI Quanta TM 3D FEG dual beam SEM/FIB system, Hillsboro, OR, USA) was used to characterize the surface morphology of the composites. The samples were coated with a thin layer of gold before observation in order to increase the sample conductivity. The diameter and diameter distribution of the fibers in the mats were determined using Pro Plus 6.3 (Media Cybernetics, Inc., Bethesda, MD, USA) with sampling sizes of at least 100 fibers from FE-SEM micrographs.

Transmission Electron Microscopy (TEM) To characterize the dispersion of Ag nanoparticles in the PVP electrospun fibers and CNC suspensions, transmission electron microscopy (TEM, JEM 1400, JEOL, Peabody, MA, USA) operating at an accelerating voltage of 120 kV was used. The dimensions of the CNCs were measured using the same process. The CNC suspension was diluted to a concentration of 0.02% (w/v) prior to the TEM test.

Fourier Transform Infrared Spectroscopy (FTIR) An FTIR spectrometer (VERTEX80, Bruker, Billerica, MA, Germany) was used to study the chemical structure of the PVP/CNC/Ag composites.
The FTIR spectra of materials were evaluated in the range of 4000 to 400 cm´1 with a resolution of 4 cm´1 at 32 scans. Thermogravimetric Analysis (TGA) To study the thermal stability of composite fiber samples, approximately 5 mg of sample were placed in a standard TGA pan and heated in temperature ranging from 30 to 600˝C, with a heating rate of 10˝C/min under a nitrogen flow of 40 cm 3 /min using a thermogravimetric analyzer TAQ50 analyzer (TA Instruments, New Castle, DE, USA). Mechanical Properties Tensile strength and elongation at break were measured using the TA AR2000 rheometer (TA Instruments, New Castle, DE, USA) with a solid fixture. Mats were carefully peeled off from the surface of aluminum foil and then placed between two pieces of weighing paper to avoid any direct touch damage on the mat surfaces during sample preparation. The tensile gauge length was 10 mm. The speed of tensile testing was 10 µm/s and three specimens with dimension of 15 mm (length)5 mm (width)ˆ0.2 mm (thickness) were used for each sample group. The stress and strain were calculated through the machine-recorded force and displacement based on the initial cross-section area and gauge length, respectively. Antimicrobial Performance Antimicrobial activities of PVP/CNC/Ag composite fibers were tested against both the Gram-negative bacterium E. coli and the Gram-positive bacterium S. aurues using the Kirby-Bauer antibiotic testing method. First, the bacteria were cultivated in 10 mL sterilized tryptic soy broth and incubated in an incubator at 37˝C. The cell density was monitored by measuring the absorbance at 600 nm of culture medium using a spectrophotometer. When the cell density achieved approximately 10 7 CFU/mL, the culture medium was taken out from the incubator and carefully spread on the surface of solidified agar plate using a sterile cotton swap. Second, the PVP/CNC/Ag film were cut into disks with a diameter of 12.7 mm and sterilized by UV irradiation for 15 min. The disks were then placed on the inoculated agar plates and incubated at 37˝C for 24 h. Finally, the inhibition zone for bacterial growth was detected visually. Properties of PVP/CNC/AgNO 3 Suspension The solution conductivity plays a key role in the electrospinning process since the viscous polymer solution is stretched due to the repulsion of the charges present on its surface, and more charges can be carried at higher solution conductivity [43]. As listed in Table 1, in comparison with that of the pure PVP solution, the electrical conductivity of PVP/CNC suspensions increased with the addition of CNCs. The conductivity of PVP/CNC-2% and PVP/CNC-4% increased by 9.9˘0.17 and 14.9˘0.18 µs¨cm´1 compared with the value of pure PVP, respectively. This was ascribed to the CNC surface having sulfate ester groups and uronic acid [35]. Similarly, PVP/AgNO 3 suspension also presented higher electrical conductivity than the pure PVP solution because of the excellent electrical conductivity of Ag. The conductivity of PVP/AgNO 3 -0.34% suspension increased from 53.8˘0.22 to 59.9˘0.23 µs¨cm´1 in comparison with that of PVP/AgNO 3 -0.17%. The PVP/CNC-4%/AgNO 3 -0.34% suspension had a higher electrical conductivity than other samples. The viscosity was another critical parameter for determining the morphology of the electrospun fibers. Figure 1a shows the plots of viscosity versus shear rate for the pure PVP, PVP/CNC, PVP/AgNO 3 , and PVP/CNC/AgNO 3 systems. 
The rheology behavior of the aqueous PVP solution changed from Newtonian fluid behavior to typical shear thinning behavior after adding the CNCs. The viscosity of suspension at shear rate of 0.1 s´1 increased from 0.3 to 1.2 Pa¨s after adding 4% CNCs in the PVP solution, which was due to the growth in the collision possibility of CNCs [19]. Zhang et al. reported that the orientation of macromolecular chains was the major cause of non-Newtonian behavior [44]. With the increase in the shear rate, the number of the oriented polymer segments increased, which decreased the viscosity, greatly promoting the non-Newtonian behavior. However, in comparison with CNCs, Ag + had no effect on the pure PVP solution. Moreover, after adding 0.34 wt % AgNO 3 to the PVP/CNCs suspension, the viscosity was observed to decrease by 21% and presented a similar Newtonian rheology behavior in comparison with PVP/CNC-2% polymer suspension. The PVP/CNC-4% polymer suspension also had the same tendency after adding AgNO 3 . For PVP/CNC/AgNO 3 systems, the rheology behavior converted back to the Newtonian fluid and the viscosity was also decreased after adding silver nanoparticles compared with the PVP/CNC systems. This was due to the fact that Ag + destroy some of the hydrogen bonds inside the CNCs, or some bonds between CNCs and PVP, which led to the appearance of competition between CNCs and Ag. All of the results indicated that the combination of CNCs and silver nanoparticles can help adjust the rheology behavior of PVP electrospun systems to meet a desired spinning need. Figure 1b shows the plots of shear stress versus shear rate for the pure PVP solution, PVP/CNC, PVP/AgNO 3 , and PVP/CNC/AgNO 3 suspensions. Similar to the viscosity results, the shear stress of the PVP solution also increased with the addition of CNCs. The Bingham plastic and power law models were applied to fit their shear stress-shear rate curves, and the corresponding fit parameters are summarized in Table 2. Both the Bingham plastic and power law models displayed a good fit for the shear stress-shear rate curves. The power law model was evidenced by the higher values of R 2 (>0.99), indicating a good correlation. The difference between power law and Bingham plastic can be explained in terms of yield stress. It can be seen that pure PVP solutions, PVP/CNC-2%, PVP/CNC-4%, PVP/AgNO 3 -0.17%, PVP/AgNO 3 -0.34%, PVP/CNC-2%/AgNO 3 -0.34%, and PVP/CNC-4%/AgNO 3 -0.34% suspensions had yield point values of 0.127, 1.011, 1.813, 0.149, 0.105, 0.556, and 0.520 Pa¨s from the Bingham plastic model, respectively. The PVP systems with CNCs had higher yield point values than pure PVP and PVP/AgNO 3 suspensions systems. The yield point, the stress required to move the electrospun suspensions, plays an important role in the electrospinning process, which can be used to predict the feeding rate and the morphology of electrospun fibers. The flow behavior index describes the type of suspensions: Newtonian (n = 1), non-Newtonian with a shear thinning behavior (n < 1), or non-Newtonian with shear thickening behavior (n > 1) [45]. From this aspect, the flow behavior index of PVP, PVP/AgNO 3 -0.17%, and PVP/AgNO 3 -0.34% was near 1, indicating the Newtonian fluid behavior. The addition of CNCs in the suspension led to increased shear thinning behavior (reduced n value). For PVP/CNC/AgNO3 systems, the rheology behavior converted back to the Newtonian fluid and the viscosity was also decreased after adding silver nanoparticles compared with the PVP/CNC systems. 
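As an illustration of how the two models of Equations (1) and (2) can be fitted to a measured flow curve, the short sketch below uses SciPy's curve_fit on a synthetic shear stress-shear rate data set; the numbers are illustrative only and are not the values reported in Table 2.

```python
import numpy as np
from scipy.optimize import curve_fit

# Fit the power law (Eq. 1) and Bingham plastic (Eq. 2) models to a synthetic
# flow curve and report R^2, the flow behavior index n and the yield point tau0.
power_law = lambda rate, K, n: K * rate**n                 # tau = K * gamma^n
bingham   = lambda rate, tau0, mu_p: tau0 + mu_p * rate    # tau = tau0 + mu_p * gamma

shear_rate = np.linspace(1, 100, 50)                       # s^-1
rng = np.random.default_rng(1)
shear_stress = 0.9 * shear_rate**0.85 + rng.normal(0, 0.2, shear_rate.size)  # Pa, synthetic

(K, n), _ = curve_fit(power_law, shear_rate, shear_stress, p0=(1.0, 1.0))
(tau0, mu_p), _ = curve_fit(bingham, shear_rate, shear_stress, p0=(0.1, 0.01))

for name, pred in (("power law", power_law(shear_rate, K, n)),
                   ("Bingham plastic", bingham(shear_rate, tau0, mu_p))):
    r2 = 1 - np.sum((shear_stress - pred) ** 2) / np.sum((shear_stress - shear_stress.mean()) ** 2)
    print(f"{name}: R^2 = {r2:.4f}")
print(f"flow behavior index n = {n:.3f}, yield point tau0 = {tau0:.3f} Pa")
```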
Figure 2 shows the morphology of the electrospun neat PVP (Figure 2a), PVP/CNC-2% (Figure 2b), PVP/CNC-4% (Figure 2c), and PVP/CNC-4%/AgNO3-0.34% (Figure 2d) composite fibers. The average fiber diameters (AFD) of PVP, PVP/CNC-2%, PVP/CNC-4%, PVP/AgNO3-0.17%, PVP/AgNO3-0.34%, PVP/CNC-2%/AgNO3-0.34%, and PVP/CNC-4%/AgNO3-0.34% were, respectively, 305 ± 31, 236 ± 40, 197 ± 41, 214 ± 35, 193 ± 43, 151 ± 45, and 131 ± 46 nm (Table 1). With the increase in the silver nanoparticle content, the AFD of the PVP/AgNO3-0.17%, PVP/AgNO3-0.34%, PVP/CNC-2%/AgNO3-0.34%, and PVP/CNC-4%/AgNO3-0.34% composite fibers decreased to 214 ± 35, 193 ± 43, 151 ± 45, and 131 ± 46 nm (Table 1), respectively. In comparison with pure PVP, the conductivity of PVP/AgNO3-0.34% increased by 32%, while the AFD decreased by 37% (Table 1). The increased electrical conductivity led to an increased surface charge of the polymer jet, and thus stronger elongation forces were imposed on the jet, resulting in defect-free, more uniform fibers with a thinner diameter distribution [29,37]. Based on the conductivity and viscosity of the solutions discussed above, it was concluded that PVP/CNC-4%/AgNO3-0.34% had the smallest diameter owing to its higher conductivity and controllable rheological properties. This phenomenon was similar to that of the electrospun Ag/CS/PEO and Ag/CNC/PLA fibers studied by Jing An and Cacciotti et al. [36,46].

The FE-SEM-EDS data were recorded in order to provide further confirmation of the formation of silver nanoparticles on the cellulose fibers. SEM-EDS spectra of the silver nanoparticle-impregnated composite are presented in Figure S2 (Supplementary Materials). The EDS spectrum of the silver nanoparticle-impregnated PVP/CNC-4%/AgNO3-0.34% confirms the existence of silver nanoparticles in PVP/CNC-4%/AgNO3-0.34%, amounting to 0.37 wt %. The electrospinning process favors the uniform dispersion of Ag+ species in the PVP chains through the interaction with the carbonyl groups in the PVP molecules [13]. Figure 3 shows the TEM micrographs of the electrospun PVP/CNC/AgNO3-0.34% fibers. Most Ag nanoparticles on the electrospun fibers had diameters between 7.84 and 21.53 nm (Figure 3a). It was clearly observed that individual fibers contained Ag nanoparticles on their surfaces (Figure 3b). The Ag nanoparticles dispersed well in the electrospun composite fibers. The PVP not only promoted the nucleation of Ag nanoparticles, but also prohibited their aggregation [47]. Compared with other PVP-based composites, such as PVP/polyaniline (PANI), the aggregation between PANI was also significantly reduced by using PVP as the dispersing medium and, hence, the storage stability of the dispersions was improved as compared with a direct dispersion of PANI in distilled water [48].
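The reported average fiber diameters (e.g., 305 ± 31 nm) and Ag particle sizes are mean ± standard deviation summaries of individual diameters measured from the SEM/TEM images; a minimal sketch of that summary is shown below. The per-fiber and per-particle diameter arrays are hypothetical measurements, not the authors' raw data.

```python
import numpy as np

# Hypothetical per-fiber diameters (nm) measured from an SEM image
fiber_diam_nm = np.array([270, 298, 311, 342, 289, 325, 301, 276, 333, 305])
# Hypothetical per-particle Ag diameters (nm) measured from a TEM image
ag_diam_nm = np.array([8.1, 10.5, 12.3, 15.0, 18.7, 21.0, 9.4, 13.8])

def summarize(values):
    # Mean and sample standard deviation, as reported in Table 1
    return values.mean(), values.std(ddof=1)

afd, afd_sd = summarize(fiber_diam_nm)
ag_mean, ag_sd = summarize(ag_diam_nm)
print(f"average fiber diameter = {afd:.0f} +/- {afd_sd:.0f} nm")
print(f"Ag particle size       = {ag_mean:.1f} +/- {ag_sd:.1f} nm "
      f"(range {ag_diam_nm.min()}-{ag_diam_nm.max()} nm)")
```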
FTIR Analysis

Figure 4 shows the FTIR spectra of PVP, PVP/CNC-4%, PVP/AgNO3-0.34%, and PVP/CNC-4%/AgNO3-0.34%. The peaks located at 2954, 1654, 1421, and 1288 cm−1 for PVP (Figure 4a) were assigned to the stretching vibrations of C-H, C=O, C=C, and C-N, respectively [15]. The characteristic bands such as C-O stretching at 1060 cm−1, C-H rocking at 897 cm−1, and C-OH stretching at 1109 cm−1 belong to the spectrum of cellulose after acid hydrolysis [49][50][51]. These peaks were all observed in the PVP/CNC-4% blends, indicating that the PVP/CNC-4% fiber composites contained both PVP and CNCs. With the addition of 4 wt % CNCs, the absorption band of PVP at 1659 cm−1 shifted to 1651 cm−1 (Figure 4), which indicated the presence of some molecular interaction between PVP and the CNCs. After the AgNO3 was added, the absorption band of PVP at 1659 cm−1 shifted to 1648 cm−1 (Figure 4), suggesting the presence of some interaction between Ag+ and the C=O groups [34,52]. A similar result involving Ag and PVP has also been reported by Chen et al. [53]. However, when 4 wt % CNCs were added to the PVP/AgNO3-0.34% composites, the band of PVP/AgNO3-0.34% at 1648 cm−1 shifted back to 1659 cm−1. This was ascribed to the existence of Ag disturbing the hydrogen bonds in the network structure of the CNCs. Accordingly, the characteristic peaks of the CNCs, such as C-O stretching at 1060 cm−1, C-H rocking at 897 cm−1, and C-OH stretching at 1109 cm−1, became weak. The hydrogen bonds of the CNCs crosslinked the Ag, which could help better disperse the CNCs in the polymer suspensions. This was why the non-Newtonian behavior of PVP/CNC converted to Newtonian behavior after adding the silver compound.
Thermal Properties

TGA and derived differential TG (DTG) curves of the neat PVP, PVP/CNC-2%, PVP/CNC-4%, and PVP/CNC-4%/AgNO3-0.34% mats are shown in Figure 5a,b, respectively. Thermal parameters, including the onset thermal degradation temperature (T10%) and the maximum thermal degradation temperature (Tmax), are summarized in Table 3. Herein, the onset thermal degradation temperature is regarded as the temperature corresponding to 10% weight loss (T10%). The T10% of neat PVP was 396.0 °C, while the addition of CNCs decreased the T10% to 385.8 °C (2 wt % CNCs) and 375.8 °C (4 wt % CNCs), respectively. The reason was that, in comparison with PVP, neat CNCs had a lower T10% (Table 3), indicating their relatively lower thermal stability. However, with the addition of silver nanoparticles, the T10% of the PVP/CNC-4%/AgNO3-0.34% nanocomposite mats increased to 391.6 °C, indicating that their heat resistance increased. The addition of inorganic materials generally increases the thermal stability of polymers [54]. The Tmax of the PVP/CNC and PVP/CNC-4%/AgNO3-0.34% mats showed almost no change compared to that of pure PVP. Figure 5a also shows that the char yield of the electrospun PVP/CNC-4%/AgNO3-0.34% nanocomposite fibers increased with the addition of silver nanoparticles. Increased char formation can limit the production of combustible gases, decrease the exothermicity of the pyrolysis reaction, and inhibit the thermal conductivity of the burning materials [29].
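The onset degradation temperature T10% (temperature at 10% weight loss) and Tmax (peak of the DTG curve) discussed above can be extracted from a TGA weight-loss trace roughly as follows. The temperature/weight arrays below are a synthetic placeholder trace, not the measured data from Figure 5.

```python
import numpy as np

# Hypothetical TGA trace: temperature (deg C) and residual weight (%)
temp_C = np.linspace(30, 600, 572)
weight_pct = 100 - 95 / (1 + np.exp(-(temp_C - 420) / 25))  # synthetic sigmoidal weight loss

# T10%: first temperature at which the sample has lost 10% of its weight
t10_index = np.argmax(weight_pct <= 90.0)
T10 = temp_C[t10_index]

# DTG: derivative of weight with respect to temperature; Tmax is its extremum
dtg = np.gradient(weight_pct, temp_C)
Tmax = temp_C[np.argmin(dtg)]          # most negative slope = fastest weight loss

char_yield = weight_pct[-1]            # residual weight at 600 deg C
print(f"T10% = {T10:.1f} C, Tmax = {Tmax:.1f} C, char yield = {char_yield:.1f}%")
```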
Figure 6 shows stress-strain curves of the electrospun pure PVP, PVP/CNC-2%, PVP/CNC-4%, and PVP/CNC-4%/AgNO3-0.34% composite fiber mats. The values of the elongation at break and ultimate tensile strength are summarized in Table 4. Upon the addition of 4 wt % CNCs, the ultimate tensile strength of pure PVP increased by approximately 0.8 MPa (from 2.30 ± 0.2 to 3.10 ± 0.1 MPa), indicating a reinforcing effect of the filler. However, with the addition of CNCs, the elongation at break decreased sharply, which suggests that the composite became brittle in comparison with pure PVP. A similar result was reported by Zhang et al. regarding PLA/CNC composite fibers [55]. For electrospun composite fibers, mechanical properties can be affected by several factors, i.e., individual fiber structure, molecular alignment of the amorphous chains, fiber alignment, and inter-fiber bonding [56].
With the addition of 0.34 wt % AgNO3, the elongation at break and ultimate tensile strength decreased slightly. This was due to the addition of the Ag nanoparticles, which led to stress concentration in the nanofiber membranes [15].

Antimicrobial Performance

The antimicrobial properties of the PVP/CNC-4%/AgNO3-0.34% composites were tested against E. coli and S. aureus bacteria by disk diffusion testing. The inhibition zones are presented in Figure 7. After 24 h of incubation, there was bacterial growth directly under the PVP/CNC mats and also up to the edge of the fabric for both E. coli and S. aureus. However, the PVP/CNC-4%/AgNO3-0.34% composite fibers acted as an excellent antimicrobial agent against both E. coli and S. aureus. This could be ascribed to the antimicrobial feature of Ag+: Ag particles were released from the PVP/CNC-4%/AgNO3-0.34% composite fibers [57], and the Ag nanoparticles attached to the cell walls and disturbed cell wall permeability and cellular respiration [15]. As the results above show, the electrospun PVP/CNC-4%/AgNO3-0.34% composite fibers have good potential for application as antimicrobial materials.

Conclusions

With the addition of CNCs and Ag nanoparticles, the PVP/CNC/Ag electrospun suspensions exhibited higher conductivity and controllable rheological properties (viscosity, shear stress, and yield point) using DMF as the solvent. Only a small amount of CNCs and Ag can help tune the rheological properties and electrospinning ability. Both the rheological properties and the FTIR spectra indicated that the existence of Ag disturbed the hydrogen bonds in the network structure of the CNCs. FE-SEM results show that the diameter of the composite fibers was uniform. The average diameter of the electrospun fibers decreased with the increased loading of CNCs and Ag nanoparticles. Most Ag nanoparticles in the electrospun fibers had diameters between 7.84 and 21.53 nm. The CNCs helped increase the tensile strength slightly, while the elongation at break decreased. The thermal stability of the composite fibers decreased slightly with the addition of CNCs but then increased with the presence of Ag nanoparticles. The PVP/CNC/Ag composite fibers showed improved antibacterial activity against both E. coli and S. aureus compared with the PVP/CNC composite fibers. The aggregation between Ag nanoparticles was significantly reduced by using PVP as the dispersing medium through the electrospinning technology. This indicates that the controllable size of the Ag nanoparticles is of potential use in antibacterial materials at room temperature.
2016-07-09T08:41:28.331Z
2016-06-28T00:00:00.000
{ "year": 2016, "sha1": "a137c3c78c90653c12ea432c54801b41574da588", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/1996-1944/9/7/523/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "a137c3c78c90653c12ea432c54801b41574da588", "s2fieldsofstudy": [ "Materials Science" ], "extfieldsofstudy": [ "Materials Science", "Medicine" ] }
211112993
pes2o/s2orc
v3-fos-license
Cyanogenesis in Macadamia and Direct Analysis of Hydrogen Cyanide in Macadamia Flowers, Leaves, Husks, and Nuts Using Selected Ion Flow Tube–Mass Spectrometry Macadamia has increasing commercial importance in the food, cosmetics, and pharmaceutical industries. However, the toxic compound hydrogen cyanide (HCN) released from the hydrolysis of cyanogenic compounds in Macadamia causes a safety risk. In this study, optimum conditions for the maximum release of HCN from Macadamia were evaluated. Direct headspace analysis of HCN above Macadamia plant parts (flower, leaves, nuts, and husks) was carried out using selected ion flow tube–mass spectrometry (SIFT-MS). The cyanogenic glycoside dhurrin and total cyanide in the extracts were analyzed using HPLC-MS and UV–vis spectrophotometer, respectively. HCN released in the headspace was at a maximum when Macadamia samples were treated with pH 7 buffer solution and heated at 50 °C for 60 min. Correspondingly, treatment of Macadamia samples under these conditions resulted in 93–100% removal of dhurrin and 81–91% removal of total cyanide in the sample extracts. Hydrolysis of cyanogenic glucosides followed a first-order reaction with respect to HCN production where cyanogenesis is principally induced by pH changes initiating enzymatic hydrolysis rather than thermally induced reactions. The effective processing of different Macadamia plant parts is important and beneficial for the safe production and utilization of Macadamia-based products. Introduction Macadamia-based commercial products have rapidly increased in recent years. In addition to Macadamia nuts, Macadamia flowers, husks, leaves, and shells are now widely used as a source of functional foods, beverages, and raw materials in cosmetics, feed, and other applications. Abundant antioxidant substances, such as polyphenols, can be extracted from Macadamia skin and husks for utilization in the food and pharmaceutical industries [1][2][3]. Bioactive constituents in Macadamia are believed to provide health benefits such as improved blood lipid profiles, decreased inflammation, oxidative stress, and reduced cardiovascular disease risk factors [4,5]. Acute and chronic toxicities of hydrogen cyanide from plant-derived food have been reported [16,19,20]. Ingestion of 0.5-3.5 mg cyanide/kg body weight results in acute toxicity. Sublethal doses could lead to headache, hyperventilation, vomiting, weakness, abdominal cramps, and partial circulatory and respiratory systems failure. Moreover, cyanide can inhibit cellular respiration, which could result in fatal poisoning [7,16,21]. The concentration of cyanogenic glycosides, such as dhurrin and proteacin, varies among plant species of Macadamia (i.e., M. ternifolia, M. integrifolia, and M. tetraphylla) [6,22]. These compounds are also unevenly distributed within the different parts of a plant (i.e., nuts, seeds, and roots), and their concentrations change at different developmental stages from seed germination to plant maturation [23,24]. In this study, different conditions causing the hydrolysis of cyanogenic compounds in Macadamia that subsequently produce hydrogen cyanide gas were evaluated. Most characterization studies in Macadamia and other plants only involved analysis of cyanogenic glycosides using timeconsuming assays coupled with HPLC analysis [6,25,26]. Furthermore, typical analysis of releasable cyanide uses tedious assays and subsequent spectrophotometry or LC-or GC-MS analysis [14,27,28]. 
In this study, hydrogen cyanide was measured directly above the headspace of the different parts of the Macadamia plant, including the flowers, leaves, husks, and nuts, using selected ion flow tube-mass spectrometry (SIFT-MS). To our knowledge, this is the first study to measure hydrogen cyanide in real time and directly above the headspace of Macadamia samples using SIFT-MS. The rapid and real-time analysis of hydrogen cyanide is particularly important in the processing of the various parts of Macadamia that are known to contain cyanogenic glycosides and can subsequently hydrolyze and undergo cyanogenesis. The optimum conditions (heating temperature, heating time, and pH) for the hydrolysis of cyanogenic glycosides via cyanogenesis toward the maximum generation of hydrogen cyanide were determined. Identifying these conditions would be useful in the pre-processing of Macadamia to ensure maximum hydrolysis of cyanogenic glucoside, leading to maximum release and volatilization of hydrogen cyanide and ultimately toward the safe production and utilization of Macadamia-based products, especially as ingredients in food and beverages.

Sample Preparation
Macadamia (M. integrifolia) flowers, leaves, nuts, and three varieties of husks (A16, Oc, and 695) were donated by Shouxiang Township Organic Agricultural Products Development Co., Ltd. (Guangxi, China). The three varieties of M. integrifolia husks were introduced and propagated in China from Australia and are hybrid cultivars selected from different plantations or open-pollinated progeny (variety A16). HPLC-grade water, hexane, Na2HPO4, and citric acid were purchased from Fisher Scientific (Fisher Chemical, Fair Lawn, NJ, USA). Macadamia flowers and leaves were freeze-dried, ground, and sifted. Macadamia husks were air-dried, crushed, and sifted. Macadamia nuts were crushed, defatted using hexane, and air-dried. All samples were stored in sealed bottles at freezing temperature (−20 °C).

Buffer Preparation

Na2HPO4 solution (0.2 mol/L) was prepared by dissolving 14.2 g Na2HPO4 in 500 mL carbon dioxide-free HPLC water. Citric acid solution (0.1 mol/L) was prepared by dissolving 10.51 g citric acid in 500 mL HPLC water. Different volumes of the 0.2 mol/L Na2HPO4 solution and the 0.1 mol/L citric acid solution were mixed to prepare buffer solutions with pH 2, 3, 4, 5, 6, 7, 8, and 9. The pH of each solution or sample mixture was measured using a Model 10 pH meter (Denver Instrument Company, Arvada, CO, USA).

Optimization of Heating Temperature and Heating Time

Macadamia samples (0.100 g) were subjected to different heating times and temperatures to evaluate the optimum conditions for the maximum hydrolysis of cyanogenic compounds and the maximum production of hydrogen cyanide. Samples were heated at 30, 40, 50, 60, 70, or 100 °C. At each temperature, samples were heated for 20, 30, 45, 60, 80, 100, or 120 min. Immediately after heating, the headspace concentration of hydrogen cyanide was measured using SIFT-MS.

Optimization of pH-Buffering Solution

To evaluate the optimum pH for maximum enzymatic activity and the hydrolysis reaction, 0.100 g of powdered Macadamia flower sample was dissolved in 0.75 mL Na2HPO4-citric acid buffered solutions with different pH (2, 3, 4, 5, 6, 7, 8, or 9). The solutions were heated at 50 °C for 15, 30, 60, 90, and 120 min. The headspace concentration of hydrogen cyanide was immediately measured using SIFT-MS.

Headspace Cyanide Analysis Using SIFT-MS

Headspace hydrogen cyanide (HCN) was analyzed using a V200 selected ion flow tube-mass spectrometer, SIFT-MS (Syft Technologies, Middleton, Christchurch, New Zealand). Using the selected ion scan mode, HCN was measured using the H3O+ precursor ion to detect protonated HCNH+ at m/z 28 with a reaction rate coefficient, k, of 3.8 × 10−9 cm3 s−1. SIFT-MS has recently been used for the headspace analysis of various compounds in different food (oil, cheese, and garlic) and breath matrices [29][30][31][32][33][34]. For the headspace detection of HCN using SIFT-MS, 0.100 g of Macadamia flower, leaf, husk, or defatted nut sample was weighed into an individual 500 mL Schott bottle. Then, 0.75 mL HPLC water or Na2HPO4-citric acid buffer was added, and the solution was mixed and heated (50 °C) in a water bath (Precision Inc., Winchester, VA, USA). A stock cyanide standard solution (1002 ± 5 mg/L KCN in 0.1% NaOH, Specpure, Alfa Aesar, Tewksbury, MA, USA) was used to prepare the working standard aqueous solutions (0, 20, 40, 80, 160, 200, 425, and 1000 µg/L). After this, 1 mL of the working standard solution or matrix blank (HPLC water) was transferred to a 100 mL Schott bottle sealed with a septum-lined screw cap.
The working standards were heated at 50 °C for 30 min to allow for headspace equilibrium prior to SIFT-MS analysis. Figure 2A shows the concentration of cyanide in the headspace (ppbv) as a function of the cyanide concentration in aqueous solution (µg/L) generated by a linear regression model. The correlation coefficient (R2) for the calibration curve was 0.9993, which signifies that the linear regression model fits the data, having a <0.0001 significance probability associated with the F statistic (Pr > F) at 95% confidence intervals.

Immediately after achieving headspace equilibrium by heating, headspace sampling was carried out by inserting a passivated sampling needle (~3.5 cm) through the bottle's septum. The sample inlet flow rate was optimized to 0.35 ± 0.01 Torr·L s−1 (26 ± 1 cm3 min−1) under standard ambient temperature (298 K). The scan duration was 120 s. HPLC water or Na2HPO4-citric acid buffer was used as a blank solution. Lab air was analyzed in between samples to minimize carry-over effects and potential cross-contamination. Five replicates were performed in all analyses.

Dhurrin Analysis in Plant Extracts Using HPLC

Dhurrin extraction and analysis were performed based on the procedure by De Nicola and co-workers [35]. Briefly, 0.2 g of freeze-dried, powdered plant sample was weighed into a 25 mL centrifuge tube, and 0.1 g of activated carbon (Fisher Chemical, Fair Lawn, NJ, USA) and 10 mL methanol (Fisher Chemical, Fair Lawn, NJ, USA) were added. The mixture was sonicated for 25 min at room temperature in a 435 W ultrasonic water bath (Model FS28H, Fisher Scientific, Fair Lawn, NJ, USA) and was left overnight in the tube. After 12-14 h, the mixture was centrifuged (Model Sorvall Legend XFR Centrifuge, Thermo Fisher Scientific, Waltham, MA, USA) for 30 min at 17,000× g and 10 °C and was filtered through a Whatman no. 4 filter paper (GE Healthcare, Buckinghamshire, UK).
The supernatant was collected and 1:1 (v/v) HPLC-grade water was added to the resulting solution. Prior to HPLC analysis, the diluted supernatant solution was filtered through a 0.2 µm RC membrane filter (Phenomenex, Torrance, CA, USA) using a luer-type syringe (Henke-SASS Wolf GmbH, Tuttlingen, Germany) and was transferred into 1.5 mL amber vials for HPLC analysis. Dhurrin stock standard solution was prepared by dissolving 1 mg of pure dhurrin standard (Sigma Aldrich, St. Louis, MO, USA) in 1 mL of HPLC-grade water. Working standard solutions (0, 5, 10, 25, 50, and 100 mg dhurrin/L solution) were prepared using aliquots of the stock standard solution and diluted with 1:1 H2O/methanol (v/v) solution. The dhurrin standard solutions were transferred to 1.5 mL amber vials, correspondingly, for HPLC analysis. A solution of 1:1 H2O/methanol (v/v) was used as the matrix blank.

Figure 2B shows the peak area of dhurrin as a function of the dhurrin concentration in aqueous solution (mg/L) generated by the linear regression model. The correlation coefficient (R2) for the calibration curve was 0.9999, which signifies that the linear regression model fits the data, having a 0.0037 significance probability associated with the F statistic (Pr > F) at 95% confidence intervals. Analysis of dhurrin from the sample extracts and standards was carried out using an HPLC (1100 Series, Agilent Technologies, Santa Clara, CA, USA) equipped with a G1311A quaternary pump, a G1322A degasser, a G1313 ALS autosampler, and a G1316A thermostated column compartment with a C-18 column. The chromatographic conditions involved a flow rate of 1 mL/min, eluting with a gradient of water (A) and acetonitrile (B). The gradient program was set as follows: isocratic 10% B for 1 min, linear gradient to 30% B for 7 min, and linear gradient to 10% B for 2 min. Dhurrin was detected using a G1315B diode array detector (DAD) (Agilent Technologies, Santa Clara, CA, USA), and its absorbance was monitored at 232 nm. Dhurrin's spectral peak was identified by comparing the retention time to that of pure dhurrin from the standard solutions. The resulting chromatograms (Figure 3) were automatically integrated using ChemStation software (Agilent Technologies Inc., Santa Clara, CA, USA). Five replicates per standard or sample extract were performed in all analyses.
Total Cyanide Analysis in Plant Extracts Using UV-Vis Spectrophotometer

The alkaline picrate method was used for the extraction and analysis of total cyanide, as outlined by Sarkiyayi and Agar [36] and Omar and co-workers [37]. Five grams (5 g) of dried sample and 50 mL HPLC water were placed in a conical flask, soaked overnight, and then filtered using Whatman no. 4 filter paper. One mL of the filtrate was transferred to a test tube, and 4 mL alkaline picric acid solution was added. The mixture was incubated for 5 min in a 95 °C water bath. After color development, the absorbance of the mixture was measured at 490 nm using a Varian UV-vis spectrophotometer (Agilent, Cary 50 Bio UV/Visible, Santa Clara, CA, USA). The alkaline picric acid solution was prepared by mixing 1 g picric acid (2,4,6-trinitrophenol crystal, Electron Microscopy Sciences, Hatfield, PA, USA), 5 g Na2CO3 (Fisher Scientific, Fair Lawn, NJ, USA), and 200 mL HPLC water. A stock cyanide standard solution (1002 ± 5 mg/L KCN in 0.1% NaOH) was used to prepare the working standard aqueous solutions (0-20 mg/L). One milliliter of the working standard solution or matrix blank (HPLC water) was transferred to a test tube. Four milliliters of alkaline picric acid solution was added, and the mixture was incubated for 5 min in a 95 °C water bath for color development. The solution absorbance was measured at 490 nm using a UV-vis spectrophotometer. Figure 2C shows the spectral absorbance as a function of the cyanide concentration in aqueous solution (mg/L) generated by the linear regression model. The correlation coefficient (R2) for the calibration curve was 0.9966, which signifies that the linear regression model fits the data, having a <0.0001 significance probability associated with the F statistic (Pr > F) at 95% confidence intervals. For analysis, 20 replicates per standard and 10 replicates per sample extract were used for UV-vis measurement.
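Each of the three calibration curves described above (headspace HCN by SIFT-MS, dhurrin peak area by HPLC, and total cyanide absorbance by UV-Vis) is an ordinary least-squares line used to back-calculate concentrations in unknowns. A generic sketch is given below; the standard concentrations follow the SIFT-MS working standards listed in the text, while the signal values and the unknown reading are hypothetical.

```python
import numpy as np
from scipy import stats

# Standards: aqueous cyanide concentration (ug/L, from the text) vs headspace signal (ppbv, hypothetical)
conc_std = np.array([0, 20, 40, 80, 160, 200, 425, 1000], dtype=float)
signal_std = np.array([2.5, 14, 27, 55, 110, 138, 290, 680])

fit = stats.linregress(conc_std, signal_std)
print(f"slope = {fit.slope:.4f}, intercept = {fit.intercept:.2f}, R^2 = {fit.rvalue**2:.4f}")

def to_concentration(signal):
    """Back-calculate the concentration of an unknown from its measured signal."""
    return (signal - fit.intercept) / fit.slope

unknown_signal = 75.0   # hypothetical sample reading
print(f"estimated concentration = {to_concentration(unknown_signal):.1f} ug/L")
```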
Statistical Analysis

Data fitting, analysis of least square means, and regression analysis of the headspace hydrogen cyanide concentrations were carried out using the PROC REG and PROC MIXED options of the Statistical Analysis System (SAS Institute Inc., Cary, NC, USA). Analysis of variance (ANOVA) was performed to analyze the statistical differences in cyanide concentration between different samples using the least significant difference of the means (LSD) technique in SAS. Significance was defined using p < 0.05 (95% confidence intervals) for least square means comparison. Five replicates were performed in all analyses, except where otherwise specified. The limit of blank (LOB) and limit of detection (LOD) were determined using the methods described by Browne and Whitcomb [38] and Shrivastava and Gupta [39]. The estimated headspace LOB and LOD for HCN using SIFT-MS were 2.462 ppbv and 2.775 ppbv, respectively, which were determined using repeated headspace measurements of the blank (n = 60) heated in a water bath at 90 °C for 50 min.

Figure 4A shows the concentration of hydrogen cyanide (HCN) in the headspace of Macadamia flower samples heated at 30, 40, 50, 60, 70, and 100 °C for 20, 30, 45, 60, and 80 min. For all heating times, the headspace HCN concentration increased from 30 to 50 °C and decreased linearly beyond 50 °C. Thus, HCN generation in Macadamia was at a maximum at 50 °C. These results suggest that the optimum temperature for the enzymatic activity of endogenous dhurrinase and α-hydroxynitrile lyase is 50 °C (Figure 1). At higher heating temperatures (i.e., above 50 °C), cyanide production decreased, which could be caused by decreased enzyme activity or inactivation and, therefore, a reduced subsequent hydrolysis reaction of the main cyanogenic glycoside dhurrin (Figure 4A). It is interesting to note that the decrease in cyanide production at higher temperatures (60-100 °C) is gradual rather than the abrupt reduction that could be expected from thermally induced enzyme inactivation. The cyanide measured at high temperature could be produced from other thermolabile cyanogenic glycosides that are present in minor amounts. At high temperature, the isomers of dhurrin, such as taxiphyllin, zierin, and p-glucosyloxy-mandelonitrile, can readily dissociate and release cyanide without enzymatic hydrolysis [40][41][42]. Therefore, the cyanide released from the thermally induced decomposition of these minor cyanogenic glycosides could be contributing to the detected cyanide in the headspace of Macadamia flower samples heated at higher temperatures. In addition, the longer the heating time, the higher the HCN concentration, with the longest heating times (60 and 80 min) generating the highest HCN concentrations (Figure 4A). The maximum HCN concentration was reached when samples were heated at 40-50 °C for 80 min or at 50 °C for 60 min. From these results, the optimum heating time and temperature were determined to be 60 min at 50 °C, which were used for succeeding experiments.
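The LOB and LOD quoted in the Statistical Analysis subsection above were estimated from repeated blank measurements. A common formulation of these estimates (Armbruster-type definitions, in the spirit of the cited references) is sketched below with simulated blank and low-concentration replicates; the exact procedure and values used by the authors may differ.

```python
import numpy as np

# Simulated replicate headspace readings (ppbv); n = 60 blanks, as in the text
blank = np.random.default_rng(1).normal(2.2, 0.16, size=60)
low_sample = np.random.default_rng(2).normal(3.0, 0.2, size=20)  # hypothetical low-concentration replicates

# LOB: highest signal expected from a blank; LOD: lowest concentration reliably distinguished from LOB
LOB = blank.mean() + 1.645 * blank.std(ddof=1)
LOD = LOB + 1.645 * low_sample.std(ddof=1)

print(f"LOB = {LOB:.3f} ppbv, LOD = {LOD:.3f} ppbv")
```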
Figure 4B shows the headspace concentration of hydrogen cyanide above Macadamia flower samples treated with Na2HPO4-citric acid buffered solutions at different pH (2, 3, 4, 5, 6, 7, 8, 9) and heated at 50 °C for 15, 30, 60, and 90 min. As the pH increased from pH 2 to 7, the concentration of HCN increased. From pH 7 to 9, the concentration of HCN decreased slightly. The Macadamia flower sample treated with pH 7 buffer and heated at 50 °C for 60 min generated the highest headspace concentration of HCN. This result suggests that these conditions are optimum for the underlying enzymatic activities involved in the hydrolysis reaction of cyanogenic glycoside compounds producing hydrogen cyanide gas (Figure 1).

Optimization of Mixture's pH for Maximum Generation of Hydrogen Cyanide

When Macadamia flower was heated at 50 °C for 60 min at its normal physiological pH (pH 4.35), the HCN level was only about 4900-5600 ppbv (Figure 4A,B). Increasing the treatment's pH to pH 7 significantly increased the headspace HCN concentration by 200-250% (~12,500 ppbv). At a more basic pH (pH 8 or 9), the HCN concentration was still significantly higher than the concentration at acidic pH (pH 6 and below), but it was lower than that at pH 7. When the data were analyzed further by plotting the hydrogen cyanide concentration as a function of pH (Figure 4C), the hydrolysis reaction of cyanogenic glycosides at 50 °C could be described as a first-order reaction with respect to the production of hydrogen cyanide. A constant pseudo first-order rate value (k = 0.0081 ± 0.0007 M min−1) was determined from the linear regression slopes of the ln[HCN] (mol L−1) versus heating time (min) plots for pH 2, 3, 4, 5, 6, and 7. At pH 8 and 9, the 90 min data point had to be excluded. The calculated empirical rates of hydrogen cyanide production (d[HCN]/dt) at different pH (Table 1) suggest that hydrogen cyanide production is slower at acidic pH values (pH 2, 3, 4, 5, and 6), increases at basic pH, and reaches a peak at pH 7. These findings are similar to the results of the study by Johansen and co-workers [17]. According to their study, hydrolysis of the cyanogenic glycoside dhurrin follows a first-order reaction with respect to dhurrin, and the rate of dhurrin hydrolysis is very slow at low pH values but strongly increases as the pH is increased. Thus, the first-order rate of hydrolysis of the cyanogenic glycoside dhurrin in aqueous solution is supported by the in vitro hydrolysis of cyanogenic glycosides in Macadamia flower reported in the present study.
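As described above, the pseudo first-order rate constant at each pH is taken as the slope of the ln[HCN] versus heating time regression. A minimal sketch of that regression is shown below; the concentration-time values for a single pH are hypothetical placeholders, not the data behind Table 1.

```python
import numpy as np
from scipy import stats

# Hypothetical headspace-derived HCN concentrations (mol/L) at one pH
time_min = np.array([15, 30, 60, 90], dtype=float)
hcn_mol_L = np.array([1.2e-4, 1.35e-4, 1.7e-4, 2.1e-4])

# Slope of ln[HCN] vs heating time gives the pseudo first-order rate value for that pH
fit = stats.linregress(time_min, np.log(hcn_mol_L))
k = fit.slope
print(f"k = {k:.5f} per min, R^2 = {fit.rvalue**2:.3f}")
```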
Previous reports have mentioned that cyanogenic compounds are highest in the growing tissues of plants and that the activation of metabolic processes coincides with cyanogenic glycoside production [6,44]. The flower is the main reproductive organ of a plant and has very active and complex morphological and physiological features, which support an abundance of ecological functions related to floral development and plant reproduction [45][46][47]. For instance, de novo synthesis of amino acids, enzymes, and structural proteins, which are precursors of N-containing secondary metabolites (such as cyanogenic glycosides) and signaling molecules, all occur in floral tissues [47]. These complex metabolic processes during floral development and growth could be contributing to the increased biosynthesis of cyanogenic glycosides.

The concentration of hydrogen cyanide in Macadamia leaves (513 ± 0.6 ppbv) is within the concentration range of cyanide (364-1403 ppbv) detected in the leaf tissue of M. ternifolia, M. integrifolia, and M. tetraphylla species during their early to mid-developmental stages (3rd-4th week) [6]. Young leaves were observed to contain higher amounts of cyanogenic glycosides, which could be due to the copious amounts of carbon and nitrogen precursors readily available during germination, allowing rapid biosynthesis of cyanogenic compounds. Cyanogenic glycoside in leaf tissue was, however, observed to decrease with plant maturation because these compounds are rapidly metabolized and broken down as the leaves become older [6,[48][49][50].

Macadamia husks are the fleshy green fibrous pericarp covering the conical or spherical hard brown shell enclosing the Macadamia nut [51]. Similar to Macadamia flowers, there are no published data on the hydrogen cyanide concentration in Macadamia husks available for comparison. Moreover, the hydrogen cyanide concentrations of the husks analyzed from the three different Macadamia varieties were significantly different: 21 ± 0.1 ppbv for variety 695, 256 ± 0.4 ppbv for variety Oc, and 476 ± 0.7 ppbv for variety A16 (Figure 5). It was previously reported that the quantities of cyanogenic glycosides in Macadamia seedlings and other plants vary according to species, developmental stage, and tissue type; however, the cyanogenic glycosides in the varieties of Macadamia husk used in this study have yet to be conclusively identified [6,52]. Seeds of Macadamia species are also capable of accumulating cyanogenic glycoside compounds, and the concentration varies depending on the variety [6,43]. In the present study, the hydrogen cyanide concentration of Macadamia nuts (5.8 ± 0.1 ppbv) was lower than the reported cyanide concentrations in the commercially used seeds (~74 ppbv) of M. integrifolia or M. tetraphylla and significantly lower than the concentration detected in M. ternifolia (~4800 ppbv), which is considered to be inedible.
Dhurrin and Total Cyanide Concentrations in Untreated and Treated Macadamia Plant Part Extracts

The cyanide concentrations in the extracts followed the same trend as the dhurrin concentrations. Figure 7 shows the total cyanide concentrations of the fresh, untreated Macadamia flower (417.7 ± 0.8 mg/L), leaves (167 ± 2 mg/L), nuts (67.1 ± 0.6 mg/L), and husks (695: 94.3 ± 0.7 mg/L; A16: 23.9 ± 0.1 mg/L; Oc: 50.6 ± 0.4 mg/L). After full sample treatment using the optimized conditions (i.e., samples treated with pH 7 buffer solution and heated at 50 °C for 60 min), significant amounts of dhurrin and cyanide were removed from the analyzed extracts, as shown in Figures 6 and 7, respectively. It is interesting to note that heating samples at 50 °C for 60 min without pH adjustment (heated-only samples) had little to no effect on the removal of dhurrin (Figure 6A) or cyanide (Figure 7A) in the Macadamia flower and leaf samples. On the other hand, treating the Macadamia flower and leaf samples with buffered solution at pH 7 without heating (buffered-only) resulted in significant removal of dhurrin (Figure 6B: flower, 419 ± 1 mg/L; leaves, 98 ± 1 mg/L) and cyanide (Figure 7A: flower, 370 ± 2 mg/L; leaves, 48 ± 1 mg/L) from the extracts.
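The removal efficiencies summarized in the next paragraph (Table 2) follow directly from the untreated and treated extract concentrations of the kind quoted above. A minimal sketch of that calculation is given below; the untreated flower value is taken from the text, while the treated concentration is a hypothetical placeholder chosen only to illustrate the arithmetic.

```python
def removal_efficiency(c_untreated, c_treated):
    """Percent of the analyte removed, from untreated and treated extract concentrations (mg/L)."""
    return 100.0 * (c_untreated - c_treated) / c_untreated

# Untreated flower total cyanide ~417.7 mg/L (from the text); treated value of 45.0 mg/L is hypothetical
print(f"flower: {removal_efficiency(417.7, 45.0):.0f}% of total cyanide removed")
```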
Analysis of treatment efficiencies (Table 2) showed that the full treatment of samples (i.e., treated samples) by heating (50 °C, 60 min) with pH 7 adjustment resulted in 93-100% removal of dhurrin and about 81-91% removal of cyanide across the different Macadamia plant parts. Treatment by heating alone was only about 1% effective in removing dhurrin and only about 5-12% effective in removing cyanide (Table 2). However, treating the Macadamia flower and leaf samples with buffered solution at pH 7 without heating (buffered-only) produced removal efficiencies (Table 2) for dhurrin (93-100%) and cyanide (89-91%) similar to those of the fully treated samples heated at 50 °C for 60 min at pH 7.

Conclusions

The optimum conditions for the maximum release of hydrogen cyanide from Macadamia samples were 50 °C for 60 min at pH 7. Under these treatment conditions, trace amounts of hydrogen cyanide could still be detected in the headspace directly above the different Macadamia plant part samples using SIFT-MS. The measured hydrogen cyanide concentrations in the headspace of the treated samples were 12,535 ± 11 ppbv (flower), 513 ± 0.6 ppbv (leaves), 6 ± 0.1 ppbv (nuts), 476 ± 0.7 ppbv (husk A16), 256 ± 0.4 ppbv (husk Oc), and 21 ± 0.1 ppbv (husk 695). Treatment of Macadamia samples under these optimum conditions produced 93-100% removal of dhurrin and 81-91% removal of total cyanide in the sample extracts. Treatment by pH 7 adjustment alone (buffered-only, without heating) also removed dhurrin (86-100%) and total cyanide (88-89%) from the Macadamia extracts to an extent similar to the full, optimized treatment. Heating the samples alone at 50 °C for 60 min without pH adjustment was not effective in hydrolyzing and removing the cyanogenic glycoside dhurrin or total cyanide from Macadamia samples. The varying concentrations of generated hydrogen cyanide can be attributed to the concentrations of cyanogenic glycosides (such as dhurrin) in the different parts of the Macadamia plant and their subsequent hydrolysis to hydrogen cyanide. Cyanogenic glycosides were greatest in Macadamia flowers, followed by the leaves and husks (depending on variety), and lowest in nuts.
The results indicate that the hydrolysis of cyanogenic glycosides in Macadamia is predominantly induced by pH changes rather than by heat. This further suggests that the enzymatic hydrolysis involved in cyanogenesis is chiefly pH-directed rather than thermally induced. In addition, the hydrolysis reaction of cyanogenic glycosides could be described as a first-order reaction with respect to the in vitro production of hydrogen cyanide. These results provide further insight into the cyanogenic systems in Macadamia. Moreover, the optimum conditions evaluated here for the hydrolysis of dhurrin and the release and removal of hydrogen cyanide could be helpful for the effective processing of the different parts of the Macadamia plant. Such information provides some guidelines toward the safe production, utilization, and consumption of Macadamia-based products.
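As an illustration of the first-order description mentioned above, the sketch below fits a first-order release model, [HCN](t) = C_max·(1 − e^(−kt)), to a time course; the time points and concentrations are invented for illustration and are not measurements from this study.

```python
# Minimal sketch (assumption-laden): fitting first-order HCN release,
# [HCN](t) = C_max * (1 - exp(-k * t)). The time points and concentrations
# below are made-up illustrative data, not measurements from this study.
import numpy as np
from scipy.optimize import curve_fit

def first_order_release(t, c_max, k):
    return c_max * (1.0 - np.exp(-k * t))

t_min = np.array([0, 10, 20, 30, 45, 60], dtype=float)           # minutes
hcn_ppbv = np.array([0, 4200, 7600, 9800, 11600, 12400], float)  # hypothetical

(c_max, k), _ = curve_fit(first_order_release, t_min, hcn_ppbv, p0=(12000, 0.05))
print(f"C_max ≈ {c_max:.0f} ppbv, rate constant k ≈ {k:.3f} 1/min")
```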
Data Acquisition in Particle Physics Experiments

Generally speaking, most particle physics experiments rely on different types of sensors whose information has to be gathered and processed to extract as much semantic content as possible about the process being analyzed. This implies the need not only for hardware coordination (timing, data width, speeds, etc.) between the sub-acquisition systems serving the different sensor types, but also for an information processing strategy that extracts more significance from the analysis of the data of the whole system than can be obtained from a single sensor type. Also, from the point of view of hardware resources, each type of sensor is replicated many times (even millions of times) to achieve spatial coverage, which leads directly to the extensive use of integrated devices to improve cost and space utilization.

Introduction

This chapter presents an overview of technological aspects related to data acquisition (DAQ) systems for particle physics experiments. General as the topic of data acquisition may be, particle physics experiments pose challenges that deserve a special description and for which special solutions may be adopted. This chapter will therefore cover the specific technologies used in the different stages into which a general DAQ system for particle physics experiments can be divided. The rest of the chapter is organized following the natural flow of data from the sensor to the final processing. First, we describe the most general abstraction of DAQ systems, pointing out the architectures commonly used in particle physics experiments. Second, the common types of transducers are described with their main characteristics, followed by a review of the hardware architectures for the front-end system. We then cover several common data transmission paradigms, including modern standard buses and optical fibers. Finally, present hardware processing solutions are reviewed.

Data acquisition architectures

Data acquisition pursues the reading of information from one or many sensors for real-time use or for storage and further off-line analysis. Strictly speaking, we may distinguish four activities in a sensor processing system: acquisition, processing, integration and analysis. However, most of the time we refer to the whole of these activities as the DAQ system. It is worth noting that not every DAQ system includes all four activities; this depends on its complexity and application. For example, in single-sensor systems neither integration nor processing may be necessary.
On the other hand, in systems with replicated sensors, processing could be minimal, but the integration is crucial. If the system is based on different types of sensors, processing is necessary to make the readings of the various sensors compatible, and integration is needed to obtain comprehensive information about the environment. However, the majority of DAQ systems will include the four activities: the physical variable is sensed in the acquisition activity; the collected data are processed properly (for example, scaled or formatted) before being transported to the integration activity; and the output of the integration is more meaningful information on which the analysis activity can base its tasks (storage, action on a mechanism, etc.).

Architectures of sensor systems

As mentioned before, the DAQ system consists of four activities: acquisition, processing, integration and analysis. Depending on the characteristics of the process under study, we have to choose how to organize them, as we shall see now, to adapt the system to our needs [1].

Collection of sensors

A collection of sensors is a set of sensors arranged in a certain way. They can be in series, in parallel or in a mixed combination of these two basic arrangements. The choice of the particular configuration will depend on the application. The integration of information is carried out progressively through the different sensors to obtain a final result.

Hierarchical system

In a centralized system, data from the sensors are transmitted to a central processor to be combined. If the volume of data is large, this organization may require considerable bandwidth. For these cases, the DAQ system can be arranged as a hierarchy of subsystems. The interesting aspect of this organization is that an increase in the "size" of the problem does not translate into a similar increase in the organization of the DAQ, i.e., the system does not grow linearly with the problem. This is true provided that the data and feature fusion stages reduce the volume of information.

Multisensor integration

Figure 1b shows the integration of various sensors s1, s2, s3 and s4 (not all of the same type). We assume that s1 and s2 are of the same type, s3 of a second type and s4 of a third type. In this case, the integration of the information has to be done in a way that ensures that data from the sensors are compatible.

Distributed processing of sensors

In the following we will focus on DAQ systems where the four activities described before take place in a distributed way, known as distributed processing systems [2]. This case is of special interest, as many present DAQ systems for particle physics follow it. To better understand the different aspects of sensor processing, let us consider a distributed sensor processing system whose main objective is to detect targets present in the surveyed space. This example may apply to particle physics experiments but also to other fields such as distributed control systems, sensory systems for robots, etc. Let us assume that there is a finite number of resources (sensors and processors) in the distributed system. Consider a system in which there are N sensors (S1 to SN) and P processors (EP1 to EPP). The N sensors, for example, can track objects in the observation space, and we assume that they are all of the same type, that is, they form a system based on physically replicated sensors.
Let us suppose they have been organized in P sensor groups or clusters, for example 3, of N/P sensors each. In our example, there are three groups, each with three sensors and a processor to control them. The main task, T, is to detect and possibly follow targets across the surveyed space. Consider two possibilities: 1. The observation space is too broad and therefore cannot be efficiently covered by any of the sensor clusters. 2. The observation space can be covered by all the groups of sensors, but the system requires a real-time response for the follow-up of the target. In the first case, part of the space can be assigned to each group of sensors; collectively, they will cover the whole surveyed space. In the second case, we can assign to each cluster the task of following some specific number of targets; ideally, each group should be following a single target. In our example, the distributed processing system breaks down the main task T into P subtasks; this operation is known as task decomposition. The objective of each subtask Ti is to detect and follow the i-th target in the observation space. Each task is assigned a processing element, EPi, that controls the three sensors of the group. Each group of replicated sensors has a local processor. The processor is responsible for local processing and control; it can control the sensors assigned to it and obtain the values from them. Ideally, the sensors of a cluster should always obtain the same value, but in practice they give different values following some statistical distribution. Suppose that each group can see only part of the space, but the targets can move anywhere within this space. In this case, the system would require communication between the local processors to share the information about the object and to know when it moves from one area to another. Finally, the integrator is responsible for combining data from the sensors and/or their abstractions. It should be noted that we started with nine sensors, in three groups of three sensors each. The three sensors in each group provide redundant information. The processing of each group combines the redundant information to obtain the solution of a subproblem: what object is the one the group is observing? In this way, the integrator gets three sets of data, each coming from a group of sensors. With these data, the observer determines that there are three objects in the observation space. The distance from the integrator to the sensors is not, in general, negligible, so the results of the local processing must be transmitted in some way. In our example, the result obtained by the integrator is a map of the objects present in the whole surveyed space. The DAQ system is assumed to have a knowledge base that can analyze and interpret the data and take the appropriate action depending on the result obtained. In our case, the system interprets that there are three objects occupying the observation space; the reaction of the system will depend on the knowledge base.

Distributed sensor networks

The use of different, intelligent sensors distributed spatially and geographically has grown constantly in applications such as robotics, particle physics experiments, medical imaging, tracking radar, air navigation and control systems for activities on production lines, to name a few. These systems, and other similar ones, are called distributed sensor networks (DSNs) [1].
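As a toy illustration of the cluster-level fusion and integration just described (a sketch only, not tied to any particular experiment), each cluster processor below averages its redundant readings and the integrator assembles the per-cluster estimates into a simple map; the sensor values are simulated rather than read from hardware.

```python
# Minimal sketch of the hierarchical DSN idea: each cluster's local processor
# fuses its redundant sensor readings (here, by a simple mean), and the
# integrator collects one estimate per cluster to build a global "map".
import random
import statistics

def local_fusion(readings):
    """Cluster processor: combine redundant readings into one estimate."""
    return statistics.mean(readings)

def integrator(cluster_estimates):
    """Integrator: one fused value per cluster -> map of observed targets."""
    return {f"target_{i}": est for i, est in enumerate(cluster_estimates, start=1)}

true_positions = [10.0, 42.0, 77.0]                          # three targets
clusters = [[pos + random.gauss(0, 0.5) for _ in range(3)]   # 3 redundant sensors
            for pos in true_positions]

estimates = [local_fusion(c) for c in clusters]
print(integrator(estimates))
```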
In other words, we could define a distributed sensor network as a set of intelligent sensors distributed spatially, designed to obtain data about the environment that surrounds them, abstract the relevant information and infer from it the observed object, deriving from all this an appropriate action according to the scenario.

Data acquisition systems in particle physics experiments

The distributed sensor network (DSN) paradigm fits what we generally implement as DAQ systems in particle physics experiments. Because of the need for spatial coverage or an identification scheme based on the detection of different types of particles, the DAQ system will include several sensors of the same type or multiple replicated sensors of different types. Hardware architectures to read out all of them are implemented in a distributed and possibly hierarchical way due to the high data volume, high data rate or geographical sensor distribution. Comparisons of hierarchical DSNs versus other types of solution may be found in [3,4].

Radiation detection. Transducers

Radiation detection involves the conversion of the impinging energy, in the form of radiation, into an electrical parameter that can be processed. To achieve this, transducers are responsible for transforming the radiation energy into an electrical signal. The type of detector has to be specific to each radiation and its energy interval. In general, several factors must be taken into consideration, such as the sensitivity, the energy resolution of the detector, the response time and the detector efficiency. Energy conversion can be carried out either in a direct mode, if the signal is directly detected through the ionization of a material (figure 2a), or in an indirect mode, when several energy conversions are performed before obtaining the electrical signal (light production plus electrical conversion, figure 2b). The following sub-sections describe the most commonly used devices for both medical and nuclear physics applications.

Direct detection

Direct detection with ionization chambers is a common practice. They are built with two electrodes to which a certain electrical potential is applied. The space between the electrodes is occupied by a gas, and the chamber responds to the ionization produced by radiation as it passes through the gas. Ionizing radiation dissipates part or all of its energy by generating electron-ion pairs, which are put in motion by the influence of an electric field, consequently producing an electrical current. Another possibility that provides good results in radiation detection is the semiconductor detector. These are solid-state devices which operate essentially like ionization chambers, but in this case the charge carriers are electron-hole pairs. Nowadays, the most efficient detectors are made of silicon (Si) or germanium (Ge). Their main advantage is their high energy resolution; besides, they provide linear responses over a wide energy range, fast pulse rise times, several geometric shapes (although the size is limited) and insensitivity to magnetic fields [5].

Scintillators

Scintillators are materials which exhibit luminescence when ionizing radiation passes through them. The material absorbs part of the incident energy and re-emits it as light, typically in the visible spectrum. Sir William Crookes discovered this property of some materials in 1903 when bombarding ZnS with alpha particles. Organic scintillators belong to the class of aromatic compounds like benzene or anthracene.
They are made by combining a substance in higher concentration, the solvent, with one or more compounds at lower concentrations, the solutes, which are generally responsible for the scintillation. They are mainly used for the detection of beta particles (fast electrons, with a linear response from ~125 keV), alpha particles and protons (non-linear response and lower efficiency at the same energies), and also for the detection of fast neutrons. They can be found in different states, such as crystals, liquid solutions and scintillating plastics of almost every shape and size, and also in the gaseous state. On the other hand, inorganic scintillators are crystals such as NaI(Tl), CsI(Tl), LiI(Eu) and CaF2(Eu); the element in brackets is the activator responsible for the scintillation, present at a small concentration in the crystal. Inorganic scintillators have in general a high Z and for this reason are mainly used for gamma detection, presenting a linear response up to 400 keV. Regarding their behavior in charged-particle detection, they exhibit responses that are linear with energy for protons from 1 MeV and for alpha particles from 15 MeV; however, they are not commonly used to detect charged particles [5,6,7]. As shown in figure 2b, the scintillator produces a light signal when it is crossed by the radiation to be detected. It is coupled to a photodetector that is responsible for transforming the light signal into an electrical signal.

Optoelectronic technology for radiation detection

Light detection is achieved through the generation of electron-hole pairs in the photosensor in response to the incident light. When the incident photons have enough energy to produce the photoelectric effect, electrons of the valence band jump to the conduction band, where the free charges can move along the material under the influence of an external electric field. The holes left in the valence band by the removal and displacement of electrons also contribute to the electrical conduction, and in this way a photocurrent is generated from the light signal.

Photodetectors. Features

One of the main characteristics of a photodetector is its spectral response. The level of electric current produced by the incident light varies with the wavelength. The relationship between them is given by the spectral response, expressed in the form of photosensitivity S (A/W) or quantum efficiency QE (%). Another important feature is the signal-to-noise ratio (SNR), a measure that compares the level of a desired signal to the level of the background noise. The sensitivity of the photodetector depends on certain factors, such as the active area of the detector and its noise. The active area usually depends on the construction material of the detector; regarding the noise level, the signal is expected to exceed the noise associated with the detector and its electronics, taking into account the desired SNR. One important component of the noise in the photodetector is the dark current [8,9]: the current flowing in the photodetector even in a dark environment, both in photoconductive and in photovoltaic mode, with intensities from nA down to pA depending on the quality of the sensor.
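For orientation, photosensitivity and quantum efficiency are related by S = QE·q·λ/(h·c); the short sketch below evaluates this relation for an illustrative quantum efficiency and wavelength (the numbers are not taken from any specific device).

```python
# Minimal sketch: converting quantum efficiency (QE) to photosensitivity S (A/W)
# at a given wavelength, using S = QE * q * lambda / (h * c). Values are illustrative.
Q_E = 0.25           # 25% quantum efficiency (illustrative)
WAVELENGTH = 420e-9  # metres (typical scintillator emission region)
Q = 1.602e-19        # elementary charge, C
H = 6.626e-34        # Planck constant, J*s
C = 2.998e8          # speed of light, m/s

sensitivity_a_per_w = Q_E * Q * WAVELENGTH / (H * C)
print(f"S ≈ {sensitivity_a_per_w * 1e3:.1f} mA/W")
```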
The light coming from the scintillator is generally of low intensity and, because of that, some photodetectors rely on avalanche processes that multiply the electrons in order to obtain a detectable electrical signal. Other parameters that determine the quality of a photodetector are the reverse voltage, the time response and its response to temperature fluctuations.

Commercial photodetectors

Photomultiplier tubes (PMTs) have long been the most widely employed photodetectors for a large number of applications, mainly due to their good features and proven results. They are used in applications that require measuring low-level light signals, for example the light from a scintillator, converting a few hundred photons into an electrical signal without adding a large amount of noise. A photomultiplier is a vacuum tube that converts photons into electrons by the photoelectric effect. It consists of a cathode made of photosensitive material, an electron collecting system, a chain of dynodes for multiplying the electrons and finally an anode which outputs the electrical signal, all encapsulated in a glass tube. Research on this type of detector and its evolutionary trend is mainly focused on improving the QE, achieved with the development of photomultipliers with bialkali or GaAsP photocathodes, but also on obtaining better time response. Regarding the construction material, four are commonly used depending on the detection requirements and the wavelength of the light (Si, Ge, InGaAs and PbS, figure 3). In applications where the level of light is high enough, photodiodes are usually the detectors employed, due to their lower price but also to their remarkable properties and response. A photodiode is a semiconductor with a PN junction sensitive to infrared and visible light. If the light energy is greater than the band gap energy, electrons are pulled up into the conduction band, leaving holes in their place in the valence band. If a reverse bias is applied, an electrical current flows; thus, the P layer at the surface together with the N layer acts as a photoelectric converter. Other important photodetectors, such as avalanche photodiodes (APDs), have been developed in the last few years [10]. Compared to photodiodes, APDs can detect lower levels of light and are employed in applications where high sensitivity is required. Although the principle of operation, materials and construction are similar to those of photodiodes, there are considerable differences. An APD has an absorption area A and a multiplication area M, which implies an internal gain mechanism that works by applying a reverse voltage. When a photon strikes the APD, electron-hole pairs are created, and in the gain area the electrons are accelerated; thus, an avalanche process starts as a chain reaction of successive ionizations. Finally, the reaction is controlled in a depletion area. The result is that the incidence of a photon generates not just one or a few electrons but a large number of them. In this way, a high level of electric current is obtained from a low level of incident light, with gain values around 10^8. Silicon photomultipliers (SiPMs) are promising detectors due to their characteristic features, and their use in many applications will probably increase during the following years. A SiPM is a photon-counting device consisting of multiple APD pixels forming an array, operated in Geiger mode.
The sum of the outputs of all the APD pixels forms the output signal of the device, allowing the counting of individual events (photons). One advantage is the low reverse voltage needed for its operation, lower than that used with PMTs and APDs. When the applied reverse voltage exceeds the breakdown voltage, the internal electric field is high enough to produce high gains, of the order of 10^6 [11,12]. Finally, a CCD (charge-coupled device) camera is an integrated circuit for digital imaging where the pixels are formed with p-doped MOS capacitors. Its principle of operation is based on the photoelectric effect, and its sensitivity depends on the QE of the detector. At the end of the exposure, the capacitors transfer their charge, which is proportional to the amount of light, and the detector is read out line by line (although other configurations exist). CCDs offer high performance regarding QE and noise levels; however, their disadvantages are their large size and high price. Figure 4 shows the general features of the different photodetectors.

Front-end electronics

When talking about front-end electronics in nuclear or particle physics applications, we usually refer to the electronics closest to the detector, covering processes from amplification and pulse-shape conformation to the analog-to-digital conversion. The back-end electronics are placed further away from the detector and are devoted to processing tasks. In this section, we introduce the common circuits used in the front-end electronics, such as preamplifiers, shapers, discriminators, ADCs, coincidence units and TDCs.

Unipolar and bipolar signals

In nuclear and particle physics, the signals obtained are usually pulse signals. Depending on the detector used, parameters such as the rise time, the fall time and the amplitude differ. Figure 5 (left) shows a typical pulse signal with all its important parameters. Mostly related to the rise time, it is important to remark on the bandwidth of pulse signals, which is determined by the fastest component of the pulse, usually the rise time. A typical criterion for choosing the signal bandwidth based on the temporal parameters is BW = 0.35/tr, where tr is the signal rise time [13].

Preamplifiers

Often in nuclear and particle physics experiments, the signal obtained at the output of the detector is an electrical pulse whose amplitude is proportional to the charge produced by the incident radiation energy. It is quite impractical to use this signal directly without proper amplification, and for this reason preamplifiers are the first stage seen by the pulse signal, usually placed as close as possible to the detector to minimize noise, since noise at this stage is very critical. Two different types of preamplifiers are commonly used depending on the sensed magnitude: voltage-sensitive amplifiers and charge-sensitive amplifiers.

Voltage-sensitive amplifiers

They are the most conventional type of amplifier, and they provide an output pulse proportional to the input pulse, which is in turn proportional to the collected charge. If the equivalent capacitance of the detector and electronics is constant, this configuration can be used. On the other hand, in some applications, for example semiconductor detectors, the detector capacitance changes with temperature, so this configuration is no longer useful; hence, it is preferred to use the configuration called the charge-sensitive preamplifier.
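As a quick numerical illustration of the BW = 0.35/tr rule of thumb quoted above (the rise-time values below are illustrative, not from any particular detector):

```python
# Minimal numerical illustration of the BW = 0.35 / t_r rule of thumb.
def required_bandwidth_hz(rise_time_s: float) -> float:
    return 0.35 / rise_time_s

for t_r in (1e-6, 100e-9, 10e-9):          # 1 us, 100 ns, 10 ns rise times
    print(f"t_r = {t_r * 1e9:6.0f} ns  ->  BW ≈ {required_bandwidth_hz(t_r) / 1e6:7.2f} MHz")
```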
The basic schematic of the voltage-sensitive amplifier is shown in figure 6 (left).

Charge-sensitive amplifiers

Semiconductor detectors, such as germanium or silicon detectors, are themselves capacitive detectors with very high impedance. The capacitance Ci of these detectors fluctuates, making the voltage-sensitive amplifier unsuitable. The idea of this circuit is to integrate the charge on the feedback capacitor Cf. The advantage of this configuration is that the output amplitude is independent of the input capacitance, provided that the condition A >> (Ci + Cf)/Cf is satisfied, where A is the open-loop gain of the amplifier. A schematic of the charge-sensitive amplifier is shown in figure 6 (right) [14]. The feedback resistor Rf is used to discharge the capacitor, bringing the signal back to the baseline level with an exponential tail of around 40-50 µs. This discharge is usually done with a high resistance in order to produce a slow pulse tail, minimizing the noise introduced, but a tail that is too slow can lead to pile-up effects. Another approach to avoid the pile-up effect is the optical-feedback charge amplifier [6,7].

Amplifiers and shapers

After the pre-amplification is carried out, it can be useful to give the pulse a certain shape in order to simplify the measurement of certain magnitudes while preserving the magnitude of interest intact. Pulse stretching and spreading techniques can be used for pile-up cancellation, timing measurements, pulse-height measurements and preparation for sampling. Another reason to use a pulse shaper is SNR optimization, where a certain shape provides the optimal SNR. Most shaper circuits are based on differentiator (CR) and integrator (RC) circuits. The circuit schematic and time response are shown in figure 7 (left). For further information about these circuits, consult the references [7,15].

Shaper networks

Three different pulse shapers are introduced in this sub-section, although many more exist: the CR-RC network, the CR-RC network with pole-zero cancellation and the double-differentiating CR-RC-CR circuit. CR-RC circuits are implemented as a differentiator followed by an integrator (figure 7a). The differentiated pulse allows the signal to return to the baseline level, but it neither gives an attractive pulse shape nor allows easy sampling of the maximum point when extracting the energy with pulse-height analysis. The integrator stage improves the SNR and smooths the waveform. The choice of the time constant is often a compromise between pile-up reduction and the ballistic deficit, which occurs when the shaper produces an amplitude drop; this can be avoided by choosing a time constant that is large compared to the rise time, or to the charge collection time of the detector. When considering a real pulse instead of an ideal step response, CR-RC circuits produce undershoot (figure 7b), which leads to a wrong amplitude level. This can be solved by adding a resistor to cancel the pole of the exponential tail, thereby cancelling the undershoot. If the system counting rate is low, this strategy is useful, but when the counting rate increases, the pulses start to pile up onto each other, creating baseline fluctuations and amplitude distortion. A solution to this problem is the double-differentiating CR-RC-CR network (figure 7c), in which a bipolar pulse is obtained from the input pulse.
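To make the CR-RC behaviour concrete, the following sketch applies discrete-time approximations of the CR (high-pass) and RC (low-pass) stages to an idealized step standing in for the preamplifier output; the sampling period and shaping time constant are arbitrary illustrative choices, not values from any particular system.

```python
# Minimal sketch: discrete-time CR (differentiator) and RC (integrator) stages
# applied to a step-like input, illustrating the CR-RC shaped pulse.
import numpy as np

def rc_lowpass(x, tau, dt):
    alpha = dt / (tau + dt)
    y = np.zeros_like(x)
    for i in range(1, len(x)):
        y[i] = y[i - 1] + alpha * (x[i] - y[i - 1])
    return y

def cr_highpass(x, tau, dt):
    alpha = tau / (tau + dt)
    y = np.zeros_like(x)
    for i in range(1, len(x)):
        y[i] = alpha * (y[i - 1] + x[i] - x[i - 1])
    return y

dt, tau = 10e-9, 1e-6                        # 10 ns sampling, 1 us shaping time
t = np.arange(0, 10e-6, dt)
step = np.where(t > 1e-6, 1.0, 0.0)          # idealized preamplifier step output

shaped = rc_lowpass(cr_highpass(step, tau, dt), tau, dt)
print(f"Peak ≈ {shaped.max():.3f} at t ≈ {t[shaped.argmax()] * 1e6:.2f} us")
```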
The main difference resides in the fact that the bipolar pulse does not leave any residual charge, making it very suitable for systems with high counting rates; for systems with low counting rates, however, unipolar pulses are still preferred, since their SNR is considerably better. Further shaping methods, such as semi-gaussian shapers, active pulse shapers, triangular and trapezoidal shaping, as well as shapers using delay lines, can be consulted in the references [6,7,15].

Discrimination techniques

Discriminator circuits are systems that are activated only if the amplitude of the input signal crosses a certain threshold. Discriminators are used to find the events and to use them as trigger signals, commonly for time measurement. Besides, they block the noise coming from previous devices, such as the detector and other electronics stages. The simplest method for pulse discrimination is leading-edge triggering. It provides a logic signal if the pulse amplitude is higher than a threshold. The logic signal is originated at the moment the signal crosses the threshold, but it suffers from the so-called time-walk effect, which describes the dependence of the discrimination time on the signal rise time. This effect can be seen in figure 8 (left). Another undesirable effect in pulse discrimination is the time jitter effect (figure 8, right). This effect is caused by statistical fluctuations at the detector and electronics level and, unlike the time-walk effect, shows up as a timing uncertainty even when the signal amplitude is constant. It comes from the noise introduced by the components, and also from detector sources such as the transit time of the electrons in a photomultiplier or the fluctuation of the number of photons produced in a scintillator.

Figure 8. Time walk and jitter effects.

Other methods to avoid or reduce these effects in discrimination systems are the zero-crossover timing and constant-fraction discrimination methods. The zero-crossover timing method is based on the double differentiation of the pulse shape. Although this method improves the time resolution and makes the crossing point independent of the amplitude, the shape and rise time still influence the time resolution, making it unsuitable for applications where these fluctuations are very large. Constant-fraction discriminators establish the threshold as a fraction of the pulse maximum. The most common implementation is based on the comparison between a fraction of the signal and a slightly delayed version of it, where the zero-crossing point of their difference makes the timing independent of the amplitude, with lower jitter [7].

A/D conversion (analog-to-digital conversion)

More sophisticated algorithms may be implemented digitally inside logic devices (FPGAs or GPUs). Nevertheless, before performing those algorithms, an analog-to-digital conversion is required, inevitably introducing a source of error due to the sampling and quantization processes. Two of the most common techniques used in nuclear and particle physics are the Wilkinson method and the FADC (flash ADC) when a very high sampling rate is required, although other conversion methods such as successive approximation and sub-ranging ADCs are used as well.
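Before looking at specific converter architectures, the quantization error mentioned above can be illustrated with a short sketch that digitizes a sine wave with an ideal N-bit converter; the resolution, input range and test signal are illustrative only.

```python
# Minimal sketch: quantization error of an ideal N-bit ADC digitizing a sine.
import numpy as np

def quantize(signal, n_bits, full_scale=1.0):
    lsb = 2.0 * full_scale / (2 ** n_bits)
    return np.round(signal / lsb) * lsb, lsb

t = np.linspace(0.0, 1e-3, 4096)
x = 0.9 * np.sin(2 * np.pi * 5e3 * t)          # 5 kHz sine, 0.9 V amplitude

for n_bits in (8, 12):
    xq, lsb = quantize(x, n_bits)
    rms_error = np.sqrt(np.mean((xq - x) ** 2))
    print(f"{n_bits}-bit: LSB = {lsb * 1e3:.3f} mV, RMS error ≈ {rms_error * 1e3:.3f} mV "
          f"(ideal ≈ LSB/sqrt(12) = {lsb / np.sqrt(12) * 1e3:.3f} mV)")
```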
The main difference between the Wilkinson method and FADC sampling is that the Wilkinson method takes one sample per event, based on the time measured while a capacitor is discharged, where the counted time is proportional to the pulse charge. On the other hand, an FADC takes several samples per event, where each digitized value is obtained by comparing the input voltage against a set of resistors forming a voltage divider spanning all the possible digitized values. Although the FADC technique leads to the fastest architecture, the number of comparators grows exponentially with the number of bits required [6,16,17]. The analog-to-digital conversion performance can be characterized by measuring certain parameters, such as the differential and integral nonlinearities (DNL and INL), which cause missing codes, noise and distortion, as well as the effective number of bits (ENOB), which quantifies the resolution loss when distortion and nonlinearities come into play. Further information about ADC parameters can be found in [17] and about their measuring methods in [17,18].

Coincidence and anti-coincidence units

These circuits are used to determine whether an event has been detected in several detectors at the same time, or to select events occurring in only one detector. This is especially useful in detector arrays in order to discard fake events. They are implemented with simple logical operations between the signals from the discriminators [6,7].

TDC (time-to-digital converter)

In most applications in nuclear and particle physics, the measurement of time intervals is a primary task. Basically, time intervals are measured between a start and a stop signal, usually provided by discriminator circuits; a value proportional to the time interval between the start and stop signals is then digitized. Different architectures lead to different performance, but the most important figure of merit of a TDC is its time resolution, defined as the minimum time the TDC is capable of measuring. Among the different architectures, we can mention the TAC (time-to-amplitude converter), the direct time-to-digital converter and, for higher resolutions, the differential TDC and the Vernier counter [15].

Data bus systems for back-end electronics

In this section we present the most popular standards used today to build DAQ systems in particle physics experiments. All the systems presented are modular systems, with each module carrying out a specific function. This approach allows the reuse of modules in other systems and makes the DAQ system scalable. Most features of these modular systems, such as mechanics, data bus characteristics or data protocols, are defined in standards. Many DAQ system manufacturers develop their own products according to these standards, and the use of standards brings many benefits, such as the availability of third-party products and support [19].

NIM standard

NIM stands for Nuclear Instrumentation Module; the standard was established in 1964. The NIM standard does not include any kind of bus for data transfer, since NIM crates only provide power to the NIM modules. The advantage of the NIM standard is that modules are interchangeable and work as standalone units, allowing DAQ systems to be set up in a simple way, where a module can be replaced without affecting the integrity of the rest of the system [20]. These advantages make the NIM standard very popular in nuclear and particle experiments, and it is still used for small experiments.
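Before moving on to the limitations of the bus standards, here is a small software analogue of the coincidence logic described above (offline, rather than in front-end hardware): hits from two detectors are paired when their timestamps fall within a coincidence window; the timestamps and window width are invented for illustration.

```python
# Minimal sketch: offline coincidence search between two detectors.
# Timestamps (in ns) and the coincidence window are invented for illustration.
def find_coincidences(times_a, times_b, window_ns):
    """Return (t_a, t_b) pairs whose time difference is within the window."""
    times_b = sorted(times_b)
    pairs = []
    j = 0
    for ta in sorted(times_a):
        while j < len(times_b) and times_b[j] < ta - window_ns:
            j += 1
        k = j
        while k < len(times_b) and times_b[k] <= ta + window_ns:
            pairs.append((ta, times_b[k]))
            k += 1
    return pairs

det_a = [105.0, 240.0, 512.5, 730.0]
det_b = [107.5, 300.0, 514.0, 731.0, 900.0]
print(find_coincidences(det_a, det_b, window_ns=5.0))
```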
However, NIM has disadvantages, since it lacks a digital bus and therefore allows neither computer-based control nor data communication between modules.

Crate and modules

Standard NIM crates have 12 slots for modules and include a power supply that provides AC and DC power to the modules. The power is distributed via the backplane to the NIM modules, which implement many different functions, for example discriminators, counters, coincidences, amplifiers or level converters.

VME standard

The Versa Module Europa (VME) standard was introduced by Mostek, Motorola, Philips and Thompson in 1981. It offers a backplane that provides fast data transfer, allowing an increase in the amount of transferred data and, therefore, in the channel count coming from the front-end electronics. This makes VME the most widely used standard in physics experiments.

Crate and modules

VME crates contain a maximum of 21 slots, where the first position is reserved for a controller module; the other 20 slots are available for modules performing other functions. There are different types of VME modules, each having a different size and a different number of 96-pin connectors, which define the number of bits available for the address and data buses.

VMEbus

VME systems use a parallel, asynchronous bus (VMEbus) with a single arbiter and multiple masters. VMEbus also implements a handshaking protocol with multiprocessing and interrupt capabilities. It is composed of four different sub-buses: the Data Transfer Bus, the Arbitration Bus, the Priority Interrupt Bus and the Utility Bus [20]. While VMEbus achieves a maximum data transfer rate of 40 MBps, extensions of the VME standard such as VME64, VME64x and VME320 have enhanced its capabilities by increasing the number of bits for address and data and by implementing specific protocols for data communication [21]. In this way, VME64 systems achieve data transfers up to 80 MBps, VME64x supports data transfers up to 160 MBps, and VME320 reaches between 320 MBps and 500 MBps. Also, the VXS standard is an ANSI/VITA standard approved in 2006; it maintains backward compatibility with VME systems by combining the parallel VMEbus with switched serial fabrics. VXS systems achieve a maximum data transfer between modules of 3050 MBps [22].

PCI standard

PCI stands for Peripheral Component Interconnect and was introduced by Intel Corporation in 1991. The PCI bus is the most popular method used today for connecting peripheral boards to a PC, providing a high-performance 32-bit or 64-bit bus with multiplexed address and data lines. PC-based DAQ systems can be easily built using PCI, as PCI cards are directly connected to a PC.

PCI cards

The latest PCI standard specifies three basic form factors for PCI cards: long, short and low profile [23]. PCI cards are keyed to distinguish between 5 V and 3.3 V signaling, and they use connectors with different pin counts according to the data and address bus widths.

PCI Local Bus

PCI devices are connected to the PC via a parallel bus called the PCI Local Bus. Typical PCI Local Bus implementations support up to four PCI boards that share the address bus, data bus and most of the protocol lines, while having dedicated lines for arbitration. The PCI Local Bus width and clock speed determine the maximum data transfer speed. Table 1 shows a summary of the achievable data transfer speeds in PCI and in the extended version of PCI, called PCI-X [24]. A disadvantage of the PCI standard is the use of a parallel bus for data and address lines.
The skew between lines, the fact that only one master/slave pair can communicate at any time, and the handshaking protocol limit the maximum achievable data transfer in PCI [19]. Further, in 1995, PICMG introduced the Compact PCI (cPCI) standard as a very high performance bus based on the PCI bus using Eurocard-format boards. However, cPCI is not widely used in particle physics experiments due to some additional disadvantages, such as small card sizes, limited power consumption and a limited number of slots [25].

PCI Express

PCIe stands for Peripheral Component Interconnect Express. The PCIe standard was introduced in 2002 to overcome the space and speed limitations of the conventional PCI bus by increasing the bandwidth while decreasing the pin count. This standard defines not only the electrical characteristics of a point-to-point serial link, but also a protocol for the physical layer, data link layer and transaction layer. Moreover, the PCIe standard includes advanced features such as active power management, quality of service, hot plug and hot swap support, data integrity, error handling and true isochronous capabilities [26,27]. Like PCI systems, PCIe systems allow the implementation of PC-based DAQ systems.

PCIe cards

PCIe uses four different connector versions: x1, x4, x8 and x16, where the number refers to the number of available bi-directional data paths (lanes) and corresponds to 32-, 64-, 98- and 164-pin connectors, respectively. There are two possible form factors for PCIe cards: long and short.

PCIe bus

The PCIe serial bus transmits at a data rate of 2.5 Gbps using the LVDS logic standard. However, the effective data rate is reduced to 80% of the raw rate due to the use of the 8b/10b encoding [27]. A summary of the data rates achieved per direction using PCIe is given in the corresponding table.

ATCA standard

ATCA stands for Advanced Telecommunications Computing Architecture and was introduced by PICMG in 2002 in the PICMG 3.0 specification. The PICMG 3.0 and 3.x specifications define a modular open architecture including mechanical features, components, power distribution, the backplane and communications protocols. This specification was created for telecommunication purposes, where high speed, high availability and reliability are extremely important; ATCA systems can provide a service availability of 99.999% over time [28].

ATCA shelf

The shelf can host a variable number of ATCA boards (blades) and also hosts the shelf manager, which is responsible for power and thermal control. Figure 10a shows a 14-slot ATCA shelf with a height of 13U, and figure 10b shows an ATCA processor module.

ATCA modules

Regarding the ATCA modules, we can highlight three main types of ATCA blades for data transport purposes: Front Boards, Rear Transition Modules (RTMs) and Advanced Mezzanine Cards (AMCs). All of these modules are hot swappable and have different form factors.

- Front Boards are connected to the shelf backplane through the Zone 1 and Zone 2 connectors. The Zone 1 connector is used to feed the module, and the Zone 2 connector is used for data transport signals. Moreover, Front Boards have a third connector, Zone 3, which provides a direct connection with the RTM.
- Rear Transition Modules are placed on the rear side of the shelf and are used to expand the ATCA system functionality.
- AMCs are mezzanine modules pluggable onto ATCA carriers, extending system functionality. Examples of AMCs include CPUs, DSP systems or storage.
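Since the serial links used by PCIe, and by several of the switched fabrics discussed next for ATCA backplanes, carry an 8b/10b encoding overhead, the usable rate per lane sits below the line rate. A small arithmetic sketch for PCIe 1.x follows; the lane counts are the standard x1/x4/x8/x16 options, and protocol overhead beyond the encoding is ignored.

```python
# Minimal sketch: effective PCIe 1.x throughput per direction after 8b/10b
# encoding (2.5 Gbps line rate, 80% efficiency), scaled by lane count.
def pcie_gen1_throughput_mbps(lanes: int, line_rate_gbps: float = 2.5,
                              encoding_efficiency: float = 0.8) -> float:
    """Payload-level bit rate in Mbps per direction (protocol overhead ignored)."""
    return lanes * line_rate_gbps * encoding_efficiency * 1000.0

for lanes in (1, 4, 8, 16):
    mbps = pcie_gen1_throughput_mbps(lanes)
    print(f"x{lanes}: {mbps:.0f} Mbps (~{mbps / 8 / 1000:.2f} GB/s)")
```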
Moreover, the ATCA fabric interface reaches data rates between modules of 40 Gbps using protocols such as Gigabit Ethernet, InfiniBand, Serial RapidIO or PCIe, and network topologies such as dual star, dual-dual star or full mesh. These features give ATCA systems a clear advantage over other platforms.

MicroTCA

MicroTCA is a specification complementary to PICMG 3.0, introduced by PICMG in 2006. It was defined for systems that require lower performance, availability and reliability than ATCA systems, as well as less space and lower cost, while maintaining many features from PICMG 3.0 such as shelf management and fabric interconnects [29].

MicroTCA shelf and modules

The shelf can host and manage up to 12 single- or double-size AMCs. The AMCs are plugged directly into the backplane in a similar way to ATCA carriers. The function of the backplane is to provide power to the AMC boards as well as connections to the data, control, system management, synchronization clock and JTAG test lines. The MicroTCA backplane can implement network topologies such as star, dual star, mesh or point-to-point between AMCs. The protocols used for data communication over the MicroTCA backplane are Ethernet 1000BASE-BX, SATA/SAS, PCIe, Serial RapidIO and 10GBASE-BX4. Data transfers between AMCs within the MicroTCA backplane can reach speeds of 40 Gbps.

Transmission media

In the past, copper wires, such as coaxial or twisted-pair cables, were widely used to connect front-end electronics to back-end electronics, or even modules within back-end systems. For example, many NIM and VME modules use coaxial cables with BNC, LEMO or SMA connectors for control or data communications. Nowadays, however, the data transmission media for DAQ have moved to fiber optics, whose advantages over copper cables make them the best option to transmit data in present particle physics experiments. Some of these advantages are: EMI immunity; lower attenuation; absence of electrical discharges, short circuits, ground loops and crosstalk; resistance to nuclear radiation and high temperatures; lower weight; and higher bandwidth [30]. Due to the widespread use of fiber optics, optical modules play an essential role in present particle physics experiments; they are needed to convert electrical signals into optical ones for transmission via optical fibers. Some examples of optical modules used for data transmission in particle physics experiments and their data bandwidths are shown in table 3 [31]. Table 3. Examples of parallel optic modules used in particle physics experiments.

Back-end data processing

In the last decades, the improvements in analog-to-digital converters, in terms of sampling rate and resolution, have opened a wide range of possibilities for digital data processing. The migration from analog to digital processing has revealed a number of scenarios where the digital approach has clear advantages, for instance in system complexity, parameter setup changes or scalability. On the other hand, system designers have to deal with larger amounts of data processed at higher sampling rates, which affects the complexity of the processing algorithms working in real time and the transport of those data at high rates. For instance, digital processing has demonstrated significant advantages in processing pulses from large-volume germanium detectors, where a good choice of the pulse shaping parameters is crucial for achieving good energy resolution and minimum pulse pile-up at high counting rates.
Commonly used algorithms

Following the inheritance of analog data processing, some of the digital data processing algorithms perform tasks similar to the analog blocks, taking advantage of the digital information compiled by the ADCs. These algorithms can be divided into five groups:

- Shaping or filtering: When only part of the information from the detector pulses is relevant, such as, for instance, the height of the pulse, shaping techniques can be applied. They filter the digital data according to certain shaping parameters, which can be changed more easily in a digital setup. Thus, the only difference when applying the same filters in the analog and digital approaches lies in their continuous or discrete nature. In addition, apart from time-invariant filters similar to the analog ones, in the digital domain adaptive filtering can also be applied, changing the filter characteristics for a certain period of time.
- Pulse shape analysis: Exploiting the amount of digital information available from the fast digitization process, different techniques for the analysis of the shape of the pulses can be applied. Depending on the detector response, these algorithms can be used to obtain better detector performance or to distinguish between different input particles, as shown in Figure 11, which compares an analog approach (left) with two digital pulse shape analysis algorithms in a neutron-gamma separation [32].
- Baseline restoration: During the time gap between two consecutive pulses, the baseline value can also be digitized and easily subtracted from the digital values of the waveform. Sometimes more elaborate algorithms are applied to calculate the baseline of the pulses. In this way, better system performance is achieved, avoiding changes due to temperature drifts or other external agents. (A small software sketch of baseline restoration together with leading-edge triggering is given after the hardware overview below.)
- Pile-up deconvolution: The pile-up effect consists of the accumulation of pulses from different events within a short time, which, in principle, prevents the study of those events. In analog electronics, this effect usually causes an increment in the dead time of the system, as those events have to be rejected. However, taking advantage of the digital characteristics, a further analysis of these pulses can be performed and, consequently, in some cases the information of the compound pulses can be disentangled.
- Timing measurements: Timing information is mainly managed in two ways:
  - Trigger generation: In a fully digital system, pulse information can be used to generate logic signals that validate certain events of interest. Furthermore, in complex systems with different trigger levels, the generation of logic signals for the validation or rejection of events becomes very important. For this purpose, two methods are commonly used: leading-edge triggering and constant-fraction timing. The first is the simplest and generates the logic pulse by comparing the input pulse with a constant trigger level. In the second, a small algorithm generates the logic pulse from a constant percentage of the pulse height. There are other algorithms, such as crossover timing, ARC timing, ELET, etc., but their usage is lower.
  - Measurement of timing properties: With the trigger information generated either analog or digitally, several logic setups can be implemented.
Thus, depending on the complexity of the experiment and its own characteristics, trigger pulses can be used to measure absolute timing between detectors, to build new trigger levels according to certain conditions, or to filter events according to a specific coincidence condition. Although the needs of the experiments and the complexity of the setups change enormously, the processing algorithms can usually be fitted into one of the categories described previously. However, it is also common to combine several algorithms in the process, so the system architecture can be quite elaborate. Splitting the system into several firmware and software blocks allows designers and programmers to manage the difficulty of the experiment.

Hardware choices

The complexity and performance of the algorithms presented previously vary depending on the input data, the sampling rate and the implementation architecture. For the latter, several options are available according to the experiment's needs:

- Digital Signal Processor (DSP): DSPs are integrated circuits (ICs) that perform programmable filtering algorithms by applying the multiply-accumulate (MAC) operation. These devices have been used for more than 30 years, especially in other fields such as audio, image or biomedical signal processing. The wide knowledge acquired with these devices has contributed to their use within the particle physics community, taking advantage of their compactness and their stability against temperature changes.
- Field-Programmable Gate Array (FPGA): These programmable ICs are composed of a large number of gates that can be individually programmed and linked, as depicted in figure 12. They are usually programmed using Hardware Description Languages (HDLs), like Application-Specific Integrated Circuits (ASICs), so, in theory, they could implement any of the algorithms presented previously. Furthermore, these devices often embed DSPs or microprocessors, which allow different processing tasks to be performed, even concurrently.
- Personal Computer (PC): In some cases, a PC with specific memory or CPU characteristics is used. In this case, the data are treated with software programs running on top of the operating system. PCs are often included as an interface to long-term memories, i.e., to handle the storing process. Sometimes additional data processing is required, and PCs are particularly efficient when there is a large amount of data that does not need to be processed at a high frequency. Moreover, data processing at this point sometimes requires a lot of computing resources, so these processes often run on computer farms, which are handled by distributed applications or operating systems.
- Graphics Processing Unit (GPU): When the degree of parallel processing and the amount of data to manage increase, CPUs may not be the best hardware architecture to support them. The underlying idea of GPUs is to use a graphics card as a processing unit. They have proven to be very efficient hardware setups that can outperform CPUs in some configurations.
- Grid computing: This technique, used in large experiments where the amount of data is not manageable even by computing farms, consists of taking advantage of the internet to connect computing farms or PCs in order to perform large-scale data processing with an enormous number of heterogeneous processing units. Obviously, this technique is applied to data without timing constraints, as the processing time of each unit may differ.
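As a software-level sketch of two of the algorithm categories listed earlier (baseline restoration and leading-edge trigger generation), the code below operates on a simulated digitized waveform; the pulse model, noise level, threshold and window length are arbitrary illustrative choices rather than values from any experiment.

```python
# Minimal sketch: baseline subtraction from pre-pulse samples and
# leading-edge trigger generation on a simulated digitized waveform.
import numpy as np

def subtract_baseline(waveform, n_baseline_samples=32):
    baseline = waveform[:n_baseline_samples].mean()
    return waveform - baseline, baseline

def leading_edge_trigger(waveform, threshold):
    above = np.nonzero(waveform > threshold)[0]
    return int(above[0]) if above.size else None

rng = np.random.default_rng(0)
t = np.arange(512)
raw = 120.0 + 2.0 * rng.standard_normal(512)               # baseline + noise
raw += 800.0 * np.exp(-(t - 200) / 40.0) * (t >= 200)      # detector-like pulse

corrected, baseline = subtract_baseline(raw)
print(f"Estimated baseline: {baseline:.1f} ADC counts")
print(f"Trigger sample: {leading_edge_trigger(corrected, threshold=100.0)}")
```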
This short description of data processing hardware options has given an overview of the available units. However, it is important to remark that the final system requirements may dictate the selection of a certain setup. For further details, the reader is encouraged to review the bibliography.

Application examples

This section reviews at a glance some implementations of DAQ in particle physics experiments and medical applications, in order to illustrate the concepts on detectors, algorithms and hardware units previously described. The first example concerns the data processing in the Advanced GAmma Tracking Array (AGATA) [33]. As it is a "triggerless" system, the detector signals (from HPGe crystals) are continuously digitized and sent to the pre-processing electronics, which implement shaping, baseline restoration, pile-up deconvolution and trigger generation algorithms in FPGAs in a fully digital way. After that, crossing different bus domains, the data arrive at a PC that performs pulse shape analysis algorithms to calculate the position of the interaction and add it to the energy information calculated in the pre-processing. Then, data coming from all detectors arrive at a PC (the Event Builder) using a distributed digital acquisition program. When the event is built, its information is added to the data from other detectors (ancillaries) in a PC called the "Merger", which sends them to the tracking processor. This is another PC that performs tracking algorithms to reconstruct the path of the gamma rays in the detectors. Finally, the data are stored on external servers. In this example, most of the presented algorithms and hardware configurations are used. In addition, GPUs have been tested for the pulse shape analysis and have shown excellent performance. Grid computing techniques are also used for data analysis and storage, so the hardware components previously presented are almost all covered in this example. The second example is the data processing in the ATLAS detector at CERN [34]. In this case, the system is composed of several different types of sensors, and the DAQ system is controlled by a three-level trigger system. In the first trigger level, the events of interest recorded in the detectors, mostly selected by comparators, are directly sent to the second trigger level. In this level, the information from several sub-detectors is correlated and merged according to the experimental conditions. Finally, a third event filtering is carried out with the data from the whole system. Along these trigger levels, different processing algorithms are used, combined in different hardware setups; however, all of them can be included in one of the categories previously detailed. Examples of the hardware systems developed for this particle physics experiment may be found in [35,36]. The last example is from radiation therapy, where ionizing radiation is used for medical purposes. Nowadays, radiologists make use of radioactive beams, i.e. gamma particles, neutrons, carbon ions, electrons, etc., to treat cancer, but they also take advantage of the properties of ionizing radiation and its application in the diagnosis of internal diseases through medical imaging.
This field has involved the development of devices capable of, on the one hand, producing the radiation needed for a specific treatment and, on the other hand, detecting the radiation beam (in some cases, the part of the radiation that has not been absorbed by the patient) in order to reconstruct an internal image. This is the case of the Computed Tomography (CT) scan, a medical imaging technique consisting of an X-ray source (X-ray tube) that rotates 360º around the patient, providing at each rotation a 2D cross-sectional image, or even a 3D image by putting all the scans together through computing techniques. The detection of the X-rays is carried out either directly or indirectly, depending on the device; in practice, the detection area consists of from one up to 2600 detectors of two categories, scintillators (coupled to PMTs) or gas detectors. Another well-known imaging technique is Positron Emission Tomography (PET). It provides a picture of the metabolic activity of the body thanks to the detection of the two gamma rays that are emitted after the positron annihilation produced by a radionuclide previously administered to the patient. Gamma detection is achieved by placing scintillators, generally coupled to PMTs but also to Si APDs. Conclusion In this chapter we have presented a review of the technologies currently used in particle physics experiments, following the natural path of the signals from the detector to the data processing. Even though these kinds of applications are well established, there is no comprehensive review, as this chapter attempts in a very light version, of the overall technologies commonly in use. It being a wide field, we have tried to be concise and to provide the interested reader with a list of references to consult.
2018-12-03T13:40:23.627Z
2012-08-23T00:00:00.000
{ "year": 2012, "sha1": "bcf5df5718ab7825668ec9adb8d95b816e7c57c7", "oa_license": "CCBY", "oa_url": "https://cdn.intechopen.com/pdfs/38453.pdf", "oa_status": "HYBRID", "pdf_src": "Anansi", "pdf_hash": "aa092468afb139492d4dcc04ed4facf6135aee93", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [ "Computer Science" ] }
268187139
pes2o/s2orc
v3-fos-license
Intracranial osteochondroma arising from the posterior clinoid process: a rare case report with diagnostic challenges and comprehensive literature review Introduction and importance: Intracranial osteochondroma is rare, presenting diagnostic challenges due to overlapping imaging findings with other pathologies. This case report highlights the significance of considering osteochondroma in calcified tumour differentials near bone. Case presentation: A 34-year-old man with vision deterioration and headaches had an MRI revealing a suprasellar lesion. Intraoperatively, a bony hard tumour was partially resected. Subsequent computed tomography (CT) confirmed a calcified mass contiguous with the posterior clinoid. Clinical discussion: Reviewing 28 cases, skull base osteochondromas were common, with differential diagnoses including craniopharyngioma and meningioma. Surgical decision-making involved balancing complete resection for convexity and falx cases versus partial resection for skull base tumours due to proximity to critical structures. Conclusion: Intracranial osteochondroma poses diagnostic challenges, especially near bone. Tailored surgical approaches are vital, with complete resection yielding good outcomes for convexity and falx cases. Close follow-up is crucial for monitoring recurrences and complications. Introduction Osteochondroma is the most common benign bone tumour that can develop from any bone with enchondral ossification [1]. Some consider it to be a developmental lesion rather than a true neoplasm, resulting from a herniated fragment of the growth plate [2]. Osteochondroma commonly originates within the long bones, comprising ~35% of benign and 8% of all bone tumours. Conversely, its manifestation within the intracranial region is comparatively rare, accounting for only 0.1-0.2% of all intracranial tumours [3]. Altogether, 27 cases of intracranial osteochondroma have been reported in the literature. Here, we report a case of osteochondroma arising from the posterior clinoid process. The case is discussed and the literature is reviewed. This case report has been reported in line with the SCARE Criteria [32].
Case report A 34-year-old male presented with a progressive decline in vision on both sides and intermittent headaches over 2 years. His medical and familial background revealed no noteworthy history. Ophthalmological assessment yielded normal results. Neurological examinations were unremarkable. MRI unveiled a heterogeneous suprasellar mass, predominantly exhibiting low signal intensity across all sequences. Notably, T1-weighted images [Fig. 1A] depicted high signal intensities within the lesion. T2-weighted images showed a heterogeneous signal within the mass [Fig. 1B], and gadolinium-enhanced T1-weighted images demonstrated heterogeneous enhancement of the mass [Fig. 1C]. Posteriorly, the mass exerted pressure on the brainstem without associated perilesional oedema. During the preoperative assessment, the potential diagnoses considered included craniopharyngioma, meningioma, dermoid tumour, and osteochondromatous lesion. The absence of significant post-contrast enhancement and the location of the mass aided in excluding craniopharyngioma. Furthermore, the heterogeneous signal and absence of a dural tail were indicative factors in ruling out meningioma. The lack of surrounding oedema also leaned towards the likelihood of an osteochondroma. Dermoid tumour was also considered, as it presents with heterogeneous signal characteristics due to the presence of fat, calcification, and hair follicles. However, the absence of enhancement in imaging helped to differentiate it from the other differential diagnoses. A biopsy, conducted through a right orbito-zygomatic craniotomy, revealed an exceptionally firm tumour, allowing only partial resection. Numerous small calcified fragments were excised and subjected to histopathological examination. Post-biopsy, the patient developed a headache, prompting a computed tomography (CT) scan. The CT scan unveiled a calcified suprasellar mass measuring 50 × 45 × 36 mm, exhibiting a cauliflower-like appearance [Fig. 2A and B]. The sella turcica appeared distorted from the posterior aspect, accompanied by a reduced volume. Additionally, there was a defect in the cortical outline of the left-sided posterior clinoid process, with the cortical outline seamlessly merging with the calcified mass. Pathological analysis of the excised pieces revealed a macroscopic composition predominantly consisting of bone. Microscopic examination disclosed trabecular bone with marrow spaces containing hematopoietic elements, including megakaryocytes and adipocytes. A cartilaginous cap was identified in a portion of the tissue [Fig. 3], with no presence of epithelial elements. These findings were indicative of an osteochondromatous lesion. Over a 64-month follow-up post-surgery, there was a gradual amelioration of symptoms, with no reported recurrences.
Clinical discussion Osteochondroma, also known as exostosis, represents a benign bony outgrowth covered by hyaline cartilage. In both CT and MRI, a distinctive characteristic of osteochondroma is the seamless connection of the lesion with the cortex and medullary canal of the originating bone [3]. Our investigation encompassed a comprehensive review of the literature, utilizing databases such as Embase, Medline (via PubMed), Scopus, Cochrane Library, and Google Scholar. The searches were conducted using MeSH terms, combined key terms, text words, and search strings. To access the records, the following combinations of key terms were used: intracranial osteochondroma AND case report, intracranial osteochondroma AND recurrence, and intracranial osteochondroma AND follow-up. After identifying the key relevant articles, their references were examined (ancestor search strategy). Similarly, other studies that cited these articles were also reviewed (descendant search strategy). As of the present, a total of 29 cases of intracranial osteochondroma have been documented, including the case under consideration (Table 1). Notably, of the 29 cases, 23 (79.31%) involved male individuals. The predominant locations for intracranial osteochondroma were the skull base (46.4%), followed by the convexity (39.3%) and the falx (14.3%). Within the skull base, the posterior clinoid process (5 cases), parasellar-middle cranial fossa region (4 cases), sella turcica (2 cases), petrous bone (1 case), and foramen magnum (2 cases) were identified as the most common sites. In our scenario, the affected area encompasses the posterior clinoid process of the skull base. Skull base osteochondroma often originates in the parasellar region, in proximity to the confluence of the sphenopetrosal, spheno-occipital, and petro-occipital synchondroses [21][22][23]. The prevalent clinical manifestation among patients with skull base osteochondroma was focal cranial nerve deficits. In contrast, patients with convexity and falcine osteochondroma typically presented with symptoms such as headache and epilepsy. In these instances, the cranial nerves most commonly affected were the optic nerve and the abducens nerve, mirroring our case, where visual disturbances and headaches were evident. Intracranial osteochondroma can exhibit similarities to meningioma and oligodendroglioma on CT and MRI due to the presence of calcifications [23][24][25][26]. In rare instances, acute intratumoral haemorrhage may imitate pituitary apoplexy [23]. CT proves to be a more effective modality than MRI in illustrating the exophytic nature of the bony lesion and its connection with the bone of origin. MRI may reveal areas of high signal in T1-weighted images, indicative of fatty bone marrow, as observed in our case [23]. Contrast-enhanced MRI may display heterogeneous enhancement, posing a challenge in differentiation from meningioma, as both exhibit enhancement [17,18,20,23]. Angiography reveals osteochondromas as avascular [12,16,24], and Thallium-201 SPECT demonstrates extremely low uptake [28]. These modalities aid in distinguishing osteochondromas from highly vascular tumours like meningiomas.
The primary treatment for osteochondroma is complete surgical excision, as incomplete excision may lead to recurrences [1,24]. Gross total resection was successful in convexity and falcine osteochondroma cases, resulting in a symptom improvement rate of 66.7%. However, one falcine osteochondroma case succumbed to postoperative complications [21], and a case of convexity osteochondroma experienced recurrences and malignant transformation to chondrosarcoma [4]. Skull base osteochondroma cases achieved partial to subtotal resection, yielding symptom improvement in 41.7% without recurrences. Two skull base osteochondroma cases died due to postoperative complications, one from intratumoral haemorrhage on the second postoperative day [8] and the other from pulmonary infection on the 12th postoperative day [22]. In a paramedian skull base osteochondroma, multiple operations were performed due to recurrences, resulting in no significant improvement of symptoms, and the patient eventually succumbed to intracranial haemorrhage during follow-up after 3 years [12]. Consequently, it can be inferred that complete resection of convexity and falcine osteochondroma yields substantial symptom improvement without recurrences. However, the decision to resect a skull base osteochondroma should be carefully considered due to its proximity to the carotid arteries and their branches, the cavernous sinuses, and the cranial nerves. Small and asymptomatic skull base osteochondromas may be observed, while in symptomatic cases, subtotal or partial resection with close follow-up represents a viable management strategy. In the follow-up studies conducted by Forsythe et al. [5] and Herskowitz et al. [7], spanning 6 months, there was no discernible evidence of recurrence. Alpers et al. [4] conducted the longest follow-up, extending to 68 months, during which recurrence manifested in the form of chondrosarcoma, and the patient died on the 11th postoperative day. Conversely, in our instance, a partial removal was carried out, leading to an improvement in clinical symptoms with no subsequent recurrences over 64 months of follow-up. Figure 1. (A) Axial T1-weighted MR image demonstrating a predominantly low-signal suprasellar mass (green arrow) with areas of high signal intensity, likely marrow fat. (B) Axial T2-weighted MR image showing a heterogeneous-intensity suprasellar mass (green arrow) with adjacent mass effect. (C) Sagittal T1-weighted MR image with gadolinium depicting heterogeneous enhancement of the mass (green arrow). Figure 2. (A) Postoperative axial computed tomography (CT) bone window image showing an exophytic extra-axial cauliflower-like bony mass (green arrow) around the dorsum sellae and clinoid process. (B) Postoperative sagittal CT bone window image showing an exophytic extra-axial cauliflower-like bony mass (green arrow) around the dorsum sellae and clinoid process. Figure 3. Photomicrograph showing the tumour consisting of bony trabeculae containing marrow elements and adipocytes along with foci of hyaline cartilage. Original magnification: 100×. Table 1. Summary of cases of intracranial osteochondroma
2024-03-03T18:46:01.349Z
2024-02-22T00:00:00.000
{ "year": 2024, "sha1": "3f3d307b90d8fe5711d9d81ed975d3dc3017994f", "oa_license": "CCBYND", "oa_url": "https://doi.org/10.1097/ms9.0000000000001855", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "71a3445332213f101dae621b845db22e5a4426a5", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [] }
247847717
pes2o/s2orc
v3-fos-license
A Rapid LC-MS/MS-PRM Assay for Serologic Quantification of Sialylated O-HPX Glycoforms in Patients with Liver Fibrosis Development of high-throughput, robust methods is a prerequisite for successful clinical use of LC-MS/MS assays. In earlier studies, we reported that nLC-MS/MS measurement of the O-glycoforms of HPX is an indicator of liver fibrosis. In this study, we show that a microflow LC-MS/MS method using a single column setup for capture of the analytes, desalting, fast gradient elution, and on-line mass spectrometry measurements, is robust, substantially faster, and even more sensitive than our nLC setup. We demonstrate applicability of the workflow on the quantification of the O-HPX glycoforms in unfractionated serum samples of control and liver disease patients. The assay requires microliter volumes of serum samples, and the platform is amenable to one hundred sample injections per day, providing a valuable tool for biomarker validation and screening studies. Introduction Biomarker studies rely heavily on nano-flow liquid chromatography tandem mass spectrometry (nLC-MS/MS) for both the discovery shotgun proteomics and the targeted follow-up validation studies. In contrast to small-molecule analyte quantification, where standard HPLC flow rates for LC-MS analysis are common, nLC-MS/MS has been favored for peptide quantification primarily because of the sensitivity of analyte detection. However, nLC-MS methods remain technically challenging, time consuming, and less robust [1], which limits their use in clinical laboratories or their application to large sample sets. More recently, researchers have begun to explore capillary columns with a bore wider than the conventional 75 µm ID nano-flow analytical columns [2][3][4]. This allows execution of the LC step of proteomic studies at a microflow rate, and at a substantially higher throughput. The increased flow rate reduces the gradient time and increases the reproducibility and robustness of the measurements [5]. However, in a conventional single spray-tip setup, the higher flow rate diminishes ionization efficiency and lowers sensitivity of detection below acceptable limits for the majority of the peptides in complex samples. This has been addressed by the development of a multi-nozzle emitter that splits the flow evenly into multiple smaller streams, which has been shown to substantially enhance the ionization efficiency [6]. In combination with advances in the sensitivity of the mass spectrometers, the microflow LC-MS/MS (mLC-MS/MS) methods reach sensitivity of detection comparable to that of nLC-MS/MS. Shotgun proteomics studies using mLC-MS/MS have reported identification of close to 10,000 proteins in cell digests, and stability and reproducibility over thousands of runs [5,7]. In these studies, the robustness of the method in high-throughput bottom-up proteomic analyses has been demonstrated using complex cell, tissue, and body fluid digests. The microflow method enabled avoidance of column overloading, resulting in good peak shapes. This, in addition to negligible carryover, is critical for accurate quantification of compounds by LC-MS/MS analyses. The method has been adapted for protein biomarker studies using data-independent analysis (DIA), parallel reaction monitoring (PRM), and multiple reaction monitoring (MRM) [3,[8][9][10]]. However, we are not aware of any reports of the use of mLC-MS/MS for the analysis of O-glycopeptides.
In this study, we developed a mLC-MS/MS-PRM assay for the quantification of site-specific mucin-type O-glycoforms of hemopexin, which we previously reported as a promising candidate biomarker for the serologic monitoring of liver fibrosis [11,12]. We have shown that the sialylated O-glycoforms of hemopexin (HPX) in serum of patients are associated with advancing fibrosis in hepatitis C-associated liver disease [11]. This may prove useful in the monitoring of the fibrotic liver disease, which affects a large segment of the world's population, and whose progression can be mitigated by timely lifestyle changes and interventions [13,14]. Our newly optimized method allows for capture of the analytes, desalting, and gradient elution using a one-column setup, directly in a tryptic digest of unfractionated serum, which significantly reduces the time needed for sample preparation and analysis. We used the method to quantify the HPX glycoforms in serum samples of HCV-induced liver disease, and we demonstrate that the mLC-MS/MS-PRM assay offers substantially higher throughput compared to our reported workflow [11], maintains higher sensitivity of detection, and offers a high-throughput serologic assay (100 injections/day) for an improved screening of these glycopeptide biomarker candidates. Results and Discussion Liver biopsy has been the gold standard in the diagnosis of fibrotic changes associated with chronic liver diseases, and non-invasive methods such as liver imaging, ultrasound elastography, and serologic monitoring provide additional options [13]. Serum protein biomarkers, including glycosylation pattern of liver secreted proteins, represent an attractive strategy for serologic monitoring of liver disease (reviewed in [15,16]). We have characterized O-glycoforms of HPX by mass spectrometry [11,12,17] and demonstrated that the relative abundance of the di-and mono-sialylated O-glycoforms increase significantly with the progressing fibrotic liver disease of HCV etiology [11]. Building upon our earlier studies, we aimed to develop a fast mLC-MS/MS assay to quantify the HPX glycoforms at high throughput. Microflow LC-MS/MS for the Quantification of O-HPX We optimized a microflow (1.5 µL/min) LC-MS/MS workflow with 5× higher throughput compared to the earlier nanoflow (0.3 µL/min) method. In a conventional metal/glass needle emitter setup this would translate to a loss of sensitivity because of the dilution of analytes. To circumvent this, we used a multi-nozzle emitter (8-nozzle, Newomics) [6], which has been reported to achieve sensitivity close to routine nLC-MS/MS applications. The sample trapping and desalting was achieved within 2 min at a 5 µL/min flow rate using a 20 mm C18 trap column, followed by elution of the analytes at a 1.5 µL flow rate in 3 min, column washing for 2 min, followed by a 6 min equilibration step (total 13 min; for a schematic see Supplementary Figure S1). The time gap between each sample run is negligible, thus making the analysis of approximately 100 samples per day feasible. The analytes were measured by a scheduled PRM assay using an Orbitrap Fusion Lumos Mass Spectrometer (Thermo Scientific, Dreieich, Germany). Measurement using serially diluted samples showed optimal sensitivity between 0.1 and 0.2 µg of injected serum protein sample ( Figure 1). The retention time (RT) of the analytes was highly reproducible (RSD 0.20%, Figure 2) which is suitable for automated results processing. 
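The stated throughput follows directly from the cycle times given above (2 min trapping/desalting, 3 min elution, 2 min washing, 6 min equilibration); the short snippet below just does that arithmetic and assumes, as stated, that the gap between injections is negligible.

```python
# Worked example using the segment times stated above; the negligible
# inter-injection gap is ignored.
segments_min = {"trap_desalt": 2, "elution": 3, "wash": 2, "equilibration": 6}
cycle_min = sum(segments_min.values())        # 13 min per injection
injections_per_day = (24 * 60) // cycle_min   # 1440 // 13 = 110, i.e. ~100 injections/day
print(cycle_min, injections_per_day)
```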
The S-HPX measurement (i.e., the ratio of the disialo m/z 916.4 analyte to the monosialo m/z 843.6 analyte) [11] was shown to be consistent over 50 injections (RSD 8.91%, Figure 3), demonstrating outstanding technical reproducibility of the label-free tandem mass spectrometry assay. Application of the Micro-Flow LC-MS/MS Assay to Serum Samples of Liver Disease Patients We reported detectability of other O-glycoforms of HPX, including the Tn-antigen, in our previous study; however, we were not able to quantify these analytes in the patient samples [11]. In our current assay, we quantify the additional analytes because of the enhanced sensitivity of the current setup in spite of the introduction of faster flow rates (Supplementary Table S1) (Figure 4). The enhanced detection of the O-HPX glycoforms in unfractionated serum samples using this microflow method may be due to the combination of sample loading capacity and excellent peak shape (Figure 5) obtained at the higher flow rate.
With the assumption that minor ionization differences of the glycoforms do not affect the overall results, we calculated the ratios of multiply sialylated to respective monosialylated glycoforms. The ratios of the sialylated O-HPX analytes (S-HPX) were calculated based on the peak areas of the multiply sialylated structures to the singly sialylated structures, 916.4/843.6, 1080.5/1007.7, and 1153.2/1007.7, using the transitions described previously [11]. As a proof of applicability, we quantified S-HPX in serum samples of 15 HCV fibrotic and 15 HCV cirrhotic patients (HALT-C trial participants), and compared the quantities to 15 serum samples of healthy controls. The measurement was undertaken using a fixed volume of serum samples, and the measure is normalized by the ratio of the glycoforms of the same protein, as described previously [11]. Statistical analyses were performed to find the association between the different analytes and the disease status. The mean ratio and standard error of 916.4/843.6 in the control, fibrotic, and cirrhotic groups was 7.905 ± 0.8562, 13.69 ± 2.942, and 29.99 ± 4.950; that of 1080.5/1007.7 was 8.802 ± 0.8, 11.65 ± 1.558, and 21.59 ± 2.587; and that of 1153.2/1007.7 was 1.07 ± 1.131, 4.261 ± 1.979, and 14.65 ± 3.49, respectively. One-way ANOVA analysis showed that the relative ratios for the three analytes, 916.4/843.6 (p < 0.0001), 1080.5/1007.7 (p < 0.0001), and 1153.2/1007.7 (p = 0.0004), vary significantly between the control, fibrosis, and cirrhosis groups (Figure 5). Thus, this study expands the number of meaningful analytes for the detection of liver fibrosis. It confirms the results observed in our earlier study, that the S-HPX increases progressively in fibrotic and cirrhotic participants compared to disease-free controls (Figure 5). Further studies are needed to understand the mechanism and biological processes controlling this outcome. Nevertheless, our results show that the mLC-MS/MS-PRM assay has adequate analytical performance for direct quantification of the clinically relevant S-HPX analyte in serum samples. Overall, we demonstrate the utility of a 13 min mLC-MS/MS-PRM assay for the quantification of the S-HPX glycoforms diagnostic of liver fibrosis of HCV etiology. The assay is more sensitive compared to that of our earlier report, highly reproducible, and amenable to 100 sample injections per day. Target analyte carryover between the sample injections is negligible (results not shown). In conjunction with a simple sample preparation method without an off-line desalting step, our workflow enables analysis of at least 30 samples per day in triplicate, including necessary QC injections. These parameters would be applicable in a clinical setting. A further increase in the throughput is feasible using a wider-bore capillary column with a higher flow rate, thereby reducing the gradient run time. A multi-nozzle emitter suitable for a flow rate of up to 40 µL/min is commercially available and would support such adjustments. Optimization of a high-flow high-sensitivity methodology would be a focus for future studies. Figure 5 (caption fragment; n = 15 per group). S-HPX, the ratio to the monosialylated glycopeptide of the same structure (disialoT/monosialoT), increases significantly (p < 0.01) from the control to the fibrosis and cirrhosis groups. Ratio of (A) HexNAc-Hex-2Neu5Ac/HexNAc-Hex-Neu5Ac, (B) 2HexNAc-2Hex-3Neu5Ac/2HexNAc-2Hex-2Neu5Ac, (C) 2HexNAc-2Hex-4Neu5Ac/2HexNAc-2Hex-2Neu5Ac.
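A minimal sketch of the quantification logic described above: ratios of multiply to monosialylated glycoform peak areas are computed per sample and then compared across groups with a one-way ANOVA. This is an illustration only (the peak-area-derived numbers are invented), not the authors' actual processing pipeline.

```python
# Minimal sketch (assumed workflow, not the authors' code): compute the S-HPX
# ratio of a multiply sialylated glycoform to its monosialylated counterpart
# from integrated peak areas, then compare groups with one-way ANOVA.
import numpy as np
from scipy import stats

def s_hpx_ratio(area_multi_sialo, area_mono_sialo):
    """Ratio of multiply sialylated to monosialylated glycopeptide peak areas."""
    return area_multi_sialo / area_mono_sialo

# Hypothetical per-patient ratios for the 916.4/843.6 analyte pair.
control   = np.array([7.1, 8.4, 6.9, 9.0, 7.6])
fibrosis  = np.array([11.8, 15.2, 12.9, 16.4, 12.1])
cirrhosis = np.array([27.5, 33.0, 25.9, 35.4, 29.7])

f_stat, p_value = stats.f_oneway(control, fibrosis, cirrhosis)
print(f"one-way ANOVA: F = {f_stat:.2f}, p = {p_value:.2e}")
```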
Sample Processing Serum samples were processed by trypsin digestion, without any enrichment step, as described earlier [11]. Briefly, 2 µL of each serum sample was diluted to 140 µL with 25 mM ammonium bicarbonate; the proteins were reduced with 5 mM DTT at 60 °C for 1 h, followed by alkylation with 15 mM iodoacetamide for 20 min at RT in the dark. Residual iodoacetamide was quenched with 5 mM DTT for 20 min at RT. The proteins (20 µL by volume from the above) were digested with mass spectrometry grade trypsin (1 µg) at 37 °C overnight. Tryptic peptides were analyzed without further processing to ensure reliable quantification of the glycoforms. Study Population Serum samples of participants in the HALT-C trial were obtained from the central repository at the National Institute of Diabetes and Digestive and Kidney Diseases (NIDDK) as described previously [12]. In this study, the O-HPX glycoform comparison was performed in 30 participants (15 HCV fibrotic and 15 HCV cirrhotic patients) and 15 disease-free controls who donated blood samples at Georgetown University (GU) in line with approved IRB protocols. Briefly, the HALT-C trial is a prospective randomized controlled trial of 1050 patients that evaluated the effect of long-term low-dose peginterferon alpha-2a in patients who failed initial anti-HCV therapy with interferon [18]. Liver disease status of the study participants was classified, based on biopsy evaluation, into groups of fibrosis (Ishak score 3-4) or cirrhosis (Ishak score 5-6). The two groups of liver disease samples, and the controls, were frequency matched on age, gender, and race (Supplementary Table S2). Data Analysis LC-MS/MS data were processed by Quant Browser (Thermo) with manual confirmation/integration. Peak areas were used for peptide and glycopeptide quantification and data normalization. A specific Y-ion (e.g., loss of the whole glycan) was used for the quantification of the O-glycopeptides. The specific backbone fragments (y-ions) were used for the confirmation of the correct O-glycopeptide signal. The details of the MS/MS transitions used for the quantification of each glycoform are listed in Table 1. The relative intensity of each multiply sialylated analyte was calculated by normalizing its peak area to the peak area of the monosialylated glycopeptide of the same structure (disialoT/monosialoT, etc.), as described previously [11]. Statistical analysis for the HCV dataset was performed using GraphPad Prism software (v9.3.1). The ratio of the three HPX sialylated analytes 916.4, 1080.5, and 1153.2 to their respective monosialylated forms (843.6, 1007.7, and 1007.7) was used as the quantitative measure for evaluation of the liver disease. The mean, the standard error of the mean, and a one-way ANOVA test were used to determine the association between the different analytes and disease status, and the data were visualized by a nested Tukey plot. Funding: This work was supported in part by the National Institutes of Health (NIH grants U01CA230692 to RG and MS, R01CA238455 and R01CA135069 to RG). The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health. Institutional Review Board Statement: The study was conducted according to the guidelines of the Declaration of Helsinki, and approved by the Institutional Review Board of Georgetown University, IRB code: 2008-549, study: Glycans in Hepatocellular Carcinoma [12]. Informed Consent Statement: All participants provided written informed consent.
Data Availability Statement: The datasets generated during the current study are available from the corresponding author on reasonable request.
2022-04-01T15:10:13.272Z
2022-03-29T00:00:00.000
{ "year": 2022, "sha1": "6ecc2a3a9f0d29baaf0dff34069b6fc539ca9e9d", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/1420-3049/27/7/2213/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "a5a93f5a50ac90e0bd1ecd996e96792258abd836", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
267782035
pes2o/s2orc
v3-fos-license
Highly accurate blood test for Alzheimer’s disease is similar or superior to clinical cerebrospinal fluid tests With the emergence of Alzheimer’s disease (AD) disease-modifying therapies, identifying patients who could benefit from these treatments becomes critical. In this study, we evaluated whether a precise blood test could perform as well as established cerebrospinal fluid (CSF) tests in detecting amyloid-β (Aβ) plaques and tau tangles. Plasma %p-tau217 (ratio of phosphorylated-tau217 to non-phosphorylated tau) was analyzed by mass spectrometry in the Swedish BioFINDER-2 cohort (n = 1,422) and the US Charles F. and Joanne Knight Alzheimer Disease Research Center (Knight ADRC) cohort (n = 337). Matched CSF samples were analyzed with clinically used and FDA-approved automated immunoassays for Aβ42/40 and p-tau181/Aβ42. The primary and secondary outcomes were detection of brain Aβ or tau pathology, respectively, using positron emission tomography (PET) imaging as the reference standard. Main analyses were focused on individuals with cognitive impairment (mild cognitive impairment and mild dementia), which is the target population for available disease-modifying treatments. Plasma %p-tau217 was clinically equivalent to FDA-approved CSF tests in classifying Aβ PET status, with an area under the curve (AUC) for both between 0.95 and 0.97. Plasma %p-tau217 was generally superior to CSF tests in classification of tau-PET with AUCs of 0.95–0.98. In cognitively impaired subcohorts (BioFINDER-2: n = 720; Knight ADRC: n = 50), plasma %p-tau217 had an accuracy, a positive predictive value and a negative predictive value of 89–90% for Aβ PET and 87–88% for tau PET status, which was clinically equivalent to CSF tests, further improving to 95% using a two-cutoffs approach. Blood plasma %p-tau217 demonstrated performance that was clinically equivalent or superior to clinically used FDA-approved CSF tests in the detection of AD pathology. Use of high-performance blood tests in clinical practice can improve access to accurate AD diagnosis and AD-specific treatments. Potentially because %p-tau217 is less affected by confounding factors, this blood test has the highest performance yet demonstrated in identifying individuals with AD pathology 29 . Despite BBMs being used in clinical practice in some countries, including the United States, they have not been recommended as standalone diagnostic tests due to a lack of studies demonstrating their equivalence to clinically used CSF and PET methods 16,[35][36][37] . Therefore, we compared the diagnostic performance of plasma %p-tau217 with clinically used and FDA-approved CSF assays (CSF Aβ42/40 from Fujirebio and p-tau181/Aβ42 from Roche) in independent Swedish and US cohorts. Because confirmation of Aβ positivity is required for initiation of anti-amyloid immunotherapies, the primary outcome was the detection of Aβ pathology as determined by Aβ PET imaging. Secondary outcomes included the classification of brain tau aggregates as determined by tau PET imaging, which has also been used by some trials in the selection of patients suitable for anti-amyloid immunotherapy 7,38 , and agreement with a clinical AD diagnosis. Our main analyses were focused on individuals with cognitive impairment (MCI and mild dementia), because the clinical use of anti-amyloid therapies is currently approved for cases where cognitive impairment is deemed to be caused by AD pathology.
Classification of Aβ or tau PET status by fluid biomarkers We first compared the area under the curve (AUC) of plasma %p-tau217 with clinically used CSF biomarkers in classification of Aβ PET (Centiloids ≥ 37) or tau PET status (standardized uptake value ratio (SUVR) > 1.32 in Braak I-IV region of interest (ROI) for both cohorts) (Fig. 1 and Extended Data Table 1).The diagnostic performances of two aggregates.During an extended pre-symptomatic phase, which lasts 10-20 years, Aβ plaques first accumulate in the cortex and are thought to facilitate the subsequent spread of tau pathology from the medial temporal lobe to neocortical areas 4 .The presence of tau pathology in the neocortex is correlated with the clinical phase of the disease, which is marked by progressive cognitive impairment and dementia 5 . Several phase 3 trials demonstrated that anti-amyloid antibodies can clear Aβ plaques from the brain [6][7][8] , which leads to a slowing of cognitive and functional decline in individuals with mild cognitive impairment (MCI) and mild dementia due to AD.Recently, lecanemab received traditional approval from the US Food & Drug Administration (FDA) for treatment of patients with MCI and mild dementia with biomarker-proven Aβ pathology 8 , and other immunotherapies are expected to follow.The presence of Aβ pathology can be determined by positron emission tomography (PET), which visualizes Aβ deposition in the brain, or cerebrospinal fluid (CSF) assays, which measure CSF levels of Aβ42 as a ratio with Aβ40, phosphorylated tau (p-tau) or total tau 4,[9][10][11] .Biomarker testing reduces dementia misdiagnoses: when biomarkers are not used, the rate of misdiagnosis is approximately 25-35% in specialty clinics and even higher in primary care clinics 4,12,13 .Additionally, PET and CSF can identify cognitively unimpaired individuals at high risk of future cognitive decline and progression to AD dementia 14,15 .However, although safe, the widespread clinical use of PET and CSF has been hampered by high costs, reliance on expensive equipment and specially trained personnel and perceived invasiveness 11 .As a result, there is an urgent need for scalable and cost-effective methods to detect AD pathology in routine clinical practice. 
In the last several years, blood-based markers (BBMs) capable of detecting AD pathology have been developed [16][17][18] .Plasma levels of p-tau are strongly associated with PET and CSF biomarkers of AD pathology [19][20][21][22][23][24][25] , neuropathological changes associated with AD 20,23,26,27 and the subsequent development of AD dementia 20,23,28 .Among different p-tau variants, tau phosphorylated at threonine 217 (p-tau217) has demonstrated the highest accuracy in detecting AD pathology and predicting future cognitive decline 23,27,[29][30][31] .However, certain comorbidities, especially kidney disease, can lead to false elevations in plasma p-tau levels 32,33 , although this can be mitigated by using the ratio of p-tau217 to the non-phosphorylated levels of the same tau peptide (%p-tau217) 34 .ROC curves including all participants are included in the first row.AUCs for all, cognitively impaired and cognitively unimpaired groups are shown in the next three columns, respectively.c,f, Bootstrapped differences (n = 1,000 resamples with replacement stratifying by the output) between the statistics using plasma %p-tau217 (reference) and CSF biomarkers are shown in c and f for both the BioFINDER-2 cohort (left) and the Knight ADRC (right) cohort.The horizontal dashed line is plotted at zero, representing the lack of difference between plasma and CSF biomarkers.We considered plasma and CSF biomarkers clinically equivalent if the 95% CI of the mean difference included zero and clinically superior if it did not include zero and favored plasma (>0).Dots and error bars represent the actual statistic and 95% CI (from bootstrapped n = 1,000 samples with replacement), respectively.Vertical dashed lines represent the maximal AUC value possible (1).Aβ PET positivity was assessed as Centiloids ≥ 37. Tau PET positivity was assessed using previously validated in-house thresholds (SUVR > 1.32 in Braak I-IV for both cohorts).AUC, area under the curve; CI, cognitively impaired; CSF, cerebrospinal fluid; CU, cognitively unimpaired; SUVR, standardized uptake value ratio; CI, confidence interval. Article https://doi.org/10.1038/s41591-024-02869-zbiomarkers were considered clinically equivalent when the range of 95% confidence intervals (CIs) of the mean difference included zero.Superiority was considered when the range of 95% CI did not include zero and favored the plasma biomarker.In classification of Aβ PET status in the entire BioFINDER-2 cohort, plasma %p-tau217 had very high performance (AUC = 0.97, 95% CI: 0.95, 0.98), which was clinically equivalent to that of CSF Elecsys p-tau181/Aβ42 (AUC = 0.97, 95% CI: 0.96, 0.98) or CSF Elecsys Aβ42/40 (AUC = 0.96, 95% CI: 0.95, 0.97) (Fig. 
1a and Extended Data Table 1).Similar results were obtained for classification of Aβ PET status in the entire Knight ADRC cohort: plasma %p-tau217 had an AUC (0.97, 95% CI: 0.95, 0.99) that was clinically equivalent to CSF Lumipulse Aβ42/40 (AUC = 0.96, 95% CI: 0.94, 0.98) and CSF Lumipulse p-tau181/ ) from the BioFINDER-2 cohort, using a single-cutoff (a) and a two-cutoffs (b) approach, respectively.In the first approach, the threshold was calculated, maximizing sensitivity and fixing specificity at 90%.In the second approach, the lower threshold was obtained by maximizing specificity with sensitivity fixed at 95%, whereas the upper threshold was obtained by maximizing sensitivity while fixing specificity at 95%.Participants who fall between these two cutoffs were classified in the intermediate group.Dots and error bars represent the actual statistic and 95% CI (from bootstrapped n = 1,000 samples with replacement), respectively.c, Bootstrapped differences (n = 1,000 resamples with replacement stratifying by the output) between the statistics using plasma %p-tau217 (reference) and CSF biomarkers are shown in c for both single cutoff and two cutoffs.Aβ42 (AUC = 0.97, 95% CI: 0.96, 0.99) (Fig. 1b).The AUCs were similar when cognitively impaired and cognitively unimpaired groups were analyzed separately (Fig. 1a,b and Extended Data Table 1).Differences between the AUCs of plasma %p-tau217 and CSF biomarker ratios are shown in Fig. 1c and Extended Data Table 1. Use of a two-cutoffs approach to improve diagnostic accuracy We also evaluated for potential improvements in diagnostic accuracy by applying an approach with two cutoffs, which divides results into three categories: those with clearly normal values, those with clearly abnormal values and those with intermediate values.The upper cutoff was set at a value yielding a specificity of 95%, while maximizing sensitivity, and the lower cutoff was set at a value resulting in a sensitivity of 95%, while maximizing specificity.When the two-cutoffs approach was applied to predict Aβ PET positivity in cognitively impaired patients in the BioFINDER-2 cohort, plasma %p-tau217 had an overall accuracy of 95% (95% CI: 94%, 97%), a PPV of 95% (95% CI: 94%, 97%) and an NPV of 96% (95% CI: 94%, 98%), which were clinically equivalent to the performances of CSF Elecsys p-tau181/Aβ42 (accuracy, 95% (95% CI: 94%, 96%); 2b).Similar results were obtained when FDA-approved visual reads were used to determine the Aβ PET status (Extended Data Fig. 1b and Supplementary Table 1) and in the Knight ADRC cohort (Supplementary Fig. 1b and Supplementary Table 2).When predicting tau PET status in cognitively impaired individuals in the BioFINDER-2 cohort using the two-cutoffs approach, we found cohort (n = 663), using a single-cutoff (a) and a two-cutoffs (b) approach, respectively.In the first approach, the threshold was calculated, maximizing sensitivity and fixing specificity at 90%.In the second approach, the lower threshold was obtained by maximizing specificity with sensitivity fixed at 95%, whereas the upper threshold was obtained by maximizing sensitivity and fixing specificity at 95%.Participants who fall between these two cutoffs were classified in the intermediate group.Dots and error bars represent the actual statistic and 95% CI, respectively.Vertical dashed lines represent the maximal statistical value possible (1).For the intermediate value plots, colored bars represent the actual percentage and the error bar the 95% CI. 
c, Bootstrapped differences (n = 1,000 resamples with replacement stratifying by the output) between the statistics using plasma %p-tau217 (reference) and CSF biomarkers are shown in c for both single cutoff and two cutoffs.The horizontal dashed line is plotted at zero, representing the lack of difference between plasma and CSF biomarkers.We considered plasma and CSF biomarkers clinically equivalent if the 95% CI of the mean difference included zero.Differences in the number of participants in the intermediate group were scaled to a maximum of 1 to be comparable with the other differences.Dots and error bars represent the mean and 95% CI estimate from a bootstrapped sample.d, Histograms represent the distribution of the data colored by the imaging biomarker status.The vertical black line represents the threshold derived from the first approach (a), and red lines represent the lower and upper thresholds from the second approach (b).Tau PET positivity was assessed using an in-house previously validated threshold (SUVR > 1.32).Three individuals were excluded from the histograms in d (only for visualization purposes) due to very low values of plasma %p-tau217.CSF, cerebrospinal fluid; CI, confidence interval; NPV, negative predictive value; PPV, positive predictive value; SUVR, standardized uptake value ratio. We investigated whether the groups with intermediate fluid biomarker values also had intermediate values for the reference standardthat is, Aβ PET Centiloids or tau PET SUVR.We found that individuals with intermediate plasma %p-tau217 values had values for Aβ PET and tau PET that were near the cutoffs for abnormality (Extended Data Fig. 2).Additionally, the group with intermediate plasma %p-tau217 values had Aβ PET and tau PET values that were higher than the normal plasma %p-tau217 group and lower than the abnormal plasma %p-tau217 group (P < 0.001 in all cases).In the BioFINDER-2 cohort, the mean (s.d.) Centiloids was 0.4 (20.3) for the %p-tau217 negative group, 49.1 (36.5) for the %p-tau217 intermediate group and 91.4 (30.1) for the %p-tau217 positive group. Comparison to a clinical AD diagnosis Finally, we examined the accuracy of plasma %p-tau217 for clinical diagnosis of symptomatic AD versus other neurodegenerative diseases.This diagnosis was made based on clinical symptoms assessed by a dementia specialist and included consideration of AD biomarker testing by either CSF or Aβ PET.It is important to highlight that, if the clinical symptoms were not related to AD, the participant was classified in the other neurodegenerative diseases group even with positive AD biomarkers, as these results may indicate concomitant AD pathology.A description of specific diagnosis for the cognitively impaired participants is shown in Supplementary Table 4.In cognitively impaired individuals in the BioFINDER-2 cohort, we found that blood plasma %p-tau217 exhibited an AUC of 0.94 (95% CI: 0.92, 0.96) in distinguishing individuals with and without symptomatic AD (Supplementary Table 5), which was clinically equivalent to CSF p-tau181/Aβ42 (95%, 95% CI: 93%, 96%) and CSF Aβ42/40 (93%, 95% CI: 91%, 95%).Furthermore, plasma %p-tau217 had an overall accuracy of 86% (95% CI: 82%, 89%), a PPV of 89% (95% CI: 87%, 91%) and an NPV of 84% (95% CI: 77%, 89%) (Supplementary Table 6).Applying the two-cutoffs approach increased the diagnostic metrics to 93-94%, with 24% of the participants in the intermediate group (Supplementary Table 6). 
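The single-cutoff and two-cutoffs procedures described above (and in the legend) can be sketched in a few lines. The snippet below is an assumed illustration, not the study's code: it takes a vector of biomarker scores (higher values taken to indicate pathology, as for plasma %p-tau217) and binary PET labels, derives the lower cutoff at 95% sensitivity and the upper cutoff at 95% specificity, and reports the share of intermediate results and the accuracy among the remaining classified samples.

```python
# Minimal sketch of the two-cutoffs logic described above; cutoff derivation via
# quantiles is a simplification of the sensitivity/specificity optimization.
import numpy as np

def two_cutoffs(scores, labels, target=0.95):
    """Return (lower, upper) cutoffs; labels are 1 for PET-positive, 0 otherwise."""
    pos = np.sort(scores[labels == 1])
    neg = np.sort(scores[labels == 0])
    lower = np.quantile(pos, 1 - target)   # ~95% of positives fall at or above this value
    upper = np.quantile(neg, target)       # ~95% of negatives fall below this value
    return lower, upper

def classify(scores, lower, upper):
    """Map scores to 'negative', 'intermediate' or 'positive'."""
    out = np.full(scores.shape, "intermediate", dtype=object)
    out[scores < lower] = "negative"
    out[scores >= upper] = "positive"
    return out

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    labels = rng.integers(0, 2, 500)                   # synthetic PET status
    scores = rng.normal(loc=labels * 2.0, scale=1.0)   # synthetic biomarker values
    lo, hi = two_cutoffs(scores, labels)
    groups = classify(scores, lo, hi)
    called = groups != "intermediate"
    acc = np.mean((groups[called] == "positive") == (labels[called] == 1))
    print(f"lower={lo:.2f}, upper={hi:.2f}, "
          f"intermediate={np.mean(~called):.1%}, accuracy={acc:.1%}")
```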
Sensitivity analyses Several sensitivity analyses were performed to support the results reported above.First, we assessed out-of-bag statistics in the BioFINDER-2 cohort for Aβ and tau PET positivity, in which the cutoffs and the statistics were derived in different individuals from the same cohort.These results were in line with the previous analyses, showing that plasma %p-tau217 was clinically equivalent to CSF biomarkers for predicting Aβ PET positivity using a single-cutoff approach (Supplementary Fig. 3a and Supplementary Table 7) and a two-cutoffs approach (Supplementary Fig. 3b and Supplementary Table 8).For tau PET, we generally observed higher estimates of plasma %p-tau217 compared to the two CSF biomarkers (Supplementary Fig. 4 and Supplementary Tables 7 and 8). Second, we derived fluid biomarker cutoffs in independent cohorts and tested them in BioFINDER-2 participants.Plasma %p-tau217 cutoffs were derived in Knight ADRC participants and CSF biomarker cutoffs in participants from the University of California, San Francisco (UCSF) (Supplementary Methods).The obtained results were similar to those detailed in the previous sections.In brief, the performances of plasma %p-tau217 were clinically equivalent to or slightly higher than those of CSF biomarkers when using both the single-cutoff approach (Extended Data Fig. 3a and Supplementary Table 7) and the two-cutoffs Comparison estimates among fluid biomarkers on predicting tau PET positivity in cognitively impaired patients from the BioFINDER-2 cohort.For the single-cutoff approach, the cutoffs of fluid biomarkers were derived by maximizing sensitivity and fixing specificity at 90% against each imaging outcome.For the two-cutoffs approach, the lower cutoff was obtained by maximizing specificity with sensitivity fixed at 95%, whereas the upper cutoff was obtained by maximizing sensitivity and fixing specificity at 95%.Participants who fall between these two cutoffs were classified in the intermediate group.Differences between the statistics using plasma %p-tau217 (reference) and CSF biomarkers are shown together with the mean values.We considered plasma and CSF biomarkers clinically equivalent if the 95% CI of the mean difference included zero and clinically superior if it did not include zero and favored plasma (>0).*Differences in the number of participants in the intermediate group were scaled to a maximum of 1 to be comparable with the other differences.Tau PET positivity was assessed using an in-house previously validated cutoff (SUVR > 1.32 for both cohorts in Braak I-IV).CSF, cerebrospinal fluid; NPV, negative predictive value; PPV, positive predictive value; SUVR, standardized uptake value ratio; CI, confidence interval Article https://doi.org/10.1038/s41591-024-02869-zapproach (Extended Data Fig. 3b and Supplementary Table 8) for prediction of Aβ positivity.Additionally, we examined whether the use of plasma p-tau217 as predictor with non-phosphorylated tau as covariate (rather than the ratio of p-tau217/non-phosphorylated tau (%p-tau217)) resulted in any significant change in our results.In summary, the differences between these two approaches were very small, as can be observed in Supplementary Figs. 5 and 6 and in Supplementary Tables 9 and 10. Finally, we also tested the consistency across time of our results in a subcohort of 40 Knight ADRC participants with available longitudinal plasma %p-tau217 measures (mean (s.d.) 
time = 3.03 (0.65) years).Only one (2.5%) of these participants changed %ptau217 biomarker status during follow-up testing, supporting the consistency of plasma %p-tau217 measures when plasma sampling and %ptau217 testing is repeated (Supplementary Fig. 7). Discussion The major finding of this study was that plasma %p-tau217 classifies both Aβ and tau PET status with very high accuracy (AUCs of 0.96 and 0.98) across two independent cohorts.When compared to clinically used and FDA-approved CSF tests, the performance of plasma %p-tau217 was clinically equivalent in classification of Aβ PET status and was superior in classification of tau PET status.Notably, in the cognitively impaired subcohorts, the PPV of plasma %p-tau217 was equivalent to the CSF tests, demonstrating that the blood test could confirm the presence of Aβ pathology as accurately as CSF tests.A blood test with such high performance could replace CSF testing or Aβ PET when determining the presence of brain Aβ pathology in patients with cognitive symptoms.Given the widespread acceptance and accessibility of blood collection, high-performance blood tests could enable AD biomarker testing on a greater scale than is currently possible and to a much broader population, thereby enabling more accurate diagnosis of AD worldwide. In patients with MCI and mild dementia who may be candidates for anti-amyloid treatments, plasma %p-tau217 classified Aβ PET status with an accuracy, a PPV and an NPV of approximately 90% when a standard approach using a single cutoff was applied.Accuracies of 90-95% are considered excellent or outstanding for the detection of pathology and match or exceed clinically used CSF tests.For instance, the FDA-approved Elecsys CSF p-tau181/Aβ42 test has, in previous studies, classified Aβ PET status with overall accuracies of 89-90% (refs.39-41), which was replicated in the present study.The performance of the FDA-approved Lumipulse CSF Aβ42/40 test is more complex to evaluate because different approaches have been applied, including using two cutoffs 42,43 , but in one large study the test classified Aβ PET status with an AUC of 0.97 (ref.44).Notably, Aβ PET and tau PET are not perfectly accurate in detection of neuropathology 45,46 , and, in the small proportion of cases that have discordant CSF and PET results, it is not clear whether this is due to inaccuracy of CSF or PET measures.Given some imprecision in the reference standard for amyloid positivity, FDA-appproved CSF assays as well as plasma %p-tau217 may be performing at the maximum level that is achievable. Plasma %p-tau217 also correctly classified Aβ PET positivity status for cognitively unimpaired participants with AUCs of 0.96 in both BioFINDER-2 and Knight ADRC.This is also consistent with a recent report from the AHEAD 3-45 study 47 supporting the utility of plasma %p-tau217 as a screening test for preclinical AD using a similar mass spectrometry platform.With such high performance, these blood tests have the potential to support Aβ pathology identification among preclinical populations and in participant recruitment for preventive trials assessing anti-amyloid drugs.Detection of Aβ positivity using mass spectrometry %p-tau217 in cognitively normal cohorts appears better than what has been reported when using plasma p-tau217 immunoassays, although this must be confirmed in head-to-head studies 22,23,[48][49][50] . 
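The "clinically equivalent" versus "superior" calls discussed above rest on bootstrapped confidence intervals for the difference in performance between plasma %p-tau217 and the CSF assays (equivalence when the 95% CI of the difference includes zero, superiority when it excludes zero in favor of plasma). A minimal sketch of that resampling comparison, on synthetic data and without any of the study's preprocessing, could look as follows.

```python
# Assumed illustration of the bootstrapped AUC-difference comparison; not the
# study's code. Data below are synthetic.
import numpy as np
from sklearn.metrics import roc_auc_score

def bootstrap_auc_difference(y, plasma, csf, n_boot=1000, seed=0):
    rng = np.random.default_rng(seed)
    idx_pos = np.where(y == 1)[0]
    idx_neg = np.where(y == 0)[0]
    diffs = []
    for _ in range(n_boot):
        # resample with replacement, stratified by outcome as in the figure legends
        b = np.concatenate([rng.choice(idx_pos, idx_pos.size, replace=True),
                            rng.choice(idx_neg, idx_neg.size, replace=True)])
        diffs.append(roc_auc_score(y[b], plasma[b]) - roc_auc_score(y[b], csf[b]))
    lo, hi = np.percentile(diffs, [2.5, 97.5])
    return float(np.mean(diffs)), (float(lo), float(hi))

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    y = rng.integers(0, 2, 400)                   # synthetic Aβ PET status
    plasma = y * 1.6 + rng.normal(size=400)       # synthetic plasma scores
    csf = y * 1.5 + rng.normal(size=400)          # synthetic CSF scores
    mean_diff, ci = bootstrap_auc_difference(y, plasma, csf)
    print(f"mean AUC difference {mean_diff:.3f}, 95% CI {ci[0]:.3f} to {ci[1]:.3f}")
```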
In this study, we used Centiloids ≥ 37 as the primary measure of Aβ PET positivity based on the inclusion criteria of recent clinical trials for donanemab 7 .Given that Aβ PET status is normally assessed by visual assessment in clinical care, and the FDA and the European Medicines Agency (EMA) have approved visual reads of Aβ PET, we also included visual read as an additional outcome in the main cohort.The obtained results were very similar for both Aβ PET outcomes, demonstrating very high accuracy of plasma %p-tau217 for detecting Aβ pathology, which was clinically equivalent to that of CSF biomarkers.Notably, there was very high agreement between quantitative and visual read for Aβ PET status in our cohort (~95%), consistent with previous studies showing very high agreement between visual assessment and Aβ PET quantification 45,[51][52][53][54] . In addition to highly accurate classification of Aβ PET status, plasma %p-tau217 classified tau PET status with an overall accuracy, a PPV and an NPV of 87-88% in the cognitively impaired group of the main cohort.The CSF assays were also able to classify tau PET status but were inferior to plasma %p-tau217.Because tau PET is an excellent indicator of symptomatic AD 5 , the superior classification of tau PET status by plasma %p-tau217 suggests that this measure may have additional value in determining whether cognitive impairment is likely to be due to AD.Overall, the high performance of plasma %p-tau217 in classifying Aβ and tau PET status indicates that this BBM may be able to replace approved CSF and PET measures in the diagnostic workup of AD. As expected, the performance of plasma %p-tau217 improved after applying an approach using two cutoffs to categorize individuals as positive, negative or intermediate.Use of this approach for plasma %p-tau217 resulted in a PPV and an NPV of 95% for Aβ PET status with fewer than 20% of participants in the intermediate zone, which was clinically equivalent to the CSF assays.Notably, individuals with intermediate values of plasma %p-tau217 also had Aβ PET values close to the threshold used to determine Aβ PET status: they have borderline values across multiple modalities, indicating that they may have early AD brain pathological changes.For a more definitive result, these individuals could either repeat the same test at a later time or undergo testing with another type of diagnostic test (for example, PET or CSF).Notably, the two-cutoffs approach is currently employed for the FDA-approved CSF Lumipulse test 42,43 and has been suggested for AD BBMs 17,55 , especially when very high accuracy is needed.Very high confidence in Aβ status is especially important for patients who might be eligible for anti-amyloid immunotherapies, especially given the high costs associated with such therapies as well as the clinical resources required, including repeated infusions and magnetic resonance imaging scans.Tests with a PPV of at least 95% would be preferable so that fewer than 5% of patients receiving treatment would be amyloid negative.Such an approach using two cutoffs could also enable much faster and less expensive enrollment of participants into clinical trials because Aβ status could be determined using plasma %p-tau217 alone for the large majority of individuals 56 . 
The main strength of this study includes the use of a high-performance plasma %p-tau217 assay in combination with clinically used CSF and Aβ and tau PET biomarkers across two large and well-phenotyped cohorts. We also reported PPV and NPV estimates, in addition to sensitivity, as they are more clinically informative. Nonetheless, we acknowledge that these measures are influenced by the prevalence of the disease or pathology detected. In the present study, Aβ positivity ranged between 50% and 74% in the two cognitively impaired populations, which agrees with most other memory clinic cohorts of patients with MCI or mild dementia. For example, in the large-scale IDEAS study, 55% of MCI and 70% of dementia cases were amyloid positive 12. Limitations include the relatively few individuals in the Knight ADRC cohort with cognitive impairment and the lack of a sufficiently large group of individuals with both antemortem biomarker and postmortem data available. In addition, although hundreds of millions of mass spectrometry clinical tests are run every year for several clinical applications (for example, newborn screening, analysis of drugs of abuse and steroid analysis) 57, they typically have a higher cost per assay than immunoassays, and the corresponding analytical platforms are also less widely available and require more technical and operational expertise. Nonetheless, to date, mass spectrometry measures of plasma p-tau217 have shown the best performance for assessing the presence of Aβ pathology compared to immunoassays 29. Future head-to-head comparisons may address whether the benefits from higher accuracy provided by mass spectrometry assays outweigh the relative practicability and scalability offered by immunoassays. Finally, minoritized populations were not well enough represented in the study cohorts, even though many study participants had lower education levels and many comorbidities. Future studies should investigate the performance of plasma %p-tau217 in broader primary care-based populations.

In summary, plasma %p-tau217 can be used to determine Aβ status with a PPV and an accuracy of 95% in more than 80% of cognitively impaired patients and shows clinically equivalent or superior performance to clinically used FDA-approved CSF-based tests in classification of Aβ and tau PET status. Implementation of blood %p-tau217 in clinical practice would substantially reduce the need for PET or CSF testing, thereby enhancing access to accurate AD diagnosis in clinics worldwide, and enable determination of amyloid status in patients with MCI or mild dementia who might benefit from anti-amyloid immunotherapies.

Study design

This study included participants from two independent observational cohorts: the BioFINDER-2 study from Sweden and the Knight ADRC study from the United States. The Swedish BioFINDER-2 study (NCT03174938) was described previously in detail 58. The participants were recruited at Skåne University Hospital and the Hospital of Ängelholm in Sweden (dates of enrollment: April 2017 to June 2022) and included individuals who were cognitively unimpaired (either no cognitive concerns or subjective cognitive decline (SCD)) or cognitively impaired (classified as having MCI, AD dementia or various other neurodegenerative diseases) 23. Participants were categorized as having MCI if they performed worse than −1.5 s.d.
in any cognitive domain according to age- and education-stratified test norms, as previously described 58. AD dementia was diagnosed if the individual was Aβ positive by PET or CSF and met the Diagnostic and Statistical Manual of Mental Disorders, Fifth Edition, criteria for AD 59. The Knight ADRC cohort was previously described and enrolls individuals into longitudinal observational research studies of memory and aging; most participants live in the greater metropolitan area of St. Louis, Missouri, USA 44. Samples used for the current study were collected from participants between 6 February 2013 and 12 March 2020. Participants were assessed with the Clinical Dementia Rating (CDR) 60, and individuals included in the current study were either cognitively unimpaired (CDR = 0) or cognitively impaired (CDR > 0) with a clinical syndrome typical of AD (either MCI or dementia) based on standard criteria 61. Additionally, included participants had undergone both an Aβ PET and a tau PET scan within 2 years of CSF and had sufficient plasma available for analysis.

Fluid biomarkers

CSF AD biomarker measurements. CSF samples were collected and handled according to current international recommendations 44,62. In the Swedish BioFINDER-2 study, CSF concentrations of Aβ42 and p-tau181 were measured using Roche Elecsys CSF electrochemiluminescence immunoassays on a fully automated cobas e 601 instrument (Roche Diagnostics). Aβ40 concentrations were measured with the Roche NeuroToolKit on cobas e 411 and e 601 instruments (Roche Diagnostics). The ratio of CSF p-tau181 to Aβ42 (p-tau181/Aβ42) as measured by Elecsys assays was validated 63 and FDA approved in December 2022 for the detection of Aβ plaques associated with AD for individuals with cognitive impairment. The Elecsys Aβ42/40 ratio was also examined. In the Knight ADRC cohort, CSF Aβ42, Aβ40 and p-tau181 concentrations were measured with an automated immunoassay platform (Lumipulse G1200, Fujirebio). The ratio of CSF Aβ42 to Aβ40 (Aβ42/40) as measured by Lumipulse assays was validated 64 and FDA approved in May 2022 for the detection of Aβ plaques associated with AD for individuals with cognitive impairment; in addition, the Lumipulse Aβ42/p-tau181 ratio was also examined.

Blood %p-tau217 measurement. At the same session as CSF collection, blood was also collected from participants in a tube containing EDTA and centrifuged to separate plasma, as previously described 65. Blood plasma p-tau217 and non-p-tau217 were measured by liquid chromatography-tandem high-resolution mass spectrometry (LC-MS/HRMS) analysis as detailed in the Supplementary Methods. The %p-tau217 measure was calculated as the ratio of tau phosphorylated at residue 217 divided by the concentration of non-phosphorylated mid-region tau.
Imaging biomarker outcomes. Detailed descriptions of imaging procedures in the BioFINDER-2 and Knight ADRC cohorts were previously reported 23,66,67. Aβ PET was performed with the EMA/FDA-approved tracer [18F]flutemetamol in the BioFINDER-2 cohort and with the FDA-approved tracer [18F]florbetapir (AV45) or [11C]Pittsburgh Compound B (PiB) in the Knight ADRC cohort. Mean cortical SUVR was calculated using the average signal from neocortical ROIs (bilateral orbitofrontal, medial orbitofrontal, rostral middle frontal, superior frontal, superior temporal, middle temporal and precuneus) with cerebellar gray matter as reference. SUVR values were then transformed to Centiloids, which harmonizes measures from different tracers and studies 68. Aβ PET positivity was set at ≥37 Centiloids based on inclusion criteria in the TRAILBLAZER-ALZ studies that evaluated the clinical effects of the anti-Aβ immunotherapy donanemab 7. Additionally, in the BioFINDER-2 study, [18F]flutemetamol scans were also evaluated by visual read according to an FDA-approved protocol 69.

Tau PET scans were acquired with the [18F]RO948 tracer in the BioFINDER-2 cohort and with the FDA-approved [18F]flortaucipir tracer in the Knight ADRC cohort. These two tau PET tracers are structurally very similar and provide similar results in the cortex according to head-to-head comparisons 70. SUVR values were calculated in a commonly used temporal meta-ROI, which includes the Braak I-IV regions and captures the regions most affected by tau, with the inferior cerebellar gray matter as reference. Previously determined thresholds were used to determine tau PET positivity (SUVR > 1.32 in both cohorts) 44,71.

Endpoints. The primary outcome was the classification of amyloid pathology as determined by Aβ PET imaging. Secondary outcomes included the detection of brain tau aggregates as determined by tau PET imaging and agreement with a clinical AD diagnosis based on clinical symptoms and clinically obtained biomarker results. Main analyses were performed in cognitively impaired participants as they are the population currently eligible for anti-amyloid treatments.

Statistical analysis. Blood plasma %p-tau217, CSF p-tau181/Aβ42 and CSF Aβ42/40 were used as predictors in independent models. To evaluate the performance of the three fluid biomarkers in predicting the main outcomes (Aβ and tau PET status and clinical AD diagnosis), we used receiver operating characteristic (ROC) curves (pROC package 72). AUCs were calculated in all participants as well as for cognitively impaired (MCI and dementia) and cognitively unimpaired (controls and SCD) subgroups. DeLong's test included in the same R package was used to calculate mean and 95% CI differences of the plasma and CSF AUCs.
Next, we evaluated the performance of these biomarkers using only cognitively impaired participants, as this group is more relevant to the intended use of these tests in clinical practice. We used two approaches to categorize patients based on their fluid biomarkers. First, we created two groups (that is, positive and negative) based on a threshold derived by maximizing the sensitivity while fixing the specificity at 90% against each outcome independently (cutpointr package 73). For this approach, we compared the accuracy, PPV, NPV and sensitivity of plasma %p-tau217 to the FDA-approved CSF biomarkers. In a second approach, we created three groups of participants (that is, positive, negative and intermediate) using two different thresholds, as recently described 17. This was implemented independently for every outcome and cohort. The lower threshold was obtained by maximizing the specificity with the sensitivity fixed at 95%, whereas the upper threshold was obtained by maximizing the sensitivity with the specificity fixed at 95%. Participants with biomarker levels between these two thresholds were categorized as intermediate. For this approach, we compared the accuracy, PPV and NPV and the number of patients categorized as intermediate. In this approach, accuracy, PPV and NPV only took into account participants in the negative and positive groups, as the intermediate group was assessed by the percentage of participants assigned to it.

Statistics were calculated as the mean of a bootstrapped sample (n = 1,000 resamples with replacement, stratifying by the output), from which we also calculated the 95% CI. The bootstrapped sample was also used to calculate the difference between all plasma %p-tau217 statistics (reference) and those from the CSF biomarkers. We considered plasma and CSF biomarkers clinically equivalent if the 95% CI of the mean difference included zero and superior if the 95% CI did not include zero while favoring plasma results.
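To illustrate the procedure just described, the following is a minimal Python sketch of the single-cutoff and two-cutoff derivations and of the bootstrap comparison. It is not the authors' analysis code (which used the R packages pROC and cutpointr); the function names, the scikit-learn dependency and the default targets are illustrative assumptions.

```python
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

def cutoff_at_fixed_specificity(y, x, spec_target=0.90):
    """Threshold that maximizes sensitivity subject to specificity >= spec_target."""
    fpr, tpr, thr = roc_curve(y, x)          # specificity = 1 - FPR
    ok = (1 - fpr) >= spec_target
    return thr[ok][np.argmax(tpr[ok])]

def cutoff_at_fixed_sensitivity(y, x, sens_target=0.95):
    """Threshold that maximizes specificity subject to sensitivity >= sens_target."""
    fpr, tpr, thr = roc_curve(y, x)
    ok = tpr >= sens_target
    return thr[ok][np.argmin(fpr[ok])]

def two_cutoff_groups(y, x):
    """Positive / negative / intermediate categorization with two cutoffs."""
    lo = cutoff_at_fixed_sensitivity(y, x, 0.95)   # below lo -> confidently negative
    hi = cutoff_at_fixed_specificity(y, x, 0.95)   # at or above hi -> confidently positive
    group = np.where(x >= hi, "positive", np.where(x < lo, "negative", "intermediate"))
    return lo, hi, group

def bootstrap_difference(y, x_plasma, x_csf, stat, n_boot=1000, seed=0):
    """95% CI of stat(plasma) - stat(csf), resampling stratified by the outcome."""
    rng = np.random.default_rng(seed)
    idx_pos, idx_neg = np.where(y == 1)[0], np.where(y == 0)[0]
    diffs = []
    for _ in range(n_boot):
        b = np.concatenate([rng.choice(idx_pos, idx_pos.size, replace=True),
                            rng.choice(idx_neg, idx_neg.size, replace=True)])
        diffs.append(stat(y[b], x_plasma[b]) - stat(y[b], x_csf[b]))
    return np.percentile(diffs, [2.5, 97.5])

# Example: lo95, hi95 = bootstrap_difference(y, plasma, csf, roc_auc_score)
```

With such helpers, the AUC or any threshold-based statistic can be passed as `stat`; a 95% CI of the plasma-minus-CSF difference that covers zero corresponds to the "clinically equivalent" criterion above, whereas a CI entirely above zero corresponds to superiority of plasma.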
Extended Data Fig. 1 | Comparison among fluid biomarkers on predicting Aβ PET visual read positivity in cognitively impaired patients of the BioFINDER-2 cohort with in-bag estimates. a,b, Prediction of Aβ PET visual read positivity using the single-cutoff (a) and two-cutoffs (b) approaches (single cutoff: maximizing sensitivity with specificity fixed at 90%; two cutoffs: lower cutoff maximizing specificity with sensitivity fixed at 95%, upper cutoff maximizing sensitivity with specificity fixed at 95%; participants falling between the two cutoffs form the intermediate group). Dots and error bars represent the statistic and 95% CI, respectively. c, Bootstrapped differences (n = 1,000 resamples with replacement, stratified by the output) between plasma %p-tau217 (reference) and the CSF biomarkers for both approaches; the horizontal dashed line at zero marks no difference. Plasma and CSF biomarkers were considered clinically equivalent if the 95% CI of the mean difference included zero and clinically superior if it did not include zero and favored plasma (>0). Differences in the number of participants in the intermediate group have been scaled to a maximum of one to be comparable to the other differences; vertical dashed lines mark the maximal possible value (1); for the intermediate-value plots, colored bars give the percentage and error bars the 95% CI. d, Histograms of the data colored by imaging biomarker status (colored = positive); the vertical black line is the cutoff from the first approach (a) and the red lines are the lower and upper cutoffs from the second approach (b). Abbreviations: Aβ, amyloid-β; CI, confidence interval; CSF, cerebrospinal fluid; NPV, negative predictive value; PPV, positive predictive value.

Extended Data Fig. 2 | Continuous Aβ and tau PET measures by categorized fluid biomarker groups. Comparison between fluid biomarker levels categorized with the two-cutoff approach and continuous measures of Aβ PET (Centiloids; a,b) and tau PET (SUVR; c,d). Dots represent individual participants. The central band of each boxplot represents the median of the group, the lower and upper hinges correspond to the first and third quartiles, and the whiskers extend to the maximum/minimum value or 1.5 IQR from the hinge, whichever is lower. Horizontal dashed lines mark the positivity cutoff for each imaging marker (Aβ PET: ≥37 Centiloids; tau PET: >1.32 SUVR for both cohorts). Abbreviations: Aβ, amyloid-β; CI, confidence interval; CSF, cerebrospinal fluid; IQR, inter-quartile range; NPV, negative predictive value; PPV, positive predictive value; SUVR, standardized uptake value ratio.

Extended Data Fig. 3 | Comparison among fluid biomarkers on predicting Aβ PET positivity in cognitively impaired patients of the BioFINDER-2 cohort using external cutoffs. Same layout and cutoff definitions as Extended Data Fig. 1, but the external-cutoff method derives the cutoffs in independent cohorts: plasma %p-tau217 cutoffs were derived in the Knight ADRC cohort, and CSF biomarker cutoffs were derived in the UCSF cohort. Bootstrapped differences (n = 1,000 resamples with replacement, stratified by the output) between plasma %p-tau217 (reference) and CSF biomarkers are shown in (c) for both approaches, with the same equivalence and superiority criteria, and differences in the number of intermediate participants scaled to a maximum of one.

Fig. 1 | Concordance of fluid and imaging biomarkers of amyloid and tau pathologies. a,b,d,e, Concordance of fluid biomarkers with Aβ and tau PET positivity in BioFINDER-2 (a and d) and Knight ADRC (b and e) participants; ROC curves including all participants are shown in the first row, and AUCs for all, cognitively impaired and cognitively unimpaired groups in the next three columns, respectively. c,f, Bootstrapped differences (n = 1,000 resamples with replacement, stratified by the output) between plasma %p-tau217 (reference) and CSF biomarkers for the BioFINDER-2 (left) and Knight ADRC (right) cohorts; the horizontal dashed line is plotted at zero, representing the lack of difference between plasma and CSF biomarkers.

Fig. 2 | Comparison among fluid biomarkers on predicting Aβ PET positivity in cognitively impaired patients of the BioFINDER-2 cohort. a,b, Prediction of Aβ PET positivity in cognitively impaired participants (n = 304) using the single-cutoff (a) and two-cutoffs (b) approaches described above. Dots and error bars represent the statistic and 95% CI (from n = 1,000 bootstrapped samples with replacement), respectively. c, Bootstrapped differences between plasma %p-tau217 (reference) and CSF biomarkers for both approaches; the horizontal dashed line is plotted at zero.

Fig. 3 | Comparison among fluid biomarkers on predicting tau PET positivity in cognitively impaired patients of the BioFINDER-2 cohort. a,b, Prediction of tau PET positivity in cognitively impaired participants (n = 663) using the single-cutoff (a) and two-cutoffs (b) approaches described above. Dots and error bars represent the statistic and 95% CI, respectively; vertical dashed lines mark the maximal possible value (1); for the intermediate-value plots, colored bars give the percentage and error bars the 95% CI. Aβ PET positivity was assessed as Centiloids ≥ 37. c, Bootstrapped differences (n = 1,000 resamples with replacement, stratified by the output) between plasma %p-tau217 (reference) and CSF biomarkers.
2024-02-23T06:17:12.529Z
2024-02-21T00:00:00.000
{ "year": 2024, "sha1": "30c842d7a3873b47def3250ca4bbf53b864d4a67", "oa_license": "CCBY", "oa_url": null, "oa_status": null, "pdf_src": "PubMedCentral", "pdf_hash": "7e58bd9938c9ade08febe45ea9553d148ad4df06", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
232125809
pes2o/s2orc
v3-fos-license
What Is It to Implement a Human-Robot Joint Action?

Joint action in the sphere of human–human interrelations may be a model for human–robot interactions. Human–human interrelations are only possible when several prerequisites are met, inter alia: (1) that each agent has a representation within itself of its distinction from the other so that their respective tasks can be coordinated; (2) each agent attends to the same object, is aware of that fact, and the two sets of "attentions" are causally connected; and (3) each agent understands the other's action as intentional. The authors explain how human–robot interaction can benefit from the same threefold pattern. In this context, two key problems emerge. First, how can a robot be programed to recognize its distinction from a human subject in the same space, to detect when a human agent is attending to something, to produce signals which exhibit their internal state and make decisions about the goal-directedness of the other's actions such that the appropriate predictions can be made? Second, what must ...

Introduction

In this chapter, we present what it is to implement a joint action between a human and a robot. Joint action is "a social interaction whereby two or more individuals coordinate their actions in space and time to bring about a change in the environment" (Sebanz et al. 2006: 70). We consider this implementation through a set of coordination processes needed to realize this joint action: Self-Other Distinction, Joint Attention, Understanding of Intentional Action, and Shared Task Representation. This is something that we have already discussed in Clodic et al. (2017), but we will focus here on one example. Moreover, we will speak here about several elements that are components of a more global architecture described in Lemaignan et al. (2017). We introduce a simple human-robot collaborative example to illustrate our approach. This example has been used as a benchmark in a series of workshops, "Toward a Framework for Joint Action" (fja.sciencesconf.org), and is illustrated in Fig. 1. A human and a robot have the common goal to build a stack with four blocks. They should stack the blocks in a specific order (1, 2, 3, 4). Each agent participates in the task by placing his/its blocks on the stack. The actions available to each agent are the following: take a block on the table, put a block on the stack, remove a block from the stack, place a block on the table, and give a block to the other agent.

This presentation is a partial point of view regarding what is and can be done to implement a joint action between a robot and a human, since it presents only one example and a set of software developed in our lab. It only intends to explain what we claim is needed to enable a robot to run such a simple scenario. At this point, it has to be noticed that, from a philosophical point of view, some philosophers such as Seibt (2017) have stressed that the intentionalist vocabulary used in robotics is considered problematic, especially when robots are placed in social interaction spaces. In the following, we will use this intentionalist vocabulary in order to describe the functionalities of the robot, such as "believe" and "answers," because this is the way we describe our work in the robotics and AI communities. However, to accommodate the philosophical concern, we would like to note that this can be considered as shorthand for "the robot simulates the belief," "the robot simulates an answer," etc.
Thus, whenever robotic behavior is described with a verb that normally characterizes a human action, these passages can be read as a reference to the robot's simulation of the relevant action.

Fig. 1 A simple human-robot interaction scenario: A human and a robot have the common goal to build a stack with four blocks. They should stack the blocks in a specific order (1, 2, 3, 4). Each agent participates in the task by placing his/its blocks on the stack. The actions available to each agent are the following: take a block on the table, put a block on the stack, remove a block from the stack, place a block on the table, and give a block to the other agent. Also, the human and the robot observe one another. Copyright laas/cnrs https://homepages.laas.fr/aclodic

Self-Other Distinction

The first coordination process is Self-Other Distinction. It means that "for shared representations of actions and tasks to foster coordination rather than create confusion, it is important that agents also be able to keep apart representations of their own and other's actions and intentions" (Pacherie 2012: 359). Regarding our example, it means that each agent should be able to create and maintain a representation of the world for itself but also from the point of view of the other agent. In the following, we will explain what the robot can do to build this kind of representation. The way a human (can) build such a representation for the robot agent (and on which basis) is still an open question.

Joint Attention

The second coordination process is Joint Attention. Attention is the mental activity by which we select among items in our perceptual field, focusing on some rather than others (see Watzl 2017). In a joint action setting, we have to deal with joint attention, which is more than the addition of two persons' attention. "The phenomenon of joint attention involves more than just two people attending to the same object or event. At least two additional conditions must be obtained. First, there must be some causal connection between the two subjects' acts of attending (causal coordination). Second, each subject must be aware, in some sense, of the object as an object that is present to both; in other words, the fact that both are attending to the same object or event should be open or mutually manifest (mutual manifestness)" (Pacherie 2012: 355).

On the robot side, it means that the robot must be able to detect and represent what is present in the joint action space, i.e., the joint attention space. It needs to be equipped with situation assessment capabilities (Lemaignan et al. 2018; Milliez et al. 2014).

Fig. 2 Situation Assessment: the robot perceives its environment, builds a model of it, and computes facts through spatial reasoning to be able to share information with the human at a high level of abstraction, and realizes mental state management to infer human knowledge. Copyright laas/cnrs https://homepages.laas.fr/aclodic

Fig. 3 What can we infer viewing this robot? There is no standard interface for the robot, so it is difficult if not impossible to infer what this robot is able to do and what it is able to perceive (from its environment but also from the human it interacts with). Copyright laas/cnrs https://homepages.laas.fr/aclodic

In our example, illustrated in Fig. 2, it means that the robot needs to get:
• its own position, which could be obtained for example by positioning the robot on a map and localizing it with the help of its laser (e.g., using amcl localization (http://wiki.ros.org/amcl) and gmapping (http://wiki.ros.org/gmapping))
• the position of the human with whom it interacts (e.g., here it is tracked through the use of a motion capture system, which is why the human wears a helmet and a wrist brace; more precisely, in this example, the robot has access to the head position and the right hand position)
• the position of the objects in the environment (e.g., here, a QR code (https://en.wikipedia.org/wiki/QR_code) has been glued on each face of each block; these codes, and so the blocks, are tracked with one of the robot cameras, and we get the 3D position of each block in the environment (e.g., with http://wiki.ros.org/ar_track_alvar))

However, each position computed by the robot is given as an x, y, z, and theta position in a given frame. We cannot imagine using such information to elaborate a verbal interaction with the human: "please take the block at position x = 7.5 m, y = 3.0 m, z = 1.0 m, and theta = 3.0 radians in the frame map...". To overcome this limit, we must transform each position into information that is understandable by (and hence shareable with) the human, e.g., (RedBlock is On Table). We can also compute additional information such as (GreenBlock is Visible By Human) or (BlueBlock is Reachable By Robot). This is what we call "spatial reasoning." Finally, the robot must also be aware that the information available to the human can be different from the information it has access to; e.g., an obstacle on the table can prevent her/him from seeing what is on the table. To infer the human's knowledge, we compute all the information not only from the robot's point of view but also from the human's position point of view (Alami et al. 2011; Warnier et al. 2012; Milliez et al. 2014); this is what we call "mental state management." On the human side, we can infer that the human is able to obtain the same set of information from the situation.

But joint attention is more than that. We have to take into account "mutual manifestness," i.e., "(...) each subject must be aware in some sense, of the object as an object that is present to both; in other words the fact that both are attending to the same object or event should be open or mutually manifest..." (Pacherie 2012: 355). It raises several questions. How can a robot exhibit joint attention? What cues should the robot exhibit to let the human infer that joint attention is met? How can a robot know that the human it interacts with is really involved in the joint task? What are the cues that should be collected by the robot to infer joint attention? These questions are still open questions. To answer them, we have to work particularly on the way to make the robot more understandable and more legible. For example, viewing this robot in Fig. 3, what can one infer about its capabilities?
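As an illustration of the situation assessment and perspective taking just described, here is a minimal Python sketch that turns metric poses into symbolic facts such as (RedBlock isOn Table) or (GreenBlock isVisibleBy Human). It is not the software running on the robot; the geometric thresholds, the field-of-view model and the helper names are illustrative assumptions.

```python
from dataclasses import dataclass
import math

@dataclass
class Pose:                      # metric pose in the map frame (from localization / tag tracking)
    x: float; y: float; z: float; theta: float = 0.0

def is_on(obj: Pose, support: Pose, xy_tol=0.15, z_band=(0.0, 0.10)) -> bool:
    """An object rests on a support if it is horizontally close and just above it."""
    dz = obj.z - support.z
    return math.hypot(obj.x - support.x, obj.y - support.y) < xy_tol and z_band[0] <= dz <= z_band[1]

def is_reachable_by(obj: Pose, agent: Pose, max_reach=0.9) -> bool:
    return math.dist((obj.x, obj.y, obj.z), (agent.x, agent.y, agent.z)) < max_reach

def is_visible_by(obj: Pose, head: Pose, fov_deg=120.0) -> bool:
    """Crude visibility test: the object lies inside the agent's horizontal field of view."""
    bearing = math.atan2(obj.y - head.y, obj.x - head.x)
    diff = (bearing - head.theta + math.pi) % (2 * math.pi) - math.pi
    return abs(diff) < math.radians(fov_deg) / 2

def assess(objects, supports, agents):
    """Turn metric poses into symbolic facts shared at a human-understandable level."""
    facts = set()
    for oname, opose in objects.items():
        for sname, spose in supports.items():
            if is_on(opose, spose):
                facts.add((oname, "isOn", sname))
        for aname, apose in agents.items():
            if is_reachable_by(opose, apose):
                facts.add((oname, "isReachableBy", aname))
            if is_visible_by(opose, apose):
                facts.add((oname, "isVisibleBy", aname))
    return facts

objects = {"RedBlock": Pose(0.4, 0.1, 0.75)}
supports = {"Table": Pose(0.4, 0.0, 0.72)}
agents = {"Robot": Pose(0.0, 0.0, 1.2, 0.0), "Human": Pose(1.2, 0.0, 1.6, math.pi)}
print(assess(objects, supports, agents))   # e.g. {('RedBlock', 'isOn', 'Table'), ...}
```

Running the same fact computation once with poses estimated from the robot's viewpoint and once with poses and occlusions estimated from the human's viewpoint, and keeping the two fact bases separate, is a rough stand-in for what the text above calls mental state management.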
Understanding of Intentional Action

"Understanding intentions is foundational because it provides the interpretive matrix for deciding precisely what it is that someone is doing in the first place. Thus, the exact same physical movement may be seen as giving an object, sharing it, loaning it, moving it, getting rid of it, returning it, trading it, selling it, and on and on, depending on the goals and intentions of the actor" (Tomasello et al. 2005: 675). Understanding of intentional action could be seen as a building block of understanding intentions; it means that each agent should be able to read its partner's actions.

To understand an intentional action, an agent should, when observing a partner's action or course of actions, be able to infer their partner's intention. Here, when we speak about a partner's intention, we mean its goal and its plan. It is linked to action-to-goal prediction (i.e., viewing and understanding the on-going action, you are able to infer the underlying goal) and goal-to-action prediction (i.e., knowing the goal, you are able to infer what would be the action(s) needed to achieve it). On the robot side, it means that it needs to be able to understand what the human is currently doing and to be able to predict the outcomes of the human's actions; e.g., it must be equipped with action recognition abilities. The difficulty here is to frame what should and can be recognized, since the spectrum is vast regarding what the human is able to do. A way to do that is to choose to consider only a set of actions framed by a particular task. On the other side, the human needs to be able to understand what the robot is doing, be able to infer the goal and to predict the outcomes of the robot's actions. It means that, viewing a movement, the human should be able to infer what the underlying action of the robot is. That means the robot should perform movements that can be read by the human. Before doing a movement, the robot needs to compute it; this is motion planning. Motion planning takes as inputs an initial and a final configuration (for manipulation, it is the position of the arms; for navigation, it is the position of the robot base). Motion planning computes a path or a trajectory from the initial configuration to the final configuration. This path could be possible but not understandable and/or legible and/or predictable for the human. For example, in Fig. 4, on the left, you see a path which is possible but should be avoided if possible; the one on the right should be preferred. In addition, some paths could also be dangerous and/or not comfortable for the human, as illustrated in Fig. 5. Human-aware motion planning (Sisbot et al. 2007; Kruse et al. 2013; Khambhaita and Alami 2017a, b) has been developed to enable the robot to handle the choice of a path that is acceptable, predictable, and comfortable for the human the robot interacts with. Figure 6 shows an implementation of a human-aware motion planning algorithm (Sisbot et al. 2007, 2010; Sisbot and Alami 2012) which takes into account safety, visibility, and comfort of the human. In addition, this algorithm is able to compute a path for both the robot and the human, which can solve a situation where a human action is needed or can be used to balance effort between the two agents. However, it is not sufficient. When a robot is equipped with something that looks like a head, for example, people tend to consider that it should act like a head, because people anthropomorphize. It means that we need to consider the entire body of the robot, and not only the base or the arms of the robot, for the movement, even if it is not needed to achieve the action (e.g., Gharbi et al. 2015; Khambhaita et al. 2016). This could be linked to the concept of coordination smoother, which is "any kind of modulation of one's movements that reliably has the effect of simplifying coordination" (Vesper et al. 2010, p. 1001).
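As a rough illustration of how safety, visibility and comfort can be combined into a single weighted score, here is a minimal Python sketch. It is not the algorithm of Sisbot and colleagues; the specific cost shapes, distances and weights are illustrative assumptions.

```python
import math

def safety_cost(p, human_xy, d_safe=1.2):
    """Penalize positions that bring the robot too close to the human."""
    d = math.dist(p, human_xy)
    return 0.0 if d >= d_safe else (d_safe - d) / d_safe

def visibility_cost(p, human_xy, gaze_theta):
    """Penalize positions outside the human's field of view (the robot should stay visible)."""
    bearing = math.atan2(p[1] - human_xy[1], p[0] - human_xy[0])
    diff = abs((bearing - gaze_theta + math.pi) % (2 * math.pi) - math.pi)
    return diff / math.pi            # 0 when straight ahead, 1 when directly behind

def comfort_cost(p, handover_xy, d_comfort=0.6):
    """Penalize positions far from a comfortable hand-over distance."""
    return abs(math.dist(p, handover_xy) - d_comfort)

def human_aware_cost(p, human_xy, gaze_theta, handover_xy, w=(1.0, 0.5, 0.5)):
    return (w[0] * safety_cost(p, human_xy)
            + w[1] * visibility_cost(p, human_xy, gaze_theta)
            + w[2] * comfort_cost(p, handover_xy))

# A planner can rank candidate waypoints (or bias its sampling) with this cost.
human_xy, gaze_theta, handover_xy = (2.0, 0.0), math.pi, (1.5, 0.0)
candidates = [(1.0, 0.0), (2.0, 1.0), (3.0, 0.0)]
best = min(candidates, key=lambda p: human_aware_cost(p, human_xy, gaze_theta, handover_xy))
```

The weights can be re-tuned per person, location or task, which is the kind of flexibility described for the algorithm of Fig. 6.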
Fig. 4 The one at right is better from an interaction point of view since it is easily understandable by the human. However, from a computational point of view (and even from an efficiency point of view, if we just consider the robot action that needs to be performed) they are equivalent. Consequently, we need to take these features explicitly into account when planning robot motions. That is what human-aware motion planning aims to achieve. Copyright laas/cnrs https://homepages.laas.fr/aclodic

Fig. 5 Not "human-aware" positions of the robot. Several criteria should be taken into account, such as safety, comfort, and visibility. This is for the hand-over position but also for the overall robot position itself. Copyright laas/cnrs https://homepages.laas.fr/aclodic

Fig. 6 An example of a human-aware motion planning algorithm combining three criteria: safety of the human, visibility of the robot by the human, and comfort of the human. The three criteria can be weighed according to their importance with a given person, at a particular location or time of the task. Copyright laas/cnrs https://homepages.laas.fr/aclodic

Shared Task Representations

The last coordination process is shared task representations. As emphasized by Knoblich and colleagues (Knoblich et al. 2011), shared task representations play an important role in goal-directed coordination. Sharing representations can be considered as putting in perspective all the processes already described; e.g., knowing that the robot and the human track the same block in the interaction scene through joint attention, and that the robot is currently moving this block in the direction of the stack with the help of intentional action understanding, makes sense in the context of the robot and the human building a stack together in the framework of a joint action. To be able to share task representations, we need to have the same ones (or a way to understand them). We developed a Human-Aware Task Planner (HATP) based on the Hierarchical Task Network (HTN) representation (Alami et al. 2006; Montreuil et al. 2007; Alili et al. 2009; Clodic et al. 2009; Lallement et al. 2014). The domain representation is illustrated in Fig. 7; it is composed of a set of actions (e.g., placeCube) and a set of tasks (e.g., buildStack) which combine action(s) and task(s). One of the advantages of such a representation is that it is human readable. Here, placeCube(Agent R, Cube C, Area A) means that for an Agent R to place the Cube C in the Area A, the precondition is that R has the Cube C in hand, and the effects of the action are that R no longer has the Cube C in hand but the object C is on the stack of Area A. It is possible to add a cost and a duration to each action if we want to weigh the influence of each of the actions. On the other hand, buildStack is done by adding a cube (addCube) and then continuing to build the stack (buildStack). Then each task is also refined until we get an action.

Fig. 7 HATP domain definition for the joint task buildStack and definition of the action placeCube: the action placeCube for an Agent R, a Cube C in an Area A, could be defined as follows. The precondition is that Agent R has the Cube C in hand before the action; the effect of the action is that Agent R does not have the Cube C anymore and the Cube C is on the stack in Area A. Task buildStack combines addCube and buildStack. Task addCube combines getCube and putCube. Task getCube could be done either by picking the Cube or doing a handover. Copyright laas/cnrs https://homepages.laas.fr/aclodic

HATP computes a plan both for the robot and the human (or humans) it interacts with, as illustrated in Fig. 8. The workload could be balanced between the robot and the human; moreover, the system enables the choice of the actor to be postponed to execution time.
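To give a concrete flavor of the kind of domain HATP works on, here is a minimal Python sketch of an HTN-style action and task decomposition for the block-stacking example. It is not the HATP syntax itself; the state representation and the helper functions are illustrative assumptions, and a planner interpreting the returned task networks is left out.

```python
def place_cube(state, agent, cube, area):
    """Primitive action: precondition -- agent holds the cube; effect -- cube goes on the stack."""
    if state["in_hand"].get(agent) != cube:
        return None                                    # precondition violated
    new = {"in_hand": dict(state["in_hand"]), "stack": list(state["stack"])}
    new["in_hand"][agent] = None
    new["stack"].append((cube, area))
    return new

def other(agent):
    return "human" if agent == "robot" else "robot"

def choose_agent(state, cube):
    # Illustrative policy: whoever can reach the cube; costs/durations could weigh this choice,
    # and the choice can also be postponed to execution time.
    return "robot" if cube in state.get("reachable_by_robot", set()) else "human"

def get_cube(state, agent, cube):
    """Task: refined either into picking the cube or into a hand-over from the partner."""
    return [[("pick", agent, cube)],
            [("handover", other(agent), agent, cube)]]   # two alternative decompositions

def add_cube(state, agent, cube):
    return [[("getCube", agent, cube), ("placeCube", agent, cube, "stackArea")]]

def build_stack(state, remaining):
    """Recursive task: add the next cube, then keep building until no cube remains."""
    if not remaining:
        return [[]]                                      # goal reached, empty decomposition
    cube, rest = remaining[0], remaining[1:]
    agent = choose_agent(state, cube)
    return [[("addCube", agent, cube), ("buildStack", rest)]]
```

An HTN planner would recursively expand such decompositions down to primitive actions, keeping the alternatives (for example, pick versus hand-over) as separate branches of the search and using optional costs and durations to choose among them and to balance the workload between the agents.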
However, one of the drawbacks of such a representation is that it is not expandable: once the domain is written, you cannot modify it. One idea could be to use reinforcement learning. However, reinforcement learning is difficult to use "as is" in a human-robot interaction case. The reinforcement learning system needs to test any combination of actions to be able to learn the best one, which could lead to nonsensical behavior of the robot. This can be difficult to interpret for the human it interacts with, it will be difficult for him to interact with the robot, and it will lead to learning failure. To overcome this limitation, we have proposed to mix the two approaches by using HATP as a bootstrap for a reinforcement learning system (Renaudo et al. 2015; Chatila et al. 2018).

With a planning system such as HATP, we have a plan for both the robot and the human it interacts with, but this is not enough. If we follow the idea of Knoblich and colleagues (Knoblich et al. 2011), shared task representations do not only specify in advance what the respective tasks of each of the co-agents are; they also provide control structures that allow agents to monitor and predict what their partners are doing, thus enabling interpersonal coordination in real time. This means that the robot not only needs the plan, but also ways to monitor this plan. Besides the world state (cf. Fig. 2 and the section regarding situation assessment) and the plan, we developed a monitoring system that enables the robot to infer plan status and action status both from its point of view and from the point of view of the human, as illustrated in Fig. 9 (Devin and Alami 2016; Devin et al. 2017). With this information, the robot is able to adapt its execution in real time. For example, there may be a mismatch between action status on the robot side and on the human side (e.g., the robot waiting for an action from the human). Equipped with this monitoring, the robot can detect the issue and warn the human. The issue can also be at the plan status level, e.g., the robot considering that the plan is no longer achievable while it detects that the human continues to act.

Conclusion

We have presented four coordination processes needed to realize a joint action. Taking these different processes into account requires the implementation of dedicated software: self-other distinction → mental state management; joint attention → situation assessment; understanding of intentional action → action recognition abilities as well as human-aware action (motion) planning and execution; shared task representations → human-aware task planning and execution as well as monitoring. The execution of a joint action requires not only that the robot be able to achieve its part of the task but that it achieve it in a way that is understandable to the human it interacts with, and that it take into account the reaction of the human if any. Mixing execution and monitoring requires making some choices at some point; e.g., if the camera is needed to do an action, the robot cannot use it to monitor the human if it is not in the same field of view. These choices are made by the supervision system, which manages the overall task execution from task planning to low-level action execution.
We talked a little bit about how the human manages these different coordination processes in a human-robot interaction framework and about the fact that there is still some uncertainty about how he manages them. We believe that it may be necessary in the long term to give the human the means to better understand the robot in the first place.

Fig. 9 Monitoring the human side of the plan execution: besides the world state, the robot computes the state of the goals that need to be achieved, and the status of the on-going plans as well as of each action. It is done not only from its point of view but also from the point of view of the human. Copyright laas/cnrs https://homepages.laas.fr/aclodic

Finally, what has been presented in this chapter is partial for at least two reasons. First, we have chosen to present only work done in our lab, but this work already covers the execution of an entire task and an interesting variety of dimensions. Second, we made the choice not to mention the way to handle communication or dialog, to handle data management or memory, to handle negotiation or commitment management, to enable learning, or to take into account social aspects (incl. privacy) or even emotional ones, etc. However, it gives a first intuition to understand what needs to be taken into account to make a human-robot interaction successful (even for a very simple task).
2021-03-06T14:11:30.064Z
2021-01-01T00:00:00.000
{ "year": 2021, "sha1": "8f3222554e2ae18135b617d1044890fe2bdea4b2", "oa_license": "CCBY", "oa_url": "https://link.springer.com/content/pdf/10.1007/978-3-030-54173-6_19.pdf", "oa_status": "HYBRID", "pdf_src": "Adhoc", "pdf_hash": "74ee4ca8f3f0d73059b2ebc56127f12bd55cca77", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [ "Computer Science" ] }
853706
pes2o/s2orc
v3-fos-license
Forecasting Monthly Electricity Demands: An Application of Neural Networks Trained by Heuristic Algorithms

Electricity demand forecasting plays an important role in capacity planning, scheduling, and the operation of power systems. Reliable and accurate prediction of electricity demands is therefore vital. In this study, artificial neural networks (ANNs) trained by different heuristic algorithms, including the Gravitational Search Algorithm (GSA) and the Cuckoo Optimization Algorithm (COA), are utilized to estimate monthly electricity demands. The empirical data used in this study are the historical data affecting electricity demand, including rainy time, temperature, humidity, wind speed, etc. The proposed models are applied to Hanoi, Vietnam. Based on the performance indices calculated, the constructed models show high forecasting performance. The obtained results are also compared with those of several well-known methods. Our study indicates that the ANN-COA model outperforms the others and provides more accurate forecasting than traditional methods.

Introduction

Electric energy plays a fundamental role in business operations all over the world. Our world runs because electricity makes industries, homes, and services work. Therefore, electricity sources must be carefully managed and implemented in order to guarantee the efficient use of electricity. The key to this is to have accurate knowledge of future electricity demands, accurate capacity planning, scheduling, and operation of the power systems. Hence, reliable electricity demand forecasting is needed in order to guarantee that production can meet demand. However, it is difficult to forecast electricity demand because the demand series often contain unpredictable trends, high noise levels, and exogenous variables. Although demand forecasting is difficult to implement, the relevance of forecasting electricity demand has been a much-discussed issue in recent years. This has led to the development of various new tools and methods for forecasting.

Since the accuracy of demand forecasting plays an important role in the success of efficiency planning, energy analysts need guidelines to select the most appropriate forecasting techniques in order to obtain accurate forecasts of electricity consumption trends and to schedule generator planning and maintenance. In general, electricity demand forecasting, accumulated on different time scales, is categorized into short-term, medium-term and long-term demands. Short-term demand forecasting carries out a prediction of the load or energy demand several hours or days ahead. This prediction is very important for the daily operation of facilities. Short-term demand is generally affected by daily life habits, weather conditions, and the temperature. On the other hand, medium-term and long-term demands, which can span periods from a week to a year, are affected by economic and demographic growth, and climate change. Medium-term forecasting provides a prediction of electric demand in the following weeks or months, and long-term forecasting predicts the annual power peaks in the following years in order to plan grid extension. Due to the clear interest that medium-term demand forecasting presents in deregulated power systems, in this study we therefore focus on monthly electric demand forecasting.
Electricity demand forecasting is a complicated task since the demand is affected directly or indirectly by various factors primarily associated with the economy and climate change. In the past, straight-line extrapolations of historical energy consumption trends were adequate methods. However, with the emergence of alternative energies and technologies, fluctuating economic inflation, rapid change in energy prices, industrial development, and global warming issues, modeling techniques that capture the effect of factors are increasingly necessary, such as average air pressure, average temperature, average wind velocity, rainfall, rainy time, average relative humidity, daylight time, and technological variables. The modelling techniques range from traditional methods, including autoregressive integrated moving average (ARIMA) and multiple linear regression (MLR) (both relying on mathematical approaches), to intelligent techniques, such as fuzzy logic and neural networks [1].

In the early development of forecasting approaches, the most commonly used methods were statistical techniques, such as trend analysis and extrapolation. It is reasonably easy to apply these kinds of methods due to their simple calculations. Since the total electricity demand includes the demand of factories, enterprises, citizens, and the service industry, forecasting electricity usage requires certain knowledge of past demands in order to take into account the social evolution of future energy demand. Therefore, as past data is needed to forecast future data, a time series analysis of energy demand is usually used to predict future energy use. Time series forecasting is a powerful tool that is widely used to predict time evolution in a number of divergent applications. Different tools, including ARIMA and MLR, have also been developed in the field of time series analysis.

Recently, artificial intelligence techniques have been found to be more effective than traditional methods. Among these, artificial neural networks (ANNs) have been widely applied in various application areas [2][3][4] as well as in the electricity demand forecasting area [5][6][7][8][9][10][11]. The ANN is a parallel computing system that uses a large number of connected artificial neurons. This approach is similar to the function of biological neural networks. After being trained by historical data, ANNs can be used as a prediction tool. Many researchers use ANNs to solve electricity demand forecasting problems because of their speed and accuracy. Additionally, ANNs can be easily implemented in the development of software. When applying the ANN for forecasting [12,13], most researchers focused on the multi-layer perceptron (MLP) neural network model. Back-propagation (BP) is the most commonly used method for training an MLP network. However, many studies have pointed out drawbacks of this algorithm, including the tendency to be trapped in local minima [14] and slow convergence [15]. Heuristic algorithms are known for their ability to produce optimal or near-optimal solutions for optimization problems. In recent years, several heuristic algorithms, including genetic algorithms (GA) [16], particle swarm optimization (PSO) [17], ant colony optimization (ACO) [18] and differential evolution (DE) [19], have been proposed for the purpose of training.
Other than these, two heuristic algorithms, the Gravitational Search Algorithm (GSA) and the Cuckoo Optimization Algorithm (COA), both inspired by the behavior of natural phenomena, were also developed for solving optimization problems. Through some benchmarking studies, these algorithms have been proven to be powerful and are considered to outperform other algorithms. The GSA, introduced by Rashedi [20], is based on the law of gravity and mass interactions. The comparison of the GSA with other optimization algorithms in some problems shows that the GSA performs well [20,21]. The COA algorithm was developed by Rajabioun [22]. The comparison of the COA with standard versions of PSO and GA also shows that the COA has superiority in fast convergence and near global optimal achievement [22,23]. Moreover, the GSA and COA algorithms are efficient optimization algorithms in terms of reducing the aforementioned drawbacks of back propagation. Since these algorithms are relatively new, they have yet to be compared with each other for many different applications.

The merits of the GSA and COA algorithms and the success of ANNs in electricity demand forecasting have encouraged us to use these heuristic algorithms for training ANNs. In this study, several models for electricity demand forecasting have been developed and tested to provide monthly predictions. These models utilize ANNs trained by the mentioned heuristic algorithms. Error criteria, such as root mean squared error (RMSE) and mean absolute percentage error (MAPE), were used as measures to justify the appropriate model.

The rest of this paper is organized into five sections. After the introduction in Section 1, the literature review is provided in Section 2. The heuristic algorithms are described in Section 3. Section 4 is dedicated to the research design. The experimental results are discussed in Section 5. Finally, Section 6 gives the conclusions.

Literature Review

The ANN has been widely used in different applications. This section provides a glimpse into the literature concerning the use of ANNs in electricity demand forecasting. Feilat and Bouzguenda [7] developed a mid-term load forecasting model based on an ANN. The proposed model was applied to the Al-Dakhiliya franchise area of the Mazoon Electricity Distribution (MZEC) Company, Oman. The model used monthly load data, temperature, humidity and wind speed from 2006 to 2010 as inputs. The performance indices and the simulation results showed that the forecasting accuracy was satisfactory. The obtained results were also compared with those obtained from the linear regression model. It was found that the ANN-based model outperformed the multiple linear regression method. Kandananond [5] applied different forecasting methods, including ARIMA, ANN, and MLR, to forecast electricity demand in Thailand. His study used the historical data of the electricity demand in Thailand from 1986 to 2010. Based on the performance indices, the ANN approach outperformed the ARIMA and MLR methods. Santana et al. [9] used the MLP network with one hidden layer to forecast power consumption in Brazil. The algorithms used in the training of the MLP network were Levenberg-Marquardt and back propagation. The results showed that the MLP networks presented exceptional results when studying a mid-term forecast. Azadeh et al.
[10] used the MLP network to forecast electricity consumption. Monthly electricity consumption in Iran for the past 20 years was collected to train and test the network. The conventional regression model was also applied to the research problem. Through analysis of variance, actual data was compared with forecasting data obtained from the ANN and conventional regression models. It was shown that the ANN approach was superior for estimating the total electricity consumption. Azadeh et al. [11] proposed an artificial neural network (ANN) approach for annual electricity consumption in high energy consumption industrial sectors. Actual data from high energy consuming (intensive) industries in Iran from 1979 to 2003 was used. The ANN forecasting values were compared with actual data and the conventional regression model. The results also indicated that the MLP network can estimate the annual consumption with less error. Deng [24] presented a model based on the multilayer feed-forward neural network to forecast the energy demand for China. The model outperformed the linear regression model in terms of root mean squared error without any over-fitting problem. Hotunluoglu and Karakaya [25] forecasted Turkey's energy demand by the use of an artificial neural network. Three different scenarios were developed. The obtained energy demand forecasts are useful in the future energy planning and policy making process. In [26], the ANN model was tested and compared with other forecasting methods including simple moving average, linear regression, and multivariate adaptive regression splines. It was concluded that the ANN model was effective at forecasting peak building electrical demand in a large government building sixty minutes into the future. Hernández et al. [27] presented a two-stage prediction model based on an ANN for short-term load forecasting of the following day in a microgrid environment. The obtained mean absolute percentage error showed an overall improvement of 52%. Ryu et al. [28] proposed deep neural network-based models to predict the 24-h load pattern day-ahead based on weather, date and past electricity consumption. The obtained results indicated that the proposed models demonstrated accurate and robust predictions compared to other forecasting models; e.g., the mean absolute percentage error and relative root mean square error were reduced by 17% and 22% compared to the shallow neural network model and 9% and 29% compared to the double seasonal Holt-Winters model.

The abovementioned studies revealed that ANN-based models have been successfully used in the area of power electricity forecasting. However, in order to increase the reliability of the forecasting results of the ANN-based model, attention needs to focus on optimizing the parameters of the model. In other words, the training phase plays an important role in developing the ANN-based models.

In the literature we examined, the BP algorithm, a gradient-based algorithm, has been widely used in the training phase. However, the BP algorithm has some drawbacks. The two recent algorithms, including GSA and COA, are efficient algorithms in terms of reducing the drawbacks of the BP.
Taking into account the available literature, there is still room for improving the ANN-based models for electricity demand forecasting. In this paper, we propose a multilayer feed-forward network improved by the GSA and COA algorithms for forecasting electricity demand. The scientific contributions made by the current research are the new approaches applied herein. Although the models are developed for a specific application, they can be used as basic guides for other application areas.

Heuristic Algorithms

In this section, the heuristic algorithms used in the training phase, including GSA and COA, are described.

Gravitational Search Algorithm

The GSA, proposed by Rashedi et al. [20], is based on the physical law of gravity and the law of motion. In the universe, every particle attracts every other particle with a gravitational force that is directly proportional to the product of their masses and inversely proportional to the square of the distance between them. The GSA can be considered as a system of agents, called masses, that obey the Newtonian laws of gravitation and motion. All masses attract each other through the gravity forces between them. A heavier mass has a bigger force.

Consider a system with N masses in which the position of the ith mass is defined as follows:

X_i = (x_i^1, ..., x_i^d, ..., x_i^n), for i = 1, 2, ..., N,

where x_i^d is the position of the ith agent in the dth dimension and n represents the dimension of the search space. At a specific time t, the force acting on mass i from mass j is defined as follows:

F_ij^d(t) = G(t) · (M_pi(t) · M_aj(t)) / (R_ij(t) + ε) · (x_j^d(t) − x_i^d(t)),

where M_aj denotes the active gravitational mass of agent j; M_pi is the passive gravitational mass of agent i; G(t) represents the gravitational constant at time t; ε is a small constant; and R_ij(t) is the Euclidian distance between agents i and j. The total force acting on agent i in dimension d is as follows:

F_i^d(t) = Σ_{j=1, j≠i}^{N} rand_j · F_ij^d(t),

where rand_j is a random number in [0, 1]. According to the law of motion, the acceleration of agent i at time t in the dth dimension, a_i^d(t), is calculated as follows:

a_i^d(t) = F_i^d(t) / M_ii(t),

where M_ii(t) is the mass of object i. The next velocity of an agent is a fraction of its current velocity added to its acceleration. Therefore, the next velocity and the next position can be calculated as:

v_i^d(t+1) = rand_i · v_i^d(t) + a_i^d(t),
x_i^d(t+1) = x_i^d(t) + v_i^d(t+1).

The gravitational constant, G, is generated at the beginning and is reduced with time to control the search accuracy. It is a function of the initial value (G_0) and time (t):

G(t) = G_0 · e^(−αt/T),

where α is a decay constant and T is the total number of iterations.

Gravitational and inertia masses are calculated from the fitness value. The fitness function is used in each iteration of the algorithm to evaluate the quality of all the proposed solutions to the problem in the current population. The fitness function evaluates how good a single solution in a population is; for example, if we want to find for which x-value a function attains its y-minimum, the fitness of a unit might be the negative y-value (the smaller the value, the higher the fitness). In general, the fitness value is the objective value of the optimization problem that we want to minimize or maximize. A heavier mass is a more efficient agent. This means that better agents have higher attractions and move more slowly. Assuming equality of the gravitational and inertial masses (M_ai = M_pi = M_ii = M_i), the masses are updated by the following equations:

m_i(t) = (fit_i(t) − worst(t)) / (best(t) − worst(t)),
M_i(t) = m_i(t) / Σ_{j=1}^{N} m_j(t),

where fit_i(t) denotes the fitness value of agent i at time t, and worst(t) and best(t) represent the weakest and strongest agents in the population, respectively. For a minimization problem, worst(t) and best(t) are as follows:

best(t) = min_{j∈{1,...,N}} fit_j(t),  worst(t) = max_{j∈{1,...,N}} fit_j(t).

For a maximization problem,

best(t) = max_{j∈{1,...,N}} fit_j(t),  worst(t) = min_{j∈{1,...,N}} fit_j(t).

The pseudo code of the GSA is given in Figure 1.
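Since Figure 1 is not reproduced here, the following is a compact Python sketch of the GSA loop following the update rules above, written for a minimization problem. The population size, G_0 and the exponential decay schedule are illustrative choices rather than values taken from the paper.

```python
import numpy as np

def gsa_minimize(f, bounds, n_agents=30, iters=200, G0=100.0, alpha=20.0, eps=1e-12, seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds, dtype=float).T
    X = rng.uniform(lo, hi, size=(n_agents, lo.size))     # agent positions
    V = np.zeros_like(X)                                    # agent velocities
    for t in range(iters):
        fit = np.apply_along_axis(f, 1, X)
        best, worst = fit.min(), fit.max()                  # minimization problem
        m = (fit - worst) / (best - worst + eps)            # heavier mass = better agent
        M = m / (m.sum() + eps)
        G = G0 * np.exp(-alpha * t / iters)                 # gravitational constant decays over time
        A = np.zeros_like(X)                                # accelerations (M_i cancels in F_i / M_i)
        for i in range(n_agents):
            for j in range(n_agents):
                if i == j:
                    continue
                R = np.linalg.norm(X[i] - X[j])
                A[i] += rng.random(lo.size) * G * M[j] * (X[j] - X[i]) / (R + eps)
        V = rng.random(X.shape) * V + A                      # velocity update with random inertia
        X = np.clip(X + V, lo, hi)                           # position update, kept inside bounds
    fit = np.apply_along_axis(f, 1, X)
    i_best = np.argmin(fit)
    return X[i_best], fit[i_best]

# Example: x_best, f_best = gsa_minimize(lambda x: float(np.sum(x**2)), bounds=[(-5, 5)] * 3)
```

In the forecasting context, the objective passed as `f` would be a training error of the neural network (for example, the RMSE over the training set) evaluated for a candidate weight vector, so that each agent encodes one set of network weights.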
Cuckoo Optimization Algorithm

Rajabioun [22] developed an algorithm based on the cuckoo's lifestyle, named the Cuckoo Optimization Algorithm. The lifestyle of the cuckoo species and their characteristics were the basic motivations for the development of this evolutionary optimization algorithm. The cuckoo groups are formed in different areas that are called societies. The cuckoo population in each society consists of two types: mature cuckoos and eggs. The effort to survive among cuckoos constitutes the basis of COA. During the survival competition, some of the cuckoos or their eggs are detected and killed. The surviving cuckoo societies then try to immigrate to a better environment and start reproducing and laying eggs. The cuckoos' survival effort hopefully converges to a place in which there is only one cuckoo society, all having the same survival rates. The place in which more eggs survive is therefore the objective that COA wants to optimize. The fast convergence and global optima achievement of this algorithm have been demonstrated on several benchmark problems. The pseudo code of the COA is presented in Figure 2.
In COA, cuckoos lay eggs within a maximum distance from their habitats. This range is called the Egg Laying Radius (ELR). In the algorithm, each cuckoo's ELR is computed from an integer α, which is used to handle the maximum value of the ELR, and from var_hi and var_low, the upper and lower limits of the variables in the optimization problem. The society with the best profit value (the highest number of surviving eggs) is then selected as the goal point (best habitat) to which the other cuckoos should immigrate. In order to recognize which cuckoo belongs to which group, the cuckoos are grouped by the K-means clustering method. When moving toward the goal point, each cuckoo only flies λ% of the distance to the goal and deviates from it by φ radians, where λ ~ U(0,1) is a random number uniformly distributed between 0 and 1, and ω is a parameter that constrains the deviation φ from the goal habitat. A ω of π/6 is supposed to be enough for good convergence [22].
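The ELR formula and the migration equations referenced above did not survive extraction. The Python sketch below shows how one COA generation could proceed under the commonly cited formulation from Rajabioun [22]: an egg-laying radius ELR = α × (eggs of a cuckoo / total eggs) × (var_hi − var_low), movement of λ times the distance toward the best habitat, and a small random deviation bounded by ω = π/6. The K-means grouping is simplified away (all cuckoos move toward the single best habitat), and the deviation handling is our own approximation, so this is an illustrative assumption rather than the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(1)

def coa_generation(habitats, profit, var_low, var_high, alpha=5, omega=np.pi / 6):
    """One simplified COA generation: egg laying around each habitat,
    survival of the most profitable solutions, and migration toward the best habitat.

    habitats : (N, n) array of current cuckoo positions.
    profit   : callable mapping a position to a profit value (higher is better).
    """
    N, n = habitats.shape
    eggs_per_cuckoo = rng.integers(5, 11, size=N)          # each cuckoo lays 5-10 eggs
    total_eggs = eggs_per_cuckoo.sum()

    # Egg Laying Radius (assumed formula): proportional to the cuckoo's share of eggs.
    elr = alpha * (eggs_per_cuckoo / total_eggs) * (var_high - var_low)

    # Lay eggs uniformly inside each cuckoo's ELR and clip to the search bounds.
    laid = [h + rng.uniform(-r, r, size=(k, n))
            for h, r, k in zip(habitats, elr, eggs_per_cuckoo)]
    population = np.clip(np.vstack([habitats] + laid), var_low, var_high)

    # Keep the N most profitable individuals (eggs in poor positions are "killed").
    scores = np.array([profit(p) for p in population])
    survivors = population[np.argsort(scores)[::-1][:N]]

    # Migrate toward the best habitat: fly lambda * distance, damped by the cosine of
    # a random deviation angle in (-omega, omega) (a simplification of the angular
    # deviation described in [22]).
    best = survivors[0]
    lam = rng.uniform(0.0, 1.0, size=(N, 1))
    deviation = rng.uniform(-omega, omega, size=(N, n))
    moved = survivors + lam * (best - survivors) * np.cos(deviation)
    return np.clip(moved, var_low, var_high)

# Example: maximise profit = -sphere on [-5, 5]^8 with 20 cuckoos.
habitats = rng.uniform(-5, 5, size=(20, 8))
for _ in range(100):
    habitats = coa_generation(habitats, lambda x: -np.sum(x ** 2), -5.0, 5.0)
print("best profit:", max(-np.sum(h ** 2) for h in habitats))
```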
Research Design

The following subjects were considered in developing the forecasting models.

Historical Data

Due to divergent climate characteristics in northern Vietnam, demand for electricity in Hanoi varies between the summer period (May-August) and the winter period. The demand increases to its full extent during summer and decreases significantly during the rest of the year. Figure 3 shows the monthly demand profile of Hanoi over the years 2009-2013. The significant increase in electricity demand during the summer period is driven by the need to operate air conditioners to overcome the high temperatures.

Electricity consumption (MWh) is influenced by several related factors (as shown in Table 1), including month index, average air pressure, average temperature, average wind velocity, rainfall, rainy time, average relative humidity, and daylight time. The historical data regarding these factors were collected from January 2003 to December 2013; in other words, there are 132 monthly data samples. These data were used to determine a forecasting model for future electricity demand. The data used in this study were obtained from the Bureau of Statistics, the National Hydro-Meteorological Service, and the Hanoi Power Company. The available data were divided into two groups. The first group is called the training dataset (84 samples) and includes the data over the years 2003-2009 (seven years). The second group is called the testing dataset (48 samples) and includes the data over the years 2010-2013 (four years). The training dataset served in model building, while the testing dataset was used for the validation of the developed models.
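To make the data handling concrete, the following Python fragment sketches how the 132 monthly records could be arranged into the eight input factors and one target, and split chronologically into the 84-sample training set (2003-2009) and the 48-sample testing set (2010-2013). The synthetic placeholder values, the column names, and the min-max scaling step are illustrative assumptions; the paper does not describe its preprocessing in this much detail.

```python
import numpy as np

FEATURES = ["month_index", "air_pressure", "temperature", "wind_velocity",
            "rainfall", "rainy_time", "relative_humidity", "daylight_time"]
TARGET = "consumption_mwh"

# Placeholder data: 132 monthly rows (January 2003 - December 2013).
rng = np.random.default_rng(42)
records = {name: rng.random(132) for name in FEATURES + [TARGET]}

X = np.column_stack([records[name] for name in FEATURES])   # (132, 8) inputs
y = records[TARGET]                                          # (132,) target

# Chronological split: first 7 years for training, last 4 years for testing.
X_train, y_train = X[:84], y[:84]     # 2003-2009
X_test, y_test = X[84:], y[84:]       # 2010-2013

# Scaling inputs to a common range is a usual preprocessing step for ANNs
# (an assumption here; the paper does not spell out its normalisation).
lo, hi = X_train.min(axis=0), X_train.max(axis=0)
X_train = (X_train - lo) / (hi - lo)
X_test = (X_test - lo) / (hi - lo)
```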
Structure of the Neural Network

A neural network in which activations spread only in a forward direction, from the input layer through one or more hidden layers to the output layer, is known as a multilayer feed-forward network. For a given set of data, a multilayer feed-forward network can provide a good nonlinear relationship. Studies have shown that a feed-forward network, even with only one hidden layer, can approximate any continuous function [29]. Therefore, a feed-forward network is an attractive approach [30]. Figure 4 shows an example of a feed-forward network with three layers. In Figure 4, R, N, and S are the numbers of inputs, hidden neurons, and outputs, respectively; iw and hw are the input and hidden weight matrices, respectively; hb and ob are the bias vectors of the hidden and output layers, respectively; x is the input vector of the network; ho is the output vector of the hidden layer; and y is the output vector of the network. The network in Figure 4 can be expressed through the equations ho = f(iw · x + hb) and y = hw · ho + ob, where f is an activation function.

When implementing a neural network, it is necessary to determine the structure in terms of the number of layers and the number of neurons in the layers. The larger the number of hidden layers and nodes, the more complex the network is. A network with a structure that is more complicated than necessary may overfit the training data [31]. This means that it may perform well on the data included in the training dataset but poorly on the data in a testing dataset.

The structure of an ANN is dictated by the choice of the numbers of neurons in the input, hidden, and output layers. Each data set has its own particular structure and therefore determines the specific ANN structure. The number of neurons in the input layer is equal to the number of features (input variables) in the data, and the number of neurons in the output layer is equal to the number of output variables. In this study, the data set includes eight input variables and one output variable; hence, the numbers of neurons in the input and output layers are eight and one, respectively. The three-layer feed-forward neural network is utilized in this work since it can approximate any continuous function [32,33]. Regarding the number of hidden neurons, the choice of a proper size of the hidden layer has often been studied; however, a rigorous generalized method has not been found [4,34]. Hence, the trial-and-error method is the most commonly used method for estimating the optimum number of neurons in the hidden layer. In this method, various network architectures are tested in order to find the optimum number of hidden neurons [2,3]. In our study, the choice was also made through extensive simulation with different choices for the number of hidden nodes. For each choice, we obtained the performance of the concerned neural networks, and the number of hidden nodes providing the best performance was used for presenting the results. The activation function from the input to the hidden layer is sigmoid; with no loss of generality, a commonly used form, f(n) = 2/(1 + e^(−2n)) − 1, is utilized, while a linear function is used from the hidden layer to the output layer.
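A minimal sketch of the forward pass of such a network, using the tansig-style activation quoted above for the hidden layer and a linear output layer, is given below. The weight shapes follow the R-N-S notation of Figure 4, and the random weights stand in for trained ones.

```python
import numpy as np

def activation(n):
    # Hidden-layer activation quoted in the text: f(n) = 2 / (1 + exp(-2n)) - 1.
    return 2.0 / (1.0 + np.exp(-2.0 * n)) - 1.0

def forward(x, iw, hb, hw, ob):
    """Forward pass of a three-layer feed-forward network.

    x  : (R,) input vector          iw : (N, R) input weights
    hb : (N,) hidden biases         hw : (S, N) hidden-to-output weights
    ob : (S,) output biases
    """
    ho = activation(iw @ x + hb)    # hidden-layer output
    y = hw @ ho + ob                # linear output layer
    return y

# Example: an 8-6-1 network (eight input factors, six hidden neurons, one output).
rng = np.random.default_rng(0)
R, N, S = 8, 6, 1
iw, hb = rng.standard_normal((N, R)), rng.standard_normal(N)
hw, ob = rng.standard_normal((S, N)), rng.standard_normal(S)
print(forward(rng.random(R), iw, hb, hw, ob))
```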
Training Neural Networks by Heuristic Algorithms

There are three ways of encoding and representing the weights and biases of an ANN for every solution in evolutionary algorithms [15]: the vector, matrix, and binary encoding methods. In this study, we utilized the vector encoding method, and the objective function is to minimize the SSE. The two aforementioned heuristic algorithms were utilized to search for near-optimal weights and biases of the neural networks. In order to make a comprehensive comparison, the differential evolution (DE) algorithm was also used to train the neural network. We refer to these models hereafter as ANN-GSA, ANN-COA, and ANN-DE. The amount of error is determined by the squared difference between the target output and the actual output. In the implementation of the heuristic algorithms to train a neural network, all training parameters, θ = {iw, hw, hb, ob}, are converted into a single vector of real numbers (the input weights, followed by the hidden weights, the hidden biases, and the output biases), as shown in Figure 5. Suppose that there are m input-target sets; the target t_kp is the desired output for the given input x_kp, and y_kp and t_kp are the forecasting and actual values of the pth output unit for sample k, for k = 1, 2, ..., m and p = 1, 2, ..., S. Thus, the network variables arranged as iw, hw, hb, and ob are to be changed to minimize an error function E, such as the SSE (Sum of Squared Errors) between the network outputs and the desired targets, E = Σ_k Σ_p (y_kp − t_kp)^2. Figure 6 describes how the heuristic algorithms are used to train the ANN.
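Since the heuristic algorithms operate on flat real vectors, the training parameters θ = {iw, hw, hb, ob} have to be packed into and unpacked from a single vector, and the SSE above serves as the fitness function. The helper below is one way to do this in Python; the packing order (input weights, hidden weights, hidden biases, output biases) mirrors Figure 5, but the function names are our own.

```python
import numpy as np

def unpack(theta, R, N, S):
    """Split a flat parameter vector into iw (N,R), hw (S,N), hb (N,), ob (S,)."""
    i = 0
    iw = theta[i:i + N * R].reshape(N, R); i += N * R
    hw = theta[i:i + S * N].reshape(S, N); i += S * N
    hb = theta[i:i + N];                   i += N
    ob = theta[i:i + S]
    return iw, hw, hb, ob

def sse(theta, X, T, R, N, S):
    """Sum of squared errors E = sum_k sum_p (y_kp - t_kp)^2 over all samples."""
    iw, hw, hb, ob = unpack(theta, R, N, S)
    # tanh(n) equals the 2/(1 + e^(-2n)) - 1 activation quoted earlier.
    H = np.tanh(X @ iw.T + hb)        # hidden outputs for all samples, (m, N)
    Y = H @ hw.T + ob                 # network outputs, (m, S)
    return float(((Y - T) ** 2).sum())

# Example: evaluate a random parameter vector for an 8-7-1 network on dummy data.
R, N, S, m = 8, 7, 1, 84
rng = np.random.default_rng(0)
theta = rng.standard_normal(N * R + S * N + N + S)
X, T = rng.random((m, R)), rng.random((m, S))
print("SSE of a random candidate:", sse(theta, X, T, R, N, S))
```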
Examining the Performance

To compare the performances of the different forecasting models, several criteria are used. These criteria are applied to the trained neural network to determine how well the network works; they compare the forecasting values with the actual values. They are as follows.

Mean absolute percentage error (MAPE): this index indicates the average of the absolute percentage errors; the lower the MAPE, the better the model is. Here, t_k is the actual (desired) value, y_k is the forecasting value produced by the model, and m is the total number of observations.

Root mean squared error (RMSE): this index estimates the residual between the actual value and the forecast value. A model has better performance if it has a smaller RMSE; an RMSE equal to zero represents a perfect fit.

Mean absolute error (MAE): this index indicates how close the predicted values are to the actual values.

Correlation coefficient (R): this criterion reveals the strength of the relationship between the actual values and the forecasting values. The correlation coefficient has a range from 0 to 1, and a model with a higher R has better performance; its computation uses the average values of t_k and y_k.
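The formulas for the four criteria were dropped from the extracted text. The definitions below are the standard ones; note that MAPE is written as a fraction (without the usual multiplication by 100) so that it matches the magnitude of the value 0.0577 reported for ANN-COA later. That scaling choice is our inference, not an explicit statement by the authors.

```python
import numpy as np

def metrics(t, y):
    """Standard error measures between actual values t and forecasts y."""
    t, y = np.asarray(t, float), np.asarray(y, float)
    mape = np.mean(np.abs((t - y) / t))   # fraction, not percent (see note above)
    rmse = np.sqrt(np.mean((t - y) ** 2))
    mae = np.mean(np.abs(t - y))
    r = np.corrcoef(t, y)[0, 1]           # Pearson correlation; near 1 means a good fit
    return {"MAPE": mape, "RMSE": rmse, "MAE": mae, "R": r}

# Example with dummy monthly demand figures (MWh).
actual = np.array([900_000, 950_000, 1_100_000, 1_250_000])
forecast = np.array([880_000, 970_000, 1_060_000, 1_300_000])
print(metrics(actual, forecast))
```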
Experimental Results and Discussion

The four models were coded and implemented in the Matlab environment (Matlab R2014a, The MathWorks Inc., Natick, MA, USA). As discussed earlier, the one-hidden-layer feed-forward neural network architecture was used. The optimum number of neurons in the hidden layer was determined by varying their number, starting with a minimum of one and then adding one neuron at a time. Hence, various network architectures were tested to achieve the optimum number of hidden neurons. The best performing ANN architecture for the dataset used was then identified, namely the one that provided the smallest error values during training. The best performing architectures for the standard ANN, ANN-GSA, ANN-COA, and ANN-DE were found to be 8-6-1, 8-7-1, 8-5-1, and 8-9-1, respectively. A five-fold cross validation method was used to avoid an over-fitting problem. Different parameters of the training algorithms were tried to obtain the best performance. For the standard ANN, the Back-Propagation (BP) algorithm was used to train the neural network; the learning and momentum rates were 0.4 and 0.3. For ANN-GSA, the parameters of the GSA algorithm were as follows: the initial population size was 20 and the gravitational constant in Equation (7) was determined by the function G(t) = G_0 exp(−α × t/T), where G_0 = 100, α = 20, and T is the total number of iterations. For ANN-COA, the parameters were set as follows: the initial population size was 20 and p% was 10%. For ANN-DE, the crossover rate Cr and the scale factor F were set to 0.9 and 0.85, respectively.

In this study, the number of iterations was chosen as the stopping criterion. Table 2 gives the performance statistics on the test dataset for the ANN, ANN-GSA, ANN-COA, and ANN-DE at the 500th iteration and the 1000th iteration. As can be seen from Table 2, the ANN-COA has smaller MAPE, RMSE, and MAE values as well as a bigger R value than those of the ANN, ANN-GSA, and ANN-DE. This means that the ANN-COA had a better overall forecasting performance. At the 1000th iteration, the performance statistics MAPE, RMSE, MAE, and R obtained by the ANN-COA model were 0.0577, 59,073, 49,238, and 0.9287, respectively; the forecasts were highly correlated with the actual values. At the 500th iteration, the ANN-GSA had a better performance than the ANN-DE; however, at the 1000th iteration, the ANN-DE outperformed the ANN-GSA. Figure 7 presents the time series of actual and forecasting values obtained using the three models. The trends in the plots of the time series suggest that the ANN-based models are appropriate for electricity demand forecasting. Figure 8 depicts the RMSE values obtained in the training phase for the three models over the iterations; at the 2000th iteration, the RMSE values of the ANN, ANN-GSA, ANN-COA, and ANN-DE were 73,482, 72,980, 53,308, and 64,358, respectively. It can also be concluded that the standard ANN model had the worst performance, owing to the fact that the BP algorithm (a gradient-based algorithm) has a tendency to become trapped in local minima. Therefore, the performance statistics of the ANN are excluded from Figures 7 and 8. In order to evaluate the performance of the ANN-based models, the ARIMA and MLR methods were also applied to the problem. The details of these methods can be found in the relevant literature and are beyond the scope of this work. After a few testing attempts, the ARIMA model was selected as ARIMA(2,1,1). These models were also implemented in Matlab R2014a. The results obtained by these models are shown in Table 3. As can be seen from Table 3, the ARIMA had a better performance than the MLR. However, when compared with the results from Table 2, the ANN-based models surpassed the ARIMA and MLR. Based on the results presented in this section, it can be inferred that the ANN-based models perform better than the traditional forecasting methods (ARIMA and MLR), and that the ANN-COA model is clearly superior to its counterparts. Regarding the complexity of the models, the ARIMA model requires less computational time than the other models; the ANN-based models are more complex, involving a network of processing elements.
Conclusions

Understanding electricity demand is critical for ensuring future stability and security, and executives and government authorities need this information for decision making in energy markets. In this study, a new approach based on ANNs and heuristic algorithms for electricity demand forecasting was proposed. The proposed approach and other well-known forecasting methods, ARIMA and MLR, were used to forecast the electricity demand in Hanoi, Vietnam based on historical data from 2003 to 2013. The results indicate that the ANN-COA is the best model to fit the historical data, demonstrating the benefits of applying neural networks as a modelling tool for forecasting electricity demand. Therefore, this work has made a contribution to the development of forecasting methods. Further studies may address different segments of electricity consumption, including residential, industrial, agricultural, government, commerce, and city services. Province-based forecasting is also essential for distribution companies. Technical loss should be taken into account when analyzing electricity demand because this parameter may have a tremendous impact.

Figure 1. Pseudo code of the Gravitational Search Algorithm (GSA).
Figure 2. Pseudo code of the Cuckoo Optimization Algorithm (COA).
Figure 3. The load time series from January 2009 to December 2013.
Figure 4. A feed-forward network with three layers.
Figure 5. The vector of training parameters.
Figure 6. Using a heuristic algorithm to train neural networks.
Figure 7. The forecasting performance of the Artificial Neural Network trained by the Gravitational Search Algorithm (ANN-GSA), the Artificial Neural Network trained by the Cuckoo Optimization Algorithm (ANN-COA), and the Artificial Neural Network trained by Differential Evolution (ANN-DE).
Figure 8. The RMSE values obtained in the training phase for the three models.
Table 1. Factors used for electricity forecasting.
Table 2. Performance statistics of the Artificial Neural Network, the Artificial Neural Network trained by the Gravitational Search Algorithm (ANN-GSA), the Artificial Neural Network trained by the Cuckoo Optimization Algorithm (ANN-COA), and the Artificial Neural Network trained by Differential Evolution (ANN-DE).
Table 3. Performance statistics of the Autoregressive Integrated Moving Average (ARIMA) and Multiple Linear Regression (MLR).
Dagger completions and bornological torsion-freeness We define a dagger algebra as a bornological algebra over a discrete valuation ring with three properties that are typical of Monsky-Washnitzer algebras, namely, completeness, bornological torsion-freeness and a certain spectral radius condition. We study inheritance properties of the three properties that define a dagger algebra. We describe dagger completions of bornological algebras in general and compute some noncommutative examples. Introduction In [6], Monsky and Washnitzer introduce a cohomology theory for non-singular varieties defined over a field k of nonzero characteristic. Let V be a discrete valuation ring with residue field k = V /πV , such that the fraction field K of V has characteristic 0. Let π ∈ V be a uniformiser. Monsky and Washnitzer lift the coordinate ring of a smooth affine variety X over k to a smooth commutative algebra A over V . The dagger completion A † of A is a certain subalgebra of the π-adic completion of A. If A is the polynomial algebra over V , then A † is the ring of overconvergent power series. The Monsky-Washnitzer cohomology is defined as the de Rham cohomology of the algebra K ⊗ V A † . The dagger completion is interpreted in [4] in the setting of bornological algebras, based on considerations about the joint spectral radius of bounded subsets. The main achievement in [4] is the construction of a chain complex that computes the rigid cohomology of the original variety X and that is strictly functorial. In addition, this chain complex is related to periodic cyclic homology. Here we continue the study of dagger completions. We define dagger algebras by adding a bornological torsion-freeness condition to the completeness and spectral radius conditions already present in [4]. We also show that the category of dagger algebras is closed under extensions, subalgebras, and certain quotients, by showing that all three properties that define them are hereditary for these constructions. The results in this article should help to reach the following important goal: define an analytic cyclic cohomology theory for algebras over the finite field k that specialises to Monsky-Washnitzer or rigid cohomology for the coordinate rings of smooth varieties over k. A general machine for defining such cyclic cohomology theories is developed in [5]. It is based on a class of nilpotent algebras, which must be closed under extensions. This is why we are particularly interested in properties hereditary for extensions. If S is a bounded subset of a K-algebra A, then its spectral radius (S) ∈ [0, ∞] is defined in [4]. If A is a bornological V -algebra, then only the inequalities (S) ≤ s for s > 1 make sense. This suffices, however, to characterise the linear growth bornology on a bornological V -algebra: it is the smallest V -algebra bornology with (S) ≤ 1 for all its bounded subsets S. We call a bornological algebra A with this property semi-dagger because this is the main feature of dagger algebras. Any bornological algebra A carries a smallest bornology with linear growth. This defines a semi-dagger algebra A lg . If A is a torsion-free, finitely generated, commutative V -algebra with the fine bornology, then the bornological completion A lg of A lg is the Monsky-Washnitzer completion of A. Any algebra over k is also an algebra over V . Equipped with the fine bornology, it is complete and semi-dagger. We prefer, however, not to call such algebras "dagger algebras." 
The feature of Monsky-Washnitzer algebras that they lack is torsion-freeness. The purely algebraic notion of torsion-freeness does not work well for bornological algebras. In particular, it is unclear whether it is preserved by completions. We call a bornological V -module A bornologically torsion-free if multiplication by π is a bornological isomorphism onto its image. This notion has very good formal properties: it is preserved by bornological completions and linear growth bornologies and hereditary for subalgebras and extensions. So A lg remains bornologically torsion-free if A is bornologically torsion-free. The bornological version of torsion-freeness coincides with the usual one for bornological V -modules with the fine bornology. Thus A lg is bornologically torsion-free if A is a torsion-free V -algebra with the fine bornology. A bornological V -module M is bornologically torsion-free if and only if the canonical map M → K ⊗ V M is a bornological embedding. This property is very important. On the one hand, we must keep working with modules over V in order to keep the original algebra over k in sight and because the linear growth bornology only makes sense for algebras over V . On the other hand, we often need to pass to the K-vector space K ⊗ V M -this is how de Rham cohomology is defined. Bornological vector spaces over K have been used recently to do analytic geometry in [1][2][3]. The spectral radius of a bounded subset of a bornological V -algebra A is defined in [4] by working in K ⊗ V A, which only works well if A is bornologically torsion-free. Here we define a truncated spectral radius in [1, ∞] without reference to K ⊗ V A, in order to define semi-dagger algebras independently of torsion issues. We prove that the properties of being complete, semi-dagger, or bornologically torsion-free are hereditary for extensions. Hence an extension of dagger algebras is again a dagger algebra. To illustrate our theory, we describe the dagger completions of monoid algebras and crossed products. Dagger completions of monoid algebras are straightforward generalisations of Monsky-Washnitzer completions of polynomial algebras. Basic notions In this section, we recall some basic notions on bornological modules and bounded linear maps between them. See [4] for more details. We also study the inheritance properties of separatedness and completeness for submodules, quotients and extensions. Let V be a complete discrete valuation ring. that is algebraically exact and such that f is a bornological embedding and g a bornological quotient map. Equivalently, g is a cokernel of f and f a kernel of g in the additive category of bornological V -modules. A split extension is an extension with a bounded V -linear map s : and a sequence (δ n ) n∈N in V with lim δ n = 0 and x n − x ∈ δ n · S for all n ∈ N. It is a Cauchy sequence if there are S ∈ B M and a sequence (δ n ) n∈N in V with lim δ n = 0 and x n − x m ∈ δ j · S for all n, m, j ∈ N with n, m ≥ j. We call a subset S of M closed if x ∈ S for any sequence in S that converges in M to x ∈ M . These are the closed subsets of a topology on M . Bounded maps preserve convergent sequences and Cauchy sequences. Thus they are continuous for these canonical topologies. Separated bornological modules. We call M separated if limits of convergent sequences in M are unique. If M is not separated, then the constant sequence 0 has a non-zero limit. Therefore, M is separated if and only if {0} ⊆ M is closed. 
And M is separated if and only if any S ∈ B M is contained in a π-adically separated bounded V -submodule. If M is not separated, then the constant sequence 0 in M converges to some non-zero x ∈ M . That is, there are a bounded subset S ⊆ M and a null sequence (δ n ) n∈N in V with x − 0 ∈ δ n · S for all n ∈ N. Since g is a bornological quotient map, there are x ∈ M and S ∈ B M with g(x) = x and g(S) = S . We may choose y n ∈ S with x = δ n · y n and y n ∈ S with g(y n ) = y n . So g(x − δ n y n ) = 0. Thus the sequence (x − δ n y n ) lies in f (M ). It converges to x, which does not belong to f (M ) because x = 0. So f (M ) is not closed. This finishes the proof of (2). We prove (3). Let x ∈ M belong to the closure of {0} in M . That is, there are S ∈ B M and a null sequence (δ n ) n∈N in V with x ∈ δ n · S for all n ∈ N. Then g(x) ∈ δ n · g(S) for all n ∈ N. This implies g(x) = 0 because M is separated. So there is y ∈ M with f (y) = x. And f (y) = x ∈ δ n · S. Choose x n ∈ S with f (y) = δ n · x n . We may assume δ n = 0 for all n ∈ N because otherwise x ∈ δ n · S is 0. Since M is torsion-free, δ n · x n ∈ f (M ) implies g(x n ) = 0. So we may write x n = f (y n ) for some y n ∈ M . Since f is a bornological embedding, the set {y n : n ∈ N} in M is bounded. Since M is separated and y = δ n · y n , we get y = 0. Here V and ∞ n=1 V /(π n ) are π-adically separated, but M is not: the constant sequence 1 in M converges to 0 because 1 = 1 − π n x n + π n x n ≡ π n x n in M . Completeness. We call a bornological V -module M complete if it is separated and for any S ∈ B M there is T ∈ B M so that all S-Cauchy sequences are T -convergent. Equivalently, any S ∈ B M is contained in a π-adically complete bounded V -submodule (see [4]). By definition, any Cauchy sequence in a complete bornological V -module has a unique limit. It is somewhat similar to the proof of (4). Next we prove (2). Assume that M is complete, that M is torsion-free, and that f (M ) is not closed in M . We are going to prove that M is not separated. There is a sequence (x n ) n∈N in M for which f (x n ) n∈N converges in M towards some x / ∈ f (M ). So there is a bounded set S ⊆ M and a sequence (δ k ) k∈N in V with lim δ k = 0 and f (x n ) − x ∈ δ n · S for all n ∈ N. We may assume without loss of generality that the sequence of norms |δ n | is decreasing: let δ * n be the δ m for m ≥ n with maximal norm. Then f (x n ) − x ∈ δ n · S ⊆ δ * n · S and still lim δ * n = 0. We may write f (x n ) − x = δ * n y n with y n ∈ S. If m < n, then δ * m g(y m ) = −g(x) = δ * n g(y n ) and hence δ * m (g(y m ) − g(y n )δ * n /δ * m ) = 0. Since M is torsion-free, this implies g(y m ) = g(y n )δ * n /δ * m for all n > m. So there is z m,n ∈ M with y m + f (z m,n ) = y n δ * n /δ * m . We even have z m,n ∈ f −1 (S), which is bounded because f is a bornological embedding. We get and hence x n − x m = δ * m z m,n for n > m. This witnesses that the sequence (x n ) n∈N is Cauchy in M . Since M is complete, it converges towards some y ∈ M . Then f (x n ) converges both towards f (y) ∈ f (M ) and towards x / ∈ f (M ). So M is not separated. This finishes the proof of (2). Next we prove (3). If f (M ) is not closed, then Lemma 2.1 shows that M is not separated and hence not complete. Conversely, we claim that M is complete if f (M ) is closed. Lemma 2.1 shows that M is separated. Let S ∈ B M . There is S ∈ B M with g(S) = S because g is a bornological quotient map. And there is T ∈ B M so that any S-Cauchy sequence is T -convergent. 
We claim that any S -Cauchy sequence is g(T )-convergent. So let (x n ) n∈N be an S -Cauchy sequence. Thus there is a null sequence (δ n ) n∈N in V with x n − x m ∈ δ j · S for all n, m, j ∈ N with n, m ≥ j. As above, we may assume without loss of generality that the sequence of norms |δ n | is decreasing. Choose any x 0 ∈ M with g(x 0 ) = x 0 . For each n ∈ N, choose y n ∈ S with x n+1 − x n = δ n · g(y n ). Let x n := x 0 + δ 0 · y 0 + · · · + δ n−1 · y n−1 . Then g(x n ) = x n . And x n+1 − x n = δ n · y n ∈ δ n · S. Since |δ n | is decreasing, this implies x m − x n ∈ δ n · S for all m ≥ n. So the sequence (x n ) n∈N is S-Cauchy. Hence it is T -convergent. Thus g(x n ) = x n is g(T )-convergent as asserted. This finishes the proof of (3). Finally, we prove (4). So we assume M and M to be complete. If M is torsion-free, then M is separated by Lemma 2.1. Hence the second statement in (4) is a special case of the first one. Let S ∈ B M . We must find T ∈ B M so that every S-Cauchy sequence is T -convergent. Since M is separated, this says that it is complete. Since M is complete, there is a π-adically complete V -submodule T 0 ∈ B M that contains g(S). Since g is a bornological quotient map, there is The proof of this claim will finish the proof of the theorem. Let (x n ) n∈N be an S-Cauchy sequence. So there are δ n ∈ V and y n ∈ S with lim |δ n | = 0 and x n+1 −x n = δ n ·y n . As above, we may assume that |δ n | is decreasing and that δ 0 = 1. Since g(y n+k ) ∈ g(S) ⊆ T 0 and T 0 is π-adically complete, the following series converges in T 0 : Sincew n ∈ T 0 , there is w n ∈ T 1 with g(w n ) =w n . So Let z n := y n + w n − δn+1 δn w n+1 . A telescoping sum argument shows that So z n ∈ f (M ). And z n ∈ S + T 1 + The following examples show that the technical extra assumptions in (2) and (4) in Theorem 2.3 are necessary. They only involve extensions of V -modules with the bornology where all subsets are bounded. For this bornology, bornological completeness and separatedness are the same as π-adic completeness and separatedness, respectively, and any extension of V -modules is a bornological extension. Example 2.9. We modify Example 2.2 to produce an extension of V -modules N N N where N and N are π-adically complete, but N is not π-adically separated and hence not π-adically complete. We let N := V /(π) = k. We let N be the π-adic completion of the V -module M of Example 2.2. That is, This is indeed π-adically complete. So is The kernel of the quotient map q : This is a k-vector space, and it contains the k-vector space ∞ n=0 k. Since any k-vector space has a basis, we may extend the linear functional Let L := ker σ ⊆ ker q and let N := N 1 /L. The map q descends to a surjective π-linear map N N . Its kernel is isomorphic to N k/ ker σ ∼ = k = N . The functional σ : N k → k vanishes on δ 0 − δ k for all k ∈ N, but not on δ 0 . When we identify N k ∼ = ker q, we map δ k to π k δ k ∈ N 1 . So δ 0 and π k δ k get identified in N , but δ 0 does not become 0: it is the The completion M of a bornological V -module M is a complete bornological V -module with a bounded V -linear map M → M that is universal in the sense that any bounded V -linear map from M to a complete bornological V -module X factors uniquely through M . Such a completion exists and is unique up to isomorphism (see [4]). We shall describe it more concretely later when we need the details of its construction. 2.3. Vector spaces over the fraction field. Recall that K denotes the quotient field of V . 
Any V -linear map between two K-vector spaces is also K-linear. So K-vector spaces with K-linear maps form a full subcategory in the category of V -modules. A V -module M comes from a K-vector space if and only if the map is invertible. We could define bornological K-vector spaces without reference to V . Instead, we realise them as bornological V -modules with an extra property: Given a bornological V -module M , the tensor product K ⊗ M := K ⊗ V M with the tensor product bornology (see [4]) is a bornological K-vector space because multiplication by π is a bornological isomorphism on K. Spectral radius and semi-dagger algebras A bornological V -algebra is a bornological V -module A with a bounded, V -linear, associative multiplication. We do not assume A to have a unit element. We fix a bornological algebra A throughout this section. We recall some definitions from [4]. Let ε = |π|. Let S ∈ B A and let r ≤ 1. There is a smallest integer j with ε j ≤ r, namely, log ε (r) . Define Let ∞ n=1 r n S n be the V -submodule generated by ∞ n=1 r n S n . That is, its elements are finite V -linear combinations of elements in ∞ n=1 r n S n . Definition 3.1. The truncated spectral radius 1 If A is an algebra over the fraction field K of V , then we may define ∞ n=1 r −n S n also for 0 < r < 1. Then the full spectral radius (S) ∈ [0, ∞] is defined like 1 (S), but without the restriction to r ≥ 1. If A is bornologically torsion-free, then it is safe to define (S; B A ) := (S; B K⊗A ) for S ∈ B A . This is useful to study tube algebras, but shall not be needed in this article. We shall need the following strengthening of this statement, which is implicit in the proofs in [4]: This inequality for all j ∈ N ≥1 implies 1 (S) = 1. Its image under q is also bounded, and this is ∞ j=0 π j S j+1 . So 1 (S; B C ) = 1 and C is semi-dagger. Now assume that A and C are semi-dagger. We show that ∞ l=1 (π 2 S j ) l is bounded in B for all S ∈ B B , j ∈ N ≥1 . This implies 1 (S; B B ) = 1 by Lemma 3.5. Since C is semi-dagger, 1 (q(S); B C ) = 1. Thus S 2 := ∞ l=1 q(πS j ) l is bounded in C by Lemma 3.5. Since q is a quotient map, there is T ∈ B B with q(T ) = S 2 . We may choose T with πS j ⊆ T . For each x, y ∈ T , we have q(x·y) ∈ S 2 ·S 2 ⊆ S 2 = q(T ). Hence there is ω(x, y) ∈ T with x · y − ω(x, y) ∈ i(A). Let This is contained in T 2 − T . So Ω ∈ B B . And T 2 ⊆ T + Ω. By construction, Ω is also contained in i(A). Since i is a bornological embedding, i −1 (Ω) is bounded in A. Since A is semi-dagger, we have 1 (i −1 (Ω); B A ) = 1. So ∞ n=1 (π · Ω) n is bounded. Thus the subset .12]). If A is semi-dagger, then so is its completion A. Let A lg be the completion of A lg . This algebra is both complete and semidagger by Proposition 3.7. The canonical bounded homomorphism A → A lg is the universal arrow from A to a complete semi-dagger algebra, that is, any bounded homomorphism A → B for a complete semi-dagger algebra B factors uniquely through it. This follows immediately from the universal properties of the linear growth bornology and the completion. Bornological torsion-freeness Let M be a bornological module over V . Recall the bounded linear map π M : M → M , m → π · m, defined in (2.10). Proof. Let j : N → M be the inclusion map, which is a bornological embedding by assumption. Since π M is a bornological embedding, so is π M • j = j • π N . Since j is a bornological embedding, this implies that π N is a bornological embedding. That is, N is bornologically torsion-free. 
We have seen that being bornologically torsion-free is hereditary for submodules. The obvious counterexample k = V / πV shows that it cannot be hereditary for quotients. Next we show that it is hereditary for extensions: Proof. The exactness of the sequence 0 → ker π M → ker π M → ker π M shows that π M is injective. Let S ∈ B M be contained in πM . We want a bounded subset S ∈ B M with π · S = S. We have q(S) ⊆ q(π · M ) ⊆ π · M , and q(S) ∈ B M because q is bounded. Since M is torsion-free, there is T ∈ B M with π ·T = q(S). Since q is a bornological quotient map, there is T ∈ B M with q(T ) = T . Thus q(π · T ) = q(S). So for any x ∈ S there is y ∈ T with q(π · y) = q(x). Since i = ker(q), there is a unique z ∈ M with x − πy = i(z). Let T be the set of these z. Since x ∈ π · M by assumption and M is bornologically torsion-free, we have z ∈ π · M . So T ⊆ π · M . And T is bounded because T ⊆ i −1 (S − π · T ) and i is a bornological embedding, Since M is bornologically torsion-free, there is a bounded subset U ∈ B M with π · U = T . Then S ⊆ π · T + i(π · U ) = π · (T + i(U )). Theorem 4.6. If M is bornologically torsion-free, then so is its bornological completion M . The proof requires some preparation. We must look closely at the construction of completions of bornological V -modules. Since taking quotients may create torsion, the information above is not yet precise enough to show that completions inherit bornological torsion-freeness. This requires some more work. First we write M in a certain way as an inductive limit, using that it is bornologically torsion-free. T,S (L T ) = L S . By Lemma 4.8, the kernel Z S = ker(i ∞,S ) is contained in L S for all S. Since π L S is a bornological isomorphism, the subsets π · Z S ⊆ Z S are also bornologically closed, and they satisfy i −1 T,S (π · Z T ) = π · i −1 T,S (Z T ) = π · Z S . Hence Z S ⊆ π · Z S for all S by Lemma 4.8. Thus Z S ⊆ L S is a K-vector subspace in M S . So the quotient M S /Z S is still bornologically torsion-free. And any element of M S /Z S that is divisible by π j lifts to an element in π j · M S . Any bounded subset of M is contained in i ∞,S ( S) for some bounded V -submodule S ⊆ M , where we view S as a subset of M S . Let j ∈ N. To prove that M is bornologically torsion-free, we must show that π −j i ∞,S ( S) is bounded. Let x ∈ M satisfy π j x ∈ i ∞,S ( S). We claim that x = i ∞,S (y) for some y ∈ M S with π j y ∈ S. This implies that π −j · i ∞,S ( S) is bounded in M . It remains to prove the claim. There are a bounded V -submodule T ⊆ M and z ∈ M T with x = i ∞,T (z). We may replace T by T + S to arrange that T ⊇ S. Let w ∈ S satisfy π j x = i ∞,S (w). This is equivalent to Proposition 4.7 ([4]). A completion of a bornological V -module M exists and is constructed as follows. Write Then π j z = π j i T,S (y). This implies z = i T,S (y) because M T is torsion-free. This proves the claim. Finally, we show that being bornologically torsion-free is compatible with linear growth bornologies: Proposition 4.11. If A is a bornologically torsion-free V -algebra, then so is also has linear growth. And Since T is bounded in A and A is bornologically torsion-free, π −1 · T := {x ∈ A : π · x ∈ T } is also bounded. We have π −1 S ⊆ π −1 T 1 ⊆ π −1 · T + T 2 . This is bounded in A lg . Dagger algebras Definition 5.1. A dagger algebra is a complete, bornologically torsion-free, semidagger algebra. Proof. All three properties defining dagger algebras are hereditary for extensions by Theorems 2.3, 3.6 and 4.5. 
We have already seen that there are universal arrows A → A tf ⊆ K ⊗ A, A → A lg , A → A from a bornological algebra A to a bornologically torsion-free algebra, to a semi-dagger algebra, and to a complete bornological algebra, respectively. We now combine them to a universal arrow to a dagger algebra: Theorem 5.3. Let A be a bornologically torsion-free algebra. Then the canonical map from A to A † := (A tf ) lg is the universal arrow from A to a dagger algebra. That is, any bounded algebra homomorphism from A to a dagger algebra factors uniquely through A † . If A is already bornologically torsion-free, then A † ∼ = A lg . Proof. The bornological algebra A † is complete by construction. It is semi-dagger by Proposition 3.7. And it is bornologically torsion-free by Proposition 4.11 and Theorem 4.6. So it is a dagger algebra. Let B be a dagger algebra. A bounded homomorphism A → B factors uniquely through a bounded homomorphism A tf → B by Proposition 4.4 because B is bornologically torsion-free. This factors uniquely through a bounded homomorphism (A tf ) lg → B because B is semi-dagger. And this factors uniquely through a bounded homomorphism (A tf ) lg → B because B is complete. So A † has the asserted universal property. If A is bornologically torsion-free, then A ∼ = A tf and hence A † ∼ = A lg . Definition 5.4. We call A † the dagger completion of the bornological V -algebra A. Dagger completions of monoid algebras As a simple illustration, we describe the dagger completions of monoid algebras. The monoid algebra of N j is the algebra of polynomials in j variables, and its dagger completion is the Monsky-Washnitzer algebra of overconvergent power series equipped with a canonical bornology (see [4]). The case of general monoids is similar. The monoid algebra V [S] of S over V is defined by its universal property: if B is a unital V -algebra, then there is a natural bijection between algebra homomorphisms By definition, the submodule M n consists of all finite sums of terms x s δ s with x s ∈ π j · V and (s) ≤ n(j + 1) for some j ∈ N or, equivalently, (s)/n ≤ j + 1 ≤ ν(x s ) + 1. and that a subset of V [S] † is bounded if and only if all its elements satisfy (6.1) for the same c > 0. The growth condition (6.1) does not depend on the word length function because the word length functions for two different generating sets of S are related by linear inequalities ≤ a and ≤ a for some a > 0. Now we drop the assumption that S be finitely generated. Then we may write S as the increasing union of its finitely generated submonoids. By the universal property, the monoid algebra of S with the fine bornology is a similar inductive limit in the category of bornological V -algebras, and its dagger algebra is the inductive limit in the category of dagger algebras. for any finitely generated S ⊆ S, we may identify this inductive limit with a subalgebra of V [S] as well, namely, the union of V [S ] † over all finitely generated submonoids S ⊆ S. That is, V [S] † is the set of elements of V [S] that are supported in some finitely generated submonoid S ⊆ S and that satisfy (6.1) for some length function on S . We may also twist the monoid algebra. Let V × = {x ∈ V : |x| = 1} and let c : S × S → V × be a normalised 2-cocycle, that is, , c] as a V -algebra. They satisfy the commutation relation And this already dictates the multiplication , equipped with a twisted multiplication satisfying (6.5). 
Dagger completions of crossed products Let A be a unital, bornological V -algebra, let S be a finitely generated monoid and let α : S → End(A) be an action of S on A by bounded algebra homomorphisms. The crossed product A α S is defined as follows. Its underlying bornological V -module is A α S = s∈S A with the direct sum bornology. So elements of A α S are formal linear combinations s∈S a s δ s with a s ∈ A and a s = 0 for all but finitely many s ∈ S. The multiplication on A α S is defined by This makes A α S a bornological V -algebra. What is its dagger completion? It follows easily from the universal property that defines A ⊆ A α S that (A α S) † ∼ = (A † α † S) † ; here we use the canonical extension of α to the dagger completion A † , which exists because the latter is functorial for bounded algebra homomorphisms. Therefore, it is no loss of generality to assume that A is already a dagger algebra. It is easy to show that (A S) † is the inductive limit of the dagger completions (A S ) † , where S runs through the directed set of finitely generated submonoids of S. Hence we may also assume that S is finitely generated to simplify. First we consider the following special case: If T is α-invariant, so is the V -module generated by T . Therefore, α is uniformly bounded if and only if any bounded subset of A is contained in a bounded, α-invariant V -submodule. If A is complete, then the image of T in A is also α-invariant because the maps α s are bornological isomorphisms. Hence we may assume in this case that T in Definition 7.1 is a bounded, α-invariant π-adically complete V -submodule. Proof. If α is uniformly bounded, then A is the bornological inductive limit of its α-invariant bounded V -submodules. The action of α restricts to any such submodule T and then extends canonically to its π-adic completion T . Then the image of T in A is S-invariant as well. This gives enough S-invariant bounded V -submodules in A. So the induced action on A is uniformly bounded. If the action α on A is uniformly bounded, then so is the action id B ⊗ α on B ⊗ A for any bornological algebra B. In particular, the induced action on K ⊗ A is uniformly bounded. Since the canonical map The restriction of the uniformly bounded action of S on K ⊗ A to this invariant subalgebra inherits uniform boundedness. So the induced action on A tf is uniformly bounded. Any subset of linear growth in A is contained in ∞ j=0 π j T j+1 for a bounded V -submodule T . Since α is uniformly bounded, T is contained in an α-invariant bounded V -submodule U . Then ∞ j=0 π j U j+1 ⊇ ∞ j=0 π j T j+1 is α-invariant and has linear growth. So α remains uniformly bounded for the linear growth bornology. The uniform boundedness of the induced action on the dagger completion A † follows from the inheritance properties above and Theorem 5.3. Example 7.3. Let S be a finite monoid. Any bounded action of S by bornological algebra endomorphisms is uniformly bounded because we may take T = s∈S α s (U ) in Definition 7.1. Example 7.4. We describe a uniformly bounded action of Z on the polynomial algebra A := V [x 1 , . . . , x n ] with the fine bornology. So a subset of A is bounded if and only if it is contained in (V + V x 1 + · · · + V x n ) k for some k ∈ N ≥1 . Let a ∈ GL n (V ) ⊆ End(V n ) and b ∈ V n . Then is an algebra automorphism α 1 of A with inverse (α −1 )). This generates an action of the group Z by α n := α n 1 for n ∈ Z. 
If a polynomial f has degree at most m, then the same is true for α 1 f and α −1 f , and hence for α n f for all n ∈ Z. That is, the subsets (V + V x 1 + · · · + V x n ) k in A for k ∈ N ≥1 are α-invariant. So the action α on A is uniformly bounded. Proposition 7.2 implies that the induced action on V [x 1 , . . . , x n ] † is uniformly bounded as well. Proposition 7.5. Let S be a finitely generated monoid with word length function . Let A be a dagger algebra and let α : S → End(A) be a uniformly bounded action by algebra endomorphisms. Then (A S) † ⊆ s∈S A. A formal series s∈S a s δ s with a s ∈ A for all s ∈ S belongs to (A S) † if and only if there are ε > 0 and T ∈ B A with a s ∈ π ε (s) T for all s ∈ S, and a set of formal series is bounded in (A S) † if and only if ε > 0 and T ∈ B A for its elements may be chosen uniformly. Proof. We first describe the linear growth bornology on A V [S]. Let B be the set of all subsets U ⊆ A S for which there are T ∈ B A and ε > 0 such that any element of U is of the form s∈S a s δ s with a s ∈ π ε (s) T for all s ∈ S. We claim that B is the linear growth bornology on A S. The inclusion V [S] ⊆ A V [S] induces a bounded algebra homomorphism V [S] lg → (A V [S]) lg . We have already described the linear growth bornology on V [S] in Section 6. This implies easily that all subsets in B have linear growth: write π ε (s) a s δ s = a s · π ε (s) δ s . We claim, conversely, that any subset of A S of linear growth is contained in B . All bounded subsets of A S are contained in B . It is routine to show that B is a V -algebra bornology. We only prove that the bornology B has linear growth. Since α is uniformly bounded, any T ∈ B A is contained in a bounded, α-invariant V -submodule T 2 . Then T 3 := ∞ j=0 π j T j+1 2 is a bounded, α-invariant V -submodule with π · T 2 3 ⊆ T 3 and T ⊆ T 3 . If a s ∈ π ε (s) T 3 and a t ∈ π ε (t) T 3 , then π 2 · a s · α t ∈ π 2+ ε (s) + ε (t) T 2 3 ⊆ π ε (s·t) πT 2 3 ⊆ π ε (s·t) T 3 because 1 + ε (s) + ε (t) ≥ ε (s · t) . This implies So any subset in B is contained in U ∈ B with π 2 · U 2 ⊆ U . By induction, this implies (π 2 U ) k ·U ⊆ U for all k ∈ N. Hence ∞ j=0 π 2k U k+1 is in B . Now Lemma 3.4 shows that the bornology B has linear growth. This proves the claim that B is the linear growth bornology on A S. Since A as a dagger algebra is bornologically torsion-free, so is A S. So (A S) † is the completion of (A S) lg = (A S, B ). It is routine to identify this completion with the bornological V -module described in the statement. Propositions 7.2 and 7.5 describe the dagger completion of A S for a uniformly bounded action of S on A even if A is not a dagger algebra. Namely, the universal properties of the crossed product and the dagger completion imply . , x k ] † α † Z) † . The latter is described in Proposition 7.5. Namely, (V [x 1 , . . . , x k ] † α † Z) † consists of those formal series n∈Z a n δ n with a n ∈ V [x 1 , . . . , x k ] † for which there are ε > 0 and a bounded V -submodule T in V [x 1 , . . . , x k ] † such that a n ∈ π ε|n| T for all n ∈ Z; notice that |n| is indeed a length function on Z. And a subset is bounded if some pair ε, T works for all its elements. We combine this with the description of bounded subsets of V [x 1 , . . . , x k ] † in Section 6: there is some δ > 0 so that a formal power series m∈N k b m x m belongs to T if and only if b m ∈ π δ|m| V for all m ∈ N k . Here we use the length function |(m 1 , . . . , m k )| = k j=1 m j . 
We may merge the parameters $\varepsilon, \delta > 0$ above by taking their minimum. So $(V[x_1, \ldots, x_k] \rtimes \mathbb{Z})^\dagger$ consists of the formal series $\sum_{n \in \mathbb{Z},\, m \in \mathbb{N}^k} a_{n,m} x^m \delta_n$ with $a_{n,m} \in \pi^{\varepsilon(|n| + |m|)} V$ or, equivalently, $\nu(a_{n,m}) + 1 > \varepsilon(|n| + |m|)$ for all $n \in \mathbb{Z}$, $m \in \mathbb{N}^k$.

If the action of $S$ on $A$ is not uniformly bounded, then the linear growth bornology on $A \rtimes S$ becomes much more complicated. It seems unclear whether the description below helps much in practice. Let $F \subseteq S$ be a finite generating subset containing $1$. Any bounded subset of $A \rtimes S$ is contained in $\bigl(\sum_{s \in F} T \cdot \delta_s\bigr)^N$ for some $N \in \mathbb{N}$ and some $T \in \mathcal{B}_A$ with $1 \in T$. Therefore, a subset of $A \rtimes S$ has linear growth if and only if it is contained in the $V$-submodule generated by $\sum_{n=1}^{\infty} \pi^{\varepsilon n} \bigl(T \cdot \{\delta_s : s \in F\}\bigr)^n$ for some $\varepsilon > 0$ and $T \in \mathcal{B}_A$. Using the definition of the convolution, we may rewrite the latter set as
$$\sum_{n=1}^{\infty} \sum_{s_1, \ldots, s_n \in F} \pi^{\varepsilon n} \cdot T \cdot \alpha_{s_1}(T) \cdot \alpha_{s_1 s_2}(T) \cdots \alpha_{s_1 \cdots s_{n-1}}(T)\, \delta_{s_1 \cdots s_n}.$$
Fault Localization by Comparing Memory Updates between Unit and Integration Testing of Automotive Software in an Hardware-inthe-Loop Environment During the inspection stage, an integration test is performed on electronic automobile parts that have passed a unit test. The faults found during this test are reported to the developer, who subsequently modifies the source code. If the tester provides the developer with memory usage information (such as functional symbol or interface signal), which works differently from normal operation in failed Hardware-in-the-Loop (HiL) testing (even when the tester has no source code), that information will be useful for debugging. In this paper, we propose a fault localization method for automotive software in an HiL environment by comparing the analysis results of updated memory between units and integration tests. Analyzing the memory usage of a normally operates unit test, makes it possible to obtain memory-updated information necessary for the operation of that particular function. By comparing this information to the memory usage when a fault occurs during an integration test, erroneously operated symbols and stored values are presented as potential root causes of the fault. We applied the proposed method to HiL testing for an OSEK/VDX-based electronic control unit (ECU). As a result of testing using fault injection, we confirmed that the fault causes can be found by checking the localized memory symbols with an average of 5.77%. In addition, when applying this methodology to a failure that occurred during a body control module (BCM) (which provides seat belt warnings) test, we could identify a suspicious symbol and find the cause of the test failure with only 8.54% of localized memory symbols. Introduction As the number of electronic control units (ECU) in automobiles increases, so does the functional complexity of automotive software.Therefore, the possibility of systemic ECU problems also increases [1].Thus, testing has become a key process in the development of vehicle ECUs.The automobile industry develops and manufactures the electronics and their software via original equipment manufacturers (OEMs).Accordingly, a tester for the automobile company conducts an acceptance or integration test on the ECU and automotive software (ECU/SW) developed by a separate manufacturing company [2]. Figure 1 shows the process of electronic component development and testing utilized by the automotive industry and OEMs.The developer receives the requirement from the designer and develops the ECU/SW.The tester receives the developed ECU/SW and uses the Hardware-in-the-Loop (HiL) simulator to test unit or integrated functions without source code.The faults detected during the test are then reported to the developers for modification [3].At this time, the test results that the tester reports include only the test script and the corresponding pass/fail information (i.e., the expected value for a test condition and the actual output value of the ECU/SW).Therefore, the developers need to rebuild the same testing scenario to correct the reported failure.If the tester can provide debugging information on the internal operations when a failure occurs, the developer can easily resolve the cause of the failure [4]. 
When developing embedded systems such as ECUs, developers can use existing debugging tools to obtain internal operating information on particular software failures. However, those tools are not available for the HiL testing of vehicle ECU/SWs that the tester conducts, for the following reasons. First, an in-circuit emulator (ICE, for example, Trace32 or Multi-ICE), which is generally used as a debugging tool in an embedded system, requires a dedicated connector as a debugging interface. In the case of a completed ECU, the debugging interface is rarely exposed on the exterior of the component. If the debugging interface is not taken into consideration from the design stage of the ECU, the debugging tool cannot be used to check internal operation during an ECU/SW HiL test [5]. Second, even if the ECU is modified to connect the debugging interface, internal operation monitoring using debugging tools is unsuitable for the HiL test environment. The HiL test follows the scenario in the test script, and the host PC, the HiL simulator and the ECU/SW are executed together. In order to apply step-by-step observation of the software using breakpoints with an existing debugger, the entire HiL test environment would have to be suspended. Accordingly, it is impractical to use a debugging tool to pause and observe a suspected buggy spot on a running system [6].

There are studies that use software fault localization methods to acquire the information needed for debugging without directly using a debugging tool. Conventional software fault localization methods have evolved to find faults efficiently based on source code [7]. However, the tester tests the ECU/SW as a black box without source code. As a result, it is difficult to apply conventional source-code-based methods to HiL testing. Therefore, in order to understand the internal operation when a fault occurs during the HiL test, a method is needed that uses neither the source code nor a debugger and that does not affect the script-driven test flow.
Memory dump analysis can be used as a way to find the cause of a fault without the source code.In the data-flow analysis, it is called DU chains (or DU pairs), where data is defined (D) and then used (U) (i.e., "A = B + C"; A is "define;" B and C are "use").Therefore, according to the DU chain, the results of all right-hand side (RHS) expressions processed by the CPU are stored on the left-hand side (LHS).Owing to the nature of these computer systems, the footprints of important data remain in memory when the software is executed.The method used to analyze the memory dump is involving analyzing the stack and variables at the moment when the fault occurred [8].However, in the HiL test, a test failure occurs when the output of the ECU does not meet the expected value.Therefore, the test failure determination and the occurrence of the fault may manifest at different times and it is difficult to dump the memory by specifying the fault occurrence timing.Accordingly, the memory dump-based debugging method cannot be applied as-is to the HiL test.If it is applied to the HiL test, it is necessary to trace both the updated memory data during the test and the timing information on the fault occurrence.For that reason, in the preliminary work [9], we developed a fault localization method that utilizes an updated data frequency when the failure occurs.However, all update symbols derived by the input without a clear criterion are presented as fault candidates.There is no guarantee that an updated symbol will be associated with a fault while the input is reflected.In the opposite case, if the test fails because important symbols are not updated, the defect candidate will not include the symbol because it has no update frequency.Therefore, in addition to the memory update information available at the time of the fault, a criterion for judging abnormal operation is required.If it is possible to obtain the memory usage information from the ECU during normal operation, it can serve as reference information to be observed in order to judge the cause of the fault in the memory-updated information acquired at the time of fault. 
In this paper, we propose a method of fault localization for automotive software in an HiL environment.This is accomplished by comparing analysis results of updated memory between a normal unit test and a failed integration test without the source code.First, analyzing the memory usage of the unit test in normal operation can identify the memory update information required for the operation on the function, such as used memory addresses, corresponding symbols, stored values and updated frequency.The memory usage information of the identified unit test is compared with the memory usage information at the time of the fault.The symbols necessary for the operation of the specific function are compared and presented as fault candidates (Invalid updated or fixed symbols and stored values).As a result, a tester at the OEM can provide the developer with the fault occurrence time, malfunctioning symbols and stored values during an integration test in the HiL environment.He can accomplish this by using the operation information from the unit test without the source code and the debugging tools.The proposed method is applied to an HiL test of an OSEK/VDX-based ECU/SW.As a result of testing using fault injection, we confirmed that fault causes can be found by checking the localized memory symbols at an average of 5.77% by the proposed method.In addition, when applying this methodology to a failure that occurred during a body control module (BCM) (which provides seat belt warnings) test, we could identify suspicious symbols and find the cause of the test failure with only 8.54% of localized memory symbols.In this paper, we can provide debugging information for suspicious symbols and memory usage in an ECU/SW integration test in the HiL environment. This paper is organized as follows.In Section 2, we analyze HiL test limitations and existing fault localization methods.In Section 3, we define the process of fault localization and the memory-updated information that can be collected during HiL testing of ECUs.In Section 4, we propose a fault localization method and in Section 5, we describe how the method is applied and then provide our evaluation.In Section 6, we conclude the paper and present future work. Related Work To test electrical automotive parts, a tester uses an HiL simulator in a black box environment without source code.In this environment, when a fault occurs, the tester can provide only limited information, such as the test script and a pass/fail confirmation, to the developers who must do the debugging.In this section, we examine the limitations of HiL testing, software fault localization methods and the studies that have applied memory analysis to debugging. Hardware-in-the-Loop (HiL) Testing for Automotive Software The performance of automotive software is affected by both software and hardware problems.Therefore, the software must be evaluated on actual hardware and its behavior must be tested and verified.The test is performed according to the overall integration level under development.The HiL test is a method for evaluating the hardware on which software is installed [10].The HiL test constructs the physical environment in which the hardware operates with a simulator and evaluates whether the hardware meets certain input and output requirements [6]. 
Figure 2 shows an example of an HiL test environment.The host PC provides the test script to the HiL simulator.The HiL simulator then gives an input signal to the system under test (SUT) based on the script and it confirms the result.If the output value is equal to the expected value, a "pass" is delivered; otherwise, a "failure" is delivered to the host PC.This process runs automatically based on the script.However, an HiL test that utilizes such a simulator can only evaluate the output as the input of the defined SUT.In other words, if a fault occurs, there is no information on the internal operation that causes the fault.The results only include inputs that do not meet the requirements (i.e., the expected value-the oracle).Therefore, when a fault occurs during an HiL test, it is necessary to hunt for the internal operation of the employed SUT.constructs the physical environment in which the hardware operates with a simulator and evaluates whether the hardware meets certain input and output requirements [6]. Figure 2 shows an example of an HiL test environment.The host PC provides the test script to the HiL simulator.The HiL simulator then gives an input signal to the system under test (SUT) based on the script and it confirms the result.If the output value is equal to the expected value, a "pass" is delivered; otherwise, a "failure" is delivered to the host PC.This process runs automatically based on the script.However, an HiL test that utilizes such a simulator can only evaluate the output as the input of the defined SUT. In other words, if a fault occurs, there is no information on the internal operation that causes the fault. The results only include inputs that do not meet the requirements (i.e., the expected value-the oracle).Therefore, when a fault occurs during an HiL test, it is necessary to hunt for the internal operation of the employed SUT.Commonly used methods for debugging general embedded systems include an in-circuit emulator (ICE) and a logic analyzer.The HiL test is based on a test script scenario.Debugging tools that utilize ICE, such as Trace32 and Multi-ICE, require synchronization with the HiL simulator to provide instruction-level control [5].In addition, the HiL test of the integration process is intended for completed parts, so ICE connectors may not be exposed; this makes it difficult to apply the methodology to HiL testing.Logic analyzers are devices that capture I/O signals and observes the timing relationship between the signals [11].They can detect the failed signals by using the relationship between the signals, time differences and so forth.It is easy to find the fault signal but the fault signal is unsuitable for finding the internal cause that created the fault.Furthermore, background knowledge at the level of I/O signals and operation timing is required and it is unlikely that anyone other than a developer would have that level of expertise.Therefore, there is a need for a method that can be applied to the HiL test for testers who perform integration testing of the software installed on the completed hardware.Commonly used methods for debugging general embedded systems include an in-circuit emulator (ICE) and a logic analyzer.The HiL test is based on a test script scenario.Debugging tools that utilize ICE, such as Trace32 and Multi-ICE, require synchronization with the HiL simulator to provide instruction-level control [5].In addition, the HiL test of the integration process is intended for completed parts, so ICE connectors may not be exposed; this 
makes it difficult to apply the methodology to HiL testing.Logic analyzers are devices that capture I/O signals and observes the timing relationship between the signals [11].They can detect the failed signals by using the relationship between the signals, time differences and so forth.It is easy to find the fault signal but the fault signal is unsuitable for finding the internal cause that created the fault.Furthermore, background knowledge at the level of I/O signals and operation timing is required and it is unlikely that anyone other than a developer would have that level of expertise.Therefore, there is a need for a method that can be applied to the HiL test for testers who perform integration testing of the software installed on the completed hardware. Software Fault Localization Finding the location where a fault occurred during debugging is costly and time consuming [12].Therefore, many studies have been conducted on fault localization methods.Among these methods, there is a technique that finds errors by using information related to the operating elements of the program.Most methods are white-box-based methods that use a source code because a developer can find and fix the cause of the fault.Thus, there are memory-based studies performed using software footprints that can be applied to a black box without source code. There is a method that locates faults by measuring the code executed by the program [13,14].This method is called code-coverage-based fault localization (CBFL).Code coverage is one of the test measures, which means that codes are covered during testing.For each statement measured in each test case, suspected areas are calculated by coverage by pass or fail signals.The key idea is that the code executed in the failed test case is the cause of the failure.The CBFL method presents the rank of the statement in order of suspicion.Tarantula [13] and Ochiai [14] are typical methods used for calculating the suspicion of CBFL.However, the CBFL method is unsuitable for testers in OEM environments where the source code is difficult to obtain. There is also a mutation-based fault localization (MBFL) method that utilizes the mutation of the program.This method identifies suspected mutations and finds the point at which the fault occurred with the statement that caused the mutation [15,16].A mutation is created by modifying only one statement.The mutation applies the test case that the original performed and kills if the result is the same as the original.The remaining mutations are then mutations that affect the outcome.The MBFL method calculates suspicion with statements that affect the outcome.Typical methods include Metallaxis [15] and Museum [16].The MBFL method has the disadvantage of creating impractical levels of mutation by creating multiple mutations in each statement in the original program.In order to solve this problem, studies have applied the CBFL method [17] or generate mutation efficiently through test case optimization [18].However, the MBFL method still has a disadvantage in that it takes a significant amount of time to test with a large number of mutations.HiL testing is unsuitable because it is difficult to control the execution speed differently from software testing. 
In addition, there are fault location methods extended from the CBFL method that statistically access characteristic elements of program execution.In Reference [19], statistically defines the density and type of faults based on CBFL method and considers multiple faults present in the program as interference rather than individual approaches.And in Reference [20], the PageRank algorithm is applied to the existing CBFL method to weight the rank.These studies presented the statistical approach to test the results and the effects of source code on faults with density and rank weights.However, it is difficult to measure the covered source code in the HiL test, so the source-code-based methods are not applicable. However, analyzing the memory that has the execution trace of the program can grant access to the fault without the source code.One of the traditional debugging methods involves analyzing the memory dump.This method analyzes the behavior of the program based on how the memory is used at the OS level.However, this method should support a memory dump at the OS level and generally a memory dump occurs when the program terminates due to a serious fault.The HiL test does not know when a fault occurs and the HiL test cannot pause for the memory dump because the host PC, simulator and SUT work together according to the test script.Therefore, it is difficult to apply it directly to an ECU/SW HiL test. For this reason, we periodically dumped the memory in previous works for fault localization in HiL tests [4,9,21].These studies assume that a fault has occurred in the process of determining the output by the input.Therefore, the timing of the interval in which the output is induced by the input applied in the HiL simulator, the address updated in the interval and the corresponding symbol are provided as fault candidates.However, these studies have two problems.The first is that all update symbols derived by the input without a clear criterion are presented as fault candidates.There is no guarantee that an updated symbol will be associated with a fault while the input is reflected.The other problem concerns the opposite case.The fault candidate does not include the symbol if the test fails due to not updating the important symbol.Therefore, in addition to the memory update information at the time of the fault, a criterion for judging abnormal operation is required.By comparing the memory usage at the failed operation in a specific function with the memory usage in normal operation, it is possible to check the memory which is used incorrectly (invalid or fixed symbols and stored values).As a result, it is necessary to compare the memory usage between the unit and integration tests in order to obtain debugging information regarding faults that occur during the HiL integration test.It is difficult to use the source code and existing debugging tools to obtain this information. Preparations for Fault Localization This section describes the overall process of the updated memory-based fault localization method and explains how to construct memory-updated information by processing available test data and memory usage without source code. 
Fault Localization Process In an integration test, the ECUs that have passed unit tests are inspected under various conditions. As a result of the integration test, the ECU can be classified as "pass" or "fail" according to the test case. In the "fail" case, the ECU operated normally and passed the unit test but faults were found in the integration test. In this paper, we focus on fault localization in the integration test by using memory information from normal operation during the unit test. Figure 3 shows the process of fault localization using memory-updated information. At the first step, we perform HiL testing and data collection. As a result, we collect test results, test scripts, memory data and executable files of the software used for testing. Next, we perform static analysis of the executable file to extract the symbol names and their assigned addresses. Then, we analyze the memory snapshots that are periodically dumped and compute the update frequency of each address. In this step, the analyzed result is used to map the assigned addresses to symbols and to generate memory update information for each address.
The third step identifies the frame range of the memory data that responds to the input of the function being tested. Here, there are two types of symbols: normally operated symbols and symbols suspected of operating abnormally within the frame range. The last step compares the two sets of memory update information from the previous stages. If the suspicious symbols differ from the memory-updated information of normal operation, they can be regarded as abnormal; these symbols are called fault candidates. In the last stage, the memory-updated information on symbols determined to be abnormal is provided as the fault candidates. The key is to find the fault candidates so that the developers can use them as debugging information.

Data Collection In order to provide debugging information for automotive ECU/SW faults, we collect the following three types of data and extract important information from them during the HiL test. First, we collect test data related to the experiments performed, such as test results and test scripts. The results and scripts, including the pass/fail testing criteria, are the basis for distinguishing between normal operation and failure operation, and the script contains the I/O specification of the tested function. Second, we collect the executable file of the software running on the ECU. Static analysis of the executable file provides the basis for identifying the symbolic name of each allocated address, which is essential information. This information appears in the form of a pair (address, symbol-name). Finally, we examine the raw data obtained by dumping the memory. Because raw memory data is difficult to understand, memory usage is checked in terms of how each symbol changes over time, using the symbols obtained from the executable file.

In the fault localization process, the first step collects test data and raw memory data and prepares them for analysis. The HiL test environment is modified so that memory is collected periodically during testing. In addition, the test result is confirmed when the test is finished, and the test specification is analyzed from the test scripts.

We have added a data collector and a test agent for obtaining memory data from the HiL test environment described in Section 2. In Figure 4, a test executor is expressed as a program that handles the test scripts instead of the host PC in the existing HiL test environment. A data collector and a test agent are responsible for collecting memory data. The data collector, running on the host PC, collects memory data and determines whether it has been updated based on changes in the values. The test agent sends memory data within the address range to be observed in the SUT during the test, according to the requests of the data collector. The communication between the test agent and the data collector uses a vehicle communication network, such as CAN.
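A minimal sketch of the data-collector side of this exchange, assuming the python-can library and a purely hypothetical request/response framing (the request carries a start address and length on CAN ID 0x7E0, and the agent answers with consecutive 8-byte chunks on CAN ID 0x7E8); the paper does not specify the agent's protocol, so the IDs, packing and timing below are illustrative only.

```python
import struct
import time

import can  # python-can, assumed available on the monitoring PC

REQ_ID, RESP_ID = 0x7E0, 0x7E8  # hypothetical CAN identifiers

def dump_block(bus: can.BusABC, start_addr: int, length: int) -> bytes:
    """Request one memory block from the test agent and reassemble the reply."""
    req = struct.pack(">IH", start_addr, length)  # 4-byte address, 2-byte length
    bus.send(can.Message(arbitration_id=REQ_ID, data=req, is_extended_id=False))
    payload = bytearray()
    while len(payload) < length:
        msg = bus.recv(timeout=0.1)
        if msg is None:
            raise TimeoutError("test agent did not answer in time")
        if msg.arbitration_id == RESP_ID:
            payload += msg.data
    return bytes(payload[:length])

def collect_frames(bus, start_addr, length, period_s, n_frames):
    """Dump the variable sections periodically; one dump corresponds to one frame."""
    frames = []
    for _ in range(n_frames):
        t0 = time.monotonic()
        frames.append(dump_block(bus, start_addr, length))
        time.sleep(max(0.0, period_s - (time.monotonic() - t0)))
    return frames
```

With a SocketCAN channel this could be driven by bus = can.interface.Bus(channel="can0", bustype="socketcan"), again only as an illustration of the setup, not as the tool used in the paper.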
When the HiL test is finished, the test results and the script are collected as data. The test report that the developer receives from the tester for debugging includes metadata about the test, such as the date/time of the test, the result and the test script. Through the test script, the developer recognizes the occurrence of the fault, reproduces the fault condition and starts debugging. Therefore, it is possible to extract the meta information and the specification of the test by analyzing the collected test script and the test result. The test script in Figure 2 contains the test
conditions and the expected value to check after the required operation time.The "write" command inputs the test condition into the SUT and the "inspect" command checks the expected value and the output value from the SUT.By analyzing the test script in this manner, it is possible to identify the function being tested by the signal name.The data that can be acquired through the test are summarized in Table 1 below.The test case has I/O information of the function to be inspected and the test script includes a series of test cases. Data Analysis In Figure 3, the second step analyzes the updated memory data and the executable file.The amount of memory data collected depends on the set range and time but the amount collected over a period of tens milliseconds is too enormous to check raw data values.Therefore, we focus on the symbols with the values updated by input stimulus.As explained in the Introduction, the results of all right-hand side (RHS) expressions processed by the CPU are stored on the left-hand side (LHS) according to the DU chain.Therefore, we focus on LHS symbols with the updated values. Static Analysis of the Execution File The executable file is statically analyzed to extract execution information from the software.The tester receives an executable file from the developer in a binary form rather than source code to test an automotive ECU/SW.To statically analyze an executable file, a binary utility is used [22].For instance, the Objdump is used for the memory section table and the NM is used for the symbol list.First, we have to obtain a memory section table, which provides memory partitioning information based on usage, including structure information used by the executable.Figure 5a, which is an example of a memory section table, shows the section name, the size of each section, the start address (Virtual Memory Address, VMA) and characteristics such as alignment and flags.This section table contains information on the address range to be dumped.In the figure, ".text" in 1 indicates the part where code is loaded on memory and 2 is a section for variables used in software execution.The section of ".data" is for the variables with initial values and ".bss" is for the variables without initial values.The variable sections that can be updated with values that correspond to LHS are dumped and analyzed.At this point, our method focuses on the value changes in the address.Therefore, there is a limit to not using static addresses such as local variables in the stack and dynamic allocation variables in the heap.However, in the coding rules (MISRA-C: 2004 Rule 18.3, 20.4) for automotive software, it is recommended that memory should not be reused or dynamically allocated [23].In this paper, we propose a fault localization method for static addresses only. Data Analysis In Figure 3, the second step analyzes the updated memory data and the executable file.The amount of memory data collected depends on the set range and time but the amount collected over a period of tens milliseconds is too enormous to check raw data values.Therefore, we focus on the symbols with the values updated by input stimulus.As explained in the Introduction, the results of all right-hand side (RHS) expressions processed by the CPU are stored on the left-hand side (LHS) according to the DU chain.Therefore, we focus on LHS symbols with the updated values. 
Static Analysis of the Execution File The executable file is statically analyzed to extract execution information from the software.The tester receives an executable file from the developer in a binary form rather than source code to test an automotive ECU/SW.To statically analyze an executable file, a binary utility is used [22].For instance, the Objdump is used for the memory section table and the NM is used for the symbol list.First, we have to obtain a memory section table, which provides memory partitioning information based on usage, including structure information used by the executable.Figure 5a, which is an example of a memory section table, shows the section name, the size of each section, the start address (Virtual Memory Address, VMA) and characteristics such as alignment and flags.This section table contains information on the address range to be dumped.In the figure, ".text" in ① indicates the part where code is loaded on memory and ② is a section for variables used in software execution.The section of ".data" is for the variables with initial values and ".bss" is for the variables without initial values.The variable sections that can be updated with values that correspond to LHS are dumped and analyzed.At this point, our method focuses on the value changes in the address.Therefore, there is a limit to not using static addresses such as local variables in the stack and dynamic allocation variables in the heap.However, in the coding rules (MISRA-C: 2004 Rule 18.3, 20.4) for automotive software, it is recommended that memory should not be reused or dynamically allocated [23].In this paper, we propose a fault localization method for static addresses only.Second, obtain the symbol list that contains the actual names of each address for use in displaying fault candidates.The symbol name obtained is optionally used to help understand the result of fault localization.As shown in Figure 5b, the list of symbols acquired from a binary file using static analysis includes the size, starting address, type and name of the symbol.The symbol type D in 3 represents the ".data" section in the section table of 2 .The first column is the address where the symbol is and the next column is the size of the symbol in bytes.Thus, when a symbol list is interpreted, a symbol named UART_BAUDRATE will be the symbol that is 4 bytes in a data section that starts at address 0x2000.That is, even if there is no source code, we can obtain the name, size and memory location of the symbol used in the source code by extracting the symbol list through the static analysis.However, some build options cannot extract the symbol lists.Therefore, the proposed method displays only the memory address when the symbol list cannot be extracted. Computation of Memory-Updated Information The HiL tests require a periodic memory dump to trace the running software.Because the tester is prohibited from using additional storage space inside the ECU for testing, we previously developed a method to transfer large amounts of data while taking into account the communication load of an ECU [21].Using this method, the memory sections for variables can be dumped periodically without data loss and beyond the bandwidth of CAN.The memory data of the k-th dump is defined as a k-frame at the interval of the period T of the main task of the system from the 0-frame in Equation ( 1) and is represented by F k . 
F k is a set of values corresponding to each address at the point of dumping.Therefore, F k can be regarded as a memory snapshot at the k-th point.The ECU state at a specific point can be confirmed using this memory snapshot.By examining the values of memory stored in the frame, we can trace the ECU states at time intervals.A change in value in a specific address means that a new value has been updated to that location.This indicates that the symbol corresponding to that address was used as a left-hand side variable in the program.Accordingly, "MU (Memory Updated)" is defined as Equation (2): In Equation ( 2), the memory updated (MU A, k ) compares V A, k−1 with V A, k .If both values are equal, the value is 0; otherwise, the value is 1.If MU A, k is 1, it means that the address A is updated in the k-frame and that the ECU has performed an operation related to the address A. By accumulating the MU between specific ranges, we know how frequent the address is used.We refer to it as the memory-updated frequency (MUF A, R ) which is defined as shown in Equation (3). MUF A, R represents the number of times the updates occurred from the range of the R.start to the R.end of the frame index to which address A is to be observed.If we trace the MUF A, R for each address, we can know the addresses used during the operations that are performed over the specific frame range (R). Equation ( 4) defines the memory data (MD A, R ) using the previous equation.In Equation ( 4), memory data (MD A, R ) includes the address A, the update frequency of the address in the range R of the frame observed (MUF A, R ) and the value set of each frame of the address.We define memory-updated information (MUI) as shown in Equation ( 5), together with the symbol name (Sym A ) of the address A obtained by static analysis and MD A, R . Sym A is the symbol name of the address A, R is a set of the frame index (5) Algorithm 1 shows an algorithm for generating MUI.In line 7-12, the update is determined by the change of the value in each frame according to Equation (2) and the update frequency is calculated in line 13 how many times the value has changed in the full range according to Equation (3).For the updated memory according to Equations ( 4) and ( 5), the symbol, the address, the update frequency and the values in each frame are stored as line 15.The MUI can be obtained by repeating lines 6-16 for all addresses.As a result, memory usage such as the memory addresses, the corresponding symbol name, the changes in values and the updated frequency can be identified by the analyzed MUI.Therefore, we are ready to proceed with the fault localization by comparing the memory usage of normal and failed operation. 
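A rough Python rendering of Equations (2) through (5) and the gist of Algorithm 1, under the assumptions above (frames are byte strings dumped at the task period, and symbol names come from GNU `nm -S` output); the function and variable names are ours, not taken from the paper's implementation.

```python
from collections import namedtuple

# Eq. (5): symbol name Sym_A, address A, update frequency MUF, per-frame values
MUI = namedtuple("MUI", "symbol address muf values")

def load_symbols(nm_output: str) -> dict:
    """Map address -> symbol name from `nm -S` lines: <addr> <size> <type> <name>."""
    table = {}
    for line in nm_output.splitlines():
        parts = line.split()
        if len(parts) == 4 and parts[2] in ("D", "d", "B", "b"):  # .data / .bss symbols
            addr, size, _, name = parts
            for off in range(int(size, 16)):  # cover every byte of the symbol
                table[int(addr, 16) + off] = name
    return table

def memory_updated_info(frames, base_addr, symbols, rng=None):
    """Compute MU (Eq. 2), MUF (Eq. 3) and MUI (Eqs. 4-5) for every address."""
    start, end = rng if rng else (1, len(frames) - 1)
    result = []
    for off in range(len(frames[0])):
        addr = base_addr + off
        values = [frame[off] for frame in frames]
        # MU_{A,k} = 1 iff V_{A,k} != V_{A,k-1}; MUF accumulates MU over the range R
        muf = sum(1 for k in range(start, end + 1) if values[k] != values[k - 1])
        result.append(MUI(symbols.get(addr, hex(addr)), addr, muf, values))
    return result
```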
Fault Localization Method Using Memory Updates By collecting the memory usage of an ECU/SW that has passed its unit test, normal operating criteria can be created.An integration test is conducted to verify the problems that might occur in the integration of the unit functions of an ECU that has already passed a unit test.In other words, the integration between unit functions checks the transfer of the values, exception handling, timing delay and so on.Faults that may occur in this integration test can be compared with the criteria for normal operation to determine the failed signal.This section describes how data is prepared to apply the proposed method for fault localization and it explains how to identify the major symbols involved in the operation of the function.Finally, we propose a primary algorithm for finding fault candidates for integration tests using the normal operation symbols of unit tests. Data from Test Specification and Memory Updates In the previous section, we prepare memory-updated information (MUI) by accumulating the frequency of the specific address and by extracting the symbol names from the executable file.Additionally, we have to obtain a test specification such as show the number of inputs, the test condition, the expected value and the input interval from the test script. Depending on the test suite, the details of the script may vary.However, the test condition and the expected values are essential.The test condition is input after initialization to confirm the normal operation of the function.The interval means time duration between inputs-between initialization and input or between inputs.The following is summarized.Expected value-Expected value including initial value of output signal Figure 6 shows an example of MUI. Figure 6a,b are the memory snapshots in color.Figure 6a shows a set of MU A, n of n-th frame in red, which means "updated".The addresses of the white area that look like the background means there are no changes in the values at that point of n-th frame.In (b), it is possible to identify the updated frequency of each address in the range of the full-frame.The updated frequency is visualized using different color palettes (white, yellow, green, blue, red).As frequency increases, the color changes from white to red and turns red if an address is updated on all frames.In (c), you can see the additional details of the MUI.The symbol "request" in the first line is assigned to address 0x2037 and the total number of updates is 10 because the value is continuously changed from #488 to #497 in every frame.Additionally, we have to obtain a test specification such as show the number of inputs, the test condition, the expected value and the input interval from the test script.Depending on the test suite, the details of the script may vary.However, the test condition and the expected values are essential.The test condition is input after initialization to confirm the normal operation of the function.The interval means time duration between inputs-between initialization and input or between inputs.The following is summarized.Expected value-Expected value including initial value of output signal Figure 6 shows an example of MUI. 
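A small sketch of that extraction step, assuming a line-oriented write/wait/inspect script like the one in Figure 2; the real test suite has its own grammar, so the format and the field names below are assumptions.

```python
import re
from dataclasses import dataclass, field

@dataclass
class TestSpec:
    """Test specification recovered from a script (cf. Table 1)."""
    conditions: list = field(default_factory=list)  # (signal, value) written to the SUT
    expected: list = field(default_factory=list)    # (signal, value) checked on the SUT
    intervals: list = field(default_factory=list)   # wait times between inputs, in ms

def parse_script(lines):
    """Pull N_in, T_in and the expected values out of a collected test script."""
    spec = TestSpec()
    for line in lines:
        tok = line.split()
        if len(tok) < 2:
            continue
        if tok[0] == "write" and len(tok) >= 3:
            spec.conditions.append((tok[1], tok[2]))
        elif tok[0] == "inspect" and len(tok) >= 3:
            spec.expected.append((tok[1], tok[2]))
        elif tok[0] == "wait":
            spec.intervals.append(int(re.sub(r"\D", "", tok[1]) or 0))
    return spec

# N_in = len(spec.conditions); T_in follows from spec.intervals divided by the
# dump period of the data collector (so that it is expressed in frames).
```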
Figure 6a,b are the memory snapshots in color.Figure 6a shows a set of MUA, n of n-th frame in red, which means "updated".The addresses of the white area that look like the background means there are no changes in the values at that point of n-th frame.In (b), it is possible to identify the updated frequency of each address in the range of the full-frame.The updated frequency is visualized using different color palettes (white, yellow, green, blue, red).As frequency increases, the color changes from white to red and turns red if an address is updated on all frames.In (c), you can see the additional details of the MUI.The symbol "request" in the first line is assigned to address 0x2037 and the total number of updates is 10 because the value is continuously changed from #488 to #497 in every frame. Identification of Input-Driven Updated Range Now, we have to focus on reducing the number of the captured memory frames.Because we use the MUI, we do not need entire frames that are periodically collected.In Figure 3, the third step identifies the Input-Driven Updated Range (IDUR).When the software is executed, the footprint remains in memory.The software is executed according to the input signal provided by the simulator and input-driven output can be traced by analyzing the memory.In Reference [9], we proposed the Identification of Input-Driven Updated Range Now, we have to focus on reducing the number of the captured memory frames.Because we use the MUI, we do not need entire frames that are periodically collected.In Figure 3, the third step identifies the Input-Driven Updated Range (IDUR).When the software is executed, the footprint remains in memory.The software is executed according to the input signal provided by the simulator and input-driven output can be traced by analyzing the memory.In Reference [9], we proposed the IDUR identification method using a moving average technique and tracing the trend of updating memory through the entire frames.However, it is difficult to obtain the exact updated range because it is identified only by trend without precise criteria.Thus, we propose a new algorithm that can improve the method of IDUR identification to obtain an exact range.Figure 7 compares the IDUR identified by the existing moving average method of [9] and the proposed method. IDUR identification method using a moving average technique and tracing the trend of updating memory through the entire frames.However, it is difficult to obtain the exact updated range because it is identified only by trend without precise criteria.Thus, we propose a new algorithm that can improve the method of IDUR identification to obtain an exact range.Figure 7 compares the IDUR identified by the existing moving average method of [9] and the proposed method.The blue line indicates the number of updated addresses throughout the frame and the dashed line shows the trend of the number using the moving average method.Looking at the 50th frame of the graph, you can see that the number of updated addresses increases and the number of updates increases again near the 250th frame after 200 frames (Tin = 200).This is obvious evidence that the input signal reflects the update of the memory value.Therefore, in order to handle only the data driven by the input, we need to find the exact range called IDUR.The algorithm of the proposed IDUR identification method is shown in Algorithm 2. 
The blue line indicates the number of updated addresses throughout the frame and the dashed line shows the trend of the number using the moving average method.Looking at the 50th frame of the graph, you can see that the number of updated addresses increases and the number of updates increases again near the 250th frame after 200 frames (T in = 200).This is obvious evidence that the input signal reflects the update of the memory value.Therefore, in order to handle only the data driven by the input, we need to find the exact range called IDUR.The algorithm of the proposed IDUR identification method is shown in Algorithm 2. if MUF(α,R) ≤ N in then 5: if updated frames interval = T in then 6: IDUA ← α 7: END FOR Identify Input Driven Updated Range: 8: FOR each input j of N in DO 9: K j ≡ {∀k j |MU(α, k j ) = 1, α in IDUA} // 10: R.start ← min(K j ) // first updated frame of input number j 11: R.end ← max(K j ) // last updated frame of input number j 12: IDUR[j] = {index of IDUR, R.start, size of R range} 13: END FOR First, we find an address with MUF smaller than the number of inputs (N in ) written in the test script (line 4).Here, the interval of the updated frame is equal to the input time interval (T in ).That is, it finds an address that is updated at the same time interval of the input.We define a set of the addresses as Input-Driven Updated Addresses (IDUA) (lines 5-6).The identification method of IDUR is based on the order of the updated frames of the IDUA and determines the frame range from the first frame (R.start) to the last frame (R.end) of each order as the IDUR (lines 8-13).As a result, the IDUR identified by the proposed algorithm is determined as shown by the shaded area in Figure 7. Therefore, we can focus on only input-driven data by reducing the number of captured memory frames using the algorithm of IDUR identification. Comparison of Difference between Memory Updates of Normal and Failed Operations In the fourth step, the normal operation executed during the unit test is compared with the failed operation during the integration test.The memory usage pattern generated from the normal operation of each unit test has already been obtained through the previous steps.We define the IDUA of the unit test as functional symbols for the normal operation of the unit function.That is, the IDUA reacts directly when the function is activated by the input.Therefore, we can find the cause of the fault by observing the IDUA of the unit function within the IDUR of integration test.When analyzing the memory symbols, there are symbols required during the function operation, infrastructure symbols used for OS operation and communication and temporary symbols such as buffers and counters [24].Therefore, the MUI of the integrated function is composed of IDUA of each unit functions and integrated function, the infrastructure symbols and the temporary symbols.If the integration test is operating normally, the IDUA of the unit function should behave similar to unit test. As the fault candidates, we present the MUI of the IDUA identified in each unit function and the failed integration test.In Equation ( 6), we define the fault candidates (FC) as the MUI, which consists of the IDUAs within the frame range-IDUR, the address of the symbol, the update frequency and a set of values for each frame.The fault candidates (FC) are defined by Equation (6). 
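A compact Python reading of Algorithm 2 above, reusing the MUI records from the earlier sketch; tying the j-th update of an address to input j is a simplification of lines 8-13, so treat this as an approximation of the published algorithm rather than a faithful copy.

```python
def identify_idua(mui_list, n_in, t_in):
    """Input-Driven Updated Addresses: MUF <= N_in, updates spaced T_in frames apart."""
    idua = []
    for rec in mui_list:
        ks = [k for k in range(1, len(rec.values)) if rec.values[k] != rec.values[k - 1]]
        if 0 < rec.muf <= n_in and all(b - a == t_in for a, b in zip(ks, ks[1:])):
            idua.append(rec)
    return idua

def identify_idur(idua, n_in):
    """One frame range per input j: from the first to the last update it caused."""
    idur = []
    for j in range(n_in):
        ks = []
        for rec in idua:
            updates = [k for k in range(1, len(rec.values))
                       if rec.values[k] != rec.values[k - 1]]
            if j < len(updates):
                ks.append(updates[j])  # j-th update of this address
        if ks:
            idur.append({"input": j, "start": min(ks), "size": max(ks) - min(ks) + 1})
    return idur
```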
Finding fault using memory updated information has two implications.One is that the update is not made at the time when the update should be made.The other is that the update is made at the correct time but it is updated to the wrong value.Therefore, we proposed a two-step fault localization method using the fault candidates.The first step is to find the "not updated" symbols where the value of MUF is zero in IDUR.These symbols are symbols that are not used as the integrated function among the IDUA of the unit functions or are affected by the fault.The next step is to check the revised value of the updated symbols whose MUF is one or more in the IDUR.Based on the value, you can check which symbol has been updated to the incorrect value.In other words, it is possible to find faults by first identifying symbols that are not updated and then by identifying the cause of the failure operation with the changed values of the updated symbols. For example, the "Emergency Stop Signal" (ESS) is a function that quickly flashes the brake lamp in an emergency stop situation.Specifically, the "Advanced ESS" (Adv ESS) is connected to the function that automatically turns on the emergency lamp when the vehicle is completely stopped.At this time, the "Adv ESS" consists of an integrated function of "ESS" and "Emergency Signal".Therefore, by observing the IDUA of the integrated function, the cause of the fault in the "Adv ESS" can be found.Figure 8 presents an example of fault candidates for the "Adv ESS".It shows the MUI for the IDUA of each "Emergency Signal" and "ESS" from the top of Figure 8.Each IDUA (from function A and B) is updated in order of the cmd* symbol, the flag* symbol and the status* symbol after the update of the mCan* symbol.Figure 8 also shows fault candidates for the integrated functions.The cells highlighted in yellow are the values that have changed in the address and the symbols highlighted as the shaded bars are the symbols that were not updated.The FC contains the union of IDUA of each unit function and the symbols belonging to each unit function are marked as a and b .Among these symbols, the symbols marked with b have been similarly updated for unit B and the integrated function but the symbols with a marks are not similar.There are symbols highlighted by the shaded bar in the symbols marked a and when you look at the highlighted cell of the symbol, you can see that the value of flagIndicatorLamp has been changed from 96 to 192.Therefore, the "ESS" (function B) is operated normally but it can be seen that a fault has occurred by connecting to the "Emergency Signal" (function A). B and the integrated function but the symbols with ⓐ marks are not similar.There are symbols highlighted by the shaded bar in the symbols marked ⓐ and when you look at the highlighted cell of the symbol, you can see that the value of flagIndicatorLamp has been changed from 96 to 192.Therefore, the "ESS" (function B) is operated normally but it can be seen that a fault has occurred by connecting to the "Emergency Signal" (function A). 
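In the same spirit, a sketch of the two-step comparison behind Equation (6): within one IDUR of the failed integration test, report unit-function IDUA symbols that are never updated, and for the updated ones report where their values diverge from the unit-test reference. The frame-aligned value comparison is a naive stand-in for the manual inspection the paper describes, so it is illustrative only.

```python
def fault_candidates(unit_idua, integ_mui, idur):
    """Two-step fault localization inside one input-driven updated range."""
    start, end = idur["start"], idur["start"] + idur["size"]
    by_symbol = {rec.symbol: rec for rec in integ_mui}
    not_updated, wrong_values = [], []
    for ref in unit_idua:
        rec = by_symbol.get(ref.symbol)
        if rec is None:
            continue
        window = rec.values[start:end]
        if all(v == window[0] for v in window):
            not_updated.append(rec.symbol)  # step 1: no update where one was expected
        else:
            # step 2: updated, but possibly to the wrong value
            diffs = [(start + k, v) for k, v in enumerate(window)
                     if v != ref.values[min(k, len(ref.values) - 1)]]
            if diffs:
                wrong_values.append((rec.symbol, diffs))
    return not_updated, wrong_values
```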
Evaluation

In this section, we evaluate the proposed method in both environments. First, a fault is injected into the HiL test environment of the OSEK/VDX-based ECU using the mutation technique, and it is confirmed that the fault is included in the fault candidates. In addition, we analyze fault candidates for two types of faults: fixed (called "not updated") symbols and invalid values. Next, we also evaluate whether the fault candidates include the cause of failure when applying a warning test for seat belt usage, which is a failed test case of the BCM.

Testbed for Fault Injection

For the evaluation, we have constructed the SUT with NXP MC9S12X [25] family ECUs and OSEK/VDX-based SW. Figure 9 shows the HiL environment used for testing. As shown in the figure, the environment consists of an SUT with three ECUs, a test executor, a test interface and a monitoring system that collects and stores memory. Consisting of three separate ECUs, the SUT has 10 unit functions that handle steering and forward functions (N1), communication and vehicle propulsion (N2), and peripheral sensing and rearward functions (N3). Therefore, the memory region used by each unit function is statically allocated to its node.
The fault is injected by the mutation method [26]. Because the proposed method targets faults that appear after a successful unit test has been completed, the injected fault must pass the unit test but fail the integration test. In Table 2, the 80 C language mutation operators are classified according to their applicability to each function, and passage of the unit test is confirmed. Table 3 shows the faults injected for each function. In Table 3, we did not use VTWD and ORRN for fault injection. VTWD mutates a variable by adding or subtracting 1, so the result is similar to CGCR. ORRN mutates a relational operator in an "if" statement and behaves similarly to STRI, which forces the state in an "if" statement. Therefore, among all mutation operators, operators with similar behaviour are not applied. Table 4 summarizes the mutation operators used in the experiments [26].
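As an illustration of the kind of mutant used later for fault index #1, the sketch below shows how replacing a plain assignment with a bitwise-OR assignment changes behaviour once a previous value is still present; the function and variable names are hypothetical, and the snippet is written in Python purely for readability rather than in the ECU's C code.

```python
# Illustrative mutation in the spirit of the OBEA operator used for fault index #1:
# the original code assigns the new lamp command, the mutant ORs it into the old value.
def set_indicator_original(flag_indicator_lamp, new_cmd):
    flag_indicator_lamp = new_cmd      # plain assignment: the old state is replaced
    return flag_indicator_lamp

def set_indicator_mutant(flag_indicator_lamp, new_cmd):
    flag_indicator_lamp |= new_cmd     # bitwise-OR assignment: stale bits leak in
    return flag_indicator_lamp

# With a single command both versions agree, but a consecutive command diverges:
print(set_indicator_original(32, 64))  # 64
print(set_indicator_mutant(32, 64))    # 96 -> old command bits remain set
```

Such a mutant still passes a single-input unit test, which is exactly why it only surfaces during the integration test.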
Experimental Result

Through an example of localizing the injected faults, we show how to find faults using the proposed method and evaluate the result based on the localization rate of the fault candidates. First, we analyze the memory-updated information from the unit test of the "Left-Turn Signal" function. Next, we test for a fault in both cases (fixed symbols and invalid values) at fault indexes #1 and #12 of Table 3. Finally, we evaluate the experimental results as the ratio of localization. The data used in the experiment are provided as Supplementary Material.

Memory-Updated Information of Unit Test

The experiment is performed according to the process illustrated in Figure 3. First, a unit test is performed to check whether the function operates normally. The unit test sets inputs for initial state setting and functional testing and confirms that the function operates normally. The updated information is analyzed from the memory data collected during the unit test. At the same time, the test specifications are analyzed from the test script; they include the test conditions, the expected values and the time intervals between inputs. The IDUR is identified using the memory-updated information and the test specifications. Refer to Figure 10a. "Left-Turn Signal" is initialized to 0 at 200 ms, and the test condition value (input value) of 32 is input at 700 ms. If the left signal lamp changes from 0 to 1 (the expected value), the operation is determined to be normal. In (b) left, mCanLampModeSet is initialized to 0 from frame number #27, then the values of the other symbols are updated in order. The statusLeftSignalLamp is initialized to 0 from frame number #31. In (b) right, mCanLampModeSet is updated from #77 to the test condition value of 32 and is sequentially updated until #81, and statusLeftSignalLamp is output as 1, which is the same as the expected value. Because the SUT and the simulator are not synchronized, the timestamp of the test specification and the updated frame number of the MUI may differ. However, the input time difference between the test specification (500 ms) and the MUI (50 frames) is the same (one frame is 10 ms).
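The timing consistency check described above (one memory frame per 10 ms main-task cycle) can be expressed as a small sketch; the numbers are the ones quoted for the "Left-Turn Signal" test, and the helper name is an assumption.

```python
# Sanity check of test-script vs. memory-frame timing, assuming a 10 ms dump cycle.
FRAME_PERIOD_MS = 10

def frames_between(t_start_ms, t_end_ms):
    """Number of memory frames expected between two test-script timestamps."""
    return (t_end_ms - t_start_ms) // FRAME_PERIOD_MS

# Left-Turn Signal unit test: initialization at 200 ms, test input at 700 ms.
expected_gap = frames_between(200, 700)   # 50 frames
observed_gap = 77 - 27                    # first update frames seen in the MUI
print(expected_gap, observed_gap, expected_gap == observed_gap)   # 50 50 True
```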
Finding the Fault by the Fixed Symbols (in the Case of the Fault Index #1)

This fault means that symbols required to be updated by the function operation are not updated. In Table 3, fault index #1 causes the "Front Turn Signal" function to malfunction on a consecutive command. The "Front Turn Signal" has three unit tests: left, right and emergency lamp. Therefore, the fault candidates include the IDUA of the three unit tests. Figure 11 shows the fault candidates of fault index #1. In (a), the input switches from "Left Turn Signal" to "Right Turn Signal", and in (b) from "Emergency Signal" to "Right Turn Signal" (i.e., mCanLampModeSet: in (a), 32 → 64 and in (b), 96 → 64). The test condition values are input at #290 and #790, respectively, and the symbols have to be updated in the same order as the IDUA of the unit test, but flagIndicatorLamp is not updated. When cmdLeftSignalLamp of ① in (a) or cmdEmergencyLamp of ① in (b) changes to 0, cmdRightSignalLamp changes to 1, but none of the status* symbols of ② are updated. This shows that there is a malfunction in flagIndicatorLamp between ① and ②. Fault index #1 uses OBEA to mutate the assignment operator into a bitwise assignment operator (see Table 3). Therefore, if a new value arrives while a value already exists, the "|=" operation is performed and the function malfunctions.

This fault was found in the test that gave 12 consecutive commands over 8 s. Of the 3714 addresses allocated on N1 (see Figure 9, the node handling steering and forward functions), this test updated 187 memory addresses. Of the 187 updated memory addresses, the fault candidates that the developer has to check include only nine memory addresses (the 9 IDUAs of Figure 11). Moreover, instead of checking a total of 800 frames (generated during the 8 s), this test only checks 120 frames, using the IDUR of each of the 12 inputs. In summary, developers can find the fault by checking the values in the IDUR for 4.8% of the total memory symbols used. The fault localization ratio can be defined by Equation (7).
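Equation (7) itself is not reproduced in the extracted text; a reconstruction consistent with the quoted figures (9 of 187 updated addresses ≈ 4.8%, and 87 of 1019 ≈ 8.5% for the BCM below) would be the following, where the notation is assumed rather than quoted from the paper:

```latex
% Hedged reconstruction of Equation (7).
\[
\text{Fault localization ratio} \;=\;
\frac{\lvert \mathrm{FC} \rvert}{\lvert \text{updated memory addresses} \rvert}\times 100\%,
\qquad \text{e.g. } \frac{9}{187}\times 100\% \approx 4.8\%.
\]
```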
Finding the Fault by the Invalid Values (in the Case of the Fault Index #12)

This fault means that symbols required for the function operation are updated with an incorrect value. In Table 3, fault index #12 is a malfunction caused by an existing command state: if "Adv ESS" is activated while the turn signal is on, the function malfunctions. Figure 12, which shows the fault candidates of fault index #12, reveals the signals and the reasons for the failure. "Adv ESS" turns the "Emergency Signal" on when the car stops after the "ESS" has been activated. In the figure, ⓐ is the IDUA of the unit test of the "Emergency Signal", ⓑ is the IDUA of the unit test of the "ESS", and ⓒ is the IDUA of the integration test of the "Adv ESS". At this point, the three symbols related to the "Emergency Signal" of ⓐ are highlighted by the shaded bar. It can be assumed that a fault has occurred in a symbol associated with the "Emergency Signal". As in fault index #1, the front turn signal acted as an "Emergency Signal" when flagIndicatorLamp showed 96. However, after statusPropulsion is updated to 112 at frame number #224 (after the vehicle has stopped), flagIndicatorLamp is updated to the incorrect value of 192 at frame number #226.

Result

We applied the proposed method to the other 10 indexed faults in Table 3. As a result, we could find all the causes of the injected faults by reviewing the presented fault candidates. The results of the experiment are summarized in Table 5. As a result of the fault injection experiment, we could find the cause of each fault by checking only 5.77% of the updated symbols on average. The proposed method was applied not only to our test bed but also to a commercial BCM seatbelt warning test. The BCM used in the experiment runs an OSEK/VDX-based OS on an SPC5604B BOLERO [27] based ECU. The experiment dumped 29.4 kB of memory with the same 10 ms cycle as the system main task [9]. In this experiment, 1019 addresses were updated within the total 29.4 kB of allocated memory. Among them, 87 candidate symbols were localized, giving a localization rate of 8.54%. The previous method suggests fault candidates as a suspect region within the updated memory region rather than at the address level. Therefore, the reduction rates (size of the fault candidate region / size of the updated memory region) of the fault candidates were on average about 22.42% (2 kB/8.7 kB) and about 19.21% (4.7 kB/24 kB). This shows a significant performance improvement compared to the existing result [9].
In summary, the proposed method localized the fault candidates to an average of 5.77% of the updated symbols in the test bed and 8.54% in the commercial BCM. Experiments were performed on two types of ECUs running different OSs that conform to the OSEK/VDX standard. The MC9S12X is a 16-bit and the SPC5604 a 32-bit microcontroller, so they differ in core family [24,26]. Nevertheless, when we applied the proposed fault localization process, we could derive fault candidates that included the cause of the fault by analyzing memory usage. This shows that the proposed method is applicable to ECUs based on OSEK/VDX. In other words, the memory usage related to the failed operation can be presented as debugging information, by comparing it against the memory usage of the normal operation, to find faults occurring in the HiL test environment without a debugging tool or the source code. However, the proposed method has limitations for signals that carry continuously changing values, such as analog signals, for which an update does not have a significant meaning.

Conclusions

In this paper, we proposed a fault localization method for automotive software in an HiL environment that compares updated memory between a passed unit test and its failed integration test. The proposed method collects memory by dumping it at the main task cycle during an HiL test. By analyzing the updated information in the collected memory, we can identify the input-driven updated addresses (IDUA). The fault candidates are localized by comparing the memory-updated information of the failed integration test against the IDUA identified during the successful unit test. In the experiments, the fault candidates were localized to 5.77% in the test bed and 8.54% in the commercial BCM. This means that if 100 symbols are used in an integration test, the developer can debug by checking only 6 to 9 symbols.

The advantages of the proposed method are as follows. First, fault localization is possible in a black-box environment where the source code is difficult to obtain. Traditional fault localization methods based on source code are difficult to apply in a black-box environment, but the proposed method is applicable without source code. Second, debugging information can be obtained without using existing debugging tools. The proposed method dumps the memory at each main task cycle of the system and observes the state change over time. Therefore, it is possible to obtain information similar to that obtained by observing a system with existing debugging tools. Third, fault localization is possible without the background knowledge of a developer, because a failed signal can be found using the normal operating information as the criterion. The proposed method utilizes the unit test to obtain the memory usage information of the normal operation and then uses it as the criterion to localize faults. Therefore, our proposed method can reduce the debugging time invested by developers by providing fault candidates based on memory-updated information, without the source code or existing debugging tools.

Our method has a limitation for signals that change continuously, such as analog signals, for which an update is less meaningful. However, it is powerful for discrete signals such as digital I/O. We also believe that the highlighting in the table that presents the fault candidates conveys the information visually. Therefore, we plan a further study to locate faults through the visualization of MUI tables.
Figure 1. Development and testing processes of ECU/SW by OEM. ECU/SW: electronic control unit/automotive software; OEM: original equipment manufacturers; HiL: Hardware-in-the-Loop.

Figure 3. Process of fault localization through analysis of updated memory.

Figure 5. Memory section table and symbol list: (a) Memory section table; (b) Symbol list.

Algorithm 1. The Algorithm for Memory Updated Information
INPUT: value set, symbol list, frame range (R)
OUTPUT: MUI, memory updated information
1: val(α,k) ≡ the value of the address α at k-frame
2: MU(α,k) ≡ the value of the address α changed in k-frame
3: MUF(α,R) ≡ the frequency of the MU(α,k) in R range
4: sym(α) ≡ symbol name of the address α in the symbol list
5: MUI(α,R) ≡ the memory updated information of the address α in R range
6: FOR each address α DO
   Update Decision:
7:   FOR each frame k of frame range (R) DO
8:     IF value change of address α in frame k THEN
9:       MU(α,k) ← 1: updated (true)
10:    ELSE
11:      MU(α,k) ← 0: non-updated (false)
12:  END FOR
   Consist Memory Updated Information:
13:  MUF(α,R) ← sum of MU in range R
14:  sym(α) ← find α in the symbol list for displaying
15:  MUI(α,R) ← {address α, memory updated frequency, value set, symbol name}
16: END FOR

• Number of Inputs (N_in) - the number of inputs, including initialization, in the test script
• Input interval (T_in) - time interval between inputs
• Test condition - input values, including initialization of the input signal

Figure 7. Identification of the Input-Driven Updated Range.

Algorithm 2. The Algorithm for Input Driven Updated Range
INPUT: N_in, number of inputs; T_in, input interval
OUTPUT: IDUA, Input Driven Updated Address; IDUR, Input Driven Updated Range
1: MUF(α,R) ≡ the frequency of the MU(α,k) in R range
2: MU(α,k) ≡ the value of the address α changed in k-frame
   Set Input Driven Updated Address:
3: FOR each address α DO

Figure 8. Example of Finding the Faults in the "Adv Emergency Stop Signal".
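As a companion to Algorithm 1 listed above, a minimal Python sketch of building the memory-updated information; the frame and symbol-table layouts are illustrative assumptions, not the paper's data format.

```python
# Sketch of Algorithm 1: build memory-updated information (MUI) per address,
# assuming frames is a list of {address: value} snapshots and symbols maps
# addresses to symbol names.
def build_mui(frames, symbols):
    mui = {}
    addresses = frames[0].keys() if frames else []
    for addr in addresses:
        values = [frame[addr] for frame in frames]             # val(addr, k)
        mu = [1 if k > 0 and values[k] != values[k - 1] else 0  # MU(addr, k)
              for k in range(len(values))]
        mui[addr] = {
            "address": addr,
            "muf": sum(mu),                                     # MUF(addr, R)
            "values": values,
            "symbol": symbols.get(addr, "<unknown>"),
        }
    return mui

# Example with two dumped frames (addresses and values are illustrative only):
frames = [{0x1000: 0, 0x1004: 5}, {0x1000: 32, 0x1004: 5}]
print(build_mui(frames, {0x1000: "mCanLampModeSet"}))
```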
By analyzing the test script in this manner, it is possible to identify the function being tested by the signal name. The data that can be acquired through the test are summarized in Table 1. The test case has the I/O information of the function to be inspected, and the test script includes a series of test cases.

Table 1. Example of test information.

Table 2. Mutation operator selection for fault injection.

Table 3. List of fault injection by the selected mutation operators.

Table 4. List of mutation operators.
2019-04-16T13:29:07.930Z
2018-11-15T00:00:00.000
{ "year": 2018, "sha1": "db37631a5dd57c89cc4e11406629eba34291f4c2", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2076-3417/8/11/2260/pdf?version=1542291532", "oa_status": "GOLD", "pdf_src": "ScienceParseMerged", "pdf_hash": "db37631a5dd57c89cc4e11406629eba34291f4c2", "s2fieldsofstudy": [ "Engineering", "Computer Science" ], "extfieldsofstudy": [ "Engineering" ] }
24309846
pes2o/s2orc
v3-fos-license
Comparing plasmonic and dielectric gratings for absorption enhancement in thin-film organic solar cells

We theoretically investigate and compare the influence of square silver gratings and one-dimensional photonic crystal (1D PC) based nanostructures on the light absorption of organic solar cells with a thin active layer. We show that, by integrating the grating inside the active layer, excited localized surface plasmon modes may cause strong field enhancement at the interface between the grating and the active layer, which results in broadband absorption enhancement of up to 23.4%. Apart from using silver gratings, we show that patterning a 1D PC on top of the device may also result in a comparable broadband absorption enhancement of 18.9%. The enhancement is due to light scattering of the 1D PC, coupling the incoming light into 1D PC Bloch and surface plasmon resonance modes. ©2011 Optical Society of America

OCIS codes: (350.6050) Solar energy; (240.6680) Surface plasmons; (040.5350) Photovoltaic; (250.5403) Plasmonics; (050.2770) Gratings.

References and links
1. H. Hoppe and N. S. Sariciftci, "Organic solar cells: an overview," J. Mater. Res. 19(07), 1924–1945 (2004).
2. M. Agrawal and P. Peumans, "Broadband optical absorption enhancement through coherent light trapping in thin-film photovoltaic cells," Opt. Express 16(8), 5385–5396 (2008).
3. S. H. Park, A. Roy, S. Beaupre, S. Cho, N. Coates, J. S. Moon, D. Moses, M. Leclerc, K. Lee, and A. J. Heeger, "Bulk heterojunction solar cells with internal quantum efficiency approaching 100%," Nat. Photonics 3(5), 297–302 (2009).
4. G. F. Burkhard, E. T. Hoke, S. R. Scully, and M. D. McGehee, "Incomplete exciton harvesting from fullerenes in bulk heterojunction solar cells," Nano Lett. 9(12), 4037–4041 (2009).
5. T. Soga, Nanostructured Materials for Solar Energy Conversion (Elsevier Science, Amsterdam, 2006).
6. H. A. Atwater and A. Polman, "Plasmonics for improved photovoltaic devices," Nat. Mater. 9(3), 205–213 (2010).
7. G. E. Jonsson, H. Fredriksson, R. Sellappan, and D. Chakarov, "Nanostructures for enhanced light absorption in solar energy devices," Int. J. Photoenergy 2011, 939807 (2011).
8. S. S. Kim, S. I. Na, J. Jo, D. Y. Kim, and Y. C. Nah, "Plasmon enhanced performance of organic solar cells using electrodeposited Ag nanoparticles," Appl. Phys. Lett. 93(7), 073307 (2008).
9. Y. A. Akimov, W. S. Koh, and K. Ostrikov, "Enhancement of optical absorption in thin-film solar cells through the excitation of higher-order nanoparticle plasmon modes," Opt. Express 17(12), 10195–10205 (2009).
10. H. Shen, P. Bienstman, and B. Maes, "Plasmonic absorption enhancement in organic solar cells with thin active layers," J. Appl. Phys. 106(7), 073109 (2009).
11. A. Abass, H. Shen, P. Bienstman, and B. Maes, "Angle insensitive enhancement of organic solar cells using metallic gratings," J. Appl. Phys. 109(2), 023111 (2011).
12. C. Min, J. Li, G. Veronis, J. Y. Lee, S. Fan, and P. Peumans, "Enhancement of optical absorption in thin-film organic solar cells through the excitation of plasmonic modes in metallic gratings," Appl. Phys. Lett. 96(13), 133302 (2010).
13. R. A. Pala, J. White, E. Barnard, J. Liu, and M. L. Brongersma, "Design of plasmonic thin-film solar cells with broadband absorption enhancements," Adv. Mater. (Deerfield Beach, Fla.) 21(34), 3504–3509 (2009).
14. C. Heine and R. H. Morf, "Submicrometer gratings for solar energy applications," Appl. Opt. 34(14), 2476–2482 (1995).
15. M. Niggemann, M. Glatthaar, A. Gombert, A. Hinsch, and V. Wittwer, "Diffraction gratings and buried nanoelectrodes - architectures for organic solar cells," Thin Solid Films 451–452, 619–623 (2004).
16. M. Kroll, S. Fahr, C. Helgert, C. Rockstuhl, F. Lederer, and T. Pertsch, "Employing dielectric diffractive structures in solar cells - a numerical study," Phys. Status Solidi A 205(12), 2777–2795 (2008).
17. G. F. Burkhard, E. T. Hoke, and M. D. McGehee, "Accounting for interference, scattering, and electrode absorption to make accurate internal quantum efficiency measurements in organic and other thin solar cells," Adv. Mater. (Deerfield Beach, Fla.) 22(30), 3293–3297 (2010).
18. Y. Park, E. Drouard, O. El Daif, X. Letartre, P. Viktorovitch, A. Fave, A. Kaminski, M. Lemiti, and C. Seassal, "Absorption enhancement using photonic crystals for silicon thin film solar cells," Opt. Express 17(16), 14312–14321 (2009).
19. O. El Daif, E. Drouard, G. Gomard, A. Kaminski, A. Fave, M. Lemiti, S. Ahn, S. Kim, P. Roca I Cabarrocas, H. Jeon, and C. Seassal, "Absorbing one-dimensional planar photonic crystal for amorphous silicon solar cell," Opt. Express 18(S3), A293–A299 (2010).
20. P. Bermel, C. Luo, L. Zeng, L. C. Kimerling, and J. D. Joannopoulos, "Improving thin-film crystalline silicon solar cell efficiencies with photonic crystals," Opt. Express 15(25), 16986–17000 (2007).
21. D. Duché, E. Drouard, J. J. Simon, L. Escoubas, P. Torchio, J. Le Rouzo, and S. Vedraine, "Light harvesting in organic solar cells," Sol. Energy Mater. Sol. Cells 95, S18–S25 (2011).
22. COMSOL, www.comsol.com

Introduction

Thin-film organic solar cells (OSCs) have attracted intensive research interest due to their potential for low-cost photovoltaic devices [1]. However, the development of these devices is hampered by their low efficiency, since the active layer thickness must be smaller than the exciton diffusion length [2], which limits the photon absorption rate. To overcome the diffusion length problem, the concept of the bulk heterojunction (BHJ) was introduced [3]. However, even with a BHJ the thickness is still restricted to thin films on the order of 200 nm for optimal electronic properties [4]. Above these thicknesses, the energy conversion efficiency drops since free-carrier recombination becomes significant [5].
Several techniques have been introduced to improve the optical absorption of OSCs [6,7]. Many involve the use of metallic nanostructures, which cause the excitation of surface plasmon modes. These modes can offer several ways to enhance the absorption while reducing the bulk volume and the thickness of the active layer. One of these is resonant scattering, caused by the scattering of the incoming light on metallic nanoparticles, which leads to light being coupled and trapped in the absorbing layer [8]. Apart from being used as scatterers, metallic nanoparticles can act as optical antennas that convert incident light into localized surface plasmon modes, resulting in a strong field enhancement around the particle or a near-field enhancement due to plasmonic near-field coupling between particles, both of which enhance the absorption efficiency [9,10]. In addition, introducing periodic structures has been shown to be a very promising way to efficiently trap light and enhance the absorption efficiency [11-16]. Enabling the coupling of sunlight to plasmonic guided modes and/or exciting localized surface plasmon modes can be done, for example, by simply engineering the metallic back contact, which has been demonstrated to be an effective way to boost the optical absorption [11]. Recently, much attention has been given to plasmonic gratings embedded on top of the active layer [12,13]. Min et al. [12] proposed a design in which the grating structure is embedded in the transparent electrode with a very thin active layer of around 15 nm, which may cause big challenges in fabrication, but could lead to a large exciton collection efficiency. Currently, the thinnest thin-film OSCs that have been successfully fabricated have an active layer thickness of around 48-60 nm [17]. Their absorption rate, however, is still quite limited.

In this work, we employ a silver (Ag) periodic grating placed inside the active layer to improve the optical absorption of a thin organic photovoltaic cell (60 nm active layer thickness). Silver is chosen since it allows for low metal absorption loss and it also has a beneficial surface plasmon resonance wavelength suitable for enhancing a P3HT:PCBM organic solar cell. With the grating integrated inside the active layer, a strong field enhancement was observed for surface plasmon modes excited at the interface between the grating and the active layer. In addition, there is a resonant near-field enhancement due to the excitation of propagating surface plasmon modes on the metal back contact, enabled via coupling between the localized surface plasmon mode of the grating and the surface plasmon mode of the metal back contact. These coupled plasmon modes contribute to broadband absorption enhancement in the active layer.
Apart from using plasmonic nanostructures to enhance the performance of thin-film solar cells, patterning a 1D or 2D PC on the cell may also improve the absorption efficiency. Recent advances in using these PC based structures for enhanced absorption have been reported for thin-film solar cells [18-21]. However, placing the active solar material directly in a PC structure, as proposed by these groups, may lead to electronic problems, as it would mean more free surfaces which can act as charge carrier traps. In this work, we consider using a 1D PC on top of a photovoltaic cell, instead of directly inside the active layer, and compare it with a conventional plasmonic grating structure. It is interesting to see how far the absorption enhancement given by dielectric structures can compete with that of plasmonic structures, which rely on near-field enhancement. This near-field enhancement indeed leads to increased absorption, but it also implies that a major part of the absorption occurs close to the metal surface, which makes the generated charge carriers more susceptible to many loss mechanisms associated with the metal and does not necessarily translate into conversion to electric energy.

In Section 2, we describe the solar cell structure and the modeling method. In Section 3, the absorption enhancement in thin-film organic solar cells by plasmonic and dielectric gratings is presented, followed by conclusions in Section 4.

Solar cell structure and modeling method

The sketch of the reference device structure is depicted in Fig. 1. The transparent anode is made of 120 nm thick indium tin oxide (ITO) and deposited on a 40 nm thick highly conductive hole transport layer, poly(3,4-ethylenedioxythiophene):poly(styrenesulfonate) (PEDOT:PSS). The latter is a polymer with good thermal and chemical stability, and good flexibility. The commonly used active layer BHJ material consists of the electron donor poly(3-hexylthiophene) (P3HT) and the electron acceptor (6,6)-phenyl-C61-butyric-acid-methyl ester (PCBM). Results for a 60 nm thick polymer layer (P3HT:PCBM) with 1:1 weight ratio are presented here, in contact with the cathode made of silver (Ag). All material properties can be found in [10,11].

There are many numerical techniques that have been successfully employed to calculate the light absorption of the active layer of solar cells, including finite-difference time-domain (FDTD) methods, finite-element methods (FEM), the transfer matrix model (TMM) and rigorous coupled wave analysis (RCWA). In this work, the calculation of the light absorption is carried out by two-dimensional (2D) FEM, as implemented in the COMSOL Multiphysics software package [22]. We model the illumination of the device as incident plane waves under TM polarization with wavelengths of 300-800 nm, which is the region of interest for the P3HT:PCBM material. Periodic boundary conditions are set at the left and right boundaries, while perfectly matched layer (PML) absorbing boundary conditions are used at the top and bottom boundaries of the computational domain. The absorption in the active layer is calculated by integrating the divergence of the Poynting vector (power flow), which is then normalized with the input power.
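A small sketch of the absorption calculation described above, written here in the equivalent local-dissipation form (the time-averaged divergence of the Poynting vector in a lossy dielectric equals the ohmic dissipation density ½ωε0ε''|E|²); the grid, material value and input power below are illustrative placeholders, not simulation data.

```python
import numpy as np

EPS0 = 8.854e-12  # vacuum permittivity [F/m]
C0 = 3.0e8        # speed of light [m/s]

def active_layer_absorption(E_field, eps_imag, wavelength, cell_area, p_in):
    """Fraction of input power absorbed in the active layer (2D, per unit depth).

    E_field: |E| on a uniform 2D grid covering the active layer [V/m]
    eps_imag: imaginary part of the layer permittivity at this wavelength
    cell_area: area of one grid cell [m^2]; p_in: input power per unit depth [W/m]
    """
    omega = 2 * np.pi * C0 / wavelength
    dissipation = 0.5 * omega * EPS0 * eps_imag * np.abs(E_field) ** 2  # W/m^3
    absorbed = dissipation.sum() * cell_area                             # W/m
    return absorbed / p_in

# Illustrative call with dummy numbers:
E = np.full((50, 200), 2e5)
print(active_layer_absorption(E, 0.5, 550e-9, (1e-9) ** 2, 1.0))
```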
Enhanced absorption by square Ag gratings

It has been demonstrated that the use of metallic gratings on top of the solar cell (on top of the contacts), or integrated inside the PEDOT electrode, or even partially in the active layer (half the grating inside the active layer and half in the PEDOT electrode), can result in optical field enhancement and can lead to larger optical absorption.

Putting gratings on top of the solar cell can result in strong scattering of light and trap light effectively in the active layer in the wavelength range 400-600 nm. Efficient scattering from the top surface, however, requires larger structures, which are not easily integrated in organic device structures. Integrating gratings in the PEDOT and partially inside the active layer can generate the excitation of localized surface plasmon resonances (LSPRs) to obtain near-field enhancement and result in enhanced light absorption. In this work, we propose to integrate the optical gratings further inside the active layer. Apart from offering strong light scattering and effective light trapping in the active layer, the near-field enhancement by the plasmonic structure is utilized to the fullest when the metal is embedded in the active material, and we show effective excitation of LSPR modes at the interface between the grating and the active layer. In addition, this results in a resonant near-field enhancement due to the excitation of propagating surface plasmon modes (SPRs) on the metal back contact, enabled via coupling between the LSPR of the grating and the SPR of the metal back contact. These coupled plasmon modes contribute to broadband absorption enhancement in the active layer.

The sketch of the proposed device structure is shown in Fig. 2. This structure can cause the excitation of localized surface plasmon resonance (LSPR) modes at the interface between the grating and the active layer, leading to strong field enhancement. In addition, these LSPR modes can couple to SP modes occurring at the interface between the active layer and the Ag back contact, which results in absorption enhancement in the active layer. We now look for the optimal dimension W and periodicity P of the grating to ensure the highest absorption enhancement. Here, the enhancement factor is defined as η(W, P) = ∫[A_grating(λ, W, P) − A_0(λ)] dλ / ∫ A_0(λ) dλ, where A_grating and A_0 are the absorption in the active layer with and without the grating as a function of wavelength, respectively. Figure 3 depicts the absorption enhancement for various sizes of the square grating and the periodicity. It is found that a maximum enhancement of up to 23.4% is obtained at the optimal size of 46 nm and periodicity of 350 nm. By calculating the absolute absorption of this optimum structure, we found that the active layer absorbs 58.7% of the AM 1.5G spectrum in the wavelength range 300-800 nm.
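Assuming the integral form of the enhancement factor given above, the following is a short sketch of its evaluation from two absorption spectra; the spectra below are illustrative placeholders rather than simulation results, and an AM 1.5G weighting (as used for the absolute absorption figures) could be added inside both integrals if desired.

```python
import numpy as np

def enhancement_factor(wavelengths, a_grating, a_flat):
    """Relative increase of the wavelength-integrated active-layer absorption."""
    gain = np.trapz(a_grating - a_flat, wavelengths)
    base = np.trapz(a_flat, wavelengths)
    return gain / base

# Illustrative spectra on a 300-800 nm grid: a flat 40% absorber vs. one
# enhanced to 50% between 450 and 650 nm.
wl = np.linspace(300e-9, 800e-9, 501)
a_flat = np.full_like(wl, 0.40)
a_grating = np.where((wl > 450e-9) & (wl < 650e-9), 0.50, 0.40)
print(f"enhancement = {enhancement_factor(wl, a_grating, a_flat):.1%}")
```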
The absorption spectra of the OSC with this optimal grating and the flat cell are shown in Fig. 4, where a broadband absorption enhancement is observed. The enhancement is due to the excitation of localized surface plasmon modes at the interface between the Ag grating and the active layer. Figure 5 shows contour plots of the absorption profiles, capped at a certain maximum value for clarity. It is observed that, at the wavelength of 350 nm, light is mostly absorbed by the Ag grating, which causes less absorption than in the flat cell. However, this only results in a negligible loss, as the solar energy for wavelengths below 350 nm is already limited. In the more relevant range, 410-650 nm, the localized surface plasmon modes excited at the interface between the grating and the active layer cause strong field enhancement around the interface. The field is mainly distributed in the active layer rather than inside the metal as we go to longer wavelengths. This results in an improvement of the absorption efficiency of the active layer, particularly around the edges of the metal grating. Furthermore, Fig. 4 shows a peak in the absorption spectrum at λ = 750 nm. This absorption resonance is sensitive to the distance between the grating and the Ag back contact (see Fig. 6). Upon investigating the field pattern at these peaks for various sizes of the grating (Fig. 6), we see that the SPR mode is excited at the interface between the active layer and the Ag back contact. This mode is coupled to the LSPR mode of the grating, resulting in a strong near-field enhancement. The inset of Fig. 6 shows electric field curves along the edge of the optimal grating (W = 46 nm, P = 350 nm). A very strong enhancement compared to the flat cell is observed around the corner of the grating. In addition, from Fig. 6 we can see that when W increases, the resonance wavelength redshifts. Unsurprisingly, smaller grating-electrode distances lead to larger field enhancements. However, the resulting absorption peak is shifted far away from the wavelength region of interest as the distance decreases. Therefore a tradeoff is observed, and the optimal width is W = 48 nm. Gratings with W ≥ 48 nm have stronger near-field enhancements, but the resonance peak is shifted far away from the wavelength of interest.

Enhanced absorption by 1D PC based structures

Besides using metallic nanostructures integrated in the device to boost the absorption, patterning 1D or 2D PCs on the solar cell may also result in absorption enhancement in thin-film solar cells [18-21]. In this case, the incident light can be coupled into Bloch modes of the PC, with increased photon travelling time in the active material. In this section, we simulate a 1D PC pattern of the ITO and PEDOT layers, see Fig. 7, and investigate how it affects the absorption of the OSC. In this case, special care is needed in designing the PC to enable light coupling into Bloch modes in the desired wavelength region. The geometrical parameters of the PC should be scanned to find an optimal configuration. Using the same definition of the enhancement factor, we find the optimal structure defined by W (width of ITO and PEDOT) and P (periodicity), as shown in Fig. 8. The maximum enhancement factor with this structure is limited to 18.9%, corresponding to the optimal W = 180 nm and P = 375 nm. The calculated absolute total absorption of the AM 1.5G solar spectrum in the wavelength range 300-800 nm for this dielectric grating structure is around 56.14%.

The absorption spectra of the OSC with this optimal 1D PC and the flat cell are depicted in Fig. 4.
Significant broadband absorption enhancement via scattering and interference effects is achieved with the 1D PC structure, similar to the plasmonic geometry considered above. At certain wavelengths, e.g. 370 nm or 520 nm, most of the incoming light is focused into the active layer, as seen in Fig. 9, producing enhanced absorption. Absorption enhancement does not happen for all wavelengths, though. For instance, at the wavelength of 400 nm, less absorption is observed since only a small part of the incoming light is scattered into the active layer and most of it is reflected back. This periodic dielectric structure also excites an SPR mode at the interface of the active layer and the Ag back contact at λ = 670 nm. Examining the field pattern of this mode (Fig. 10), we see strong field enhancement confined close to the metal surface, typical of a surface plasmon mode. This resonant mode also strongly influences the enhancement factor. The corresponding resonant wavelength shifts linearly with an increase of the periodicity P, as seen in Fig. 11. Care should be taken to make sure that the resonance mode still resides within the wavelength range of interest. Besides the enhancement due to light scattering and SPR mode excitation, another resonance peak is observed at 440 nm wavelength. By modal analysis of the structure, we found a Bloch mode at k = 0 existing along the plane of the structure at 446 nm, which is close to the peak in the absorption spectrum. The field profile of the mode can be seen in Figs. 12(a) and 12(b), which show the amplitude of the electric and magnetic fields, respectively. Significant absorption enhancement is obtained by exciting this mode, although the field profile is mainly distributed above the active absorbing solar cell layer. Figure 12(c) shows the electric field distribution in the case of excitation at normal incidence at 440 nm wavelength. We see a correspondence between the profiles in Figs. 12(a) and 12(c), indicating that the absorption peak is really due to the excitation of this mode. Further optimization of the field profile, to shift the distribution further into the active layer, can still be performed by tuning the thickness of the ITO and PEDOT layers. In addition, the field profile significantly extends into the air layer above the structure, which is expected as the mode propagates through air in certain regions of the periodic structure. This could be reduced by filling the gaps with some dielectric substrate. The use of a periodic dielectric structure may be more advantageous than the plasmonic geometry considered above, as the absorption enhancement is more evenly distributed throughout the active material. In metallic periodic structures the absorption enhancement, especially by LSPRs, tends to be concentrated close to the metal, which increases the possibility that the absorbed photons will not contribute to current generation. This is due to either improper passivation of surface states on the metal interface or other exciton quenching effects that depend on the distance from a metal interface. If the absorption enhancement by the metal structure is large enough, these loss mechanisms may not be so detrimental. We show here, however, the possibility of achieving comparable, although still smaller, absorption enhancement with a periodic dielectric structure. More studies need to be done, but the comparison here indicates that we may not need to rely on metal nanostructures to achieve a good absorption enhancement.
Conclusions

We have demonstrated the possibility of achieving absorption enhancement in thin-film OSCs by integrating Ag gratings inside the thin active layer or, alternatively, by patterning a 1D PC on top of the cell. Ag gratings cause the excitation of LSPR modes at the grating surface and the coupling of these modes with SPR modes at the back contact surface. 1D PC based structures achieve enhancement due to scattering and light coupling into Bloch modes, and the excitation of SPR modes at the back contact surface. The realized optimal absorption enhancement in the case of Ag gratings is about 23.4%, and in the case of the 1D PC we found about 18.9%. Although the enhancement factor from the 1D PC is smaller than that of the Ag grating, the fabrication of the former structure may be much easier and still give a significant enhancement.

Fig. 2. Cross-section of OSC with Ag square grating integrated inside the active layer.

Fig. 3. Absorption enhancement in the active layer for various sizes of the Ag square grating with width (W) and periodicity (P).

Fig. 4. Absorption in the active layer for the bare OSC (dotted line), the cell with Ag square grating (red solid line), and 1D PC (blue solid line).

Fig. 8. Absorption enhancement in the active layer varying the ITO width (W) and periodicity (P).

Fig. 11. Absorption in the active layer of a flat cell and of a 180 nm width 1D PC cell, for various periodicities.

Fig. 12. (a) Electric and (b) magnetic field profiles of a guided Bloch mode at 446 nm. (c) Electric field profile in the case of a normal incidence plane wave upon the structure at wavelength λ = 440 nm.
2017-06-18T14:31:52.294Z
2012-01-02T00:00:00.000
{ "year": 2012, "sha1": "6ede116968cb60226e78455a6d2e3c344093ecbd", "oa_license": "CCBY", "oa_url": "https://doi.org/10.1364/oe.20.000a39", "oa_status": "GOLD", "pdf_src": "ScienceParseMerged", "pdf_hash": "6ede116968cb60226e78455a6d2e3c344093ecbd", "s2fieldsofstudy": [ "Engineering", "Materials Science", "Physics" ], "extfieldsofstudy": [ "Materials Science", "Medicine" ] }
247838749
pes2o/s2orc
v3-fos-license
Odour enhances the sense of presence in a virtual reality environment Virtual reality (VR) headsets provide immersive audio-visual experiences for users, but usually neglect to provide olfactory cues that can provide additional information about our environment in the real world. This paper examines whether the introduction of smells into the VR environment enhances users’ experience, including their sense of presence through collection of both psychological and physiological measures. Using precise odour administration with an olfactometer, study participants were exposed to smells while they were immersed in the popular PlayStation VR game “Resident Evil 7”. A within-subject study design was undertaken where participants (n = 22) walked-through the same VR environment twice, with or without the introduction of associated congruent odour stimuli. Directly after each gameplay, participants completed a questionnaire to determine their sense of presence from the overall gameplay and their sense of immersion in each of the virtual scenes. Additionally, physiological measurements (heart rate, body temperature and skin electrodermal activity) were collected from participants (n = 11) for each gameplay. The results showed the addition of odours significantly increased participants’ sense of spatial presence in the VR environment compared to VR with no odour. Participants also rated the realism of VR experience with odour higher compared to no odour, however odour addition did not result in change in emotional state of participants (arousal, pleasure, dominance). Further, the participants’ physiological responses were impacted by the addition of odour. Odour mediated physiological changes were dependent on whether the VR environment was novel, as the effect of odour on physiological response was lost when participants experienced the aroma on the second gameplay. Overall, the results indicate the addition of odours to a VR environment had a significant effect on both the psychological and physiological experience showing the addition of smell enhanced the VR environment. The incorporation of odours to VR environments presents an opportunity to create a more immersive experience to increase a person’s presence within a VR environment. In addition to gaming, the results have broader applications for virtual training environments and virtual reality exposure therapy. Introduction Smell has been a vital part of human evolution and plays an essential role in our day-to-day lives. The ability to sense odours in the environment affects our day to day decisions, enabling us to judge the edibility of particular items of food, avoid environmental hazards and communicate with others [1]. A smell or odour can induce strong emotional feelings, alter behaviour and can act as a stimulus to the retrieval of autobiographical memory [2][3][4][5]. However, in the hierarchy of human senses, smell or olfaction is often underappreciated and inappropriately considered inferior when compared with olfactory performance of other mammals and to the other human sense modalities [6]. Nonetheless, the importance of odour in our everyday lives is recognized by the food, cosmetics and cleaning product sectors, for example, who invest considerable time and resources in the creation of fragrances that best suit their product-and affect the customers' perception of its desirability [7,8]. 
Virtual reality (VR) aims to create an immersive experience which transports the user to another world with established terminology describing various VR aspects. The term Presence in VR means how much a user believes that he/she is inside a virtual world. It is generally defined as a user's subjective sensation of "being there" in a scene depicted by a medium [9]. The term Immersion is used to describe an objective measure on how good the hardware and system technology is (e.g. video resolution, audio sampling rates, etc), and how many senses are engaged (e.g. vision, hearing) [10]. The term Reality or Realism is used to determine how closely the virtual world replicates a real-world counterpart. So usually immersive hardware and realistic media would be thought to increase the levels of presence in a virtual experience, although the link is not always so simple. As technology has evolved, the level of immersion in VR environments has increased, with huge improvements in visual and aural fidelity, and in the accuracy of head and gesture tracking. The development of head tracking with six degrees of freedom means that users are able to look at any part of the virtual world; spatial audio techniques enable the generation of realistic, immersive sounds that can be linked to simulated objects as they move around a VR scene; and, tactile gamepad controllers and other haptic devices provide mechanisms for simulating touch. All in all, there is a clear trend towards the creation of completely immersive and multisensory experiences that enhance a consumer presence in a VR environment. However, despite the recognition of the importance of smell in many aspects of our daily lives, the use of smells in VR remains a novelty and relatively under explored in comparison to sound and vision. Several publications have assessed the use of smell in a VR environment. Nakamoto & Yoshikawa [11] delivered scents designed to match scenes in a short animated film, and observed that audiences found scenes that featured transitions between contrasting smells were most 'impressive'. Jones et al. [12], exploring the potential of odour-enhanced virtual environments for military training applications, examined the experiences of 30 participants who played a combat computer game while exposed to three different scents-two of which were congruent with the game scenes (ocean smells on the beach, and a 'musty' smell in a fort). They found that in this context, the additional odours did not measurably enhance participants' sense of immersion. Munyan et al. [13] assessed the combination of olfactory stimuli on presence in anxiety ridden environments to help with exposure therapy where users performed a simple VR task revolving around losing their keys in a fairground. Results indicated that while participants felt an increase in presence with the smells, there was no measurable increase in levels of anxiety. Ischer et al. [14] developed a 3D immersive environment with advanced control of odour delivery, however did not assess the impact of odour on the VR experience. Baus and Bouchard [15], explored the effects of pleasant and unpleasant smells on participants' sense of presence in a virtual environment. They exposed VR participants to pleasant (apple/cinnamon), unpleasant (urine) and ambient scents. While the scents were not directly related to the virtual environment, users that received smells reported an increase in their sense of presence. 
Oddly, those exposed to the urine (negative) smell described a higher sense of presence than those exposed to the pleasant smell. They speculate that this may have been because the urine smell was perceived as being stronger than the pleasant smell. PLOS ONE Findings from a subsequent study [16] indicated that when the scents matched the scene (e.g. when the 'pleasant' scent of cinnamon apple pie was presented in a virtual reality scene featuring two pies on a bench) the ability of participants to detect the odour increased. However, in the later study, the overall sense of presence, as measured by the Independent Television Commission Sense of Presence Inventory [ITC-SOPI] questionnaire [17], did not increase significantly. Taken together, these studies suggest that odours are more readily perceived when they are matched to visually concordant scents, and that 'unpleasant' odours may have a greater effect on the sense of presence. Further, the previous studies generally have limitations with one or more of the following: (i) methodological limitations in study design or development of smells, (ii) crude or simple VR environments (i.e. poor reality) and, (iii) unsophisticated mechanisms to deliver odours to a participant (without the ability to switch rapidly between odours) and therefore, not replicating the reality of how smells are encountered/perceived in the real world. The aim of the current paper is to examine whether the introduction of smells into the VR environment enhances users' sense of presence. The current study assessed the impact of odour addition on a participants experience in a VR environment by collecting responses using (i) questionnaires on presence, realism and emotion and (ii) to determine if changes in physiological measures were evident (heart rate, body temperature and electrodermal activity (EDA)). While both of these methods have been previously used in VR research, use of physiological measures to assess increased presence has yielded mixed results to date [18]. Given previous research, we propose that unpleasant odours, presented with concordant visuals in VR should increase the sense of presence. We deliberately chose a VR experience that featured scenes with strong-smelling artefacts (rotten food, etc) to maximise the likelihood of increasing users' sense of presence. The main outcome measure and hypothesis was the odour delivered in a VR environment would increase a participant's sense of presence. A secondary measure was that due to the fear and scare elements of the VR experience, we would observe a heightened physiological response when the odour was delivered compared to the same VR environment without odour. Finally, the methods employed in the current research overcome the limitations of prior research through the use of both a well-developed VR environment and sophisticated delivery of odours to better reflect how smells are encountered in the real world. Understanding the effect of odours on a person in a VR environment will help designers decide whether and when to integrate olfactory cues for different applications (e.g. virtual games, treatment of post-traumatic stress disorder, training, etc). Overview In order to explore the impact of adding simulated odours to the VR user experience, we conducted a study in which participants played a commercial VR game enhanced with the controlled addition of synthesized odours. The simulated odours used were developed to be congruent with the VR environment experienced by the participant. 
A PlayStation VR headset playing the game Resident Evil 7 was used to create a controlled multisensory experience. The game was augmented with smells generated by an olfactometer, which delivered odour volatiles via a soft plastic tube fixed underneath a participant's nose to enable free rotation of the head. The olfactometer ensured real-time, precise, computer-controlled odour was delivered to the participant. Until now, only a handful of attempts have been made to integrate odours into the VR field using an olfactometer [19]. Participants The research conducted was approved by both CSIRO human ethics committee (2019_031_LR) and University of Technology, Sydney, Human Research Ethics Committee (2013000135 2019-3) and written informed consent was obtained from each participant prior to completing the experiment. Participants were recruited locally through email adverts (University and CSIRO) and social media posts. Inclusion criteria were for participants between the age of 18 to 60 years, free from current upper respiratory tract infection and free of known fragrance, smell allergies or sensitivity reactions. Additionally, it was recommended that individuals who were sensitive to unpleasant images to not participate in the study. Participants were provided with a gift card to cover time and travel costs associated with completing the experiment. Participants attended one session where they completed the same section of the Resident Evil VR game twice, once with and without simulated odours. Participants were randomly assigned whether they received the odour in the first or second gameplay. After the first VR gameplay, participants were given a 15-minute break and after each of VR gameplay participants completed a questionnaire about their experience. VR and experimental set-up The experimental setup is shown in Fig 1 and split into different zones. Zone 1 was used to brief participants and complete post-gameplay questionnaires. The user game play area (Zone 2) was in a dedicated small room where a participant sat on a chair and contained: • The olfactometer mixing and delivery head, connection tubing to a participant's nose and an extraction fan to remove excess odour. • A standard PlayStation 4 console, VR headset and wireless controller that were used to run the VR environment and control the game's character. • Playstation camera used to capture the PlayStation VR headset motion. • A subset of participants (n = 11) wore an Empatica E4 Wristband to collect physiological data on heart rate, electrodermal activity (EDA), and body temperature. Zone 3 contained the odour solutions and olfactometer that controlled delivery of the smells to the mixing and delivery head in the user gameplay area. The monitoring and supervision area (Zone 4) contained: • A monitor connected to the PlayStation console which simultaneously displayed a 2D version of the VR environment the participant was experiencing. • A mixing board with a microphone that the experimentalist could talk to the participant through earphones, allowing directions to be communicated to help navigate through the VR environment. • A monitor connected to the olfactometer with a custom-built interface (software program) for the experimentalist to control the timing of delivery of the odour mixtures. • A GoPro camera was positioned to record the monitoring and supervision area (2D monitor of VR environment and olfactometer controls) to enable the viewing of each VR at a later time if required (i.e. resolve discrepancies in data). 
Gameplay Prior to playing the game, participants were provided instructions on how to navigate through the game and completed a short tutorial in a VR environment. The VR environment was provided by the game Resident Evil 7 (Capcom). Although classified as a survival horror game, the gameplay used in the experiment was based on exploration and did not incorporate the more extreme horror components of the game (e.g. use of weapons, fighting, reanimated monsters). Use of an established and well-designed VR environment also assisted with reduction of nausea, which can be problematic [13]. During the gameplay, the experimental supervisor instructed participants to follow a certain path and to explore the VR environment (directions were communicated through the microphone to the headset worn by the participant). The length of time to complete the gameplay was 6-10 minutes (~6 minutes for an experienced VR/PlayStation player, ~10 minutes for beginners). The gameplay consisted of 3 sections which included: • Forest walk: a person starts the VR gameplay in a forest and follows a path surrounded by plants and trees. During the forest walk, a participant walks through a swamp, encounters a sculpture made of horse legs and blades and passes a smouldering fire on approaching a house. • Abandoned house: the participant approaches the apparently abandoned house and walks onto a patio/porch and to the front door of the house. The participant enters the house and walks down a corridor to a kitchen where they open a pot and a refrigerator containing rotten food and cockroaches. The participant then proceeds through the kitchen to another corridor and enters a parlor where they move to a fireplace to pull a handle and open a secret door. • Basement: the participant moves through the secret door where they climb down a ladder and the ladder breaks and falls to the ground. The participant then walks through the basement into chest/neck-high murky water. The participant moves through the water along a corridor and under a beam where they encounter a submerged rotting head. The gameplay then ends as they walk out of the water. Olfactometer and simulated odours An olfactometer was used to mix and deliver smells to participants at predetermined events during the gameplay. The olfactometer is part of a custom built simultaneous gustometer olfactometer (SGO) that has been previously described [20]. The gustometer component of the SGO was not used in this experiment. The olfactometer component of the SGO uses six pairs of computer-controlled, motorised mass-flow controls to vary air flow through six glass saturators, containing the odour mixture solutions, and six corresponding bypass air flows. Each of the saturator and bypass air-flow pairs can be varied from 0% to 100% of saturator headspace flow at a constant total flow of 150 mL/min. The six streams are then carried to the delivery manifold, where they are switched by vacuum flow into a carrier flow of humidified air (5 L/min at 70% RH, 22 °C). The odour mixture and carrier air flow are then delivered to the participant through a nasal cannula, fixed below the participant's nose. Four odour mixture solutions were prepared and delivered to the participant throughout the gameplay. The odours used were developed to be congruent with the VR environment experienced by the participant but did not try to re-create the complex mixture of volatiles that would be expected in each of the different environments.
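To make the constant-total-flow mixing described above concrete, the following minimal Python sketch illustrates how one channel's 150 mL/min could be split between its saturator and bypass lines for a requested odour intensity. This is only an illustration of the principle, not the olfactometer's actual control software; the function name and the 0-1 intensity parameter are assumptions.

```python
TOTAL_CHANNEL_FLOW = 150.0  # mL/min, constant total flow per saturator/bypass pair

def channel_flows(intensity: float) -> tuple:
    """Split one channel's constant flow between the odour saturator and its bypass.

    intensity: requested fraction of saturator headspace flow, 0.0 (off) to 1.0 (full).
    Returns (saturator_flow, bypass_flow) in mL/min; their sum is always 150 mL/min,
    so changing odour strength does not perturb the total air delivered to the nose.
    """
    if not 0.0 <= intensity <= 1.0:
        raise ValueError("intensity must be between 0 and 1")
    saturator_flow = intensity * TOTAL_CHANNEL_FLOW
    bypass_flow = TOTAL_CHANNEL_FLOW - saturator_flow
    return saturator_flow, bypass_flow

# Example: a half-strength odour keeps 75 mL/min through the saturator and 75 mL/min bypass.
print(channel_flows(0.5))
```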
The odour mixtures were prepared by dilution of pure food grade compounds in water (or used as a neat solution for smoke odour). Each mixture was loaded into the olfactometer to be delivered in separate streams. Two olfactometer aroma streams remained as air blanks and were not used in this study. Table 1 lists the ingredients used to create each odour mixture in the study. The four odour mixtures were blended in pre-programmed patterns, by the olfactometer, to be congruent with the event being experienced in the VR environment ( Table 2). The delivery of each odour mixture was controlled by the experimentalist using a custom software interface pre-programmed to deliver or stop the correct blend of the four odour mixtures. The experimentalist watched the gameplay on a secondary computer display and selected each odour mixture event, by name, to correspond with the current gameplay event. The olfactometer interface recorded a time stamp at each odour mixture event to allow for physiological data measured using the Empatica E4 Wristband (heart rate (HR), electrodermal activity (EDA) and body temperature (Temp)) to be accurately matched to each event in the VR environment. The gameplay between events was conducted with continuous, humidified air flow without delivery of odour mixture. Post-gameplay questionnaires After each gameplay session (i.e. with and with-out odour), participants filled out a questionnaire, which included: • The Independent Television Commission Sense of Presence Inventory [ITC-SOPI] questionnaire [17], a validated questionnaire split into two parts (Part A and B having six and 38 questions, respectively) with questions rated on a 5 point scale. Answers to different questions are combined to generate scores on different dimensions of presence including spatial presence, engagement, ecological validity/naturalness and negative effects. An additional question was added to the end of the questionnaire "the smells that I experienced matched the virtual environment" which was scored on the same 5-point scale (this question is referred to as "Match question" in this paper). • A smell recall questionnaire, where participants were shown flashcards of nine events experienced in the gameplay. For each event, a participant had to name the event and rate on a 5-point scale the realism (scale anchors "Did not match the game" to "Perfectly matched"), pleasantness (scale anchors "Very unpleasant" to "Very pleasant") and strength of the smell (scale anchors "Did not smell" to "Extremely smelly"). • Three open response questions intended to gather qualitative feedback on their experience. The prompts were: "What did you remember most?", "What frightened you?" and "Other comments". • A short demographic survey to collect age, gender, current work situation and previous experience playing video games and VR. Physiological measures Physiological measures were collected to enable real time assessment of a person's response to the VR environment as additional assessment of the impact of odour on the user experience in the VR environment. The Empatica E4 Wristband was used to collect physiological data on heart rate, electrodermal activity (EDA), and body temperature. These physiological measures have been validated with traditional laboratory devices used to measure physiological responses [22,23]. 
However, as outlined in the introduction, use of physiological measures to assess increased presence has yielded mixed results to date [18] and their use is exploratory and provides the opportunity to build the evidence/validity for their use in future research. In the current experimental set-up, use of a wireless based wrist device was appropriate to ensure that the equipment did not detract from the VR experience through either restriction of free movement of the head (i.e. due the presence of additional cabling) or by a novel/unusual sensation that may impact on a participants response (e.g. devices fitted around the chest to assess respiration). The participants were seated during the VR gameplay. The wristband was attached to the participant at the start of each gameplay and continuously recorded data during the two gameplays. The event button was pressed to designate the start and end of each gameplay. The arms, wrist and hands were kept relatively still as they held a Playstation game controller with both hands to navigate through the VR environment. This immobilisation of the arm and the E4 Empatica Wristband removed motion artifacts which could alter the accuracy of the physiological measures. This was confirmed by reviewing the accelerometer data from the Empatica Wristband for each participant which showed no/low movement over periods of gameplay. The physiological measures were reviewed on the Empatica web dashboard and downloaded after each session as .csv files. As outlined above, each participant progressed through the Resident Evil 7 experience at their own unique pace to allow them a sense of agency within the world and further promote their immersion within the world. This freedom means that each 'smell' point would occur at slightly different times for each participant. To allow comparison between participants, the data was aligned and truncated to 5 seconds worth as each participant reached the next smell location/event. Five seconds was determined to be sufficient time to receive the odour and for a participant to react to the smell and event. To truncate and align the data points, the participant's heart rate, EDA and temperature were collected from the E4 wristband via a number of comma separated values text (.csv) files. The wristband time was synchronised to the current time and the start time and sample duration for each data point was recorded in the .csv file. The application which controlled the olfactometer was also synchronised to the current time and output a .csv file detailing which location (and smell combination) was being output and at which time. A set of Unity C# scripts were created to collate all of the data for each participant around each event and output two measures to describe the 5 second portions after each event: (i) mean: the mean value over the 5 second period after an event and (ii) delta: the difference between the starting measure and the maximum measure from the 5 second period after an event. The collated data was then output into a separate .csv file for further analysis. The measures 'mean' and 'delta' were used in the analysis as these were the meaningful variables of interest. Data analysis The main outcome variables were the measures from ITC-SOPI for different dimensions of presence: spatial presence, engagement, ecological validity/naturalness and negative effects. Secondary outcomes were responses to emotional state and smell recall questionnaire and the physiological measures (heart rate, EDA and body temperature). 
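The original collation was performed with Unity C# scripts; the Python/pandas sketch below illustrates the same logic of extracting a 5-second window after each olfactometer event and computing the mean and delta measures described above. The column names, file layout and the window_stats helper are hypothetical and only stand in for the authors' pipeline.

```python
import pandas as pd

WINDOW_S = 5.0  # seconds analysed after each odour event

def window_stats(signal: pd.DataFrame, events: pd.DataFrame) -> pd.DataFrame:
    """Summarise a physiological signal around olfactometer events.

    signal: columns ['time', 'value'] -- e.g. heart rate, EDA or temperature,
            with 'time' in seconds on the same (synchronised) clock as the events.
    events: columns ['time', 'event'] -- olfactometer time stamps and event names.
    Returns one row per event with the mean over the 5 s window and the delta
    (maximum minus starting value), matching the two measures used in the analysis.
    """
    rows = []
    for ev in events.itertuples():
        window = signal[(signal["time"] >= ev.time) &
                        (signal["time"] < ev.time + WINDOW_S)]
        if window.empty:
            continue
        start = window["value"].iloc[0]
        rows.append({"event": ev.event,
                     "mean": window["value"].mean(),
                     "delta": window["value"].max() - start})
    return pd.DataFrame(rows)

# Example usage with hypothetical CSV exports from the wristband and olfactometer:
# hr = pd.read_csv("participant01_hr.csv")          # columns: time, value
# events = pd.read_csv("participant01_events.csv")  # columns: time, event
# print(window_stats(hr, events))
```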
There were 22 participants in total, but physiological data was only collected for 11 of the 22 due to equipment malfunction. With a within-subject study design of 22 participants, the study was adequately powered to detect medium-sized effects in the main outcome variables (sensitivity analysis using G*Power indicated that the study was powered to detect an effect size of 0.44 with a power of 0.80 and α of 0.05 [24]). Smaller effects may have been identified with a larger sample size. However, from a practical standpoint, the effect of smell on a participant's presence needs to be of a significant level to warrant further assessment and the potential development of a commercial odour delivery device integrated into VR hardware. Thus, the study was adequately powered to meet the outcomes of the study. Statistical analysis was conducted in GenStat V19.1 (VSN International, Hemel Hempstead, U.K.). Multivariate analysis of variance (MANOVA) was used to assess the effect of odour on the ITC-SOPI scores (spatial presence, engagement, ecological validity/naturalness and negative effects), responses to the emotional state and smell recall questionnaires and the physiological measures. The factors analysed were the presence of odour or control (i.e. with no odour), the order of the gameplay (i.e. whether the participant received the odour or control in the first gameplay) and the interaction between the presence of odour and order. To better understand effect size for the MANOVA analysis, partial eta squared (ηp2) and 90% confidence intervals of the effect size were determined in Excel and the MBESS R package [25][26][27], respectively. These effect size measures are reported in S1-S3 Tables that provide expanded information in parallel to the results tables. Contrast analyses of significant interaction effects were performed in R, with Cohen's d effect size and 95% confidence interval calculated using the R package Effsize [28]. For the physiological measures, gameplay events were further categorized into two groups based on scare factor, fright effect (casserole, fridge, basement, water & rotten head) or benign (forest, swamp, horse, smoke, near house, hallway, kitchen & fireplace), and included in the MANOVA analysis (between subject factors) together with the effects of odour and order. Graphs were generated in R using the R package ggplot2 [25,29]. Basic thematic analysis of the qualitative data was conducted in order to identify consistent themes and surface participant suggestions and preferences [30]. Overview of participants A total of 22 subjects participated in the experiment, exploring the same VR environment of Resident Evil 7 on two occasions (with and without odour). The mean age for the group was 28.8 years, with 13 male and nine female subjects. Of the 22 participants, 12 were students and 10 had full time employment. Many of the participants had previous experience with virtual reality (n = 13) and only three participants had no experience playing any video games. Only one participant had previously played Resident Evil 7. The addition of odours increased spatial presence of the VR environment The addition of simulated odours significantly increased participants' sense of spatial presence (Table 3, S1 Table and Fig 2). Engagement and naturalness scores were higher in the VR gameplay with odour compared to the control with no odour; however, the differences were not statistically significant.
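For readers who want to reproduce the effect-size conventions referred to in the statistical analysis above (partial eta squared from the ANOVA sums of squares, Cohen's d for the contrasts), a minimal Python sketch is given below. The original calculations were done in Excel, the MBESS R package and the R package Effsize, so this is only an illustrative re-implementation; the pooled-SD convention for d and the numbers in the example are assumptions.

```python
import numpy as np

def partial_eta_squared(ss_effect: float, ss_error: float) -> float:
    """Partial eta squared: SS_effect / (SS_effect + SS_error)."""
    return ss_effect / (ss_effect + ss_error)

def cohens_d(x, y) -> float:
    """Cohen's d for two groups using the pooled standard deviation."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    nx, ny = len(x), len(y)
    pooled_var = ((nx - 1) * x.var(ddof=1) + (ny - 1) * y.var(ddof=1)) / (nx + ny - 2)
    return (x.mean() - y.mean()) / np.sqrt(pooled_var)

# Arbitrary illustrative sums of squares, not values from the study:
print(partial_eta_squared(3.0, 12.0))   # 0.2
print(cohens_d([4.1, 3.8, 4.4, 4.0], [3.5, 3.6, 3.9, 3.4]))
```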
No difference in negative effects score was observed between the VR environment with or without odour. Finally, there was no significant effect of order of the gameplay on measures of engagement, ecological validity/naturalness and negative effects, or of the interaction between the odour and order. The additional question "The smells I experienced matched the virtual environment" was rated significantly higher with the odour (M = 4. Table 3 and S1 Table) and was independent of the order of gameplay. No significant differences in the emotional state of the participants were observed (arousal, pleasure and dominance); however, there was a trend observed for increased anxiety in the presence of the odour (Table 3 and S1 Table, p = 0.057). The smell recall questionnaire, which presented flashcards of events experienced throughout the VR gameplay, found no significant differences when odour was delivered compared to control for each of the nine individual events for pleasantness and realism ratings (Table 3 and S1 Table). However, participants rated the strength of the smells significantly higher (but not too high) for all individual events when the odour was presented in the gameplay compared to the gameplay with no odour, indicating participants did not have difficulty perceiving the smells (Table 3 and S1 Table). Additionally, there was a significant difference between gameplays (order effect) in ratings of the forest strength and an interaction between the presence of odour and order of gameplay on the strength ratings at the fridge event. In the second … (Table 3. Effect of odour addition and order of gameplay on post gameplay questionnaire measures.) Outcome measures from the smell recall questionnaire were further assessed by ANOVA to determine the effect of the presence of odours or the events combined (rather than comparing each individual event) on ratings of pleasantness, realism and strength (Table 4; S2 Table contains the full analysis with each event). No difference in pleasantness ratings was observed with the addition of odour; however, ratings were significantly different between the events. Odour addition had a significant effect on realism ratings (F(1,374); Table 4 and S2 Table), however the event (p = 0.115) or interaction between odour and event (p = 0.795) did not significantly impact realism ratings. The strength ratings were significantly different both between the presence of odour (p < 0.001) and between the different events (p < 0.001). Odour addition altered participants' physiological response Physiological measures were collected from participants as they completed the gameplay and measures for each VR event extracted (see materials and methods for details). In addition to odour and order, an additional factor was included in the analysis based on whether an event was benign (e.g. walking through the forest) or scary (e.g. a rotten head appearing in submerged water). Due to the smaller number of participants with physiological measures (n = 11, balanced for order), analysis was completed across all gameplay events (rather than individual events) to identify trends from the addition of the odours (Table 5 and S3 Table). The outcome measures were the average measure and the delta change (maximum measure minus start measure) for each of heart rate, temperature and EDA.
Overall, the addition of odour to the VR environment resulted in a change in participants physiological response, further confirming odour addition had an impact on the user's experience. Specifically, the addition of odour resulted in significantly lower (p < 0.001) average EDA, compared to no odour. Delta HR was higher and approached significance (p = 0.056) in the presence of odour. However, there were clear order effects and interactions between the order of gameplay and odour for most physiological measures (Table 5 and S3 Table: Average HR, Delta HR, Average Temperature and Average EDA). Average EDA was lower in the presence of odour for the first gameplay (1.7 vs 5.23, t( 120 ) = 2.9, p < 0.004, Cohen's d = -0.51 and 95% CI [-0.15, -0.87]) but no difference was measured in the second gameplay. The Average HR (90.4 vs 81.2, t( 113 ) = 3.6, p < 0.001, Cohen's d = 0.63 and 95% CI [0.27, 1.00]) and Delta HR (0.334 vs 0.147, t( 66 ) = 2.3, p < 0.02, Cohen's d = 0.47 and 95% CI [0.11, 0.83]) were higher when the odour was present in the first gameplay compared to no odour. This effect was reversed when odour was present in the second gameplay for Average-HR but was not significant (76.6 vs 83.3). Further, average heart rate was significantly lower when the odour was experienced second (90.43 vs 76.59, t( 77 ) = 5.0, p < 0.001, Cohen's d = -0.98 and 95% CI [-1.35, -0.60]) further highlighting the impact of order on the physiological effects. Average Temperature was higher in the odour condition compared to no odour (32.8 vs 30.8, t( 108 ) = 7.3, p < 0.001, Cohen's d = 1.18 and 95% CI [0.78, 1.57]) in the first gameplay. However, this was reversed in the second gameplay with Average Temperature lower when odour was present compared to none (31.3 vs 33.46, t( 100 ) = 10.3, p < 0.001, Cohen's d = -1.66 and 95% CI [-2.14, -0.83]). Finally, events classified as scary (fright effect) resulted in significantly higher EDA measures for both average EDA (p = 0.008) and delta EDA (p < 0.001) compared to benign events (Table 5, S3 Table). Validity of smells presented and experimental set-up It was important to verify that the experimental set-up was suitable to assess the effect of odour on the VR environment. Specifically, it is important to understand if the olfactometer was delivering smells that a participant was able to perceive, and further, if the odours perceived were congruent with the objects and experiences in the VR environment. This information can be gained from post-hoc analysis of the data and can provide assurance for the validity of the experimental set-up. The fact that participants rated the strength of the smells (Table 3, Smell recall questionnaire) significantly higher when odour was presented compared to when they were not presented indicates that the odour mixtures delivered by the olfactometer were detected by participants at each event. There are three different pieces of evidence that the odour mixtures prepared were congruent with the VR environment. Firstly, the question "the smells that I experienced matched the virtual environment" was rated significantly higher with the odour compared to without the delivery of odour (p < 0.001) (Fig 2 (Matched), Table 3) and was independent of the order of gameplay (p = 0.325) suggesting the smells were perceived as congruent to the VR environment by the participants. 
Secondly, realism ratings (from the smell recall questionnaire) were not significantly higher when odour was delivered compared to control for each of the nine individual events (Table 3). The realism ratings were further assessed by ANOVA to determine the effect of the odour or the gameplay events combined on realism rating. A significant effect of odour was observed compared to control (Table 4, p = 0.004); however, the event (p = 0.115) or interaction between odour and event (p = 0.795) did not significantly impact realism rating. This shows that overall, the presence of the odours did have an impact on how realistic participants found the VR environment, independent of any one event. Finally, pleasantness ratings (from the smell recall questionnaire) were separated into two groups based on whether the event was a negative or neutral smell/experience (Negative = Horse sculpture, Pot with rotten food, Fridge, Basement water, Rotten head; Neutral = Forest, Outside fire, Hallway, Fireplace). All negative events had lower mean pleasantness scores compared to neutral events (mean values 1.9 and 3.1, respectively), suggesting congruent smells were delivered in the experiment. Qualitative feedback Qualitative feedback gathered from participants via the three written, short-answer questions ("What did you remember most?", "What frightened you?" and "Other comments") was analysed in order to identify factors that appeared to enhance or detract from the overall experience and the sense of immersion. In general, participants indicated that the use of smells was interesting and provided an experience which was quite unique, e.g. "[This was] a very interesting experience, have not experienced anything like it before". In terms of suggested improvements, there were comments that the level of smells should be able to be adjusted. Because of the somewhat extreme nature of the Resident Evil environment, many smells were quite strong, and three comments suggested that providing some kind of calibration for individual preferences should be considered, e.g. "I find it very intense, way too much odour for my liking". Several participants also suggested that providing a wider range of smells, including more 'pleasant' smells beyond the horror genre, would make the experience more attractive for them personally (e.g. "There [were] no good smells so the engagement … could be better with both good and bad smells"). Along similar lines, one participant expressed the view that the experience with aromas was "more immersive but probably less pleasant". Finally, one participant suggested the soft plastic tube that delivered the aromas under the nose should be "designed to wear more comfortably". Discussion The findings of this study show that users' sense of presence is enhanced when simulated smells are introduced into a VR environment. Presence is a multi-dimensional construct and the ITC-SOPI measures four different determinants of presence including Spatial Presence, Engagement, Ecological Validity (Naturalness) and Negative Effects (Nausea) [17]. Spatial Presence received a significant increase from the addition of odour, indicating that participants felt more immersed in the virtual environment with the addition of odour. Further, the effect size observed from the addition of aroma on spatial presence was large (ηp2 = 0.134) [27] and therefore represents a significant contribution to a participant's VR experience, supporting the development of commercial odour delivery devices integrated into VR hardware.
There was a mild (non-significant) increase in both engagement and naturalness when odour was present, which may indicate that the smells were well suited to the experience (i.e. they didn't detract) but that the horror narrative (engagement) and audio/visual depiction (naturalness) of the game environment were more important factors than the smells themselves. Finally, the presence of odour did not alter the participants' experience of negative effects (motion sickness), which is a surprisingly positive result taking into account some of the disgusting experiences with smells that were presented (e.g. maggots in casserole dish and rotten head). The ITC-SOPI results demonstrate that the addition of odour increased the participants' feeling of presence in the virtual environment, without any additional negative side effects. Furthermore, the overall significant increase in realism ratings further confirms the effect of odour addition to enhance the VR experience. The addition of odour to the VR experience resulted in differences in the participants' physiological responses for HR, body temperature and EDA, compared to no odour. Considering the order effect, EDA was lower, while HR and body temperature were both higher in the presence of odour. The changes in the physiological measures observed are most likely due to the odours enhancing the fear and scare elements of the VR experience compared to no odour (e.g. fight or flight response of the sympathetic nervous system). There may be alternate mechanism(s) causing the physiological effects observed, for example parasympathetic nervous activation where the odours could be perceived as relaxing, decreasing stress, or evoking an arousal response. However, these alternate mechanisms are not consistent with: (i) the context in which the odours are presented in the VR environment (i.e. horror theme), (ii) the observed increase in heart rate, which has been shown to increase with unpleasant odours in preparation for a defensive action [31] and (iii) the order effects that were observed (i.e. if odour was relaxing it should equally affect all gameplays and not just have an effect on first exposure). Further research is required to identify/confirm the pathway(s) responsible for the changes in physiology in response to odour addition to the VR environment. Surprisingly, significant order and order × odour interactions were observed across many of the physiological measures. In most circumstances, the order effect only had an impact on participants' experiences in the VR environment in the first gameplay and not the second. This suggests that odour addition in some circumstances is only effective on the first exposure to the VR environment (i.e. when the situation/environment is novel to a participant). If a participant has already been exposed to the VR environment and/or can anticipate coming events, the addition of smell may not have an effect on enhancing their experience through physiological changes. While not the main focus of the current study, this order effect requires further attention. A similar experiment utilising participants who are conditioned to a VR environment prior to testing would enable further exploration of order effects. If the effects of smell addition on users' experience in VR are not sustained beyond initial exposure, there are implications for how odours should be deployed in practice. Additionally, the current study delivered congruent smells to a participant compared to a control with no smell.
Therefore, the current study cannot rule out that the delivery of any smell (i.e. a non-congruent smell) may also result in increasing a participant sense of presence in the VR environment. While other studies suggest that congruency of the smell is important feature to increase a participants sense of presence compared to a non-congruent smell [32], this needs to be further explored using an experimental set-up which would include a noncongruent smell study arm. The findings presented in this paper show the use of smells in virtual reality experiences can enhance users' sense of presence and the perceived realism of the virtual environment. We selected a popular and well-known horror game as the focus of our study as this genre seemed well matched to the kinds of primal, physical experiences that the addition of smells could enhance. While the game was a useful test bed for more 'extreme' smells, we believe the findings from this study demonstrate that the use of smells in virtual environments should be seriously considered for situations where presence and realism are critical. Beyond gaming, the use of carefully designed odours in virtual training environments for emergency services (fire fighting for example), military personnel, hazardous chemical response teams, etc. is likely to increase users' sense of immersion and therefore lead to improved outcomes. The addition of smell to digital devices is clearly intriguing, as there have been many attempts over the years to create commercially viable products and some high-profile failures. A recurring question is one of value. Specifically, what does the addition of smell bring to digital devices, and does this justify the extra complexity and expense? A key theme is the attempt to increase people's sense of presence and/or the perceived realism of environments present on screen or using virtual reality hardware. The findings of the current study provide additional evidence to support the value of the use of odour in a VR environment. There are several challenges faced by those who seek to commercialise products that add scents to digital devices. First, the size of devices suitable for use in home or office currently limits the palette of smells that can be reliably delivered. Small devices have been created, but these can dispense only a limited range of scents [33,34]. Second, the molecular nature of scents means that they appear and disappear very slowly in comparison with the instant response of screen pixels and digital audio. These challenges can be addressed at least to some degree, but until the efficacy of smell to enhance users' experience is well established, the motivation to invest the necessary time and effort into commercial development is likely to remain low. Given the difficulty in producing high-quality odour delivery devices for widespread use, it would seem that larger-scale training simulations, gallery and theme-park installations are perhaps the most likely to make use of larger scent delivery devices (similar to the device used in the current study and that presented in Ischer et al. [14]) in the short term. The issue of perceived 'realism' of smells is an interesting one for virtual environment designers to consider. We see parallels here with foley-the practice of creating environmental sounds for films. Foley artists are employed to add sounds in post-production-that is, after scenes have been shot. These will include sounds such as footsteps, car doors closing, rain, wind and other environmental sounds. 
In general, the sounds that are added to the scenes are not those that are recorded by microphones on location. Rather, foley artists source sounds from elsewhere and painstakingly edit, process and time them to fit the video. A key point is that foley sounds are crafted not only for realism but also to enhance the scene and for emotional effect. When examining and evaluating audiences' experiences with virtual environments combined with synthesised smells, we therefore suggest that it is likely to be more useful to ask whether the smells enhanced the experience or provided greater emotional impact than whether they were perceived as being accurate or authentic. For virtual environment designers used to working with animated graphics and sound, we note that the timing of odour delivery and dissipation is likely to be a challenge. Images and sounds can be made to appear and disappear almost instantly, and the 'teleportation' of users in a virtual world is technically trivial. For odours, however, there are physical limits to how quickly air can be delivered and removed. This means that transitions from scene to scene need to be choreographed with care to ensure odours transition with the current location. It may mean that certain game mechanics (such as teleportation) will need to be adjusted slightly to allow for the delayed olfactory delivery. Conclusion The results of this study provide further evidence of smells influence upon users' experience and sense of presence in VR environments. We have explored the use of odours in the somewhat extreme virtual environment of a VR game in the horror genre, with scenes featuring intense smelling objects such as rotten food, smoke and a rotting head. Whether the effects we identify here hold in more realistic or everyday environments will require further assessment. However, we believe that the evidence we have gathered here suggests that smells could effectively enhance the realism of virtual reality training environments, which may often need to simulate 'extreme' situations somewhat similar to those explored in this study. Supporting information S1 Table. Effect of odour addition and order of gameplay on post gameplay questionnaire measures. Expanded Table 3 from the manuscript containing the standard deviation, effect size (partial eta squared, ηp2) and 90% confidence interval of the effect size for each measure and interaction. (XLSX) S2 Table. Effect of odour addition and the gameplay events combined on smell recall questionnaire measures. Expanded Table 4 from the manuscript containing the means of the individual events and, the standard deviation, effect size (partial eta squared, ηp2) and 90% confidence interval of the effect size for each measure and interaction. (XLSX) S3 Table. Effect of odour addition, order of gameplay and fright on physiological response. Expanded Table 5 from the manuscript containing the standard deviation, effect size (partial eta squared, ηp2) and 90% confidence interval of the effect size for each measure and interaction. (XLSX)
2022-04-01T05:16:03.603Z
2022-03-30T00:00:00.000
{ "year": 2022, "sha1": "992fad21db72b35b0b5c8fdcb573441b5e18bc39", "oa_license": "CCBY", "oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0265039&type=printable", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "992fad21db72b35b0b5c8fdcb573441b5e18bc39", "s2fieldsofstudy": [ "Psychology" ], "extfieldsofstudy": [ "Medicine" ] }
235220102
pes2o/s2orc
v3-fos-license
Exploring the Binding Interaction of Raf Kinase Inhibitory Protein With the N-Terminal of C-Raf Through Molecular Docking and Molecular Dynamics Simulation Protein-protein interactions are indispensable physiological processes regulating several biological functions. Despite the availability of structural information on protein-protein complexes, deciphering their complex topology remains an outstanding challenge. Raf kinase inhibitory protein (RKIP) has gained substantial attention as a favorable molecular target for numerous pathologies including cancer and Alzheimer's disease. RKIP interferes with the RAF/MEK/ERK signaling cascade by endogenously binding with C-Raf (Raf-1 kinase) and preventing its activation. In the current investigation, the binding of RKIP with C-Raf was explored by knowledge-based protein-protein docking web-servers including HADDOCK and ZDOCK and a consensus binding mode of C-Raf/RKIP structural complex was obtained. Molecular dynamics (MD) simulations were further performed in an explicit solvent to sample the conformations for when RKIP binds to C-Raf. Some of the conserved interface residues were mutated to alanine, phenylalanine and leucine and the impact of mutations was estimated by additional MD simulations and MM/PBSA analysis for the wild-type (WT) and constructed mutant complexes. Substantial decrease in binding free energy was observed for the mutant complexes as compared to the binding free energy of WT C-Raf/RKIP structural complex. Furthermore, a considerable increase in average backbone root mean square deviation and fluctuation was perceived for the mutant complexes. Moreover, per-residue energy contribution analysis of the equilibrated simulation trajectory by HawkDock and ANCHOR web-servers was conducted to characterize the key residues for the complex formation. One residue each from C-Raf (Arg398) and RKIP (Lys80) were identified as the druggable "hot spots" constituting the core of the binding interface and corroborated by additional long-time scale (300 ns) MD simulation of Arg398Ala mutant complex. A notable conformational change in Arg398Ala mutant occurred near the mutation site as compared to the equilibrated C-Raf/RKIP native state conformation and an essential hydrogen bonding interaction was lost. The thirteen binding sites assimilated from the overall analysis were mapped onto the complex as surface and divided into active and allosteric binding sites, depending on their location at the interface. The acquired information on the predicted 3D structural complex and the detected sites aid as promising targets in designing novel inhibitors to block the C-Raf/RKIP interaction.
INTRODUCTION The physiological processes including signal transduction, cell proliferation, cell division, enzyme inhibition, and DNA repair are controlled via recognition and association of different proteins (Thiel et al., 2012). Nearly 6,50,000 protein-protein interactions (PPI) referred to as "interactome" regulate human life and dysregulation of any interaction leads to pathological conditions including neurological disorders and cancer (Ryan and Matthews, 2005;Stumpf et al., 2008;Sable and Jois, 2015;Ottmann, 2016). Despite the vast biological significance of protein-protein complexes, elucidating their structures and association mechanisms remains a notoriously challenging task (Zinzalla and Thurston, 2009;Ngounou Wetie et al., 2013). Protein-protein docking is a fundamental computational tool often combined with experimentally predicted information to decipher the association mechanism of such complexes (Ritchie, 2008;Kaczor et al., 2018). Innumerable protein-protein docking web-servers have been developed with diverse sampling algorithms and scoring functions in order to accurately predict the binding mode between two protein structures (Gromiha et al., 2017;Porter et al., 2019). Due to varying differences in their docking and scoring strategies, choosing an appropriate protocol for docking is a tricky problem in itself (Huang, 2014;Park et al., 2015;Gromiha et al., 2017;Porter et al., 2019). The CAPRI (Critical Assessment of PRedicted Interactions) community-wide effort attempts to dock the same proteins provided by the assessors in a scientific meeting held every six months for discussing protein-protein docking accuracy (Janin, 2010). This CAPRI meeting divides the innumerable protein-protein docking tools available into validated and non-validated ones (Kangueane et al., 2018).
In addition to molecular docking, atomic-level molecular dynamics (MD) simulations characterize the structure, dynamics and stability of proteinprotein complexes and provide an unprecedented sampling of the complexes formed by two protein monomeric structures (Kuroda and Gray, 2016;Shinobu et al., 2018). Elucidating the structural basis of RKIP binding with C-Raf is essential for completely understanding the regulation of C-Raf. Previous study by Trakul et al. reported that RKIP abrogates C-Raf activation by binding to its N-terminal region and inhibiting its phosphorylation at residues Ser338 and Tyr340/ Tyr341 (Trakul et al., 2005). This data was consistent with another study by Park et al. where they investigated the binding of RKIP to C-Raf N-terminal region by mutational analysis. They substituted Ser338 with alanine (Ala) and Tyr340/Tyr341 with phenylalanine (Phe) and the mutation was observed to diminish the binding of RKIP with C-Raf (Park et al., 2006). A study published by Rath et al. examined the importance of the ligand-binding pocket of RKIP in binding with its substrate C-Raf at the aforementioned N-terminal region residues Ser338/Tyr340/Tyr341. The two highly conserved residues within the ligand-binding pocket of RKIP were mutated: Asp70 with Ala and Tyr120 with Phe. These RKIP mutants demonstrated diminishment in their capability to inhibit C-Raf, thereby establishing the significance of the ligand-binding pocket of RKIP in binding with its substrate (Rath et al., 2008). Wu et al. in a study published to further illuminate on the ligandbinding pocket of RKIP mutated seven residues to Ala (Asp70, Asp72, Tyr81, Glu83, Ser109, Tyr120, and Tyr181) and two residues to leucine (Leu) (Pro74 and Pro112). With the wildtype (WT) RKIP binding affinity of 154 mM −1 for C-Raf residues 1-147 amino acids, the binding affinity of mutants Pro74, Tyr81, Ser109, and Pro112 decreased by 30-50%. Furthermore, the binding affinity of Asp70, Glu83, and Tyr120 mutations considerably reduced to 30, 22, and 18 mM −1 , respectively, while a sizable decrease was noticed in the affinity of mutations Asp72 (7 mM −1 ) and Tyr181 (3 mM −1 ) (Wu et al., 2014). Since RKIP binds to multiple regions of C-Raf, the mutations introduced by Wu et al. perturbed the binding of RKIP pocket residues with C-Raf residues (1-147 amino acids). However, the structural basis and interaction mode for binding of RKIP pocket residues with the N-terminal of C-Raf (340-615 amino acids) remains elusive in spite of the accessibility of their crystal structures. This opens the door for the prediction of their structural complex through molecular modeling techniques. Comprehending the binding mode of C-Raf/RKIP complex at the molecular level is of paramount importance for designing novel PPI inhibitors that could disrupt their association. Herein, we propose a consensus mode of binding between the two proteins obtained through two knowledge-based proteinprotein docking programs. Specifically, we carried out a reliable docking approach employing programs HADDOCK and ZDOCK combined with molecular dynamics (MD) simulations to investigate the C-Raf/RKIP PPI interface and uncover the interactions involved in the formation of their structural complex. Consequently, we introduced mutations at the conserved interface residues of the complex and carried out additional simulations to infer and compare their stabilities. 
Moreover, we evaluated free energy of binding for the complexes using MM/PBSA along with calculating per-residue energy decomposition using MM/GBSA. Subsequently, we also identified druggable "hot spots" that can be targeted for future drug optimization by ANCHOR web-server. An additional 300 ns of MD simulation was performed using the identified hot spot Arg398 as an exemplification to investigate the stability of the Arg398Ala mutant complex. The final aim was to shed some light on the residues involved in the complex formation and the identified sites for future designing of novel drugs. The complex predicted in this study is envisaged to be useful in procuring novel PPI inhibitors targeting the association of RKIP with the N-terminal of C-Raf. Structure Retrieval and Refinement The molecular details of RAF proto-oncogene serine/threonine protein kinase and its inhibitory protein RKIP/PEBP1 were retrieved from UniProtKB database with UniProt ID P04049 (RAF1_HUMAN) and P30086 (PEBP1_HUMAN), respectively, with proteins of lengths 307 (C-Raf) and 187 (RKIP) amino acids. Of the 18 X-ray diffraction structures available for C-Raf, nine structures were in the peptide form. Eight out of nine structures did not contain the desired residues (Tyr 340 and Tyr341) for PPI with RKIP and were present in the form of effector RAS binding domain (RBD). The X-ray diffraction structure (PDB ID: 3OMV) of resolution 4.00 Å with the presence of residues Tyr340 and Tyr341 was retrieved from the Research Collaboratory for Structural Bioinformatics Protein Data Bank (RCSB-PDB) with two identical chains: A and B (Hatzivassiliou et al., 2010). The structure was further evaluated in BIOVIA Discovery Studio (DS) Visualizer 2018 and chain B was eliminated along with the associated co-crystallized ligands of both chains. Similarly, the only three-dimensional structure of RKIP with the presence of C-Raf peptide ligand and a resolution of 1.95 Å was retrieved and downloaded from RCSB (PDB ID: 2QYQ) (Simister et al., 2011). The co-crystallized ligand O-phosphotyrosine was removed for further process of molecular docking, retaining only the crystal structure of RKIP. Structures of both the proteins were cleaned by Clean Protein protocol, missing loops were added and refined by minimization employing the CHARMM forcefield in DS. The minimized structures so obtained were employed for detailed PPI via two knowledge-based docking web-servers. Molecular Docking of RKIP With C-Raf Protein docking is a quintessential tool used in molecular biology to identify key residues responsible for the interaction among two proteins (Kangueane et al., 2018). The binding interaction of RKIP with C-Raf was accomplished using two docking webservers in order to achieve the best native conformation according to the knowledge of binding residues detected by aforementioned site-mutagenesis studies. These servers included HADDOCK 2.2 (Dominguez et al., 2003;De Vries et al., 2010;Van Zundert et al., 2016) and ZDOCK 3.0.2 (Pierce et al., 2011, Pierce et al., 2014. At this stage, one might remember that there are several web-servers for performing protein-protein docking and the results of different servers may not always be same. Therefore, with the intention of acquiring a consensus mode of binding, two aforementioned knowledge-based docking servers were utilized for the current study. 
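The structure retrieval and clean-up described above was carried out interactively in Discovery Studio. As a rough scripted analogue, the Biopython sketch below shows how one could fetch PDB 3OMV, keep only chain A and discard the co-crystallized ligands and waters; the output file name and the chain/het selection are assumptions, and the sketch does not reproduce the missing-loop modelling or CHARMM minimization performed in DS.

```python
from Bio.PDB import PDBList, PDBParser, PDBIO, Select

class ChainAProtein(Select):
    """Keep only chain A and standard (non-HETATM) residues."""
    def accept_chain(self, chain):
        return chain.id == "A"
    def accept_residue(self, residue):
        return residue.id[0] == " "   # blank hetfield = standard amino acid residue

# Download the C-Raf kinase domain structure (PDB ID 3OMV) and parse it.
pdb_path = PDBList().retrieve_pdb_file("3OMV", pdir=".", file_format="pdb")
structure = PDBParser(QUIET=True).get_structure("3OMV", pdb_path)

# Write out chain A without ligands/waters for subsequent refinement and docking.
io = PDBIO()
io.set_structure(structure)
io.save("craf_chainA_clean.pdb", ChainAProtein())
```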
Knowledge-based protein-protein docking of C-Raf with RKIP was performed with the Easy interface of the HADDOCK 2.2 (High Ambiguity-Driven biomolecular DOCKing) web-server (Van Zundert et al., 2016). HADDOCK (https://wenmr.science.uu.nl) considers experimental data to drive the process of molecular docking, unlike ab initio docking protocols considering only the co-ordinates of the structures (van Dijk et al., 2006). The docking strategy followed by HADDOCK involves three steps including 1) randomization of orientations followed by energy minimization to remove steric clashes, 2) torsion angle dynamics utilizing torsion angles as degrees of freedom and 3) refinement with an explicit solvent in Cartesian space. Tyrosine (Tyr340 and Tyr341) residues of C-Raf and ligand-binding pocket residues of RKIP were mentioned as "active" residues involved in the intermolecular interaction, while "passive" residues were automatically defined as residues surrounding the active ones before submitting the docking job. The four clusters retrieved as HADDOCK results were probed for interactions between C-Raf and RKIP in DS. ZDOCK (https://zdock.umassmed.edu/) facilitates a global docking search on a 3D grid using the FFT algorithm via its user-friendly web interface, combined with shape complementarity, electrostatics and statistical potential terms for scoring of the complex structures (Chen and Weng, 2002). ZDOCK version 3.0.2 was employed to perform the rigid-body docking of RKIP with C-Raf (Pierce et al., 2014). The tyrosines Tyr340 and Tyr341 of C-Raf and the ligand-binding pocket residues of RKIP were selected as contacting residues for the docking process. The top 10 predictions of complex structures were downloaded as ZDOCK results and examined individually in DS by analyzing the interactions between C-Raf and RKIP. The modeled structures from the above web-servers were analyzed and the output structures chosen as optimal ones from both servers were superimposed in DS to observe their alignment and comprehend the putative binding mode of interaction between C-Raf and its interacting partner RKIP. The intermolecular interactions between RKIP and C-Raf were analyzed by utilizing the Interaction Monitor implemented in DS. In silico Mutagenesis of Conserved Interface Residues Given the structure of the complex, computational mutagenesis is extensively used to probe protein-protein interfaces for "hot spot" residues affecting the binding affinity of the complex. In such cases, residues at the interface of the WT complex structure are mutated and the binding affinity of the resulting complex is estimated (Massova and Kollman, 1999;Lise et al., 2009;Bradshaw et al., 2011). Some of the common residues acquired at the interface of the HADDOCK and ZDOCK complex structures were mutated to Ala, Phe, and Leu. As the crystallographic structures of mutated proteins are not available, mutations were modelled using the Build and Edit Protein tool implemented in DS. Consequently, residues Tyr340 and Tyr341 of C-Raf were mutated to Phe, constructing a Tyr340Phe/Tyr341Phe mutant as reported in an experimental study, to check the effect of RKIP binding with the N-terminal region of C-Raf (Park et al., 2006). Mutations were also introduced according to the experimental analysis by Wu et al. to further confirm the affinity of RKIP ligand-binding pocket residues with the N-terminal region of C-Raf (Wu et al., 2014).
Accordingly, three Ala mutations (Asp70Ala, Tyr120Ala, and Tyr181Ala) and two Leu mutations (Pro74Leu and Pro112Leu) were introduced in the WT structural complex and the binding affinity of the RKIP mutants with C-Raf was probed via molecular dynamics (MD) simulations. Analysis of Interaction Dynamics MD simulations of the near-native docked C-Raf/RKIP structural complex and the six computationally constructed mutants were executed to evaluate the stability and dynamic behavior of the interacting proteins. The above seven complexes were subsequently prepared for MD simulations with the GROMACS v.5.0.6 (GROningen MAChine for Chemical Simulation) software package (Abraham et al., 2015). The AMBER99SB-ILDN forcefield was applied to generate the topology parameters of the structural complexes (Lindorff-Larsen et al., 2010;Venkatesan et al., 2015;Zarei, et al., 2017;Galeazzi et al., 2018). The binary complexes were then surrounded by a dodecahedral periodic box of SPCE water molecules (Selent et al., 2013;Venkatesan et al., 2015;Galeazzi et al., 2018;Du et al., 2020). Cl− ions were added to the systems to neutralize them prior to minimization (Supplementary Table S1). Before NVT and NPT equilibration, energy minimization of the above systems by the steepest descent algorithm (50,000 steps) was performed to remove initial steric clashes. A robust NVT (constant number of particles, volume and temperature) equilibration protocol of 500 ps at 300 K was applied to all systems using a V-rescale thermostat. NVT was followed by system equilibration under the NPT (constant number of particles, pressure and temperature) ensemble for 500 ps at 1.0 bar. The complex systems were then subjected to a 10 ns production run under constant temperature (300 K) and pressure (1.0 bar). Long-range electrostatic interactions were estimated by the PME (Particle Mesh Ewald) algorithm (Darden et al., 1993) and the LINCS algorithm (Hess et al., 1997) was applied to constrain the bond lengths. The MD output was monitored by assessing the stability and behavior of the structural complexes through the Root Mean Square Deviation (RMSD) and Root Mean Square Fluctuation (RMSF) calculated throughout the 10 ns simulation run. Additionally, the dynamics of all systems were scrutinized by visualizing in the Visual Molecular Dynamics (VMD) program (Hess et al., 1997) and DS. Binding Free Energy Calculations using MM/PBSA and MM/GBSA Method The free intermolecular binding energy of C-Raf with RKIP and its variants was estimated using the Molecular Mechanics/Poisson Boltzmann Surface Area (MM/PBSA) and Molecular Mechanics/Generalized Born Surface Area (MM/GBSA) methodology (Chen et al., 2016;Siebenmorgen and Zacharias, 2020). The binding free energy (BFE) ΔGbind for the WT and mutated structural complexes was calculated according to the equation ΔGbind = Gcomplex − (Gprotein1 + Gprotein2) (1), where Gcomplex refers to the energy of the C-Raf/RKIP structural complex while Gprotein1 and Gprotein2 refer to the energies of the individual proteins within the complex. The free energy of binding with the MM/PBSA method was estimated using the g_mmpbsa tool of GROMACS (Kumari et al., 2014). In addition, the MM/GBSA approach was applied to identify the essential residues involved in the protein-protein binding interface by providing the per-residue energy decomposition in the WT C-Raf/RKIP complex structure. To evaluate the same, the HawkRank scoring function (Feng et al., 2017) incorporated in the HawkDock (http://cadd.zju.edu.cn/hawkdock/) web-server (Weng et al., 2019) was implemented.
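Equation (1) combines the energies of the complex and of each unbound partner; in the MM/PBSA scheme each of these is itself a sum of a molecular-mechanics term and polar plus nonpolar solvation terms, with entropy commonly neglected. The short Python sketch below is a schematic re-statement of that bookkeeping, not the g_mmpbsa implementation, and the numbers in the example are purely illustrative.

```python
def mmpbsa_energy(e_mm: float, g_polar: float, g_nonpolar: float) -> float:
    """Free energy of one system (kJ/mol): molecular-mechanics energy plus
    polar (Poisson-Boltzmann) and nonpolar (SASA) solvation terms.
    The entropic contribution is neglected, as is common in single-trajectory MM/PBSA."""
    return e_mm + g_polar + g_nonpolar

def binding_free_energy(g_complex: float, g_protein1: float, g_protein2: float) -> float:
    """Equation (1): dG_bind = G_complex - (G_protein1 + G_protein2)."""
    return g_complex - (g_protein1 + g_protein2)

# Illustrative (not measured) trajectory-averaged energies in kJ/mol:
g_complex = mmpbsa_energy(-12500.0, -3400.0, 95.0)
g_craf = mmpbsa_energy(-8200.0, -2300.0, 70.0)
g_rkip = mmpbsa_energy(-4100.0, -1250.0, 40.0)
print(binding_free_energy(g_complex, g_craf, g_rkip))  # -65.0 kJ/mol in this toy example
```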
Identification of Druggable Binding Sites Based on the representative WT structural complex of C-Raf/ RKIP, the ANCHOR (http://structure.pitt.edu/anchor/) webserver (Meireles et al., 2010) was employed for identifying the druggable binding sites in the protein-protein interaction complex. Given the structural complex, ANCHOR evaluates the change in solvent accessible surface area (ΔSASA) for each side-chain, along with its contribution to the binding energy. It thus facilitates the identification of "hot spots" residues, where small molecule inhibitors have high propensity to bind. Molecular Docking and Molecular Dynamics Simulation Analysis of RKIP With C-Raf The predicted complex structures from the two web servers were subjected to detailed analysis in DS on the basis of their binding mode and interacting residues. Knowledge-based protein-protein docking was performed employing HADDOCK web-server to explore the putative binding mode of interaction between C-Raf and RKIP. About 191 structures in four clusters were identified using HADDOCK which represented 95.5% of the water-refined models. HADDOCK score is calculated as a weighted sum of van der Waals intermolecular energy, electrostatic intermolecular energy, desolvation energy, distance restraints energy and buried surface area. Along with HADDOCK score, Z-score is represented as the number of standard deviations from the average a particular cluster is located in terms of score. Negative Z-scores are postulated as being better exemplification of a good HADDOCK cluster. Out of the four clusters, the top two clusters (cluster 3 and cluster 2) were found to have negative Z-scores, while the remaining two clusters (cluster 1 and cluster 4) with positive Z-scores were not considered for further analysis. The cluster three represented the highest negative value in terms of HADDOCK score (−182.3 +/− 8.3) and Z-score (−1.2). The contribution of van der Waals energy and electrostatic energy was observed to be −77.1 +/− 6.0 kcal/mol and −329 +/− 43.8 kcal/ mol, respectively. Buried surface area (BSA) criterion was used to evaluate the amount of protein surface not in contact with water. Higher BSA value of 2017.2 +/− 66.3 indicated that the structural complex is compact. Furthermore, RMSD of 0.6 +/− 0.4 was reported to be significantly lower for cluster 3. The four best structural complexes from cluster three were downloaded as PDB files and further examined in terms of their interactions in DS. The third structural complex displayed favorable interactions compared to the remaining three complexes. Residues Tyr340, Tyr341, Trp342, Glu345, Glu348, Arg398, and Lys399 from C-Raf were observed to interact with residues Asp70, Ala73, Lys80, Tyr81, Trp84, His86, Gly110, Tyr120, Tyr181, Glu182 of RKIP in the chosen structural complex (Supplementary Table S2). The interactions were majorly characterized by electrostatic bonds, hydrogen bonds and π-π/π-alkyl bonds. FFT-based docking program ZDOCK was further used to get a unanimous docking pattern as HADDOCK structural complex. The 10 structural complexes as ZDOCK results were downloaded and analyzed in DS. Two different binding modes were observed between C-Raf and RKIP resulting in two clusters. The complexes from the largest cluster were further investigated on the basis of intermolecular interactions. Accordingly, Complex5 from the largest cluster was observed to make favorable interactions in terms of hydrophobic, electrostatic and hydrogen bonds. 
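As a side note on the cluster statistics above, the HADDOCK Z-score is simply the number of standard deviations a cluster's score lies from the mean score over all clusters, so the ranking logic can be reproduced in a few lines of Python. Only the cluster 3 score (−182.3) below is taken from the text; the other three values are placeholders for illustration.

```python
from statistics import mean, pstdev

# HADDOCK scores per cluster; cluster3 is the value quoted in the text,
# the remaining three are hypothetical placeholders.
cluster_scores = {"cluster1": -120.0, "cluster2": -160.0,
                  "cluster3": -182.3, "cluster4": -110.0}

mu = mean(cluster_scores.values())
sigma = pstdev(cluster_scores.values())

# Z-score: how many standard deviations a cluster lies from the average score.
# More negative Z-scores indicate better clusters relative to the rest.
for name, score in sorted(cluster_scores.items(), key=lambda kv: kv[1]):
    print(f"{name}: HADDOCK score {score:7.1f}, Z-score {(score - mu) / sigma:+.2f}")
```

The most negative Z-score always belongs to the cluster whose score lies furthest below the average, which is why cluster 3 was carried forward; the ZDOCK poses, screened by their interactions as described above, gave the complementary model discussed next.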
Thus, Complex5 was selected as an ideal model from ZDOCK analysis. Residues Tyr340, Tyr341, Trp342, Glu345, Arg398, and Lys414 from C-Raf were observed to interact with RKIP binding pocket residues Ala73, Pro74, Lys80, Tyr81, His86, Gly110, and Tyr181 in Complex5 characterized by hydrogen, hydrophobic and electrostatic interactions (Supplementary Table S2). Furthermore, analysis of docked complexes from the above servers showed the occurrence of six C-Raf (Tyr340, Tyr341, Trp342, Glu345, Arg398, and Lys399) and eight RKIP (Asp70, Ala73, Lys80, Tyr81, His86, Gly110, Tyr181, and Glu182) actively interacting amino acid residues observed as common in docked complexes from both servers (Supplementary Table S2). Comparative analysis of the binding modes was performed by superimposing the structural complexes in DS. The generated complexes from the two knowledge-based docking programs superimposed well on each other, thereby displaying a similar binding pattern of interaction between the two proteins ( Figure 1A). The complexes were further subjected to MD simulations to assess their stability over a time period of 10 ns by computing the RMSD of backbone atoms. The structural complex generated by HADDOCK program demonstrated an average RMSD of 0.26 nm compared to the average RMSD of 0.74 nm displayed by the complex from ZDOCK docking server ( Figure 1B). As a result, the mode of interaction (hereafter referred to as WT complex) displayed by the HADDOCK docking protocol was chosen for subsequent analysis. Interaction Dynamics of WT and Mutant C-Raf/RKIP Structural Complexes In order to analyze the structural consequences upon mutation, the six mutant structures constructed were also subjected to 10 ns of MD simulations. The stability of the structural complexes was assessed by plotting the backbone RMSD values obtained throughout the production run. The inferred RMSD profile for WT complex demonstrated an average RMSD of 0.269 nm as stated above. Similarly, the C-Raf mutant Tyr340Phe/Tyr341Phe residues rendered an average RMSD of 0.527 nm (Figure 2; Table 1). Additionally, the above mutant was observed to abrogate the binding of RKIP with C-Raf. This was consistent with the experimental analysis reported earlier (Park et al., 2006). The three alanine mutations of the RKIP pocket residues Asp70, Tyr120, and Tyr181 were observed to demonstrate an average RMSD of 0.612, 0.628, and 0.622 nm, respectively, (Figure 2; Table 1). The Asp70Ala mutation rendered a substantial increase towards the end of 10 ns, thereby depicting the disruption of regular complex formation. Leucine mutation of RKIP residue Pro74 exhibited an average RMSD of 0.680 nm, while Pro112 mutation displayed an average RMSD of 0.524 nm (Figure 2; Table 1). The Pro74Leu mutant showed a very high fluctuation, while the Pro112Leu mutant displayed a considerable increase as compared to the WT structural complex. The above RMSD analysis suggested that the WT structural complex displayed a stable RMSD throughout 10 ns of production run as compared to the six structural mutants (Figure 2). Subsequent analysis of backbone RMSF by residue indicated an average of 0.165 nm for the WT complex ( Figure 3; Table 1). The C-Raf mutant Tyr340Phe/Tyr341Phe demonstrated an average RMSF of 0.721 nm (Figure 3; Table 1). Mutations in RKIP residues rendered an average RMSF in the range of 0.424-0.661 nm ( Figure 3; Table 1). From the RMSF graph, it was observed that all the mutants exhibited higher fluctuations across the complex (Figure 3). 
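The backbone RMSD and per-residue RMSF traces compared above can be generated with standard trajectory-analysis tooling; the sketch below uses MDAnalysis as one possible stand-in (the paper does not state which tool produced Figures 2 and 3), and the topology/trajectory file names are assumptions.

```python
import MDAnalysis as mda
from MDAnalysis.analysis import rms

# Hypothetical run files for the WT complex and one of the constructed mutants.
systems = {"WT C-Raf/RKIP": ("wt.tpr", "wt_10ns.xtc"),
           "Tyr340Phe/Tyr341Phe": ("y340f_y341f.tpr", "y340f_y341f_10ns.xtc")}

for label, (topology, trajectory) in systems.items():
    u = mda.Universe(topology, trajectory)
    # Backbone RMSD against the first frame, with least-squares superposition
    rmsd = rms.RMSD(u, select="backbone").run().results.rmsd[:, 2] / 10.0  # A -> nm
    # Per-residue fluctuations of the alpha carbons; for a rigorous RMSF the
    # trajectory should first be aligned to an average structure.
    calphas = u.select_atoms("name CA")
    rmsf = rms.RMSF(calphas).run().results.rmsf / 10.0                     # A -> nm
    print(f"{label}: mean RMSD {rmsd.mean():.3f} nm, mean RMSF {rmsf.mean():.3f} nm")
```

Numbers obtained this way would be directly comparable to the per-system averages collected in Table 1.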
Additionally, the RMSF of the mutated residues in the respective mutant complex systems was compared with the RMSF of the same residues in the WT complex system. Large deviations were observed for all the mutated residues compared with their RMSF values in the WT complex (Table 2). The overall analysis suggests that mutations of the RKIP residues could interrupt its complex formation with the N-terminal region of C-Raf. Furthermore, the binding mode for the C-Raf/RKIP interaction (WT) was analyzed by taking a representative pose from the last 5 ns of the simulation run. The stable WT complex obtained after MD simulations established strong intermolecular interactions characterized by hydrogen bonds, electrostatic and π-hydrophobic bonds. Residues Tyr340, Tyr341, Glu345, Glu348, Arg398, and Lys399 of C-Raf were observed to be involved in complex formation with residues Pro74, Lys80, Trp84, Gly108, Gly110, Tyr181, Glu182, and Gly186 of RKIP (Figures 4A,B; Table 3). Interactions of C-Raf residues Tyr340, Tyr341, Glu345, Glu348, Arg398, and Lys399 with RKIP residues Lys80, Trp84, Gly110, Tyr181, and Glu182 were found to be consistent both at the beginning (best docked HADDOCK complex) and at the end of the simulation run. The presence of the abovementioned interactions at both time points suggests a very stable binding. Moreover, Gly110 and Tyr181 of RKIP were found to interact with C-Raf via hydrogen bonds (H-bonds) (Table 3). Similar interactions via H-bonds with the above residues were reported in a recent study in which HIF-1α was shown to interact with the RKIP ligand-binding pocket residues (Srivani et al., 2020). The free energy of binding (ΔG_bind) was calculated for the WT C-Raf/RKIP structural complex as well as for the constructed mutant complexes using the MM/PBSA methodology. Compared with the BFE of −174.443 +/− 94.364 kJ/mol obtained for the WT complex, the mutant complexes yielded markedly less favorable BFE values (Table 4). The RKIP binding-pocket mutants Pro74Leu, Pro112Leu, Tyr120Ala, and Tyr181Ala and the C-Raf mutant Tyr340Phe/Tyr341Phe showed the least favorable BFEs among the mutant complexes. This could be attributed to a change in conformation, thereby impairing the binding. From these values, it can be concluded that the interface residues in the WT structural complex provide strong stabilization and are essential elements for regular complex formation of RKIP with the N-terminal of C-Raf. Moreover, per-residue energy decomposition analysis of the MD simulation-derived equilibrated trajectory of the C-Raf/RKIP structural complex was estimated by MM/GBSA implemented in the HawkDock web-server. Our results showed that C-Raf residues Arg398, Tyr341, Lys399, Glu348, and Glu345, along with RKIP residues Lys80, Tyr181, Gly186, Tyr81, Glu182, Trp84, Pro74, and Gly110, are the most critical residues for complex formation (Table 5). From the HawkDock MM/GBSA analysis, it was perceived that Arg398 (C-Raf) and Lys80 (RKIP) contribute the most to the binding of the complex (Figures 4C,D; Table 5). Additionally, ANCHOR web-server analysis of the MD simulation-derived structure of the C-Raf/RKIP complex also confirmed Arg398 (C-Raf) and Lys80 (RKIP) as major sites contributing significantly to the binding of the complex (Table 6). Short MD simulations of 10 ns were additionally performed for the Arg398Ala and Lys80Ala mutant complexes in order to check their BFE through MM/PBSA.
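Before turning to those two additional mutants, the MM/PBSA comparison already made can be summarized at a glance; the snippet below simply tabulates the binding free energies quoted in the text (Table 4 values) and restates each mutant as a penalty relative to the WT value, so no new data are introduced.

```python
# Binding free energies (kJ/mol) quoted in the text for the WT complex and the
# six constructed mutants (MM/PBSA, Table 4); more negative means tighter binding.
bfe_kj_mol = {
    "WT C-Raf/RKIP":       -174.443,
    "Tyr340Phe/Tyr341Phe":   74.785,
    "Asp70Ala":             -81.406,
    "Pro74Leu":             170.387,
    "Pro112Leu":            118.645,
    "Tyr120Ala":             24.055,
    "Tyr181Ala":             36.921,
}

wt = bfe_kj_mol["WT C-Raf/RKIP"]
for name, dg in sorted(bfe_kj_mol.items(), key=lambda kv: kv[1]):
    penalty = dg - wt   # positive = binding penalty relative to the WT complex
    print(f"{name:22s} dG_bind = {dg:9.3f} kJ/mol   penalty vs WT = {penalty:8.3f} kJ/mol")
```

The largest penalties (Pro74Leu and Pro112Leu) correspond to the mutations described here as abolishing binding, while Asp70Ala retains a negative, though weakened, binding free energy.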
A significant difference in BFE was noted for the Arg398Ala (−57.705 +/− 116.120 kJ/mol) and Lys80Ala (51.747 +/− 101.711 kJ/mol) mutant complexes as compared to the WT C-Raf/RKIP complex (Table 7). An additional long-time scale MD simulation of 300 ns was performed to investigate the stability of the Arg398Ala mutation, starting from the representative structure of the WT C-Raf/RKIP structural complex. It was noted that the RMSD increased significantly by around 180 ns for the Arg398Ala mutant complex. Moreover, the representative snapshot with the largest RMSD value at 180 ns (0.9 nm) for the Arg398Ala complex was extracted and superimposed with the WT representative snapshot (Figure 5A). A significant conformational change occurred in the Arg398Ala mutant complex near the mutation site as compared with the equilibrated native-state conformation of the WT C-Raf/RKIP structural complex (Figure 5B). In addition, Ala398 (C-Raf) was observed to form a carbon-hydrogen bond with Ser185 (RKIP), while the two conventional hydrogen bonds with Gly186 (RKIP) observed in the WT complex were lost. The distance of the carbon-hydrogen bond between Ala398 (C-Raf) and Ser185 (RKIP), 3.59 Å, was greater than the distances of the conventional hydrogen bonds between Arg398 (C-Raf) and Gly186 (RKIP) in the WT complex (Table 3), thus depicting the importance of Arg398 in regular complex formation with RKIP. Collectively, the C-Raf/RKIP interface binding sites acquired from the WT representative interaction analysis, the MM/GBSA analysis and the binding sites defined previously by experimental analysis were mapped onto the complex as a surface with different colors (Figure 6). Accordingly, the binding sites Tyr340 (1, pink), Tyr341 (2, light blue), Tyr81 (3, orange), Pro74 (4, dark blue), Gly110 (5, dark grey), Lys80 (6, light pink), and Tyr181 (7, light orange) can be considered the most essential ones located at the interface of the C-Raf/RKIP interaction (Figure 6). An additional literature survey identified the anti-leprosy drug Clofazimine as a C-Raf/RKIP interaction inhibitor, for which residues Pro74 and Lys80 were revealed as crucial binding sites (Guo et al., 2018). Moreover, Pranlukast was also identified as a novel ligand binding to the conserved binding pocket of RKIP and inhibiting its interaction with C-Raf, where residues Tyr81 and Tyr181 played a vital role. Recently, Suramin, initially utilized to treat African sleeping sickness, was identified as a C-Raf/RKIP interaction inhibitor binding to residue Tyr181 (Guo et al., 2021). Furthermore, the binding sites Gly186 (8, maroon), Arg398 (9, violet), Glu345 (10, green), Glu348 (11, dark pink), Glu182 (12, tan), and Lys399 (13, yellow), which are located slightly away from the C-Raf/RKIP interface, can be considered potential allosteric sites (Figure 6). From the above overall analysis, the mapped 13 interface binding sites acquired from the C-Raf/RKIP interaction can be considered druggable binding sites ("hot spots") for the future design of small-molecule inhibitors that may block the protein-protein interaction between C-Raf and RKIP.

DISCUSSION

RKIP/PEBP-1 plays a modulatory role in numerous kinase signaling cascades and was identified as an endogenous regulator of kinases of the RAF/MEK/ERK pathway (Tavel et al., 2012).
Besides its role in normal physiological phenomena, dysregulated RKIP expression was observed to contribute significantly to pathophysiological illnesses including Alzheimer's disease, various cancerous ailments and diabetic nephropathy (Keller et al., 2004;Escara-Wilke et al., 2012;Ling et al., 2014;Farooqi et al., 2015). Interestingly, RKIP was also observed to be a metastasis suppressor (Granovsky and Rosner, 2008;Yesilkanal and Rosner, 2014). Differential regulation of RKIP was also perceived in a variety of human cancers. As a result, RKIP might provide as a valuable indicator for tumor metastases tissues. It is relevant therefore to understand the basis of RKIP inhibition for its application in physiological abnormalities. Developing novel biomarkers will benefit in effective perturbation of RKIP's involvement in pathological diseases and characterize its ostensibly conflicting roles. The cell sheet migration inhibitor, Locostatin is the only available potent RKIP inhibitor identified till date (Shemon et al., 2009;Rudnitskaya et al., 2012). Locostatin functions as a PPI inhibitor by binding RKIP and abrogates it from interacting with C-Raf (Beshir et al., 2011;Janjusevic et al., 2016). In spite of the accessibility of Locostatin, design of better probes to hinder the association of RKIP with C-Raf for future implications is needed on an urgent basis. However, the 3D structural complex of the PPI between C-Raf and RKIP has not yet been elucidated despite the availability of their individual crystal structures. Herein, we established the molecular basis of interaction between the two proteins by an in silico protocol. A systematic study was designed and a consensus mode of C-Raf/RKIP interaction was obtained by using two knowledge-based docking programs. In particular, the 3D structural model was obtained using a combination of HADDOCK and ZDOCK protein-protein docking web-servers. Similar strategy of integrating multiple docking programs for selection of an ideal binding mode was also implemented in previous studies (Kausar et al., 2013;Venkatesan et al., 2015;Galeazzi et al., 2018;Raghav et al., 2019). The model procured from the HADDOCK knowledge-based docking program was the most reliable as indicated by its stable RMSD obtained throughout the 10 ns of MD production run (Figure 1), negative Z-score, contribution of van der Waals and electrostatic energy. Predominantly, the interactions between the two proteins were characterized by several hydrogen, electrostatic and hydrophobic bonds. Residues Tyr340, Tyr341, and Arg398 of C-Raf were observed to bind Trp84, Gly108, Gly110, Tyr181, and Gly186 of RKIP by six conventional hydrogen bonds (Figure 4, Table 3). Furthermore, the docking/MD protocol was integrated with in silico mutagenesis of few conserved interface residues occurring as common amino acids obtained through HADDOCK and ZDOCK docking results (Supplementary Table S2). The impact of mutations on complex formation was verified by additional MD simulations amalgamated with BFE analysis by MM/PBSA methodology. With a BFE of -174.443 kJ/mol for C-Raf/RKIP WT structural complex, the binding energies of constructed mutants were estimated and compared ( Table 4). The two most crucial tyrosines of C-Raf involved in complex formation were mutated to phenylalanine simultaneously. This resulted in repulsion of the two proteins disrupting their bond followed by a substantial upsurge in its stability as observed by its RMSD, RMSF, and BFE of 74.785 kJ/mol (Figures 2, 3; Tables 1, 2, 4). 
This was in agreement with previously reported experimental study (Park et al., 2006), thus explaining their significant contribution in binding with RKIP via hydrogen bonds. Alanine and leucine mutagenesis of the five conserved residues of RKIP were further analyzed by their diverse RMSD and RMSF plots and differential BFEs (Figures 2, 3; Tables 1, 2, 4). The above mutations were embarked on the basis of the prior RKIP binding study with C-Raf amino acids 1-147 (Wu et al., 2014). However, the influence of the above mutations on the N-terminal region of C-Raf (amino acids 340-615) has not been elucidated yet. The two leucine mutations of Pro74 and Pro112 were observed to abolish the binding between the two proteins and this was also witnessed with their positive binding energies of 170.387 and 118.645 kJ/mol, respectively, (Table 4). Similarly, alanine mutagenesis of Tyr120 and Tyr181 diminished the C-Raf/RKIP interaction resulting in insignificant BFE of 24.055 and 36.921 kJ/mol, respectively ( Table 4). From the above analysis, it can be perceived that Pro74, Pro112, Tyr120, and Tyr181 are the most crucial residues for the binding of RKIP with the aforementioned N-terminal region of C-Raf. Alanine mutation of Asp70 also decreased the binding affinity by 46% resulting in BFE of −81.406 kJ/mol ( Table 4). The weaker binding affinities of phenylalanine, leucine and alanine mutations in RKIP residues may attribute to the compromised stability and integrity of its ligand-binding pocket with the N-terminal region of C-Raf residues. This explains that the above mutated residues contribute significantly to the regular complex formation of RKIP with C-Raf at the N-terminal. The snapshot derived from the last 1 ns equilibrated trajectory was subjected to MM/GBSA analysis by HawkDock web-server to characterize the per-residue energy decomposition of important amino acids in complex formation. Moreover, residue contribution in terms of energy and druggable site prediction was also performed by ANCHOR web-server. It was intriguing to note that both servers predicted two residues as indispensable ones for C-Raf/RKIP interaction (Tables 5, 6). Arg398 of C-Raf and Lys80 of RKIP could be regarded as "hot spot" residues along with the above previously identified interface residues and deemed to be druggable sites for future development of novel inhibitors. Using one of the identified "hot spots" as an example, a long-time scale 300 ns MD simulation was performed for the Arg398Ala mutation according to the similar strategy adopted in previous PPI study (Du et al., 2020). A noteworthy difference was noticed near the mutation site when the representative trajectory extracted at 180 ns (RMSD of 0.9 nm) was compared with the representative WT snapshot ( Figure 5). Taken together, comparative protein-protein docking, MD simulations, MM/PBSA and MM/GBSA results revealed vital residues in the interaction of RKIP with C-Raf N-terminal residues Tyr340-Lys615. These 13 residues were mapped as surface with different colors demonstrating their location at the C-Raf/RKIP interface ( Figure 6). Accordingly, the binding sites 1-7 located close at the interface can be regarded as active sites while binding sites 8-13 located away from the interface can be considered as allosteric inhibition sites. Identification of RKIP hot spots is imperative in designing anti-RKIP drugs when the aim is to disrupt the C-Raf/RKIP association. 
Likewise, the acquired information regarding the pivotal amino acids of C-Raf can be utilized in the process of developing C-Raf-mimicking peptides for RKIP inhibition, as well as for the development of novel Raf-1 kinase inhibitors. The present study is the first attempt towards a computational binding analysis of RKIP's interaction with the N-terminal of C-Raf (residues Tyr340-Lys615) employing a protein-protein docking approach and MD simulations.

CONCLUSION

Targeting the RAF/MEK/ERK pathway represents a potential strategy for the treatment of pathological illnesses including Alzheimer's disease and cancer. In this work, the interaction of RKIP with the N-terminal of C-Raf (residues Tyr340-Lys615) was investigated by employing two knowledge-based protein-protein docking web-servers, which provided a consensus mode of interaction. Docking was followed by refinement of the associated complex by MD simulations. In silico mutagenesis of residues of either C-Raf or RKIP that could significantly impact complex formation indicated markedly weaker binding (less favorable binding free energies) for the constructed mutant complexes as compared to the free energy of binding of the wild-type C-Raf/RKIP structural complex (−174.443 kJ/mol). A substantial loss of stability was noticed for the mutant complexes, as observed from their individual root mean square deviations and fluctuations, thus suggesting that these residues contribute significantly to the regular C-Raf/RKIP interaction. Analysis of the equilibrated MD trajectory revealed two residues (Arg398 and Lys80) as quintessential sites contributing to the C-Raf/RKIP interaction. It is intriguing to note that, compared with the equilibrated native conformation, noteworthy conformational and interaction changes occurred in the Arg398Ala mutant (one of the "druggable hot spots") near the mutation site. Overall, our model allows for an improved understanding of the interactions between the N-terminal region of C-Raf and RKIP. The thirteen binding residues were mapped as a surface on the basis of their location at the C-Raf/RKIP interface, leading to the identification of active (Tyr340, Tyr341, Tyr81, Pro74, Gly110, Lys80, and Tyr181) and allosteric (Gly186, Arg398, Glu345, Glu348, Glu182, and Lys399) protein-protein interaction inhibition sites. This will provide valuable hints to elucidate the structural basis of RKIP binding with C-Raf and help in the effective design of novel inhibitors blocking the C-Raf/RKIP interaction.

DATA AVAILABILITY STATEMENT

The original contributions presented in the study are included in the article/supplementary files; further inquiries can be directed to the corresponding author.

AUTHOR CONTRIBUTIONS

SP conceived the idea of the project, performed the computational simulations, analyzed the data and wrote the original manuscript. SR and GL proofread the manuscript. JH provided the funding acquisition and KL supervised the project. All the authors have read and approved the manuscript for submission.
Controlled interconversion of quantized spin wave modes via local magnetic fields

In the emerging field of magnonics, spin waves are considered for information processing and transmission at high frequencies. Towards this end, the manipulation of propagating spin waves in nanostructured waveguides for novel functionality has recently been attracting an increasing focus of research. Excitation with uniform magnetic fields in such waveguides favors symmetric spin wave modes with odd quantization numbers. Interference between multiple odd spin wave modes leads to a periodic self-focusing effect of the propagating spin waves. Here we demonstrate how antisymmetric spin wave modes with even quantization numbers can be induced by local magnetic fields in a well-controlled fashion. The resulting interference patterns are discussed within an analytical model and experimentally demonstrated using microfocused Brillouin light scattering (μ-BLS).

Introduction

Collective excitations of the electronic spin structure, known as spin waves, and their quasiparticles, i.e. magnons, are promising for high-frequency information processing and transmission. [1][2][3][4] Additional functionality can be gained from the fact that spin waves can also be coupled to other wave-like excitations, such as photons 5,6 and phonons. 7 Furthermore, many classical wave phenomena, such as diffraction, 8,9 reflection and refraction, [10][11][12] interference 13,14 and the Doppler effect 15,16 were observed with spin waves. At the same time, quantum mechanical interactions, such as magnon scattering [17][18][19] and interactions with other quasiparticles, 20 were observed as well, and provide additional avenues for utilizing spin waves. Understanding these phenomena is key to realizing practical applications in the rapidly emerging field of magnonics. Spin waves can encode information either in their amplitude 21,22 or their phase. 23,24 Compared with conventional electronic approaches, spin waves possess several advantages, including potentially reduced heat dissipation, 25 wave-based computation 26,27 and strong nonlinearities, 28,29 which may all be beneficial for efficient data processing. The recent emerging interest in magnonics can be attributed to the improvement of modern micro-fabrication, which enables the realization of magnetic microstripes with characteristic dimensions ranging from several μm to below one hundred nm, 30,31 as well as integrated micro-antennas for excitation. 32,33 When such a magnetic microstripe is magnetized with an external magnetic field (Hext) in-plane and perpendicular to the stripe direction, the spin waves are called Damon-Eshbach modes 34 and can be localized either at the edge or in the center region, depending on their frequencies. 35,36 Previous studies demonstrated that spin waves in the center region (so-called waveguide spin waves) are quantized into several discrete modes due to the confinement along the width of the waveguide. 37 In addition, a homogeneous rf field can generally only excite laterally symmetrically-distributed, odd waveguide spin wave modes. 38 The interference of several of these modes results in a periodic self-focusing, where the waveguide spin waves propagate in diamond chain-like channels. 32,39,40 In magnonic applications, the manipulation of spin wave propagation is of great significance for the functionality of such devices, especially for logic elements [21][22][23][24] and multiplexers. 41
Towards this end, constructive or destructive interference of multiple coherent spin waves impacts the spatial intensity distributions of the resultant waves, and therefore controls the energy and information flows associated with the spin waves. Previous investigations focused mostly on odd spin wave modes, since they are easier to generate with homogeneous excitations. In this work, we demonstrate the controlled interconversion of odd and even waveguide spin waves in yttrium iron garnet (YIG) microstripes by breaking the symmetry via well-defined local inhomogeneous magnetic fields. This allows for a reconfigurable mechanism of mode conversion, unlike previous experiments where the symmetry is broken by the geometry of the waveguide. 42 The local magnetic fields are generated by permalloy (Py, Ni81Fe19) micro-magnets placed asymmetrically next to the YIG waveguide. Note that the saturation magnetization (Ms) of permalloy is about five times larger than that of YIG. Using a combination of theoretical calculations, magnetic simulations, and microfocused Brillouin light scattering (μ-BLS), we demonstrate that the different spin wave channels are essentially controlled by the phase difference between odd and even modes, which can be practically modulated through the relative position of the micro-magnets and the magnitude of the external magnetic field.

Analytical Calculations

We consider a thin YIG microstripe with thickness t = 50 nm, width w = 3 μm and infinite length l, magnetized in-plane in a direction perpendicular to the length by a magnetic field H0 = 650 Oe, as shown in the inset of Fig. 1(a). The material parameters used in the theoretical calculation are Ms(YIG) = 1960 G, exchange constant A(YIG) = 4×10^-7 erg/cm, and damping factor α(YIG) = 7.561×10^-4. 31 As a first step, the waveguide spin wave modes in a microstripe can be described based on the dipole-exchange theory of the spin wave dispersion spectra in a continuous magnetic film. 43,44 This theory provides an explicit relation, Eq. (1), between the wave vector k = (kx, ky) and the frequency f of the spin waves, where p = 1-(1-e^(-kt))/kt, k^2 = kx^2 + ky^2, and λex = (2A/Ms^2)^(1/2) is the exchange length. 45 The two limiting relations for kx = 0 and ky = 0 correspond to Damon-Eshbach and backward volume modes, and γ = 2.8 MHz/Oe is the gyromagnetic ratio. Neglecting the effect of the demagnetizing field (Hd), which is important only close to the edges of the microstripe, the waveguide spin waves are confined along the width direction and can be described as quantized planar spin waves propagating along the length direction. This means that only waveguide spin waves with ky components satisfying the standing-wave resonance condition can propagate in the microstripe. These ky components form a set of discrete values, described by the simple expression

k_y,n = nπ/w . (2)

Combining Eqs. (1) and (2), the dispersion relation curves for each mode with n = 1,2,…,5 are plotted in Fig. 1(a). Only lateral modes with odd quantization numbers n can be excited under a uniform rf magnetic field, and their amplitudes decrease with increasing n as 1/n. 38 For a frequency of f = 4 GHz we can calculate the corresponding kx,n. The spatial distribution of the nth mode's dynamic magnetization and the integrated superposition, i.e. the interference of the odd modes, can then be written as

m_n(x, y) = (1/n) sin(nπy/w) cos(kx,n x − 2πft + φn) , (3)

I_Σ(x, y) = Σ_n m_n(x, y) , (4)

where φn is the excitation phase. The patterns of the first three odd modes are mapped in Fig.
1(b) for -2πft + φn = 0, which coincides with the maximum dynamic magnetization at x = 0. According to Eqs. (3) and (4), the major contribution to IΣ(x,y) comes from the first few modes, since the intensities of the modes are proportional to 1/n^2. Therefore, n = 11 is sufficient for an accurate analysis, and the corresponding interference pattern is mapped in the upper panel of Fig. 1(c). In order to determine the amplitude of the precession of every spin, we calculated the maximum value of IΣ(x,y) over -2πft + φn ∈ (0, 2π), i.e. I(x,y) = max IΣ(x,y), where I(x,y) is the amplitude of the waveguide spin wave in the material (without considering damping effects), which can be detected using the μ-BLS technique. The waveguide spin wave intensity pattern for odd n is mapped in the lower panel of Fig. 1(c). It shows that the interference of the odd modes results in a symmetric, rhombohedral-shaped channel. Here, it should be pointed out that, mathematically, the phase differences of the lower modes (n = 1, 3) between the adjacent nodes (I, II and III in Fig. 1(c)) of the spin wave pattern are approximately 2qπ+π, where q is an arbitrary integer, as shown in Fig. 1. Introducing new modes to interfere with the existing modes should modify this flow pattern. Towards this end, we consider the even modes because: 1. they have the same frequency as the previously considered odd modes, and therefore the coherent interference would lead to a time-invariant pattern; 2. they should be easy to excite and should have lifetimes comparable to those of the odd modes in the waveguides. In contrast to the odd modes, the even modes have antisymmetric patterns; in other words, mn(x,y) + mn(x,w-y) = 0 for even n according to Eq. (3). The patterns of the first two even modes are mapped in Fig. 2(a). The interference patterns strongly depend on the difference of the initial phases (Δφ = φodd − φeven), which means that the waveguide spin wave channels can be controlled by tuning Δφ between the odd and the even modes. For our analysis, representative values (0, π/2, π, and 3π/2) of Δφ were chosen by fixing φodd = 0 in Eq. (3) and setting φeven = 0, π/2, π, and 3π/2, respectively. The corresponding patterns of IΣ(x,y) and I(x,y) are shown in Fig. 2(b)-(e). Compared with Fig. 1(c), the introduction of the new modes changes the patterns from symmetric diamond-like shapes to antisymmetric zig-zag shapes. In addition, the paths of the waveguide spin waves can be changed continuously if Δφ is varied continuously in the range from 0 to 2π. Since the phase shift is given by Δφ = kd, we investigated the control of Δφ via two different pathways: the change of the distance d, and the change of the wave vector k.

Effect of the distance d

In the discussion above, the introduction of even modes allows the propagating waveguide spin waves to be manipulated through their interference with the intrinsic odd modes. The generation of even modes can be realized via the breaking of translational symmetry, for example, by passing through curved waveguides. 42,46 In this work, we demonstrate that the magnetic symmetry of a single YIG microstripe can be broken by a non-symmetric distribution of lateral micro-magnets, i.e., a permalloy dot, as shown in Fig. 3(a). The simulations were performed using MuMax3. 47 The material parameters for permalloy (Py) are Ms(Py) = 1.08×10^4 G, A(Py) = 1.3×10^-11 J/m and α(Py) = 0.01. 48 The external magnetic field (Hext) set in the simulation was 640 Oe.
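Before turning to the micromagnetic results, the analytical mode-superposition picture of Eqs. (2)-(4) can be made concrete with a short numerical sketch. The script below builds the odd-mode interference pattern I(x,y); the longitudinal wave vectors kx,n are placeholders standing in for the values that would follow from the dispersion relation, Eq. (1), at 4 GHz.

```python
import numpy as np

w = 3.0e-6                        # stripe width (m), as in the analytical model
x = np.linspace(0.0, 30e-6, 600)  # propagation coordinate along the stripe
y = np.linspace(0.0, w, 200)
X, Y = np.meshgrid(x, y)

# Placeholder longitudinal wave vectors k_x,n (rad/m) for the odd modes at 4 GHz;
# in the paper these follow from the dispersion relation, Eq. (1).
kx = {1: 2.0e6, 3: 1.6e6, 5: 1.1e6, 7: 0.7e6, 9: 0.4e6, 11: 0.2e6}

def intensity(extra_modes=None, dphi=0.0):
    """Maximum of the mode superposition over one oscillation period (the I(x,y)
    map), using the mode form of Eq. (3); extra_modes can add even modes with a
    phase offset dphi to mimic the symmetry breaking discussed in the text."""
    modes = [(n, k, 0.0) for n, k in kx.items()]
    if extra_modes:
        modes += [(n, k, dphi) for n, k in extra_modes.items()]
    best = np.zeros_like(X)
    for ph in np.linspace(0.0, 2.0 * np.pi, 72, endpoint=False):
        s = sum((1.0 / n) * np.sin(n * np.pi * Y / w) * np.cos(k * X + ph + p0)
                for n, k, p0 in modes)
        best = np.maximum(best, np.abs(s))
    return best

odd_only = intensity()                          # symmetric, diamond-like channel
with_even = intensity({2: 1.8e6}, dphi=np.pi)   # asymmetric, zig-zag-like channel
print(odd_only.shape, odd_only.max(), with_even.max())
```

Adding an even-n term with a chosen phase offset to the same sum, as in the last line, qualitatively reproduces the shift from the diamond-like to the zig-zag channels discussed above.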
The y component of the static effective magnetic field (Heff) distribution inside of the YIG microstripe is shown in the color map of Fig. 3(a). Due to the strong induced dipolar field, the lateral symmetry of Heff across the width of the waveguide is gradually broken in the segment close to the permalloy dot, while Heff is again symmetric in the segments far away from the permalloy dot. For exciting the spin waves, we apply a continuous excitation of "sin" function hx = h0sin(2πft) in the antenna region, with f = 4 GHz, and h0 = 1 Oe, which is weak enough to avoid nonlinear effects. The total simulation time was 80 ns, to ensure that the system reaches a steady state. Fig. 3(b) shows the pattern of the waveguide spin waves in a single YIG microstripe, which is similar to the theoretical result in Fig. 1(c). Note, that the length of the spin wave modulation period in the simulation is slightly different to the ones previously calculated analytically, which is due to the reduced effective width by the demagnetic field and the slightly different Hext. Fig. 3(c) to (f) show the propagating waveguide spin wave patterns when the permalloy dot was located at the first node, antinode, the second node, and antinode respectively. They are qualitatively in accordance with the patterns of Δφ = π, 3π/2, 0 and π/2 in Fig. 2. Practically, the odd modes are excited in the antenna region, with φodd = 0. As the odd modes propagate along the stripe for a certain distance d, the phases shift by kd, where k is the corresponding wavevectors. At the first node position, the phase shift of the main contributing odd modes is approximately φodd = 2qπ+π as discussed above. Here, since the symmetry is broken, the even modes are excited with φeven = 0, and therefore, the final interference pattern in Fig. 3(c) agrees well with the analytical result of Δφ = π. Similarly, the patterns of Fig. 3(d) to (f) agree with Δφ = 3π/2, 0 and π/2, respectively. The y component of the effective magnetic field (Heff) distribution inside of the YIG stripe with a permalloy (Py) dot (green, same hereinafter). Patterns of the waveguide spin waves propagating in (c) single YIG stripe, and YIG stripe with a lateral permalloy dot at the (d) first node, (e) first antinode, (f) second node and (g) second antinode. In addition, it should be pointed out that the initial phase of the newly introduced even modes is also determined by which side the permalloy dot is located on. For example, comparing Fig. 3(c) and (e), the patterns of the waveguide spin waves after passing by the permalloy dot are inversely mirrored. Similar behavior is also observed for Fig. 3(d) and (f). This indicates that a phase difference of π can be induced by placing the permalloy dot on the other side. Therefore, the even modes can be annihilated (enhanced) by the destructive(constructive) interference with other even modes generated by other micromagnet in close proximity to the waveguide on the same (other) side one period away. In order to demonstrate this, we simulated the waveguide spin wave patterns in a YIG microstripe with three permalloy dots distributed on one side and two sides as shown in Fig. 4(a) and (b). In Fig. 4 (a), the permalloy dots were located at the first three nodes on one side. The waveguide spin waves experienced the following processes: 1. the first even mode (EM1) was generated with φEM1 = 0 at the first node, resulting in the waveguide spin waves propagating nonsymmetrically in the following self-focusing period; 2. 
the second even mode (EM2) was generated with φEM2 = 0 at the second node. However, at this point the first even mode has acquired a phase shift of π and destructively interferes with the second even mode. Therefore, the asymmetry disappeared in the next period; 3. the third even mode (EM3) was generated with φEM3 = 0 at the third node, again leading to an asymmetric pattern in the following period. On the contrary, in Fig. 4(b), the second even mode was generated with φEM2 = π and thus constructively interfered with the first even mode, as did the third even mode. The anti-symmetric component was therefore increased compared with Fig. 3(c). In this section we demonstrated that Δφ can be tuned by changing the relative position of the permalloy dots near the YIG microstripe, including the distance d to the excitation and the side on which a dot is located. Changing the distance d leads to a phase shift of the odd modes of kd, and switching sides causes the even-mode phases to shift by π. Using multiple permalloy dots introduces multiple even modes, whose constructive (destructive) interference increases (decreases) the anti-symmetric component of the propagating waveguide spin waves.

Effects of the wave vector k

According to the dispersion relation described by Eq. (1), the wave vectors k of the waveguide spin waves at a specific frequency can be modified by H0, which is the most commonly tunable parameter among the variables in the equation once the devices are fabricated. 49,50 Fig. 5(a) shows a schematic illustration of the investigated device, which is a 4.5-μm wide, 75-nm thick YIG stripe. The fabrication of the structures was done using electron-beam lithography and lift-off. For the excitation of the spin waves, the shorted end of a coplanar waveguide made of Ti(20 nm)/Au(500 nm) with a width of ~2 μm was placed on top of the end of the YIG microstripe. The spin waves excited by the antenna structure, which is connected to a microwave generator, can reach frequencies in the several-GHz range. In this work, we fixed the frequency at 4 GHz. All observations of the spin waves were performed using microfocused Brillouin light scattering (μ-BLS) 51 with a laser wavelength of 532 nm. First, we measured the 4 GHz spin wave intensity versus Hext in a single YIG stripe with the laser spot fixed at the center of the cross in the red circle indicated in Fig. 5(a). The BLS intensity versus magnetic field is shown in Fig. 5(b), where the peak is located around 650 Oe. This means that the 4-GHz spin waves propagate with the highest efficiency in the YIG microstripe for Hext ≈ 650 Oe. Subsequently, the intensity patterns of propagating spin waves in a single YIG microstripe under 630 and 670 Oe were mapped as shown in Fig. 5(c) and (d). Comparing the two patterns in the single YIG microstripe, the self-focusing period expanded with increasing Hext due to the collective decrease and convergence of the ks of the odd modes. 52 Subsequently, the 4.5-μm permalloy dot was deposited using a combination of e-beam lithography and sputter deposition (see supplementary for experimental details), laterally on one side of the YIG microstripe ~3.5 μm away from the antenna, almost at the first node of the pattern measured for 630 Oe. Lastly, the spin wave intensities were imaged in the same region of the YIG microstripe under various magnetic fields (610 to 690 Oe) as shown in Fig. 6. The patterns shown in Fig. 6(b) are in accordance with Fig. 3(b) and (c), where the spin waves flow toward the permalloy dot. On the contrary, comparing the patterns of Fig.
6(b) and (d), the effect of the permalloy dot at 670 Oe is to squeeze the spin wave flow toward the other side instead of attracting to the same edge, which indicates that the generated even modes here have a π phase difference with those in Fig. 6(b). According to Fig. 5(b), the 4-GHz spin waves propagate with the largest amplitude in the middle of YIG microstripe under Hext ≈650 Oe. The spin waves with a specific frequency in the waveguide could reach the highest intensity near the ferromagnetic resonant field. Similar phenomena were observed in measurements of the spin waves localized at the two edges of a stripe. The two SWs beams were split more with the increase of the field at a fixed frequency, 54 as well as the decrease of the frequency at a fixed field 35 due to the demagnetizing magnetic field. In order to demonstrate this effect, the Heff across the YIG stripe versus its width are plotted in Fig. 7(a), where the black dash line indicates the level of 650 Oe. The integrated BLS normalized intensities across the width close to permalloy dot were measured for different magnetic fields as shown in Fig. 7(b). The intersections between the dash line and solid lines in Fig. 7(a) agree with the locations of the BLS intensity peaks in Fig. 7(b) for the different magnetic fields. The presence of the permalloy dot introduces an additional static dipolar field, which shifts the position of the effective field being 650 Oe closer to (further away from) the permalloy dot when Hext < 650 Oe (Hext > 650 Oe), attracting (repelling) the spin wave flow. Conclusion In summary, we demonstrated a new method, using interference of different spin waves, to manipulate the channels of the waveguide spin waves propagating in a magnetic microstripe. The waveguide spin wave channels can be tuned by the phase difference Δφ between the intrinsic odd modes, which are preferred by homogenous excitation. Additional even modes can be introduced via breaking the magnetic symmetry through the non-symmetrical placement of a permalloy dot next to the wave guide. The phase shift Δφ is controlled by the relative position of the permalloy dot to the antenna and the external magnetic field Hext. An additional phase difference of π can be introduced if the permalloy dot is located on the opposite side of the microstripe or the Hext exceeds the field for the most efficient spin wave propagation. These findings will assist with magnonic engineering, such as the design of a multiplexer combined with piezoelectric strain control of the micro-magnets. They might also enable new functionality, such as the non-reciprocity. Furthermore, note that with the suitable design of additional magnetic structures with sufficiently high anisotropy, the additional stray field may be modulated in a bistable manner, which could provide additional possibilities for controlling spin wave propagation. Lastly, this model system also serves as an ideal system for fundamental scientific research on the physics of wave propagation.
Promoter hypermethylation and downregulation of trefoil factor 2 in human gastric cancer Trefoil factor 2 (TFF2) plays a protective role in gastric mucosa and may be involved in the progression of gastric cancer, but the detailed functions and underlying molecular mechanisms are not clear. The present study used a combination of clinical observations and molecular methods to investigate the correlation between abnormal expression of TFF2 and gastric cancer progression. TFF2 expression was evaluated by reverse transcription polymerase chain reaction (RT-PCR), quantitative PCR (qPCR), and western blot and immunohistochemistry analyses. TFF2 methylation levels were analyzed by genomic bisulfite sequencing method. The results showed that TFF2 mRNA and protein expression were decreased in gastric cancer tissues compared with the matched non-cancerous mucosa, and the decreased level was associated with the differentiation and invasion of gastric cancer. Moreover, the average TFF2 methylation level of CpG sites in the promoter region was 70.4% in three gastric cancer tissues, while the level in associated non-neoplastic tissues was 41.0%. Furthermore, the promoter hypermethylation of TFF2 was also found in gastric cancer cell lines, AGS and N87, and gene expression was significantly increased following treatment with a demethylating agent, 5-Aza-2′-deoxycytidine. In conclusion, TFF2 expression was markedly decreased in gastric cancer and promoter hypermethylation was found to regulate the downregulation of TFF2. TFF2 has been suggested as a tumor suppressor in gastric carcinogenesis and metastasis. Introduction Gastric cancer is a multi-step progression from normal gastric mucosa to chronic gastritis, atrophy, intestinal metaplasia, dysplasia and ultimately cancer (1). Three closely related trefoil factors (TFFs) known in humans, pS2 (TFF1), spasmolytic polypeptide (SP or TFF2) and intestinal TFF (ITF or TFF3) (2,3), have been previously reported to be associated with the development of various types of cancer (4). TFF1, a tumor suppressor gene, exhibits decreased expression in precancerous and gastric cancer tissues (5,6). TFF3 expression is significantly elevated in intestinal metaplasia biopsy specimens compared with that in normal tissues, and the samples with an elevated expression of TFF3 lack goblet cell features (7). Furthermore, as TFF3 promotes tumorigenesis by increasing cell invasion and metastasis (8), gastric carcinoma patients with positive expression of TFF3 show invasive characteristics and poor prognosis (9). TFF2 is a principal cytoprotective TFF in the stomach and is highly expressed in ulcer tissue (10). Certain studies have previously reported that TFF2 expression is upregulated in gastric cancer tissues and that the overexpression is associated with cancer invasion, metastasis and a poor prognosis (11,12). However, several studies have shown that TFF2 expression is decreased significantly in gastric adenomas compared with the associated normal tissue, suggesting that the loss of TFF2 expression, as with the loss of TFF1, is an important event in gastric carcinogenesis (13)(14)(15). However, the correlation between the downregulation of TFF2 expression and clinicopathological data, as well as the detailed molecular mechanism underlying TFF2 abnormal expression, remain unclear. 
The majority of gastric cancers are diagnosed in the advanced stage (16), which is generally resistant to radiotherapeutic or chemotherapeutic treatments. Therefore, it is important to identify early regulatory molecules involved in gastric cancer progression, which may aid the detection of gastric cancer at an early and curable stage. Recently, downregulated expression of protease-activated receptor 4 (PAR4) has been reported in gastric cancer tissues, and the loss of PAR4 expression in gastric cancer may result from hypermethylation of the gene promoter (17). In the current study, we aimed to define the difference in TFF2 expression in gastric cancer and the methylation level of the gene.

Materials and methods

Gastric tissue samples. Gastric specimens (n=28) were obtained from the tumor and an adjacent non-cancerous area, ≥6 cm from the tumor tissues, of gastric carcinoma patients at the First Affiliated Hospital of Kunming Medical College (Kunming, China). The mean age of the patients at diagnosis was 56 years. The non-neoplastic tissue was confirmed to lack tumor cell infiltration using histological analysis. The tissues were immediately placed in liquid nitrogen and stored at -80˚C until use. A gastric cancer tissue microarray representing 110 types of gastric cancer with their non-neoplastic resection margins was constructed (18) at the Shanghai Outdo Biochip Center (Shanghai, China). Human samples were used in accordance with the requirements of the Ethical Committee of the Kunming Institute of Zoology, the Chinese Academy of Sciences, under the guidelines of the World Medical Assembly (Declaration of Helsinki). Written informed consent was obtained from the patients' families.

RNA extraction and polymerase chain reaction (PCR). RNA extraction and first-strand cDNA synthesis were performed as previously described (19). For semi-quantitative reverse transcription PCR (RT-PCR) and quantitative PCR (qPCR), the following primers were used: forward, 5'-CTGCTTCTCCAACTTCATCT-3' and reverse, 5'-CTTAGTAATGGCAGTCTTCC-3' for TFF2 (74-bp product); and forward, 5'-ATGGGGAAGGTGAAGGTCG-3' and reverse, 5'-GGGGTCATTGATGGCAACAATA-3' for glyceraldehyde 3-phosphate dehydrogenase (GAPDH; 107-bp product). GAPDH was used as an internal control. Following RT-PCR, the amplicons were separated by electrophoresis in a 2% agarose gel that was stained with ethidium bromide and viewed under ultraviolet illumination. qPCR was performed using a continuous fluorescence detector (Opticon Monitor; Bio-Rad, Hercules, CA, USA) with a SYBR Green real-time PCR kit (Takara Bio, Inc., Dalian, China) and the following reaction conditions: initial denaturation at 95˚C for 1 min followed by 40 cycles at 95˚C for 15 sec, 60˚C for 15 sec and 72˚C for 20 sec. Each sample was run three times. No-template controls (no cDNA in the PCR) were run to detect non-specific or genomic amplification and primer dimerization. Fluorescence curve analysis was performed using the Opticon Monitor software. The relative quantitative evaluation of TFF2 levels was performed using the E-method (20) and expressed as the ratio of TFF2 to GAPDH transcripts in the tumor tissue divided by that ratio in the non-neoplastic tissue of the same patient. The identities of the RT-PCR and qPCR products were confirmed by DNA sequencing.

Cell culture.
AGS and N87 human gastric cancer cells were obtained from the American Type Culture Collection (Manassas, VA, USA). AGS cells were cultured in a 1:1 mixture of Dulbecco's modified Eagle's medium and Ham's media. N87 cells were cultured in RPMI-1640 media containing 10% fetal calf serum, 100 U/ml penicillin and 100 mg/ml streptomycin. The cells were grown in a humidified atmosphere containing 5% CO 2 at 37˚C. The cells were seeded at a density of 1x10 6 cells/ml in a 60 mm dish and treated with 10 mM 5-Aza-2'-deoxycytidine (5-Aza-2'-dC; Sigma-Aldrich, St. Louis, MO, USA). DMSO was used as a control. The cells were collected after 3 days and subjected to RT-PCR, qPCR and western blot analysis. Western blot analysis. Tissue and cell samples were homogenized in radioimmunoprecipitation assay buffer containing a protease inhibitor cocktail (Sigma-Aldrich). The protein concentration was determined using a protein assay kit (Bio-Rad). Samples (containing 50 µg of protein) were loaded into a sodium dodecyl sulfate-polyacrylamide gel electrophoresis gel, electrophoresed and then electro-transferred onto a polyvinylidene fluoride membrane. The membrane was subsequently blocked with 3% bovine serum albumin and incubated with an anti-human TFF2 polyclonal antibody (Protein Tech, Chicago, IL, USA) and a horseradish peroxidase-conjugated secondary antibody (Santa Cruz Biotechnology, Santa Cruz, CA, USA). Protein bands were visualized using Super Signal reagents (Thermo Fisher Scientific, Inc., Rockford, IL, USA). Tissue immunohistochemistry (IHC). Tissue IHC was performed as previously described (21). Briefly, antigen retrieval was performed by heating samples in an autoclave at 121˚C for 5 min. Dewaxed sections were pre-incubated with blocking serum and then incubated overnight with an anti-human TFF2 antibody (P-19; Santa Cruz Biotechnology) at 4˚C. Specific binding was detected using a streptavidin-biotin-peroxidase assay kit (Maxim, Fujian, China). The section was counterstained with Harris hematoxylin. Direct microscopic micrographs were captured using a Leica DFC320 camera controlled using Leica IM50 software (Leica, Mannheim, Germany). Sections incubated with normal goat IgG served as negative controls, which were devoid of any detectable immunolabeling. The specificity of the anti-TFF2 antibody was confirmed using an overnight preincubation at 4˚C with its antigen in a 20-fold molar excess of antigen to antibody. The preincubation with TFF2 antigens resulted in an absence of immunolabeling. Immunohistochemical staining was semi-quantitatively assessed by measuring the intensity of the staining (0, 1, 2 or 3) and the extent of staining (0, 0%; 1, 1-10%; 2, 11-50%; and 3, 51-100%). The scores for the intensity and extent of staining were multiplied to yield a weighted score for each case (maximum possible, 9). For the statistical analysis, the weighted scores were grouped into two categories, in which scores of 0-3 and 4-9 were considered negative and positive, respectively (22). Bisulfite sequencing. Genomic DNA from carefully selected 20-µm sections of gastric cancer, non-neoplastic tissues, and AGS and N87 cell lines was isolated using the Universal Genomic DNA Extraction kit (Takara, Bio, Inc.) and bisulfite-converted using the Clontech EpiXplore™ Methyl Detection kit (Takara, Bio, Inc.). TFF2 promoter sequences were amplified from the bisulfite-converted DNA by PCR, purified from agarose gels and subcloned into the pBackZero T Vector (Takara, Bio, Inc.). 
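As a brief aside before the sequencing details, the weighted immunostaining score defined above (staining intensity 0-3 multiplied by an extent category 0-3, with totals of 0-3 called negative and 4-9 positive) is easy to encode explicitly; the helper below is only an illustration of that scoring rule, not code used in the study.

```python
def ihc_weighted_score(intensity: int, extent_percent: float) -> tuple[int, str]:
    """Weighted IHC score = staining intensity (0-3) x extent category (0-3);
    totals of 0-3 are called negative and 4-9 positive, as described in the text."""
    if intensity not in (0, 1, 2, 3):
        raise ValueError("intensity must be 0, 1, 2 or 3")
    if extent_percent == 0:
        extent = 0          # no stained cells
    elif extent_percent <= 10:
        extent = 1          # 1-10%
    elif extent_percent <= 50:
        extent = 2          # 11-50%
    else:
        extent = 3          # 51-100%
    score = intensity * extent
    return score, ("positive" if score >= 4 else "negative")

# Example: moderate staining (2) over 60% of tumor cells -> score 6, positive
print(ihc_weighted_score(2, 60))
```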
For each sample, 11 individual clones were sequenced to identify methylated cytosine residues. The PCR primer sequences used were forward, 5'-GGGATTTTTTTATGTTATTTGTTGG-3' and reverse, 5'-ATAAAAAAACCCTCTCCTTCACTTACAAAA-3'.

Statistical analysis. All statistical results were analyzed using SPSS 11.0 software (SPSS, Inc., Chicago, IL, USA). Fisher's exact and χ2 tests were used to analyze the correlation between TFF2 expression and clinicopathological parameters (Tables I and II). Differences in the numerical data between the two paired groups were evaluated using the paired Wilcoxon test (Fig. 1). P<0.05 was considered to indicate a statistically significant difference.

Results

Downregulated expression of TFF2 mRNA in gastric cancer and correlation with clinicopathological parameters. TFF2 mRNA expression in gastric cancer tissues was examined using RT-PCR. In total, four pairs of samples were randomly selected from the 28 patients and normalized to the GAPDH level. As shown in Fig. 1A, TFF2 mRNA expression was significantly decreased in cancer tissues compared with the associated normal mucosa. To quantify the differences in the expression of TFF2 mRNA, qPCR was performed on 28 gastric tumor tissue samples. TFF2 expression was downregulated in 93% (26 out of 28) of gastric cancer tissue samples compared with the associated non-neoplastic tissues. In addition, the mRNA levels of TFF2/GAPDH in gastric cancer tissues were significantly lower than those in the corresponding non-neoplastic mucosal tissues (mean ± SE, 3.4±2.7 vs. 9.6±5.4, respectively; P=0.046) (Fig. 1B). The clinical significance of the loss of TFF2 expression was further investigated based on the clinicopathological data. As shown in Table I, there were significant differences in TFF2 mRNA expression in well- and moderately differentiated tumors versus poorly differentiated tumors (P=0.019), and in tumors with lymph node invasion versus non-invasive tumors (P=0.026). In detail, TFF2 mRNA was reduced by a fold-change of 15.0±4.1 (mean ± SE) in the 22 poorly differentiated tumors compared with a fold-change of 1.9±0.7 in the six well- and moderately differentiated tumors (P=0.009; paired Wilcoxon test), and by a fold-change of 15.6±3.7 in the 23 lymph node-invasive tumors compared with a fold-change of 1.1±0.4 in the five non-invasive tumors (P=0.002; paired Wilcoxon test) (Fig. 1C).

Table I. Correlation between TFF2 mRNA expression levels and clinicopathological data in gastric cancer patients.

Downregulation of TFF2 protein expression in gastric cancer tissues by western blot and tissue IHC analyses. The protein expression levels of TFF2 in normal and gastric cancer tissues were verified using western blot analysis. After the samples were normalized to the β-actin level, a marked reduction or loss of TFF2 protein was observed in four gastric tumor tissue samples compared with the matched non-malignant tissues (Fig. 2). TFF2 protein levels in normal and malignant gastric mucosa were also assessed using an IHC assay.

Table II. Correlation between TFF2 protein expression levels and clinicopathological data in gastric cancer patients.

In 110 gastric cancer tissue microarray assays, TFF2 expression was downregulated in 82% (90 out of 110). TFF2 was expressed at high levels in all investigated normal mucosa tissues, and staining was identified from the basal-to-middle portions of the gastric glands. Furthermore, TFF2 was localized in the cytoplasm and the membrane of normal gastric epithelial cells (Fig. 3A).
However, the expression was significantly reduced in well- (Fig. 3B) and moderately (Fig. 3C) differentiated intestinal gastric cancer tissues, while TFF2 expression was almost absent in the poorly differentiated intestinal and diffuse types of gastric cancer (Fig. 3D). Sections incubated with normal goat IgG served as negative controls (Fig. 3E). Analysis of the correlation between TFF2 expression and clinicopathological data showed that decreased TFF2 expression was closely associated with tumor cell differentiation and lymph node invasion (Table II). In detail, TFF2 expression was decreased in 86.9% of poorly differentiated cancers and 65.4% of well-and moderately differentiated cancers (P=0.02; χ 2 test). TFF2 expression was decreased in 88.8% of positive lymph node invasion and 63.3% of negative lymph node invasion tumors (P=0.004, χ 2 test) (Table II). Treatment with 5-Aza-2'-dC increases TFF2 expression in the AGS gastric cancer cell line. To elucidate the potential molecular mechanisms underlying the process of TFF2 downregulation in the progression of gastric cancer, AGS cells were treated with 10 mM 5-Aza-2'-dC, which is a demethylating agent. RT-PCR analysis showed that TFF2 expression in AGS cells was significantly increased following 5-Aza-2'-dC treatment for 3 days (Fig. 4A). qPCR indicated a 3.89-fold increase in the mRNA expression levels of TFF2 following 5-Aza-2'-dC-treatment, while western blot analysis also indicated that TFF2 protein expression increased in AGS cells treated with 5-Aza-2'-dC (Fig. 4B). The results suggested that the epigenetic alteration may be involved in the downregulation of TFF2 expression in the progression of gastric cancer. Analysis of the promoter region methylation of the TFF2 gene in gastric cancer tissues. Treatment with 5-Aza-2'-dC induced demethylation and led to the upregulated expression of TFF2 in AGS cells. Therefore, the methylation level of the TFF2 gene promoter was further analyzed in three gastric cancer and non-neoplastic tissue samples, as well as in AGS and N87 gastric cancer cell lines. Using the genomic bisulfite sequencing method, 16 CpG sites were analyzed in a 571-bp region containing part of the TFF2 promoter region. It included six CpGs found after the transcription start site and 10 CpGs located in the ~300-bp 5'-flanking region. The A B C D E average promoter methylation level of three gastric cancer tissues was 70.4% and the control of non-neoplastic tissues was 41.0%, which showed that gastric cancer tissues with a decreased expression of TFF2 exhibited hypermethylation levels at the 16 CpG sites. In addition, AGS and N87 gastric cancer lines exhibited 85.2 and 93.7% methylation levels at the 16 CpG sites, respectively (Fig. 5). Therefore, these results indicated that promoter hypermethylation may lead to the inhibition of TFF2 transcription in gastric cancer. Discussion TFFs are widely expressed in the mucosa of the gastrointestinal tract and play a role in inflammation, injury and repair. TFF2, a member of the TFFs, is expressed in the cytoplasm of gastric mucosal neck cells and acts as a mitogen to promote cell migration and suppress acid secretion (11). An SP-expressing metaplasia lineage is markedly associated with early gastric cancer and may be an important candidate for the development of metaplastic processes in gastric adenocarcinoma (23,24). 
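The methylation levels quoted above (for example, 70.4% across the 16 CpG sites) are, in bisulfite-clone sequencing, usually computed as the fraction of methylated cytosines over all scored CpG positions in all sequenced clones. A minimal sketch of that calculation, using randomly generated placeholder clone calls rather than the actual sequencing data, is shown below.

```python
import numpy as np

# Rows = sequenced clones (11 per sample in the study), columns = the 16 CpG
# sites in the analysed TFF2 promoter region; 1 = methylated, 0 = unmethylated.
# The matrix below is random placeholder data, not the actual sequencing calls.
rng = np.random.default_rng(0)
clone_calls = rng.integers(0, 2, size=(11, 16))

# Overall promoter methylation level for this sample.
overall_pct = 100.0 * clone_calls.mean()

# Per-CpG-site methylation level across clones (useful for lollipop-style plots).
per_site_pct = 100.0 * clone_calls.mean(axis=0)

print(f"overall methylation: {overall_pct:.1f}%")
print("per-site methylation:", np.round(per_site_pct, 1))
```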
In the present study, by RT-PCR, qPCR, western blotting and immunohistochemical assays, the expression of TFF2 was shown to be frequently downregulated in gastric cancer tissues compared with the associated normal mucosa. In detail, TFF2 was expressed in the neck cells and the deeper glands of the normal gastric mucosa, but the expression was significantly decreased in the cancer tissues. Furthermore, no TFF2 expression was detectable in certain malignant tissues from poorly differentiated gastric cancer patients or highly lymph node-invasive cancer patients. The evidence that TFF2 expression was found to decrease is consistent with the results of previous studies, and decreased TFF2 expression is associated with the proliferation and malignant transformation of gastric cancer mucosa (15). However, the overexpression of TFF2 in gastric carcinoma tissues has also been shown in additional previous studies (12). The contradictory results may be attributed to the differences among cancer cell types. In the qPCR and IHC analyses of the current study, the decreased expression of TFF2 was 92.9% and 81.8%, respectively, and the reduced expression was found to significantly correlate with tumor cell differentiation and invasion. Therefore, there was reduced TFF2 expression in poorly differentiated tumor cells compared with well-and moderately differentiated tumor cells, and reduced TFF2 expression in positive lymph node invasion tumors compared with negative lymph node invasion tumors. The dysregulation of TFF2 expression has been associated with gastric cancer cell migration, invasion and resistance to apoptosis. However, the underlying mechanisms associated with aberrant TFF2 expression remain unclear. Transcriptional silencing by promoter hypermethylation has emerged as one of the important mechanisms of gastric cancer development (25). TFF2 methylation has been shown to inversely correlate with mRNA levels of TFF2 at the time of Helicobacter pylori infection and to increase throughout gastric tumor progression (26). In the present study, a demethylating agent was found to increase the expression of TFF2 in AGS cells. Therefore, the methylation status of cytosines was further analyzed in sites of the TFF2 promoter region of gastric cancer and non-neoplastic tissues, as well as in AGS and N87 gastric cancer cell lines. Promoter hypermethylation was confirmed in gastric cancer tissues compared with that in non-neoplastic gastric mucosa. In addition, TFF2 promoter hypermethylation was also found in AGS and N87 gastric cancer cell lines. These results indicated that the TFF2 gene was undermethylated in the normal mucosa, but overmethylated in gastric cancer tissues, suggesting that promoter hypermethylation may lead to the inhibition of TFF2 transcription in gastric cancer tissues. In conclusion, the current study showed that the expression levels of TFF2 were downregulated in gastric cancer tissues, particularly in poorly differentiated cancer cells and lymph node-positive tumors. Notably, the aberrant DNA promoter methylation is critical in the downregulation of TFF2 expression. These results may be useful to elucidate the molecular Figure 5. Genomic bisulfite sequencing of the TFF2 promoter-associated CpG sites in gastric cancer and non-neoplastic tissues. TFF2 promoter methylation in DNA from three gastric cancer tissues, one control of non-neoplastic tissue and two gastric cancer cell lines, AGS and N87. 
The average methylation at each analyzed CpG site in the TFF2 promoter was indicated based on the bisulfite sequencing of 11 individual clones. TFF2, trefoil factor 2.
2018-04-03T04:18:46.374Z
2014-02-21T00:00:00.000
{ "year": 2014, "sha1": "6af907d26f3b0b7ef0278179944b4ad290b8634e", "oa_license": "CCBY", "oa_url": "https://www.spandidos-publications.com/ol/7/5/1525/download", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "6af907d26f3b0b7ef0278179944b4ad290b8634e", "s2fieldsofstudy": [ "Biology", "Medicine" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
121288291
pes2o/s2orc
v3-fos-license
Charge radii of octet and decuplet baryons The charge radii of the octet and decuplet baryons have been calculated in the framework of the chiral constituent quark model ($\chi$CQM) using a general parameterization (GP) method. Our results are comparable with the latest experimental studies as well as with other phenomenological models. The effects of SU(3) symmetry breaking and of the GP parameters pertaining to the one-, two- and three-quark contributions have also been investigated in detail. The internal structure of baryons is determined in terms of the electromagnetic Dirac and Pauli form factors $F_1(Q^2)$ and $F_2(Q^2)$ or, equivalently, in terms of the electric and magnetic Sachs form factors $G_E(Q^2)$ and $G_M(Q^2)$ [1]. The electromagnetic form factors are further related to the static low-energy observables of charge radii and magnetic moments. Although Quantum Chromodynamics (QCD) is accepted as the fundamental theory of strong interactions, the direct prediction of these kinds of observables from first principles still remains a theoretical challenge, as they lie in the nonperturbative regime of QCD. The mean square charge radius ($r^2_B$), giving the possible "size" of a baryon, has been investigated theoretically in various models such as the Skyrme model [2], the $1/N_c$ expansion [3], chiral perturbation theory [4], lattice QCD [5], etc. The results of the different theoretical models are, however, not consistent with each other. Several measurements have also been made for the charge radii of p, n, and the strange baryon $\Sigma^-$ [6,7]. The chiral constituent quark model ($\chi$CQM) [8], coupled with the "quark sea" generation through the chiral fluctuation of a constituent quark into Goldstone bosons (GBs) [9,10], finds applications in the low energy regime. Since this model successfully explains many of the low energy hadronic matrix elements [11,12,13,14], it therefore becomes desirable to extend it to calculate the charge radii of the octet and decuplet baryons using the general parametrization (GP) method [15]. The most general form of the charge radii operator consists of the sum of one-, two-, and three-quark terms with coefficients A, B, and C. Solving the charge radii operators for the spin $\frac{1}{2}^+$ and spin $\frac{3}{2}^+$ baryons, and evaluating the matrix elements corresponding to the operators in Eqs. (2) and (3) together with the $\chi$CQM parameters $a$, $a\alpha^2$, $a\beta^2$, and $a\zeta^2$, representing the probabilities of fluctuations to pions, $K$, $\eta$, and $\eta'$, respectively, the charge radii squared $r^2_{B(B^*)}$ of the octet (decuplet) baryons can be calculated. The results have been presented in Table 1. To understand the implications of chiral symmetry breaking and the "quark sea", we have also presented the results of the NQM including the one-, two-, and three-quark contributions of the GP parameters. If we consider the contribution coming from the one-quark term only, the charge radii of the charged baryons are equal, whereas all neutral baryons have zero charge radii in the NQM. These predictions are modified on the inclusion of the two- and three-quark terms of the GP method in the NQM and are further modified on the inclusion of the "quark sea" and SU(3) symmetry breaking effects. Thus, it seems that the GP parameters alone are able to explain the experimentally observed non-zero charge radii of the neutral baryons. However, the NQM being unable to account for the "proton spin problem" and other related quantities, the results have been presented for the $\chi$CQM.
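The explicit GP operators (Eqs. (1)-(3) of the paper) are not reproduced in this excerpt. Assuming only that the one-quark piece has the generic form of a constant $A$ multiplying the sum of the constituent-quark charges, the NQM statement quoted above follows directly:
\[
\left.\langle r^2_B\rangle\right|_{\text{one-quark}} \;=\; A\sum_{i=1}^{3} e_i \;=\; A\,Q_B ,
\]
so every neutral baryon ($Q_B=0$) has a vanishing charge radius and all singly charged baryons share the common value $\pm A$; the two- and three-quark terms (coefficients $B$ and $C$), and subsequently the "quark sea" and SU(3) breaking, are what generate the non-zero radii of the neutral states.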
The importance of strange quark mass has been investigated by comparing the χCQM results with and without SU(3) symmetry breaking.The SU(3) symmetry results can be easily derived by considering α = β = 1 and ζ = −1.The SU(3) breaking results are in general higher in magnitude than the SU(3) symmetric results.The SU(3) symmetry breaking corrections are of the order of 5% for the case of p, Σ + , Σ − , and Ξ − baryons whereas this contribution is more than 20% for the neutral octet baryons.We have also compared our results with the other phenomenological models and our results are in fair agreement in sign and magnitude with the other model predictions.Since experimental information is not available for some of these charge radii, the accuracy of these relations can be tested by the future experiments. The decuplet baryon charge radii, presented in Table 1, the inclusion of SU(3) symmetry breaking increases the predictions of charge radii as in case of octet baryons.Again, the sign and magnitude of the decuplet baryon charge radii in χCQM are in fair agreement with the other phenomenological models with the exception for neutral baryons.One of the important predictions in χCQM is a non-zero ∆ 0 charge radii which vanishes in NQM as well as in some other models.The contribution of the three-quark term in the case of decuplet baryons is exactly opposite to that for the octet baryons.Unlike the octet baryon case, the inclusion of the three-quark term increases the value of the baryon charge radii. The χCQM using a GP method is able to provide a fairly good description of the charge radii of the octet and decuplet baryons.The most significant prediction of the model is the non-zero value pertaining to the charge radii of the neutral baryons.The SU(3) symmetry breaking parameters pertaining to the strangeness contribution and the GP parameters pertaining to the one-, two-and three-quark contributions are the key in understanding the octet and decuplet baryon charge radii.New experiments aimed at measuring the charge radii of the other baryons are needed for a profound understanding of the hadron structure in the nonperturbative regime of QCD.Thus at the leading order constituent quarks and the weakly interacting Goldstone bosons constitute the appropriate degrees of freedom in the nonperturbative regime of QCD.
2011-07-20T10:02:35.000Z
2011-07-20T00:00:00.000
{ "year": 2011, "sha1": "012c6a43d59056eeff964c0b88eb564fa79e6f58", "oa_license": null, "oa_url": "https://arxiv.org/pdf/1107.3931", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "590e19166c81c8e0529e7771b20d69a569c4fd23", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
251320404
pes2o/s2orc
v3-fos-license
Electric field control of anomalous Hall effect in CaIrO$_3$/CaMnO$_3$ heterostructure We demonstrate an electric field control of anomalous Hall effect emerging in CaIrO$_3$/CaMnO$_3$ heterostructures. We fabricate both electron-type and hole-type carrier samples by tuning epitaxial strain and then control the carrier density in CaIrO$_3$ layer via electric double layer gating technique. As the Fermi energy of CaIrO$_3$ is tuned close to the Dirac line node, anomalous Hall conductivity is enlarged in both carrier-type samples. This result reveals that the anomalous Hall effect comes from the intrinsic origin reflecting the Dirac like dispersion in CaIrO$_3$. We propose that band splitting induced by the interface ferromagnetism yields several band crossing points near the Dirac line node. These points play as a source of the Berry curvature and contribute to the anomalous Hall effect. Oxide heterointerfaces exhibit a variety of exotic physical properties due to complex interplay between charge, spin and orbital degrees of freedom across the interface [1][2][3]. In particular, interface ferromagnetism driven by charge transfer is one of the well-known examples. Ferromagnetism emerges at various interfaces between two non-ferromagnetic compounds; for example, manganites with an antiferromagnetic insulator ground state (CaMnO 3 , SrMnO 3 ) and paramagnetic conductors (CaRuO 3 , SrIrO 3 , etc.) [4][5][6][7]. When these materials with different chemical potentials are adjacent to each other, electrons are injected into originally empty e g orbitals of Mn 4+ to adjust both chemical potentials [8]. This leads to an intrinsic doping of electrons to manganites and assists double exchange interaction. As a result of competition between the double-exchange interaction near the interface and superexchange interaction in the bulk region, the system takes a canted antiferromagnetic state and exhibits weak ferromagnetism. This interface ferromagnetism in turn gives rise to anomalous Hall effect (AHE), which is experimentally verified in SrIrO 3 /SrMnO 3 superlattices with short period [5]. Theoretical calculation predicts that the AHE is intrinsic effect where electrons acquire anomalous Hall velocity induced by the Berry curvature [9]. In this context, it is interesting to modulate the strength of spin-orbit interaction or carrier density (i.e., the position of the Fermi energy) by an external electric field, since the magnitude of the anomalous Hall conductivity (AHC) is linked to these parameters [10]. However, an electric field modulation is difficult for SrIrO 3 /SrMnO 3 superlattices because the electric field cannot uniformly modulate every interfaces due to the screening effect, which prohibits us from studying the electric field effect on the AHE emerged at the interface. In this study, we investigate an electric field effect on AHE in CaIrO 3 /CaMnO 3 heterostructures. For SrIrO 3 /SrMnO 3 , AHE was reported only in superlattice structure so far [5]. On the other hand, for CaIrO 3 /CaMnO 3 , emergence of AHE was reported even in bilayer structure [11]. The reported carrier density of CaIrO 3 is one or two orders of magnitude smaller than that of SrIrO 3 ; around 10 17 cm −3 and 10 19 cm −3 orders in bulk single crystals [12] and epitaxial thin films [13], respectively, which is more suitable for an electric field control of the Fermi energy. 
Furthermore, CaIrO 3 is known to be a topological semimetal, so-called nodal line semimetal, and to possess Dirac line node near the Fermi energy, wherein the conduction and valence bands cross along a closed line in momentum space. This Dirac line node imparts high-mobility carriers due to the large band dispersion 2/14 as in the case of other Dirac electron systems. Moreover, by breaking the time-reversal symmetry, degeneracy of the line node can be lifted, yielding Weyl nodes which function as sources of the Berry curvature and contribute to AHE. These features render this system an ideal platform to examine the Fermi energy dependence of AHE originating from Diraclike band structure in oxides. We fabricate heterostructures with n-and p-type carriers by controlling epitaxial strain imposed by substrates and then modulate the carrier density in CaIrO 3 layer via electric double layer (EDL) gating method [14][15][16][17]. in which a gate electrode of Pt coil was immersed. The schematic of the device structure is depicted in Fig. S2(a) of Supplementary Materials. Typical channel size for electrical transport measurements is 2 × 4 mm 2 , with which the measured resistance was converted into sheet resistance (R S ). The transport properties were measured under vacuum with a back pressure of 1×10 −5 Torr. Before the electrical measurements, the IL was stored in a vacuum hot plate at 90 • C for several hours to remove water contamination which may induce some electrochemical reactions. We first discuss the difference of the transport properties between the heterostructure and each constituent film. There has been a report about metallic temperature dependence of resistivity and emerging AHE for compressively strained 20 nm-thick Ce 0.05 Ca 0.95 MnO 3 films 3/14 grown on (001) oriented YAlO 3 substrates [18]. Therefore, it is important to clarify which layer contributes to the transport phenomena in our heterostructures. Figure 1(a) shows the temperature dependence of R S for the CaIrO 3 /CaMnO 3 heterostructure and the CaIrO 3 film grown on LaAlO 3 substrates. The inset shows the R S of 1.5 nm-thick Ce 0.05 Ca 0.95 MnO 3 film as a function of temperature. The behavior of R S in the heterostructure is similar to that in the CaIrO 3 film. Both samples exhibit a semimetallic temperature dependence and the R S moderately increases with decreasing temperature. On the other hand, the R S of Ce 0.05 Ca 0.95 MnO 3 film is several orders of magnitude higher than that of the heterostructure and exhibits an insulating behavior as shown in Fig. 1(b), which is totally different from the Ce 0.05 Ca 0.95 MnO 3 films grown on YAlO 3 (Ref. [18]). This contrast plausibly comes from tensile strain imposed on our Ce 0.05 Ca 0.95 MnO 3 films grown on LaAlO 3 , which stabilizes an insulating ground state as reported previously [19]. In opposition to the similarity of the behavior in R S between the heterostructure and the shows R AHE as a function of B at various temperatures. The R AHE term is extracted by subtracting the R H B from the measured R yx , where the R H B term is estimated from the linear fitting in the higher magnetic field region as shown in the red broken line in Fig. 1 (c). R AHE emerges below ∼60 K and exhibits an anticlockwise hysteresis at low temperature. It is worth noting that the sign of AHE is positive for the heterostructure while previously reported Ce 0.05 Ca 0.95 MnO 3 films on YAlO 3 substrates exhibits the negative sign of AHE [18]. 
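The extraction of the anomalous component described above (R_AHE obtained by subtracting the ordinary-Hall slope, fitted in the high-field region, from the measured R_yx) amounts to a linear fit and a subtraction. The snippet below is a generic sketch with invented field and Hall-resistance arrays; the variable names, the toy curve, and the field window are illustrative and are not taken from the paper.

```python
import numpy as np

# Hypothetical (already antisymmetrized) Hall data: field B in tesla, Hall resistance R_yx in ohms.
B = np.linspace(-9, 9, 181)
R_yx = 0.8 * B + 2.0 * np.tanh(B / 0.5)   # toy curve: ordinary slope + saturating AHE-like step

# Ordinary Hall slope R_H from a linear fit restricted to the high-field region,
# where the anomalous contribution is saturated (here the positive branch, B > 5 T).
mask = B > 5.0
R_H, offset = np.polyfit(B[mask], R_yx[mask], 1)

# Anomalous Hall resistance: measured R_yx minus the ordinary-Hall background R_H * B.
R_AHE = R_yx - R_H * B

print(f"fitted R_H = {R_H:.3f} ohm/T (toy input was 0.8)")
print(f"saturated R_AHE at +9 T = {R_AHE[-1]:.3f} ohm (toy input was ~2.0)")
```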
From the comparison of the transport properties between the heterostructure and each constituent film, we have confirmed that CaIrO 3 layer is dominant in both electrical conduction and the observed AHE rather than electron doped CaMnO 3 layer in the heterostructure. Here we propose a possible mechanism of the observed AHE as follows: (i) electrons transfer from CaIrO 3 to CaMnO 3 layers and induce double-exchange interaction, resulting in a weak ferromagnetism (canted antiferromagnetism) in CaMnO 3 layer near the interface, (ii) magnetization is expected to be induced in CaIrO 3 near the interface due to magnetic proximity 4/14 effect, and (iii) CaIrO 3 layer with magnetization exhibits the AHE. In this study, it is difficult to reveal the magnetic ordering at the interface in detail only from the transport properties. However, recent experimental result and theoretical calculations for SrIrO 3 /SrMnO 3 superlattice revealed that the intra-layer interaction within the SrMnO 3 and SrIrO 3 layers is ferromagnetic but the inter-layer interaction between the SrMnO 3 and SrIrO 3 layers is antiferromagnetic [5,9,20]. Although it is intriguing to elucidate the mechanism of the emergent ferromagnetism in CaIrO 3 /CaMnO 3 heterostructures, this is beyond the scope of this report and remains as future work. We then examine the effect of epitaxial strain on the transport properties in the heterostructures. As shown in Fig. 2(a), CaIrO 3 grown on SrTiO 3 substrates is imposed on tensile strain while that on LaAlO 3 substrates is compressively strained. Figure 2(b) shows B dependence of R yx for the heterostructures grown on SrTiO 3 (blue) and LaAlO 3 (red) substrates measured at 5 K. Hall coefficient R H exhibits opposite sign between two samples, indicating that hole (electron) type carrier is dominant for the heterostructure grown on SrTiO 3 (LaAlO 3 ). Previous studies report that carrier type of CaIrO 3 thin films is sensitive to epitaxial strain [13,21]. It has been theoretically predicted that tetragonal distortion can lift the degeneracy of t 2g orbitals in the J eff = 1/2 state of Ir 4+ near Fermi level [22], and thus the epitaxial strain might induce this carrier type change. Yet, considering the sensitivity of the band structure of CaIrO 3 against electron correlation as well [12], further elucidation of the origin of the change in carrier type is a matter of speculation. To obtain further insight into the origin of the observed AHE, we attempt to tune the position of the Fermi energy of CaIrO 3 layer via EDL gating method, where negative gate voltage is applied to the both carrier-type samples. Negative gate voltage corresponds to tuning the Fermi energy closer to (away from) Dirac line node of CaIrO 3 for n-type (p-type) sample as shown in the top schematics of Fig. 4(a). We performed Hall measurements at several gate voltages. Each gate voltage was applied at 265 K for 60 mins before the samples were cooled down to each measurement temperature at the rate of 0.5 K/min. Before Hall measurements, we confirmed that negative gate voltage reversibly modulated R S for CaIrO 3 and ruled out the possibility of electrochemical reactions (see supplementary Fig. S2(b)). V G increases R AHE . On the other hand, R AHE decreases with negative V G for the p-type heterostructure. Figures 3(c) and 3(d) show temperature dependence of σ xy at 9 T under several V G for the p-and n-type heterostructures, respectively. Here, σ xy is calculated as where t is the thickness of CaIrO 3 layer. 
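The conversion from the measured sheet resistances to σ_xy (the paper's Eq. (1)) is not reproduced in this excerpt. In the usual thin-film convention it is the off-diagonal element of the inverted resistivity tensor, which is the relation assumed in the sketch below; the paper's exact definition may differ in detail:
\[
\sigma_{xy} \;=\; \frac{\rho_{yx}}{\rho_{xx}^{2}+\rho_{yx}^{2}}
\;=\; \frac{R^{\mathrm{AHE}}_{yx}}{\bigl[(R_{S})^{2}+(R^{\mathrm{AHE}}_{yx})^{2}\bigr]\,t},
\qquad \rho_{xx}=R_{S}\,t,\quad \rho_{yx}=R^{\mathrm{AHE}}_{yx}\,t ,
\]
where $t$ is the thickness of the CaIrO$_3$ layer, as stated in the text.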
In both heterostructures, the ferromagnetic transition temperature, where the AHE emerges, is unchanged (∼60 K). Furthermore, the coercive field, which is estimated from the hysteresis loop of R AHE , also remains nearly unchanged against the amplitude of the gate voltage. These results suggest that the magnetic properties of the heterostructures are not modulated by the EDL gating. Rather, the electric field only modifies the carrier density of CaIrO 3 (i.e., the position of the Fermi energy). It should be pointed out that the σ xy calculated from Eq. (1) may be underestimated because the effective thickness accounting for the AHE may be smaller than t ≈ 6 nm, if we consider the interfacial ferromagnetism arising from proximity effect. Next, we discuss how CaIrO 3 layer contributes to the AHE. It is well known that AHE is generally classified into two types [23]. One comes from an intrinsic origin where electrons acquire anomalous Hall velocity induced by the Berry curvature in momentum space. This mechanism is dominant in the moderately dirty system where longitudinal conductivity σ xx is below ∼10 4 S/cm. The other comes from an extrinsic origin where electrons are scattered by magnetic impurities via spin-orbit interaction and contribute to the AHE. This mechanism is dominant for larger σ xx above ∼10 5 S/cm. Taking into account that σ xx for our heterostructures is below ∼10 3 S/cm, we can assume that the AHE of the heterostructures comes from the intrinsic origin. At this point, the Dirac-like band dispersion of CaIrO 3 has significance as it can be a source of the Berry curvature. In Kubo formula [24], anomalous Hall conductivity σ xy is given by where n is band index, f (ε n (k )) is Fermi distribution function and v (k ) is velocity operator defined in the k-dependent Hamiltonian (H (k )) for the periodic part of the Bloch functions Equation (2) can be transformed into where b z n (k ) is the Berry curvature. Equation (4) indicates that σ xy is the sum of the Berry curvature over up to the Fermi energy. According to Eq. (2), the anomalous Hall conductivity is enhanced in the following conditions: (i) large group velocity v (k ), which is satisfied in a large band dispersion at the Fermi energy and (ii) two bands are energetically close to each other (i.e., small ε n (k ) − ε n ′ (k )). These two conditions are indeed satisfied when the Fermi energy of CaIrO 3 is tuned close to the Dirac line node. Although the magnitude of exchange energy is uncertain, it may be several meV order since the AHE commonly emerges at ∼60 K, or k B T C ≈ 5 meV. Such an induced band splitting results in the band crossings near Dirac line node and the creation of several Weyl nodes as shown in Fig. 4(b). In this study, we assume that the Fermi energy of CaIrO 3 is far above (below) Dirac line node for n(p)-type samples. It is reported that the Fermi energy for bulk single crystals is about 10 meV above Dirac line node [12]. Since the carrier density of CaIrO 3 films is two orders of magnitude higher than that of single crystals [13], the Fermi energy of our CaIrO 3 layers is estimated to be away from Dirac line node in both carrier-type samples. In this assumption, the negative V G means that Fermi energy approaches (leaves) the band crossings for n(p)-type. As the Fermi energy is closer to the band crossings, each carrier acquires larger anomalous velocity from the Berry curvatures, resulting in the enhancement of the AHE (Fig. 4(c)). 
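The Kubo-formula expressions referred to above (Eqs. (2)-(4) of the paper) are not reproduced in this excerpt. For orientation, the standard intrinsic-AHE forms are sketched below; prefactors and sign conventions vary between references, so these should be read as the generic structure rather than the paper's exact equations:
\[
\sigma_{xy} \;=\; \frac{e^{2}\hbar}{V}\sum_{\mathbf{k}}\sum_{n\neq n'}
\bigl[f(\varepsilon_{n}(\mathbf{k}))-f(\varepsilon_{n'}(\mathbf{k}))\bigr]\,
\frac{\operatorname{Im}\!\bigl[\langle n\mathbf{k}|v_{x}|n'\mathbf{k}\rangle\langle n'\mathbf{k}|v_{y}|n\mathbf{k}\rangle\bigr]}
{\bigl[\varepsilon_{n}(\mathbf{k})-\varepsilon_{n'}(\mathbf{k})\bigr]^{2}} ,
\]
which can be recast as a Fermi-sea sum over the Berry curvature,
\[
\sigma_{xy} \;=\; -\,\frac{e^{2}}{\hbar}\sum_{n}\int_{\mathrm{BZ}}\frac{d^{3}k}{(2\pi)^{3}}\,
f(\varepsilon_{n}(\mathbf{k}))\, b^{z}_{n}(\mathbf{k}) .
\]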
In conclusion, we fabricate CaIrO 3 /CaMnO 3 heterostructures with both-carrier types and confirm that CaIrO 3 takes the role of the carrier transport in this system. We perform an 7/14 electric field control of the AHE emerging at the heterostructures by using EDL gating. In both carrier-type heterostructures, anomalous Hall conductivity is enlarged as a gate voltage tunes the Fermi energy closer to Dirac line node of CaIrO 3 . This result indicates that the AHE comes from an intrinsic origin reflecting the Dirac-like linear energy dispersion of CaIrO 3 . We propose a plausible explanation for the AHE in the context of the Berry curvature originating from Weyl nodes which are presumably induced by the magnetic proximity effect from the CaMnO 3 layer. Our work provides important insight into the origin and manipulation of AHE in the oxide heterointerfaces with Dirac-like band dispersion. Also recently, emergence of Dirac electrons has been reported in strained SrNbO 3 thin films [25,26], where enhancement of AHE is expected under broken time-reversal and inversion symmetries. In this sense, the demonstrated technique in this report would be a promising way to modulate AHE by not only tuning Fermi level but also breaking inversion symmetry at the interface. SUPPLEMENTARY MATERIAL See supplementary material for the additional XRD and transport measurements data. ACKNOWLEDGMENTS This work was partly supported by the Japan Science and Technology Agency Core Research for Evolutional Science Technology (JST CREST) (No. JPMJCR16F1) and Izumi Science and Technology Foundation. DATA AVAILABILITY STATEMENT The data that support the findings of this study are available from the corresponding author upon reasonable request. II. Device structure for edl gating and confirmation of no chemical reaction during device operation Figure S2(a) illustrates a schematic diagram of the device structure for EDL gating. 12/14 As a gate dielectric, we employed an ionic liquid (IL), N, N-diethyl-N-(2-methoxyethyl)-Nmethylammonium tetrafluoroborate (DEME-BF 4 ), in which a gate electrode of Pt coil is immersed. For the electrical contacts, we utilized Al wire bonding. Al wires are covered 1/3 by silicone sealant to preserve a chemical reaction between the IL and Al wires. For the device preparations, we did not use standard lithographic techniques because perovskite iridates are highly prone to be degraded in lithographic processes [S1]. Figure S2
2022-08-05T06:41:37.056Z
2022-08-01T00:00:00.000
{ "year": 2022, "sha1": "e4606eadc4ed8842b323fb430d12a9d5c09a115d", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "e4606eadc4ed8842b323fb430d12a9d5c09a115d", "s2fieldsofstudy": [ "Physics", "Materials Science" ], "extfieldsofstudy": [ "Physics" ] }
213262678
pes2o/s2orc
v3-fos-license
Performance of distance-based k-nearest neighbor classification method using local mean vector and harmonic distance K-Nearest Neighbor was one of the top ten algorithms data mining in the classification process. The low accuracy results in the K-Nearest Neighbor classification method was caused of this method used the system of majority vote which allowed the selection of outliers as the closest neighbors and in the distance model used as a method of determining similarity between data. In this process it is evident that local mean vector and harmonic distance can improve accuracy, where the highest increase in average accuracy obtained in the set data wine is equal to 6.29% and the highest accuracy increase for LMKNN is obtained in set data glass identification which is 16.18%. Based on the tests that had been conducted on all data sets used, it could be seen that the proposed method was able to provide a better value of accuracy than the value of accuracy produced by traditional K-Nearest Neighbor and LMKNN. Introduction The K-Nearest Neighbor method was first method which was introduced in the early 1950s. K-Nearest Neighbor was one of the lazy learning classification methods which was the most widely used in classification, pattern recognition, text categorization. Providing a solution to these weaknesses was done by replacing the traditional distance models that used a distance model based on similarity and feature value similarity features. In this study, the writer suggested to use a distance model harmonic as a substitute for the distance model Euclidean. Determination of the test data class Local Mean Based K-Nearest Neighbor used the measurement of the closest distance to each one using the distance eucllidean from each data class. In addition, K-Nearest Neighbor worked by looking at the nearest K neighbor of each data where in the traditional K-Nearest Neighbor classification process uses the system voting most as the prediction class of the new data. The selection of a small K-Nearest Neighbor value caused the classification of noise or outliers to be sensitive, if the value of K is too large the number of closest neighbors may be too large, which could ultimately reduce the classification results. This study aimed to improve the accuracy of traditional K-Nearest Neighbor by using local mean vector as a class for new data using the distance model Harmonic in the process of calculating similarities between data Problems Based on the introduction above, it was necessary to increase the accuracy of the classification K-Nearest Neighbor at the variable average point. The results of the accuracy of 3rd NICTE IOP Conf. Series: Materials Science and Engineering 725 (2020) 012122 IOP Publishing doi:10.1088/1757-899X/725/1/012122 2 the traditional K-Nearest Neighbor classification method were caused because this method used the system majority vote which allowed the selection of outliers as the closest neighbors, and in the distance model used as a method of determining similarity between data, where traditional distance models were very fragile to similarity calculations . These things could increase errors in the classification process. This study used Local Mean Based K-Nearest Neighbor and Harmonic Distance to improve accuracy on the method K-Nearest Neighbor. Distance Euclidean K-Nearest Neighbor Traditional distance models were very fragile in determining the similarity. 
Moreover in traditional distance models, the value of attributes which were too large, it could cover the influence of other attributes, and most traditional distance models lack the difference between data, especially in large data samples. In this research, the writer suggested to use a distance model Harmonic, where the distance model was considered better in describing the similarities between data. The main idea of the distance model Harmonic was to take the average number of harmonics from the distance Euclidean between one particular data point to the point of another group of data. Compared to other distance models, distance of Harmonic was more focus on the influence of the closer data. Local Mean Based K-Nearest Neighbor (LMKNN) This method was classified as a simple, effective and resilient method. Stating the use of Local Mean was proven to improve performance and also to reduce the influence of outliers on traditional K-Nearest Neighbor methods, especially for small amounts of data. The workflow of the LMKNN was as follows: Determining the K Value, then calculated the distance of the test data throughout the data from each data class by using the distance model Euclidean. Classifying the distance data between the data from the smallest to the largest K from each class. Calculating the local mean vector of each class with the equation: Determining the test data class by calculating the closest distance to the local mean vector of each data class with the equation: Explaining the K value on LMKNN was very different from K-Traditional NN. LMKNN as the value of K was the number of closest neighbors of each data class, whereas in traditional K-Nearest Neighbor, the value of K was the number of closest neighbors of all data. LMKNN was equal to 1-NN if K value was 1 Classification Classification was a process of assessing objects to include them in a particular class based on the characteristics possessed by that object. Knowing the amount of data that has been successfully classified correctly could be seen from the level of accuracy and rate error of the prediction results in the classification system. Calculation of the level of accuracy could be seen from the equation below: Accuracy = Amount of data is predictable right (4) Amount of Prediction do As for measuring the rate of error used the equation: The rate of error = Amount of data Predictable Wrong (5) Amount of Prediction do All classification algorithms tried to create models with high accuracy (rate error low). The model which was built generally could predict the training data correctly, but when the model was evaluated with the test data then the performance of the classification model, surely it could be seen clearly. Methodology This study used a combination of several stages in Local Mean Based K-Nearest Neighbor and Harmonic Distance as a label for the test data. It was expected that by using a combination of the two methods can improve the accuracy of K-NN. The general description of the stages of the method proposed in this study was shown in Figure 1, it could be seen that the proposed method had several stages, including: i. Data set. In this process, the used data would be divided into 85% of the data which would be used as training data and 15% would be used as test data. ii. Calculate the distance between training data and test data with Euclidean. iii. Determine the nearest K neighbor, on the LMKNN the nearest neighbor was taken from each class of data. 
Whereas in traditional K-NN, the determination of the nearest K neighbor was taken from all data. In this process, the proposed method would follow the rules of the LMKNN. iv. Specify the HarmonicDistance from each data class with Harmonic as determination of Labels for data test. Labels for test data were determined based on the value of the Harmonic Distance; the smaller value could indicate the similarity of closer data. Results and Discussion A dataset with 8 data records which showed that the data had 3 attributes and 2 classes. 85% of the data was used as training data and 15% was used as test data. The details of the dataset could be seen in table 1. After the Data were trained and data test was determined, then the classification process would be carried out by using the proposed method, LMKNN, and traditional K-NN. The first step in the classification process on the proposed method was to determine the K value, assuming the K value used was 2. Then, calculated the distance between the training data and the test data using Euclidean. Did this similar way for all other training data. The next step was to determine the nearest K neighbor from each data class. Next calculate the value of the harmonic distance. There were Harmonics for each class of data. The values harmonic distance of each data class could be seen in table 2. Stages in determining the class with the data test in combination of LMKNN and Harmonic Distance were to make the grade with values Distance Harmonic which showed that the highest as a class for the tested data. The highest value in the test data was found by class 2, so the tested data was in class 2 The first step in the LMKNN method was to determine the K value, in the previous subsection K values were assumed to be 2, then calculated the distance of the test data to all training data by using the distance model Euclidean. The next stage was to sort the ascending distance as much as K for each class, at this stage 2 closest training data to the test data for each class will be sorted. The next step was to calculate local mean vector for each data class, then calculate the distance of the test data to each local mean vector with Euclidean. The last step in LMKNN was to make Local Mean Vector from the closest class as a class for the test data. The local mean vector closest was found by class 2, so class 2 is used as a new class for the test data. There was way to see clearly the average of the accuracy values found in each method for all data used in this study can be seen in Figure 2. Figure 2. Graph of average accuracy values from all data It could be seen that the proposed method was able to provide a value of better accuracy than traditional K-Nearest Neighbor and LMKNN. improving where the highest accuracy value to the traditional K-Nearest Neighbor found in the data set ionosphere that is equal to 6:29% and an increase in the highest accuracy of the method was found on dataset LMKNN toward glass identification that was equal to 16:18%. The lowest accuracy value increased between the methods proposed before traditional K-NNs of 2.08% and 1.32% for LMKNN, both of which were found in the set data ionosphere. The increase in the average accuracy value of all datasets used was 3.87% for traditional K-Nearest Neighbor and 8.07% for LMKNN. Conclusion While the lowest increase in average accuracy of conventional K-Nearest Neighbor was obtained at the data set, ionosphere which amounted to 2.08% for conventional K-Nearest Neighbor and 1.32% for LMKNN. 
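The worked example above can be reproduced, in outline, with a few lines of code. The sketch below is an illustrative implementation of the combined rule described in the Methodology: per-class nearest neighbours are selected with Euclidean distance, their distances are aggregated harmonically, and the class with the largest reciprocal-distance score (equivalently, the smallest harmonic-mean distance) is assigned, matching the paper's choice of the highest harmonic value. The tiny data set, parameter values and tie handling are illustrative only and are not taken from the paper.

```python
import numpy as np

def lmknn_harmonic_predict(X_train, y_train, x_test, k=2):
    """Classify one test point using the k nearest neighbours from each class.

    For every class, the k closest training points (Euclidean distance) are taken
    and their distances are aggregated with a harmonic scheme: the class whose
    neighbours give the largest sum of reciprocal distances (i.e. the smallest
    harmonic-mean distance) is returned."""
    best_class, best_score = None, -np.inf
    for cls in np.unique(y_train):
        Xc = X_train[y_train == cls]
        d = np.linalg.norm(Xc - x_test, axis=1)        # Euclidean distances to this class
        d_k = np.sort(d)[:k]                           # k nearest neighbours within the class
        score = np.sum(1.0 / np.maximum(d_k, 1e-12))   # harmonic (reciprocal) aggregation
        if score > best_score:
            best_class, best_score = cls, score
    return best_class

# Tiny illustrative data set (not the 8-record example used in the paper).
X = np.array([[1.0, 2.0, 1.0], [1.2, 1.8, 0.9], [0.8, 2.2, 1.1],
              [4.0, 4.5, 3.9], [4.2, 4.1, 4.0], [3.8, 4.4, 4.2]])
y = np.array([1, 1, 1, 2, 2, 2])
print(lmknn_harmonic_predict(X, y, np.array([4.1, 4.3, 4.0]), k=2))  # -> 2
```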
The average increase in accuracy obtained from the entire dataset was 3.87% for conventional K-Nearest Neighbor and 8.07% for LMKNN. Based on the tests that had been carried out in the previous chapter, it could be concluded that local mean vectors and harmonic distances can improve accuracy in all data sets used. Acknowledgment The writer gave thank you greatly to the Research Institute of the University of Sumatra North (LP USU), the Graduate School of Computer Science at the USU Fasilkom-IT and rector of the University of North Sumatra, has s upported this research.
2020-01-23T09:09:18.066Z
2020-01-21T00:00:00.000
{ "year": 2020, "sha1": "a4a2809c4468cb72a4998d0f9007edeeca905cc1", "oa_license": null, "oa_url": "https://doi.org/10.1088/1757-899x/725/1/012122", "oa_status": "GOLD", "pdf_src": "IOP", "pdf_hash": "e186306601d74f4639f787bd28f8861249342327", "s2fieldsofstudy": [ "Computer Science", "Mathematics" ], "extfieldsofstudy": [ "Computer Science", "Physics" ] }
260925669
pes2o/s2orc
v3-fos-license
Don't show us your instrument park: Give us your students/give us to your students! Inviting colleagues from other institutions to partake in seminars or consortial research meetings holds immense significance in the realms of information sharing, strategic research planning and, perhaps most importantly, in stimulating creativity through brainstorming and exploring the vast expanse of idea space. These interactions often serve as catalysts for novel experiments, fresh directions and exciting collaborations. Social events accompanying these visits offer a more relaxed setting and a different context for discussions, which can unexpectedly trigger new and groundbreaking ideas, and of course can create new/ strengthen existing synergistic relationships. It is within the time spent with postgraduate students and postdocs that maximum reciprocal benefits are often reaped, as it provides a chance for budding researchers to learn from and engage with established experts, and for visitors to immerse themselves in the enthusiasm and fresh ideas flourishing among young scientists. The value of these professional visits with either close or distant colleagues cannot be overstated. However, the pressing challenge lies in the time constraints, as such exchanges are typically limited to a few hours. To ensure these interactions are truly productive and enjoyable, let us explore some experiencebased hints to maximize their potential. Foremost, it is essential to avoid squandering precious time showcasing the facilities and instruments in your laboratory or research centre, regardless of how proud you are of them. While laboratories and equipment vary, dedicating substantial portions of the visit to guided tours of inhouse facilities is wasteful when time is of the essence. Instead, focus on meaningful discussions that delve into research questions, advances and challenges, and the ideas to which visitors can contribute. Unless the visit explicitly centres around discussing specific technologies, spending time admiring infrastructure is time of missed opportunities. Second, make the most of the time shared with visitors by tapping into their expertise and seeking their opinions and advice on ongoing projects, particularly those undertaken by early career researchers. Valuable ideas frequently emerge during these discussions, proving that brainstorming together among people endowed with diverse skills and expertise is an unbeatable source of innovation. Additionally, take advantage of any remaining time for socializing and building personal connections. Facetoface interactions with colleagues at the forefront of their respective fields hold a unique value that cannot be replaced by AI, Google searches or even reading the most impactful literature. Some of these connections develop into longterm associations that often become indispensable for navigating the research landscape along one's career. Third, serendipity undoubtedly plays a significant role in science: the chance meeting of ideas/expertise/ technologies at an opportune moment that becomes a light bulb occasion. Serendipity cannot be planned but is favoured by researcher interactions. Similarly, Eureka moments, those instances when one suddenly realizes the meaning of a collection of data that previously did not make any sense, are equally cherished by all researchers. 
These nonanticipated breakthroughs can emerge in the minds of both experienced scientists and their younger counterparts with equal probability during a scientific conversation; not grabbing the opportunity of mutual illumination is a heavy loss for both sides (how often do we hear: 'Brilliant! We had similar data but did not consider that possibility.'?). Fourth, it is vital to distinguish between audiences when presenting your facilities and discussing your research. While politicians and funders may be captivated by photoshoot opportunities with grand machines, instrument parks rarely excite knowledgeable and intelligent researchers. Instead, they are more likely to appreciate the sharing of new ideas and approaches. In fields like Biology, where progress hinges on talent and creativity, conversations with individuals from diverse backgrounds play a pivotal role in awakening breakthrough concepts. In particular, encourage early career scientists to actively engage in discussions with visitors, promoting a mutual exchange of questions and outofthebox ideas. As Linus Pauling aptly put it '... If you want to have good ideas you must have many ideas. Most of them will be wrong, and what you have to learn is which ones to throw away...'
In conclusion, professional visits between research centres should be carefully designed to encourage spending enough quality time together among participants.By shifting the focus from facility tours to substantive discussions and idea sharing, these visits can lead to transformative breakthroughs and enduring partnerships.Embrace the power of face-to-face interactions, and let these exchanges serve as a catalyst for intellectual growth and scientific advancement. Give us your students/give us to your students! C O N F L I C T O F I N T E R E S T S TAT E M E N T The authors have no competing interests.
2023-08-17T06:17:14.347Z
2023-08-16T00:00:00.000
{ "year": 2023, "sha1": "88e55f45fffec08904b61f393daa86431a40359b", "oa_license": "CCBY", "oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1111/1751-7915.14326", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "d53aaba94d1c35ea61c5679e41a4332ec0d07225", "s2fieldsofstudy": [ "Education" ], "extfieldsofstudy": [ "Medicine" ] }
5941073
pes2o/s2orc
v3-fos-license
CONSUMPTION OF METHYL EUGENOL BY MALE BACTROCERA DORSALIS (DIPTERA: TEPHRITIDAE): LOW INCIDENCE OF REPEAT FEEDING The tendency of male Bactrocera dorsalis (Hendel) to re-visit a methyl eugenol source following initial exposure was examined. The first field test investigated the effect of duration of exposure on subsequent capture probability. "Treated" males were allowed to feed on methyl eugenol for 30 s or had access to methyl eugenol for 1 h, 4 h, or 24 h immediately prior to release. Capture probabilities (1%-4%) did not differ significantly among the different treatments but were significantly below that (22%) recorded for "control" (unexposed) males. In a second field test, treated males were released 7 d, 21 d, or 35 d after an initial exposure (2 h) to methyl eugenol. Capture probabilities (11%-18%) did not differ significantly among the different treatments but were significantly below that (34%) recorded for control males. Laboratory tests yielded similar results as both the incidence and duration of re-feeding on methyl eugenol were uniformly low for males held 7 d, 21 d, or 35 d after their initial exposure. By exposing sterile males to the lure prior to release, it may be possible to combine programs of male annihilation and sterile insect release. The present findings also suggest that the effectiveness of male annihilation efforts may be reduced in areas where wild males have consumed sufficient amounts of methyl eugenol from natural sources. Males of many tephritid species are strongly attracted to particular plant-borne substances (Chambers 1977; Sivinski & Calkins 1986; Fletcher 1987). Several well-known examples include the attraction of male Mediterranean fruit flies, Ceratitis capitata (Wiedemann), to trimedlure, male melon flies, Bactrocera cucurbitae (Coquillett), to cue-lure, and male Oriental fruit flies, B. dorsalis (Hendel), to methyl eugenol. Owing to their powerful attractancy, parapheromones play an important role in current control programs of tephritid pests, both in detecting incipient population outbreaks and eradicating already established populations via male annihilation (Chambers 1977). Despite the wide use of male lures in control efforts, relatively little attention has been given to explaining the underlying biological basis of this sex-specific, chemical attraction. In a recent study on the Oriental fruit fly, Shelly & Dewire (1993) found that "treated" males that fed on methyl eugenol achieved significantly more matings than "control" males deprived of methyl eugenol. Interestingly, treated males had a mating advantage even when they fed on methyl eugenol for only 30 s and were tested 35 d post-feeding. The present study investigates the tendency of B.
dorsalis males to re-visit a methyl eugenol source following an initial exposure.Specifically, two field experiments and one laboratory experiment were conducted to examine whether the duration of the initial exposure and the time elapsed since the initial exposure affected the incidence and duration of re-feeding.Based on the results of mating trials (Shelly & Dewire 1993), I predicted that neither the duration of the initial exposure (at least for exposure periods exceeding 30 s) nor the time elapsed since the initial feeding (at least for intervals up to 35 d) would significantly affect the tendency for re-feeding. Field Experiments All flies used in field tests were from a colony maintained by the USDA/ARS Tropical Fruit and Vegetable Laboratory, Honolulu, for approximately 70 generations (M.Fujimoto, pers.comm.) using standard rearing procedures (Tanaka et al. 1969).Nonirradiated pupae were obtained 2 d prior to eclosion, and adults were sexed within 5 d of eclosion [(sexual maturity in this stock is attained at about 10 d of age, (M.Fujimoto, pers.comm.)].Males were kept in 5-liter plastic buckets (50 per bucket) covered with screen mesh and given food and water ad libitum. Experiments were conducted at 2 locations on the island of Oahu, Hawaii.During September-October, 1991, I used a 0.6-ha citrus grove in the University of Hawaii Agricultural Experiment Station, Waimanalo, that contained approximately 60 orange trees ( Citrus sinensis (L.)).The grove was bordered on two sides by an open field containing small patches of guava ( Psidium guajava L.) and coffee ( Coffea arabica L.) and on the other two sides by highly disturbed, second-growth forest.During May-July, 1992, field-work was conducted at the Kanewai Garden near the campus of the University of Hawaii, Honolulu.This small area (0.4 ha) contained six large mango trees ( Mangifera indica L.) and was bordered by an open lot on one side and lawns containing non-host vegetation on the remaining sides. Two field experiments were performed.At Waimanalo, I examined whether the duration of exposure to methyl eugenol affected capture probability.As described below, treated males fed on methyl eugenol for only 30 s or had access to methyl eugenol for 1 h, 4 h, or 24 h immediately prior to release.At the Kanewai Garden, I examined the effect of time lapse following initial feeding on capture probability.Treated males had access to methyl eugenol for 2 h and were released 7 d, 21 d, or 35 d later.An additional set of treated males was permitted to feed on methyl eugenol for only 30 s and was released 35 d later.Control males that had no exposure to methyl eugenol were also released in both experiments. 
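The capture data from these releases are compared across groups with a Kruskal-Wallis test and a nonparametric multiple-comparisons procedure (see the Results below). Purely as an illustration of that kind of analysis, the sketch below runs the test on made-up per-replicate capture proportions; the numbers are not the study's data, and the Mann-Whitney call is only a simple stand-in for the multiple-comparisons test the paper cites (Zar 1974).

```python
from scipy import stats

# Hypothetical per-replicate capture proportions for four release groups
# (control and three methyl-eugenol exposure treatments), six replicates each.
control = [0.20, 0.25, 0.18, 0.22, 0.24, 0.21]
exp_30s = [0.02, 0.01, 0.03, 0.02, 0.04, 0.01]
exp_1h  = [0.03, 0.02, 0.05, 0.01, 0.03, 0.02]
exp_4h  = [0.04, 0.02, 0.03, 0.05, 0.01, 0.02]

# Kruskal-Wallis test among the treated groups, as in the Results.
H, p = stats.kruskal(exp_30s, exp_1h, exp_4h)
print(f"treated groups: H = {H:.2f}, P = {p:.3f}")

# Control versus one treated group, using a rank-based pairwise test as a stand-in
# for the nonparametric multiple-comparisons procedure cited in the paper.
U, p_mw = stats.mannwhitneyu(control, exp_30s)
print(f"control vs 30 s feeding: U = {U:.1f}, P = {p_mw:.4f}")
```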
To obtain treated males, 1.5 ml of methyl eugenol was applied to 5-cm long cotton wicks, and the wicks, held upright in small plastic containers, were placed singly in the appropriate buckets during midday. Buckets were placed on a shaded outdoor porch where air temperatures varied between 29-31°C (or 23-31°C during 24 h exposure periods). The feeding activity of individual males was not monitored during exposure periods of 1 h or more. To obtain males with 30 s feeding times, groups of 5-10 males were observed in screen cages (30 cm cubes with a cloth sleeve on one side) containing a single wick. Individuals were removed after 30 s of feeding by gently "coaxing" them into a vial. In all cases, treated males were exposed to methyl eugenol at 14 d of age and correspondingly were released at the age of 14 d at the Waimanalo site and 21 d, 35 d, or 49 d at Kanewai Garden. At Waimanalo, control males were 14 d old at release, while at Kanewai Garden separate control groups of males aged 21 d, 35 d, and 49 d, respectively, were used for the two treatment categories. Prior to release, control males and the males in the different treatment groups were cooled and marked on the thorax with different color combinations of enamel paint (a given combination was used only once at either field site). The cooling and painting procedures had no apparent adverse effects on male behavior, and individuals resumed "normal" activities within minutes of handling.

The following protocol was used for the tests conducted at Waimanalo. On the day prior to a release, Steiner traps were placed singly in 16 different trees located throughout the grove. The same trees were used in all tests. Traps were suspended in the canopy by a 30-cm long wire fastened to a branch. Each trap contained a 5-cm long cotton wick to which 1.5 ml of methyl eugenol (3% naled) had been applied. For all tests, the males were released beneath a centrally located orange tree between 1500-1700 hours. The actual release was accomplished by removing the screen top and gently tapping the bucket to induce flight. Males that were unable to fly were not counted in the release sample. Traps were checked 5 d after release, and in the laboratory captured flies were examined individually for markings. Six replicates were conducted with 75-112 males released per group (control or treatment) per replicate.

A similar release protocol was employed at the Kanewai Garden site. However, owing to the small size of the garden, only eight Steiner traps were used at this site. The traps were placed in a circle (70-m radius) around a central release point (a mango tree). Eight replicates were conducted for tests involving treated males exposed to methyl eugenol for 2 h and released 7 d or 21 d later, with 122-143 males released per group (control or treatment) per replicate. Four replicates were conducted for tests involving treated males released 35 d after either exposure to methyl eugenol for 2 h (82-113 males per group per replicate) or feeding on methyl eugenol for only 30 s (79-120 males released per group per replicate).
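The release-recapture design above yields, for each replicate, a capture probability per group (marked males recaptured divided by males released), and the Results compare these per-replicate proportions with nonparametric tests. The following is a minimal, illustrative sketch of that calculation in Python with invented counts (not the study's data); the Mann-Whitney comparison stands in here for the nonparametric multiple-comparison procedure actually cited in the Results.

```python
import numpy as np
from scipy.stats import kruskal, mannwhitneyu

# Hypothetical per-replicate (released, recaptured) counts -- NOT the study's data.
groups = {
    "control": [(100, 22), (98, 20), (105, 25), (90, 19), (110, 24), (95, 21)],
    "T-30s":   [(100, 1),  (97, 2),  (102, 1),  (92, 3),  (108, 2),  (99, 1)],
    "T-1h":    [(101, 3),  (95, 4),  (104, 2),  (93, 3),  (107, 4),  (96, 2)],
    "T-4h":    [(99, 2),   (96, 3),  (103, 4),  (94, 2),  (109, 3),  (98, 3)],
    "T-24h":   [(100, 4),  (94, 2),  (101, 3),  (91, 4),  (106, 2),  (97, 3)],
}

# Per-replicate capture probability = recaptured / released.
props = {g: np.array([c / n for n, c in reps]) for g, reps in groups.items()}

# Kruskal-Wallis test across the treated groups (as in the Waimanalo analysis).
H, p = kruskal(*(props[g] for g in ("T-30s", "T-1h", "T-4h", "T-24h")))
print(f"Kruskal-Wallis among treated groups: H = {H:.2f}, p = {p:.3f}")

# Control vs. one treated group; Mann-Whitney U used as a simple stand-in
# for the nonparametric multiple-comparison test cited in the Results.
U, p_u = mannwhitneyu(props["control"], props["T-1h"], alternative="two-sided")
print(f"Control vs. T-1h: U = {U:.1f}, p = {p_u:.4f}")
```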
Laboratory Observations

The effects of feeding duration and time since first feeding on the incidence and duration of repeat feeding were also investigated in the laboratory. Males used in these tests were from a laboratory stock started in November, 1991, with 200-300 adults reared from mangos collected in Waimanalo. Data were collected in July-September, 1992; consequently, the individuals observed were approximately eight generations removed from the wild. Larvae were reared on papaya, and adults were separated by sex within 7 d of eclosion, well before reaching sexual maturity (at approximately 15-20 d of age, Foote & Carey 1987).

Treated males fed on methyl eugenol for only 30 s (following the protocol described above) or had access to methyl eugenol for a 30-min period during which their feeding activity was monitored. To obtain this latter group, five uniquely marked individuals were placed in screen cages (30-cm cubes), allowed a 1-2 h "acclimation period", and then given free access to a 5-cm long cotton wick to which 1.5 ml of methyl eugenol had been applied. The amount of time that individual males fed on the wick was then recorded to the nearest second. All observations were made between 1100-1330 hours on a shaded outdoor porch at temperatures between 29-31°C. Following the initial exposure, treated males were kept in 5-liter plastic buckets and given ample food and water.

Treated males, both those restricted to 30 s feeding and those given 30 min access, were held 7 d, 21 d, or 35 d before a second exposure (30 min) during which feeding times of individual males were recorded. All treated males were initially exposed to methyl eugenol at 25 d of age. To investigate the possibility that male age was partly responsible for any feeding differences observed between the first and second exposures, I recorded the feeding times of uniquely marked, control males given their first exposure (30 min) to methyl eugenol at ages 32 d, 46 d, and 60 d, respectively (ages correspond to those of treated males held 7 d, 21 d, or 35 d, respectively).

RESULTS

Field Experiments

In the Waimanalo experiment, no significant differences in capture probability were found among males in the different treatment groups (H=6.1; P > 0.05; Kruskal-Wallis test; Fig. 1). Among the different exposure groups, only 1%-4% of the males were captured, on average, in a given replicate. In contrast, 22% of control males were captured, on average, in a given replicate. The capture probability of control males differed significantly from males exposed for 1 h (q=5.6), 4 h (q=6.0), or 24 h (q=5.2) as well as from males whose feeding was restricted to 30 s (q=7.9; P < 0.005 in all cases; multiple comparisons test, Zar 1974: 156).

Fig. 1. Capture probabilities of B. dorsalis males exposed to methyl eugenol for varying lengths of time. Points represent average proportion of males captured per replicate; vertical lines indicate ± standard error. Release groups: C=control, T=treated. T-30 s males were restricted to 30 s of feeding on methyl eugenol; the remaining groups of treated males had access to methyl eugenol for 1 h, 4 h, or 24 h, respectively. See text for sample sizes.

At the Kanewai Garden, no significant differences in capture probability were detected among males exposed to methyl eugenol for 2 h but released after differing time intervals (H=5.1; P > 0.05; Kruskal-Wallis test; Fig.
2). Over the different intervals, only 11%-18% of the treated males were captured, on average, in a given replicate. Similarly, capture probabilities did not differ among control males held for varying periods before release (H=0.5; P > 0.05; Kruskal-Wallis test; Fig. 2). On average, approximately 33% of control males were trapped over all pre-release intervals. Based on data pooled over all pre-release intervals, the capture probability for control males was significantly higher than that observed for males given 2 h access to methyl eugenol before release (U=387.5; P < 0.001; Mann-Whitney test). Treated males that fed for only 30 s prior to their release 35 d later also had low capture probability (Fig. 2). An average of 11% of these males was captured per replicate, the same proportion observed for males released 35 d after 2 h exposure to methyl eugenol (U=9; P > 0.05; Mann-Whitney test).

Laboratory Observations

Among treated males given an initial 30-min exposure period, feeding durations were significantly shorter during the second exposure for males tested 7 d (T=276; n=54), 21 d (T=87; n=53), or 35 d (T=3; n=27) after the initial feeding (P < 0.001 in all cases; Wilcoxon paired-sample test; Fig. 3). Moreover, for these males, feeding durations during the second exposure were independent of time elapsed since the initial feeding (H=1.1; P > 0.05; Kruskal-Wallis test). Among the different trials, 85%-91% of the males consumed methyl eugenol during the initial exposure compared to only 32%-38% during the second exposure. Decreased feeding during the second exposure was apparently not age-related: average feeding durations were similar among control males aged 32 d (n=35), 46 d (n=40), and 60 d (n=40; H=3.9; P > 0.05; Kruskal-Wallis test; Fig. 3). Data pooled over the different inter-exposure intervals (or, equivalently, male ages) revealed that, during their second exposure period, treated males fed for shorter periods of time, on average, than control males (Z=11.1; P < 0.05; n1=134, n2=115; Mann-Whitney U-test).

Among treated males given an initial 30-min exposure, there was no correlation in the feeding times of individual males between the first and second exposure periods (r_s=0.05; P > 0.05; n=134; Spearman rank). Even if only the incidence of feeding is considered (i.e., regardless of duration), feeding activity during the first exposure period was still not a reliable predictor of subsequent feeding activity: males that fed during the first exposure period were as likely to feed during the second period (48 of 118=41%) as were males that did not feed at all during the initial exposure (8 of 16=50%; G=0.4; P > 0.05; G test with Yates correction). Among treated males given two 30-min exposure periods, 6% (8/134) did not feed on methyl eugenol during either period. Treated males limited to an initial feeding of 30 s also displayed low feeding activity during the second exposure period (Fig.
3). In fact, when re-exposed to methyl eugenol 7 d (n=35 males) or 21 d (n=35 males) after the first feeding, these individuals had feeding durations that were similar to (and not greater than, as might be expected) males given an initial access of 30 min (7 d: Z=0.6; n1=35, n2=54; 21 d: Z=0.5; n1=35, n2=53; P > 0.05 in both cases; Mann-Whitney U-test). However, at 35 d after the initial exposure, males (n=40) limited initially to a 30 s feeding fed longer, on average, than males first given a 30 min exposure period (Z=2.7; P < 0.01; n1=40, n2=27; Mann-Whitney U-test). Though feeding durations of these males increased after 35 d, they were still significantly lower than those of control males of the same age (Z=2.1; P < 0.05; n1=n2=40; Mann-Whitney U-test).

DISCUSSION

Results of the present study indicate that after an initial exposure, B. dorsalis males have a greatly reduced tendency to re-visit a methyl eugenol source. In the field experiments, males that were permitted only 30 s feeding on methyl eugenol were rarely captured in methyl eugenol-baited traps even when released 35 d after feeding. Similarly, in the laboratory most males given an initial exposure of 30 min "ignored" a methyl eugenol source placed directly in their cage 35 d later.

Though data are scant, it appears that a dramatic reduction in male responsiveness to lures following exposure characterizes other tephritid species as well. Using a large outdoor cage, Chambers et al. (1972) reported that, after initial exposure to cue-lure, only 14% of male B. cucurbitae, on average, responded to cue-lure-baited traps compared to 50% of control (unexposed) males. Similarly, Brieze-Stegeman et al. (1978) placed dye in a methyl eugenol-baited trap (lacking poison) and found that only 13% (daily average) of the B. cacuminatus (Hering) males seen at the trap over the next several days were marked.

The major difficulty in interpreting laboratory studies on male attraction to lures is the scarcity of field data regarding both the availability of parapheromones in natural sources and the feeding behavior of males at these sources. To my knowledge, no data exist regarding either the incidence and duration of feeding bouts or the rate and amount of parapheromone consumption during these bouts. It is likely that the 1-2 ml doses of parapheromones used by experimenters (Chambers et al. 1972; Brieze-Stegeman et al. 1978; present study) exceed levels available in natural sources (e.g., Kawano et al. 1968). Despite this possible discrepancy, it is certainly conceivable that in the wild, males initially make frequent or prolonged feeding bouts and in so doing eventually consume parapheromone in amounts similar to males observed in laboratory studies. In other words, though the feeding time required to inhibit subsequent feeding is reduced in laboratory studies, the basic pattern of decreased responsiveness to parapheromones may nonetheless be characteristic of wild males.

The present study has three major implications for control or eradication projects of tephritid pests. First, by exposing sterile males to the lure prior to their release, workers may be able to combine programs of male annihilation and sterile insect release. As noted by Chambers et al.
(1972), pre-exposure of sterile males may increase the efficiency of achieving effective overflooding ratios, since wild males would respond to lure-baited traps, whereas sterile males would not. Pre-exposure to the parapheromone may also increase the mating competitiveness of sterile males (Shelly & Dewire 1993), further enhancing the effectiveness of the sterile insect release method. Second, the present findings suggest the possibility that wild males that have consumed sufficient amounts of parapheromone from natural sources may show reduced attraction to lure-baited traps, thus potentially reducing the effectiveness of male annihilation programs. Finally, and somewhat unexpectedly, 6% of the males observed in the laboratory tests were not attracted to methyl eugenol in two separate exposure periods. The possibility that some males in a population may respond only slightly or not at all to parapheromones implies that in certain situations male annihilation may fail to achieve total eradication. Studies in our laboratory are currently investigating the genetic basis of male responsiveness to parapheromones using the B. dorsalis-methyl eugenol association.

ACKNOWLEDGMENTS

I thank the staff of the University of Hawaii Agricultural Experiment Station in Waimanalo for their cooperation. Annie Dewire, Stacey Fong, Caryn Ihori, Cheryl Monez, and Michael Whang provided capable laboratory assistance, and to all I am grateful. Also, I thank Emma Shelly who, despite her young age, was a great help in counting marked flies in trap catches. Comments by Tim Whittier greatly improved the paper. This research was supported by funds from the California Department of Food and Agriculture (90-0581) and the USDA/ARS (58-91H2-6-42).

Fig. 2. Capture probabilities of B. dorsalis males held varying lengths of time after exposure to methyl eugenol. Points represent average proportion of males captured per replicate; vertical lines indicate ± standard error. One set of treated males (held 35 d) was restricted to 30 s of feeding on methyl eugenol; all other treated males had access to methyl eugenol for 2 h. See text for sample sizes.

Fig. 3. Feeding times of B. dorsalis males during their second exposure to methyl eugenol 7 d, 21 d, or 35 d after the initial exposure. One set of treated males was given an initial 30 min exposure period, while another set was restricted to an initial feeding of 30 s; for both sets of treated males, the second exposure period was 30 min. Data for control males represent feeding durations during initial 30-min exposure periods at ages corresponding to males in different treatment groups. Points represent average values; vertical lines indicate ± standard error. The value plotted for the initial exposure was calculated over all treated males given an initial 30-min exposure period. See text for sample sizes.
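The laboratory analysis above rests on paired comparisons of each male's feeding duration across the two exposures (Wilcoxon paired-sample test) and on the rank correlation between first- and second-exposure feeding times (Spearman). The sketch below, using simulated durations rather than the study's measurements, shows how such comparisons might be run.

```python
import numpy as np
from scipy.stats import wilcoxon, spearmanr

rng = np.random.default_rng(0)

# Simulated feeding durations (seconds) for the same 50 males on two exposures;
# these are placeholders, not the recorded observations.
first_exposure = rng.gamma(shape=2.0, scale=60.0, size=50)   # longer initial bouts
second_exposure = rng.gamma(shape=1.0, scale=12.0, size=50)  # mostly brief re-feeding

# Paired comparison of first vs. second exposure (Wilcoxon signed-rank test).
T, p = wilcoxon(first_exposure, second_exposure)
print(f"Wilcoxon paired-sample test: T = {T:.1f}, p = {p:.2e}")

# Is first-exposure feeding time predictive of second-exposure feeding time?
rho, p_rho = spearmanr(first_exposure, second_exposure)
print(f"Spearman rank correlation: r_s = {rho:.2f}, p = {p_rho:.3f}")
```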
Climate Change Perception and Uptake of Climate-Smart Agriculture in Rice Production in Ebonyi State, Nigeria Rice production in Nigeria is vulnerable to climate risks and rice farmers over time have experienced the risks and their respective impacts on rice farming. Rice farmers have also responded to perceived climate risks with strategies believed to be climate-smart. Farmers’ perception of climate risks is an important first step of determining any action to be taken to counteract the negative effects of climate change on agriculture. Studies on the link between perceived climate risks and farmers’ response strategies are increasing. However, there are limited studies on the determinants of rice farmers’ perception of climate events. The paper therefore examined climate change perception and uptake of climate-smart agriculture in rice production in Ebonyi State, Nigeria using cross-sectional data from 347 rice farmers in an important rice-producing area in Nigeria. Principal component analysis, multivariate probit regression model and descriptive statistics were adopted for data analysis. Perceived climate events include increased rainfall intensity, prolonged dry seasons, frequent floods, rising temperature, severe windstorms, unpredictable rainfall pattern and distribution, late onset rain, and early cessation of rain. Farmers’ socioeconomic, farm and institutional characteristics influenced their perception of climate change. Additionally, rice farmers used a variety of climate-smart practices and technologies to respond to the perceived climate events. Such climate-smart practices include planting improved rice varieties, insurance, planting different crops, livelihood diversification, soil and water conservation techniques, adjusting planting and harvesting dates, irrigation, reliance on climate information and forecasts, planting on the nursery, appropriate application of fertilizer and efficient and effective use of pesticides. These climate-smart agricultural measures were further delineated into five broad packages using principal component analysis. These packages include crop and land management practices, climate-based services and irrigation, livelihood diversification and soil fertility management, efficient and effective use of pesticide and planting on the nursery. High fertilizer costs, lack of access to inputs, insufficient land, insufficient capital, pests and diseases, floods, scorching sun, high labour cost, insufficient climate information, and poor extension services were the barriers to uptake of climate-smart agriculture in rice production. Rice farmers should be supported to implement climate-smart agriculture in rice production in order to achieve the objectives of increased rice productivity and income, food security, climate resilience and mitigation. Introduction As the climate changes, it affects different aspects of the environment. Droughts, strong windstorms, floods, unpredictable rainfall volume, rising temperatures, late and early rain start, and other negative effects of climate change witnessed in previous years are becoming more common presently [1]. As the earth heats, rainfall patterns tend to vary, and extreme events such as droughts, floods, and forest fires become more common and severe [2]. Climate change could also result in more pressure on water bodies [3,4]. Communications noted the significant contributions of rice production to greenhouse gas emissions in the country [22][23][24]. 
One important mitigation measure to reduce greenhouse gas emissions in Nigeria is the adoption of climate-smart agriculture especially in rice production [24]. This makes the study of climate-smart agriculture in rice production in important rice-producing ecologies in sub-Saharan Africa, such as Ebonyi State, very important. Interest in this issue is one of the motivations of this study. It is clear from the foregoing that rice farming is a major contributor to climate change and a major sufferer of the impacts of climate change. Analysis revealed that climate change will have a negative impact on Nigeria's food security, prompting the implementation of various climate change adaptation and mitigation measures [25]. Responses that sustainably and simultaneously reduce the impacts of climate change on rice production, increase rice productivity and reduce/avoid/remove above and below ground carbon emissions are needed. Such responses are known as climate-smart agricultural practices, technologies or services and they are location and context-specific [26][27][28][29]. There is great potential to boost food production, increase resilience and carbon mitigation via large-scale adoption of climate-smart agriculture in rice farming. The adoption of climate-smart agriculture in rice production has the potential to increase income, food security and improve diets in Nigeria [24]. Conversely, climate-smart agriculture in Nigeria is at the nascent stage and its adoption in rice production is still low in the country [21,30]. Policy-makers require understanding of climate change perception and uptake of climate-smart agricultural practices in rice production in Nigeria to be able to meet the country's obligation of reporting progress made in the implementation of the Nationally Determined Contribution. Additionally, knowledge of the climate-smart agricultural practices in rice production will help the government to address the challenges facing farmers in adopting climate-smart agricultural technologies in the State and other locations with similar socioeconomic and biophysical contexts in Africa. Again, knowledge of the perception of climate change and its determinants will further trigger policies to drive down positive climate change response mechanisms in Nigeria and sub-Saharan Africa. The study was therefore conducted to determine climate change perception and uptake of climate-smart agriculture in rice production in an important rice-producing State in Africa. This study also contributes to the literature on climate-smart agriculture in rice production by applying the principal component analysis to categorize climate-smart agricultural strategies used by rice farmers in an important rice-producing State in Nigeria. Sampling and Data Collection The paper relied on a survey of rice farmers conducted in nine Local Government Areas in Ebonyi State, Nigeria between October 2019 and February 2020. The State has three zones. We included all the zones of the State in this study because rice is grown in all Local Government Areas of the State. In each zone, three Local Government Areas (LGAs) were purposively selected based on the degree of rice production ranked by officials of the Agricultural Development Programme in the State. The selected LGAs in each zone are shown in Figure 1. In each LGA, four communities were selected. In each community, the study selected ten rice farmers. 
During data entry and analysis, we observed that thirteen (13) returned questionnaires were not properly completed by the farmers. These questionnaires were not included in the final analysis. This reduced the number of observations from the 360 rice farmers proposed to 347 farmers. We collected data on the socio-economic, farm and institutional characteristics of the rice farmers and on the farmers' perception/experience of climate events. We also collected data on climate-smart practices and technologies used in rice production and the constraints to uptake of such practices and technologies.

Data Analysis

The paper used descriptive statistics, principal component analysis and a multivariate probit regression model to analyse the data collected. We used descriptive statistics to describe the characteristics of farmers, highlight farmers' perception of climate change and ascertain the barriers to uptake of climate-smart agricultural practices and technologies in rice production. To categorize the uptake of different individual climate-smart agricultural practices and technologies in rice production, we used principal component analysis. We grouped the practices/technologies into heterogeneous clusters by the use of principal component analysis (PCA). The PCA has also been used in the literature to group climate risk management measures [31,32]. The practices were grouped using PCA with iteration and varimax rotation in the model shown below:

Y_i = a_i1 x_1 + a_i2 x_2 + ... + a_ig x_g, for i = 1, ..., g   (1)

where Y_1, ..., Y_g represent the principal components, which are uncorrelated; a_i1, ..., a_ig represent the correlation coefficients; and x_1, ..., x_g represent the climate-smart agricultural strategies. We used SPSS to carry out the principal component analysis.

We also modelled the determinants of farmers' perception of climate events. The literature is replete on farmers' perception of climate change, but there is scanty empirical evidence on the determinants of farmers' perception of climate events. Farmers perceive different climate events, and the events are usually interrelated. Available literature in sub-Saharan Africa has largely treated the determinants of perception singly (see [16,[33][34][35][36][37][38]]) without due consideration to the interrelated nature of perceived climate events. Although we are aware of the study of Liverpool-Tasie et al. [12], an exception in Nigeria, that considered the interrelated nature of farmers' perceived climate events, that study dealt with maize and poultry farmers. Rice farming is a significant contributor to greenhouse gas emissions in Africa's most populous country (Nigeria), and mitigation through adoption of climate-smart agricultural practices is needed to meet Nigeria's obligation to the global community as contained in the Nationally Determined Contribution [24]. It makes scientific and economic sense to consider the determinants of rice farmers' perception of climate change and how interrelated the perceived climate events are for effective programmes on climate change resilience and mitigation in the country and other countries with similar contexts. We therefore explored the determinants of climate change perception in rice farming using the multivariate probit model.
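Before turning to the probit specification, a rough sketch of the principal component step described above may help make it concrete. The data below are random placeholders (not the survey responses), the varimax routine is a generic implementation rather than the SPSS procedure the study used, and the 0.5 cut-off mirrors the threshold later applied to the Table 4 loadings.

```python
import numpy as np
from sklearn.decomposition import PCA

def varimax(loadings, gamma=1.0, max_iter=100, tol=1e-6):
    """Generic varimax rotation of a (variables x components) loading matrix."""
    p, k = loadings.shape
    rotation = np.eye(k)
    d = 0.0
    for _ in range(max_iter):
        rotated = loadings @ rotation
        u, s, vt = np.linalg.svd(
            loadings.T
            @ (rotated ** 3 - (gamma / p) * rotated @ np.diag((rotated ** 2).sum(axis=0)))
        )
        rotation = u @ vt
        d_new = s.sum()
        if d_new < d * (1 + tol):
            break
        d = d_new
    return loadings @ rotation

# Placeholder 0/1 adoption matrix: 347 farmers x 11 climate-smart strategies.
rng = np.random.default_rng(42)
X = rng.integers(0, 2, size=(347, 11)).astype(float)

# Extract five components (the number retained in the study) and form loadings.
pca = PCA(n_components=5).fit(X)
loadings = pca.components_.T * np.sqrt(pca.explained_variance_)

rotated = varimax(loadings)
print(np.round(rotated, 3))
# Flag loadings at or above the 0.5 threshold used to assign strategies to components.
print(np.abs(rotated) >= 0.5)
```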
The multivariate probit regression model treats the effects of predictors on various simultaneously perceived climate events and ensures that the disturbance terms of the perceived events are freely correlated. This model accounts for the interdependent nature of the perceived climate events and informs scientists whether the perceived events are complements or substitutes. The MVP further explains the potential relationship between climate change perception and unobserved factors [39]. Therefore, the multivariate probit (MVP) model has a set of dichotomous dependent variables (P_i) such that:

P*_i = X_i β_i + u_i, with P_i = 1 if P*_i > 0 and P_i = 0 otherwise   (2)

where β_i represents the vector of parameter estimates and P*_i denotes the latent variable. Equation (2) assumes that a rice farmer has a latent variable, P*_i, that considers unobserved factors related to the ith perceived climate event. P*_i is a linear combination of household socioeconomic characteristics, household assets, farm and institutional characteristics (X_i) affecting the simultaneous perception of climate events, as well as the unobserved factors explained by the error term u_i. P_i indicates the dependent variables measuring whether or not a rice farming household has perceived a particular climate event. The dependent variables are the perceived climate events and they are listed below:

P_1 = Perceived increased rainfall intensity (Yes = 1, No = 0)
P_2 = Perceived prolonged dry season (Yes = 1, No = 0)
P_3 = Perceived frequent floods (Yes = 1, No = 0)
P_4 = Perceived increased temperature (Yes = 1, No = 0)
P_5 = Perceived severe windstorm (Yes = 1, No = 0)
P_6 = Perceived unpredictable rainfall volume (Yes = 1, No = 0)
P_7 = Perceived late onset of rain (Yes = 1, No = 0)
P_8 = Perceived early cessation of rain (Yes = 1, No = 0)

X_i represents the vector of independent variables (the socioeconomic, household asset, farm and institutional characteristics of the rice farming households). The choice of the independent variables was supported by available literature on factors influencing climate change perception in sub-Saharan Africa [12,16,[33][34][35][36][37][38]]. We used the STATA software to carry out the multivariate probit regression analysis.

Table 1 showed the socio-economic characteristics of the farmers in the area. From the Table, the mean number of years spent in school was 9 years, implying that the farmers in the area at least had junior secondary education. Education is seen as a very veritable tool in this era of climate change because it helps farmers to access practices and technologies for responding to climate change. By this, farmers are able to properly withstand the adverse effects of climate change affecting rice production in the area [21]. Mean age of the farmers was 46 years. This implies that rice farmers in the area were young and in their prime age, which avails them more opportunity to access climate information regarding the rice farming business. Age is notably an important factor in agriculture as it determines to a great extent the productivity of the farmers in general [10]. The mean household size was approximately 7 persons, which implies that the rice farmers had relatively large households, some members of which could be relatives, extended dependents, etc., who undoubtedly could assist in rice production and in responding to the changing climate in the area. This is also related to the findings of Abegunde et al. [40] and Mujeyi et al. [41]. The majority (65%) of the farmers were males.
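As a simplified, equation-by-equation illustration of the probit specification above: the study fitted all eight perception equations jointly in STATA with freely correlated errors, which requires specialised multivariate probit routines, whereas the sketch below fits a single probit for one simulated outcome using statsmodels. The regressor names and data are invented placeholders, not the survey variables.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(7)
n = 347  # matches the survey's sample size; the data themselves are simulated

# Hypothetical regressors: schooling (years), age, household size, group membership.
education = rng.normal(9, 3, n)
age = rng.normal(46, 10, n)
household = rng.poisson(7, n)
group_member = rng.integers(0, 2, n)
X = sm.add_constant(np.column_stack([education, age, household, group_member]))

# Simulate one binary outcome, e.g. "perceived prolonged dry season".
latent = 0.3 + 0.05 * education - 0.02 * age + 0.3 * group_member + rng.normal(size=n)
y = (latent > 0).astype(int)

# Single-equation probit; a full multivariate probit would estimate all eight
# perception equations jointly and let their error terms be correlated.
probit = sm.Probit(y, X).fit(disp=False)
print(probit.summary())
```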
This implies that there were more male farmers than female farmers in rice production. Africa is more of a patriarchal society, which allows men to access and own agricultural inputs such as (lands, credit facilities, improved seedlings, etc.) more easily than women [42]. Additionally, male farmers are able to withstand the negative impacts of climate change and access climate information more than female farmers. The extension values of 0.57 (57%) and 3.31 showed that about 57 percent of the rice farmers accessed extension services and were visited at least 3 times per year by the extension agents. Anugwa et al. [30] also found a similar level of extension access among rice farmers in another location in Nigeria. Extension services have a way of exposing farmers to new and recent knowledge on rice farming via the introduction of innovative technologies and updated climate information which empower the farmers to respond to climate change. Furthermore, extension visits build strong resilience amongst farmers in adapting to the adverse impacts of climate change [10]. The mean farming experience was 13.08 years. This means that rice farmers in the area have quite enough years to gather practical knowledge to solve inherent rice cultivation problems and be able to overcome both internal and external challenges affecting rice production. It is widely believed that the more experienced a farmer is the more likely the farmer would be in overcoming climate risks and implementation of long acquired practical knowledge [43] to boost rice production. About 28% of the farmers had access to credit. This implies that a small proportion of the rice farmers were able to access credit facilities from formal and informal sources. Generally, access to credit empowers rice farmers to acquire more agricultural inputs such as lands, fertilizers, improved seedlings, etc. [44]. However, access to credit could equally trigger access to climate information, since a farmer is privileged to move about in search of credit facilities, he/she may equally come across a discussion on climate change and what it offers, which could be efficiently and effectively utilized. It is interesting to note that 71%, 91% and 80% of the farmers had television, mobile phone and radio, respectively. These assets enhance both reception and communication of climate information, which help the farmers in overcoming adverse effects of climate change on agricultural production [45]. Similarly, 19 percent had at least a car, 67 percent had at least a motorcycle, 8 percent had at least a tricycle and 42 percent owned at least one bicycle. These means of transportation enhance both the movement of the rice farmers and their produce from the point of production to the point of sale as well as the movement of inputs (including climate-smart agricultural technologies) from point of purchase to the farm. Access to transportation is also a major determinant in the marketing of agricultural produce as it enhances free movement of goods and services between the rural and urban markets without many restrictions [46]. Furthermore, the result showed that about 53 percent of the rice farmers were members of farmer groups/associations. This implies that through the association, rice farmers could access both climate information and other agricultural inputs (including climatesmart agricultural technologies). 
Membership of farmers' associations encourages the transference of diverse knowledge and farm requisite information which help farmers in responding to climate change as well as boosting farm production [47]. Again, about 40 percent of the rice farmers attended trainings/workshops on climate change and/or rice farming and the average number of trainings/workshops attended per year was 1.52. This implies that the rice farmers were able to access vital information on rice farming and also on climate change via their attendance. These trainings and workshops have a way of communicating vital information which ordinarily is beyond the reach of farmers [48]. Through these trainings and workshops, farmers meet and interact with other farmers from various regions and locations. This could possibly serve as a medium of communicating other recent agricultural information/innovations. Eleven percent of rice farmers rely on government support in counteracting the negative impacts of climate events. The mean farm size of the rice farmers in the area was 1.47 hectares, this is typical of rural farmlands which are usually small in size, disjointed and fragmented [49,50]. This size of farmland could hardly support commercial farm production. Figure 2 showed the perceived climate events in the area. Increased temperature was perceived by over 90 percent of the rice farmers as a major climate risk affecting rice production in the area. Rice reproductive and developmental stages are hampered by high temperature, which reduces yield [51,52]. Increased rainfall intensity was cited by nearly 90% of rice producers as a perceived climate change concern. It is undeniable that higher rainfall intensity has an impact on rice production, resulting in lower yields and inferior grains [53]. Increased rainfall intensity causes erosion, which can destroy rice fields and rice grains. Erosion can take vital plant-available nutrients and organic matter with it when soil is lost. Flooding can also occur as a result of increased rainfall intensity, reducing outputs and exacerbating the local food security situation [54]. Flood deposits may raise nitrogen, phosphorous, silicon, and potassium levels in the soil, resulting in nutrient surpluses that can stymie rice development [55]. Crop loss, soil erosion, and increased flooding owing to heavy rains are all potential consequences of high precipitation, which can impact agricultural output. Approximately 83.3 percent of rice farmers viewed prolonged dry season as a serious climate event. Long dry seasons could reduce soil moisture content [56], denying planted grains access to the moisture they need for growth and crop development. Drought could be triggered by a protracted dry season, causing considerable harm to rice crops, especially if it occurs during critical times of crop development, such as after planting or flowering [57]. Drought can limit agricultural growth, resulting in lower yields and lower quality produce. Approximately eighty-two percent of the rice farmers believed that the area is prone to flooding. Rice (paddy) can be farmed in swamp (flood) locations, but regular flooding of farmlands washes away the fertile topsoil, leaving the soil less fertile. Flood water can suffocate and kill crops by depositing sand and debris. Crops can be damaged and output losses can occur even after floodwaters have receded. 
Flood does not only lower plant defences, but the soil and water conditions that prevail during flooding also favour the development of many plant diseases, resulting in an increase in the incidence of crop diseases [58]. Additionally, water in the soil or above the soil surface means that plants have less oxygen available to them, and one effect of low oxygen is a drastic reduction in metabolism, which can dramatically reduce yield and, if prolonged enough, cause death to a portion or the entire plant. Flooding has the potential to alter the amount of plant-available nutrients in the soil. The climate events perceived/experienced by rice farmers in Ebonyi State (unpredictable rainfall pattern and distribution, increased rainfall intensity, prolonged dry season, frequent floods, increased temperature, severe windstorm, late onset of rain and early cessation of rain) are all in line with meteorological/scientific data analyses conducted in previous studies in Ebonyi State and Nigeria [10,59,60].

Unpredictable rainfall pattern and distribution was cited by more than 83 percent of rice farmers as one of the climate change threats affecting rice output in the area. Farmers find it extremely difficult to plan their farming operations due to the unpredictable rainfall pattern and distribution, as they are frequently stuck (confused) on how to go about their rice cultivation [61,62].
Rainfall unpredictability is a major climate change event affecting agriculture [54,62,63]. Rain-fed rice, on the other hand, is primarily impacted by shifting rainfall patterns and rising temperatures. Extreme climate events such as floods and droughts are triggered by this irregular rainfall pattern and distribution, which have a negative impact on rice crops. Rainfall has a significant impact on soil. Nutrients in the soil can flow off and not reach the roots of plants if the weather is too wet or too dry, resulting in poor development and overall health of the planted crops. About 65% of rice growers reported experiencing a severe windstorm. Severe windstorms may induce a significant impact on rice production by inflicting severe damage and causing fractures, bends, and other sorts of injuries that result in reduced productivity. Heavy wind disrupts the growth and balance of planted crops, causing major damage. Severe windstorms can cause entire crop failure as well as soil surface erosion. Similarly, 81% and 78.1% of rice farmers have perceived late commencement and early cessation of rain as significant climate change threats to rice productivity. Late rains disrupt farmers' planting schedules, causing extended delays in rice farming, particularly highland rice cultivation, and resulting in low yields or product [64]. This prolongs the time it takes for farmers to begin cultivating their land. Early cessation of rainfall produces land dryness (drought) in the planted grains, resulting in immediate crop mortality. Furthermore, early rainfall cessation can cause poor soil aeration and lower moisture root content, resulting in poor crop growth and yields [58].

Determinants of Farmers' Perception of Climate Events

Table 2 showed the multivariate probit result of determinants of farmers' perception of climate events in the area. The Wald likelihood ratio Chi-square value of 202.69 was significant at 1 percent probability level, showing that the multivariate probit (MVP) regression model fitted appropriately in estimating the determinants of farmers' perception of climate events in the area. Educational attainment of the farmers had a significant negative relationship with frequent floods and severe windstorms and a positive significant relationship with late onset of rain and early cessation of rain. The implication is that educated people are more enthusiastic to note the changes in climate than uneducated people. Educated people become very conscious about their environment and sense the changes in climate better [65]. This further implies that education influences farmers' perception of climate events. An increase in educational attainment of the farmers increases their perception, understanding and knowledge base in handling climate events such as late onset of rain and early cessation of rain. Education is notably a key determinant aimed at assisting the farmers to overcome the horrible experiences of climate events such as late onset of rain and early cessation of rain [38]. It has a way of equipping the farmers with the right knowledge and relevant information cum exposure in handling the challenges of late onset of rain and early cessation of rain, which regularly interfere with the planting calendar. With this knowledge, farmers are in a better position to utilise the climate change information services to their advantage without many limitations. The age of the farmers was negatively related to frequent floods, severe windstorms, late onset of rain, and early cessation of rain.
This generally implies that younger farmers perceived more of these hazards than older farmers. This could be true because younger farmers are more involved in agriculture and other natural resource-dependent activities than the older ones and more likely to notice any changes in climate. Young farmers respond easily to trainings, workshops, seminars, as it relates to agriculture and climate change issues and are more willing to take steps in overcoming the changing climate and its associated risks such as frequent floods, severe windstorms, late onset of rain, and early cessation of rain. These risks, if not tackled ahead of time affects agricultural production negatively [36]. The household size of the farmers had a significant positive relationship with late onset of rain and also was negatively related to frequent floods. This implies that household size is a significant determinant of the perception of farmers to climate events. That is the larger the household size of a farmer is, the more likely he/she would be in perceiving the late onset of rain. This is likely given the higher number of persons in the household and each active member of the family has the ability to observe and record any change in the onset of the rains. Family labour is mainly seen as an outcome of large household size which could be utilised in managing climate events perceived. The negative relationship with frequent flood implies that families with fewer members noticed flooding more than their counterparts with larger members. Off-farm employment also had a positive significant relationship with early cessation of rain. This means that farmers involved in off-farm employment noticed early cessation of the rains than their counterparts not involved in any off-farm employment. Off-farm employment could take members of farm households to other locations with different climates from their homes and this might lead them to noticing changes in the timing of rains. Generally, off-farm employment offers the farmers the opportunity to seek any other jobs outside their primary occupation (farming), and as a result, the farmers are being exposed to climate change activities and its associated risks with possible means of counteracting its negative effects. Gender had a negative and significant relationship with a prolonged dry season and rising temperature. This shows that climate change perception by rice farmers is not gender-neutral. The negative significant relationship with the prolonged dry season and increased temperature implies that female rice farmers noticed prolonged dry season and rising temperature more than their male counterparts. This could be probably due to the serious engagement of the female farmers [65] and women's vulnerability to increasing temperature and associated risks. Nowadays women tend to engage more in farming activities and are also being exposed to climate change risks and their possible impacts. Ownership of television and mobile phone also had a negative significant relationship with late onset of rain. This implies that ownership of television had a negative influence on the perception of late onset of rain. Ownership of television and mobile phone grants farmers access to more diverse and well-analysed climate information that could be matched with their perception. In situations such as this, farmers will reconcile perception with scientific information [20]. 
Moreover, sometimes farmers that have television may not have the time to watch the television due to tight engagements [35]. Membership of farmers' groups/associations had a significant positive relationship with prolonged dry season, showing the influence of farmers' groups in shaping the perception of the farmers to climate events (in this case prolonged dry season). Members of farmer groups are better positioned to access information on climate events which ordinarily may elude them if they do not belong to such groups. Farmers' groups have a way of inculcating and disseminating vital information concerning new farming methods, agricultural innovations, climate change and its associated risks with improved ways to manage the risks [37]. Marital status also had a significant positive relationship with prolonged dry season and unpredictable rainfall pattern and distribution in the area. This posits the influence of marital status in local perception of climate change. In this case, the significant positive relationship with prolonged dry season and unpredictable rainfall pattern and distribution connotes that these climate events were perceived more by the married farmers than their colleagues who are single. This could be true because married farmers seem to be more disposed to information relating to agricultural activities compared to their single counterparts. Marriage enhances the capacity of the farmers in accessing reasonable and vital information [37]. Interdependent Nature of Perceived Climate Events The joint perception of climate events is shown in Table 3. The Chi-square, which determines the appropriateness of the MVP model, is significant at 1% level. This indicates that the MVP is appropriate in modelling the determinants of perceived climate events in rice production. Table 3 indicated that the perceived climate change events are only complementary. This implies that all the perceived climate events in the area complemented each other and existed amongst the farmers. The result consists of 28 pairwise correlation coefficients of the perceived climate events. All the correlation coefficients of the perceived climate events were positive. Amongst the 28 correlation coefficients, 25 were positively significant, while the remaining three were not significant. The results showed that increased rainfall intensity was significant and complemented frequent floods, increased temperature, late onset of rain and early cessation of rain. Increased rainfall intensity causes soil displacements and erosion, which can harm rice fields and destroy planted rice grains. Furthermore, higher rainfall intensity might result in flooding, reducing rice yields and affecting the local food security situation [35]. Frequent floods are known to undermine plant structures, causing total collapse of the rice plants. The soil and water conditions present during flooding usher in the growth of microbial diseases organisms. Increased temperature encourages the growth of soil pathogens, which spawn insect attacks and pest/diseases in rice fields, reducing rice yields, outputs and income [38]. Rice reproductive and developmental stages are hampered by high temperature, which reduces plant height and root extension. Early cessation of rainfall promotes poor aeration of the soil and decreases moisture root content, resulting in poor rice crop growth and yields [66]. 
Prolonged dry season was significant and complemented frequent floods, severe windstorm, and unpredictable rainfall pattern and distribution, late onset of rain and early cessation of rain. This implies that these perceived climate events complemented each other. Long dry season reduces soil moisture content, denying planted grains access to the moisture they need for growth and crop development. Drought could be triggered by a protracted dry season, causing considerable harm to rice crops. Frequent flood complemented increased temperature, severe windstorm, unpredictable rainfall pattern and distribution, late onset of rain and early cessation of rain. Severe windstorms can have a significant impact on rice production, causing substantial damage and causing fractures, bends, and other sorts of injuries that result in yield and productivity loss. Increased temperature was significant and complemented severe windstorm, unpredictable rainfall pattern and distribution, late onset of rain, and early cessation of rain. Farmers find it extremely difficult to plan their farming operations due to the unpredictable rainfall pattern and distribution. Rainfall is unpredictable, which disrupt the planting schedule and leaves farmers defenceless during planting seasons. Severe windstorm was significant and complemented unpredictable rainfall pattern, late onset of rain, and early cessation of rain. Late rains disrupt farmers' planting schedules, causing lengthy delays in rice farming, particularly highland rice cultivation, and resulting in low yields or product [35]. Unpredictable rainfall pattern and distribution was significant and complemented late onset of rain and early cessation of rain while late onset of rain further complemented early cessation of rain [36]. The interdependent nature of the perceived climate events indicated that farmers in the area have experienced one form of climate events or the other and had also adopted various climate change adaptation strategies in mitigating their negative effects on rice crops in the area. Climate-Smart Agricultural Practices/Technologies in Rice Production The authors first grouped and renamed some climate-smart agricultural strategies before having the final eleven strategies subjected to principal component analysis. Organic and inorganic fertilizer were grouped together and renamed effective use of fertilizer, while seeking early warning information about climate risks and using weather forecasting were grouped together and renamed reliance on climate information and forecasts. The climatesmart agricultural practices were reduced to eleven and principal component analysis was carried out to determine the broader categorization of the practices and the result presented in Table 4. The rotated component matrix of the climate-smart agricultural strategies adopted by the rice farmers is shown in Table 4. From the result, a threshold value of 0.500 was established and was used as the basis for determining the principal components. The first principal component (PC1) was highly correlated with three of the climate-smart agricultural strategies namely (planting improved rice varieties, soil and water conservation techniques, and adjusting planting and harvesting dates) and yielded scores of 0.783, 0.709 and 0.705 respectively. We named this component crop and land management practices. This component supports all around bio-physical development of the crop and soil leading to improved yields and outputs [10,18,67]. 
It is still evidently clear that planting improved rice varieties is a key climate-smart agricultural strategy. Improved rice varieties are high yielding varieties and are highly resistant to rice pests and disease infestations. Furthermore, improved rice varieties, especially early maturing varieties, reduce methane emissions from rice farms by reducing the length of the growing season, which is a measure of the length of time paddy rice fields are flooded and emit methane. Additionally, soil and water conservation is another critical climate-smart agricultural strategy which ensures minimal destruction of the soil surface and renewal of adequate moisture contents of the soil required for maximum growth and crop yields [21,35,39] and reduces emissions of greenhouse gas. Water and soil conservation helps in managing the emissions from paddy rice fields, especially through intermittent aeration of the field. More so, adjusting planting and harvesting dates is seen as an effective crop and land management practice that enables rice farmers to adjust their planting and harvesting calendars to suit any prevailing climate change which, if not adhered to, might cause havoc on planted rice crops [10].

[Table 4 note: ** marks components with scores of 0.5 and above (selected components). PC 1 was renamed crop and land management practices, PC 2 climate-based services and irrigation, PC 3 livelihood diversification and soil fertility management, PC 4 efficient and effective use of pesticide, and PC 5 planting on the nursery.]

The second principal component (PC2), which is climate-based services and irrigation, is highly correlated with another three climate-smart agricultural strategies (irrigation, insurance, reliance on climate information and forecasts). Amongst these climate-smart agricultural practices/technologies, irrigation had the highest score of 0.731. Irrigation ensures efficient and sustainable supply of water to the planted crops all through the farming season [68,69]. Efficient water management and intermittent draining of paddy rice fields are very important strategies for reducing and avoiding methane emissions. Insurance, with a score of 0.711, indicated that insurance is a vital adaptation strategy which covers the farmer during periods of total agricultural failure occasioned by climate change [45]. Reliance on climate information and forecasts is another important climate-smart agricultural strategy that assists farmers with current information on climate change. The climate information services empower the farmers to respond favourably to the adverse effects of climate change in the area. Similarly, the third principal component (PC3) categorized another three climate-smart agricultural strategies (livelihood diversification, appropriate application of fertilizer, and planting different crops), which had principal component scores of 0.839, 0.679 and 0.502, respectively, into one component. This component is called livelihood diversification and soil fertility management. These are climate-smart agricultural techniques employed to improve the living standard and/or condition of the rice farmers as well as improvement of the soil fertility, crop yields and productivity of the rice farmers in the area [21] and reduction of greenhouse gases in the area. Livelihood diversification helps the farmers, especially the poor ones, in raising additional sources of income outside their primary occupation, and this income assists heavily in family support [70,71].
Appropriate application of fertilizer and planting different crops are beneficial climate-smart agricultural strategies. Appropriate application of fertilizers improves soil fertility, leading to bumper growth and harvest, and avoids wastage of fertilizer, which may otherwise lead to increased nutrient losses to the environment and increased emissions. Planting different crops serves as an alternative cover for the farmer in times of crop losses due to climate change [72]. The fourth principal component (PC4) and the fifth principal component (PC5) were efficient and effective use of pesticides and planting on the nursery, respectively. These climate-smart agricultural strategies had scores of 0.875 and 0.929, respectively. In combating rice pests and diseases, efficient and effective use and application of pesticides is necessary to support efficient growth processes and plant development [73]. Adoption of this practice empowers farmers to tactically reduce the effects of rice pests on the farmlands, give room for maximum growth and higher yields, and reduce the emission of any chemical that may contribute to global warming. Planting in the nursery is a suitable adaptation strategy, where the tender crops are first planted before being taken to the permanent field. This ensures maximum protection of the planted crops from the vagaries of climate. This practice enables farmers to properly tend the growing rice plants before transferring them to the field and helps in protecting the growing rice plants from the negative effects of climate change. In addition, the rotated matrix of the adaptation strategies showed that none of the adaptation strategies were less than the threshold value of 0.500. Figure 3 presents the constraints to the adoption of climate-smart agricultural strategies in rice production in the area. Figure 3 shows that 98.6 percent of the rice farmers indicated the high cost of fertilizer as their major barrier to uptake of climate-smart agriculture in the area. As a result of the devastating effects of climate change, soil fertility is severely affected, contributing to the poor performance of the harvested crops; as such, the only way out is the application of fertilizers [10]. The high cost of fertilizer tends to mar its effective and efficient application, thus posing a huge challenge in responding to climate change. About 83 percent of the rice farmers averred lack of access to inputs as their constraint to uptake of climate-smart agriculture. Access to farming inputs (improved varieties, seedlings, pesticides, etc.) facilitates farmers' response to climate change. However, the inability of the farmers to access inputs could become a barrier in responding to climate change [11,74,75]. Inadequate land was reported by about 76.1 percent of the rice farmers. Inadequate land restricts the full practice or application of some of the climate-smart agricultural strategies or techniques. In reality, adequate land is required to technically practice some of the climate-smart strategies, especially soil and water conservation techniques, planting of different crops, etc., to fully maximize their benefits and rewards. When the land is fragmented or inadequate in some cases, it distorts the benefits of climate risk management [30] and this may kill the drive and interest of farmers in responding to climate change. Inadequate capital was observed by about 93.4 percent of the rice farmers as a serious constraint to uptake of climate-smart agriculture.
Inadequate capital makes it difficult for farmers to access some of the farming inputs such as land, labour, planting materials, etc. Where these farming inputs are not readily accessible, responding to climate change becomes extremely difficult [75]. Moreover, capital is seen as a key determinant of climate change resilience, mitigation, rice productivity and food security. Pests and diseases were also reported by 86.2 percent of the rice farmers as a major barrier to adoption of climate-smart rice production technologies/practices. Pests and diseases are usually triggered by prolonged drought and high-temperature conditions occasioned by climate change and as such attack rice crops, reducing the quantity, quality and productivity of the farmers' harvests, thus posing a threat to climate change adaptation and mitigation [76]. About 58.2 percent of the rice farmers indicated flooding as a limitation to uptake of climate-smart agriculture. Flooding alters the level of plant-available nutrients in the soil by washing off both the subsoil and topsoil surfaces, thereby weakening the defence mechanism of plant roots. In addition, flooding causes water percolation, which breeds all manner of plant diseases and pests that attack planted crops and hinders rice farmers' resilience and mitigation of climate change [77]. Scorching sun was also pointed out by about 63 percent of the rice farmers. The high intensity of the sun sometimes makes it difficult for the farmers to respond to climate change. About 87 percent of the rice farmers attested to high labour cost as a major constraint to adopting climate-smart agriculture in rice production. Labour is evidently required for farmers to efficiently practice and apply climate-smart rice production strategies/techniques. Where labour is costly, especially hired labour, it becomes extremely difficult to respond to climate change. Sometimes the exorbitant fees charged by hired labourers prevent some of the poor and vulnerable farmers from accessing them, thus complicating their chances of responding to climate change [74]. Inadequate climate information was reported by 68.3 percent of the rice farmers. Inadequate climate information services limit the farmers' knowledge in responding to climate change. Where the farmers are not properly informed about the activities of climate change, they are bound to be overwhelmed and susceptible to climate change issues, thereby limiting their response [11]. Finally, poor extension services were reported by about 54 percent of the rice farmers. Extension service is another enabler of uptake of climate-smart agriculture in rice production. It mirrors the adaptation and mitigation strategies and empowers the farmers on some of the technicalities associated with the climate-smart strategies and their corresponding benefits and advantages over others. Extension services help farmers to access first-hand information on climate change in good time and on how to overcome and adapt to it favourably, but where these services are not readily available or are poorly delivered, the ability of farmers to effectively respond to climate change becomes limited [74]. However, appropriate seminars, conferences, symposia, etc. that will spur the farmers' productivity, climate change mitigation and resilience should be encouraged. Additionally, policy drives should be tailored toward overcoming the above-identified constraints to the uptake of climate-smart agriculture in rice production.
Conclusions and Recommendations Climate change poses a serious challenge to rice production in many parts of Africa and rice farming also contributes significantly to greenhouse gas emissions. Farmers perceive different effects of climate change on rice production and have also responded differently. Climate-smart agriculture is needed as an important strategy to respond to climate change in Sub-Saharan Africa. However, the adoption of climate-smart agriculture in rice farming is still low in sub-Saharan Africa. To increase the understanding of rice farmers' perception on climate change and uptake of climate-smart agriculture in rice production, this study was conducted using cross-sectional data from three hundred and forty-seven rice farmers and analysed using principal components, a multivariate probit regression model and descriptive statistics. Farmers perceived various climate events such as increased rainfall intensity, prolonged dry season, frequent floods, increased temperature, severe windstorm, unpredictable rainfall pattern and distribution, late onset of rain and early cessation of rain. Several socioeconomic characteristics and assets of the farmers determined the perception of climate change in rice production. Education, age, household size, gender, off-farm employment, ownership of television and mobile phone, membership of farmers groups and marital status were the main drivers of climate change perception in rice production. Additionally, perceived climate events are largely interdependent and complementary to each other. In a bid to overcome perceived climate events, the rice farmers adopted several climate-smart agricultural strategies. These include planting improved rice varieties, insurance, planting different crops, livelihood diversification, soil and water conservation techniques, adjusting planting and harvesting dates, irrigation, reliance on climate information and forecasts, planting on the nursery, appropriate application of fertilizer and efficient and effective use of pesticides. The principal component analysis showed that the individual climate-smart agricultural practices can actually be disseminated as bundles of strategies in packages. This study has shown that farmers' perception on climate change plays a significant role in the decision to respond to climate change in rice production. Therefore, incorporating farmers' perception on climate change and indigenous knowledge into adaptation and mitigation planning will be an effective and efficient way of increasing rice productivity and resilience to climate change. The major barriers to uptake of climate-smart agriculture by rice farmers include high fertilizer cost, lack of access to inputs, inadequate land, inadequate capital, pests and diseases, flood, scorching sun, high labour cost, inadequate climate information and poor extension services. Thus, the farmers are advised to constantly seek information on climate change before embarking on rice production; this will without doubt position them to overcome any adverse effect of climate change. Additionally, the government should assist the farmers in implementing climate change policies to ameliorate the sufferings and constraints of the rice farmers. This study focused on smallholder rice farmers in Ebonyi State, Nigeria. Future studies could extend the scope across Nigeria to allow for comparisons across different rice-producing agro-ecologies regarding perception, uptake of climate-smart rice production technologies and barriers to the adoption of such technologies. Such studies could also extend the analysis to ascertain the effect/contribution of adoption of different climate-smart rice production practices on rice yield, food security, resilience and mitigation. This is particularly important given the importance of rice production to Nigeria's economy and the vulnerability of rice cultivation to climate change.
Linking Foraging Domestic Burglary: An Analysis of Crimes Committed Within Police-Identified Optimal Forager Patches Crime linkage is a systematic way of assessing behavioural or physical characteristics of crimes and considering the likelihood they are linked to the same offender. This study builds on research in this area by replicating existing studies with a new type of burglar known as optimal foragers, who are offenders whose target selection is conducted in a similar fashion to foraging animals. Using crimes identified by police analysts as being committed by foragers, this study examines their crime scene behaviour to assess the level of predictive accuracy for linking crimes based on their offending characteristics. Results support previous studies on randomly selected burglary offence data by identifying inter-crime distance as the highest linking indicator, followed by target selection, entry behaviour, property stolen and offender crime scene behaviour. Results discuss distinctions between this study and previous research findings, outlining the potential that foraging domestic burglary offenders display distinct behaviours from other forms of offender (random/marauder/commuter). Introduction Decision-making by offenders, specifically that which relates to the spatial choice of the offender, is underpinned by several commonly acknowledged key theoretical frameworks including routine activity theory (Cohen and Felson 1979) and crime pattern theory (CPT) (Brantingham and Brantingham 1993). Within a policing context, studies of routine activities and CPT are often applied to direct and control police resources into geographic areas that have suffered high levels of victimisation (Halford 2018). The central concept of such approaches is to maximise the effectiveness of the capable guardianship provided by the increased police presence to prevent and reduce crime in the affected area (Halford 2018). The police commonly use this method to target burglary offenders. To help develop such patrol plans police commonly use a 3-step process. First, they seek to identify linked crimes committed by the same offender. This is achieved by focusing on crime characteristics to understand which provide the most accurate indicator that two crimes are linked (committed by the same person) or unlinked. This area of research is covered in detail later in this section. Second, they look for insights that can help determine the nature of the offending behaviour. Research that has explored the geographical movements of offenders to help define the nature or characteristics of their offending has labelled burglary offenders as 'commuters', those who travel extensively from outside of the attacked area (Canter and Larkin 1993; Rossmo 2000), and 'marauders', who move uncoordinatedly and select targets randomly (Rossmo 2000). Finally, they select a theoretical framework on which to base their prediction. There are several theoretical frameworks that the police use to underpin their patrol decision-making, which include near-repeat theory, which posits that those who live nearest to a recent victim of crime are at the highest risk of becoming a target (Pitcher and Johnson 2011; Ratcliffe and Rengert 2008), and optimal forager theory, which indicates that offenders move between patches seeking to maximise the cost-benefit return of offending (Johnson and Bowers 2004a, b), similar to foraging behaviour displayed by animals.
This study adds to the literature in two of the aforementioned areas of research into burglary, namely the area of crime linking and the theoretical framework of optimal foraging theory. The study achieves this by replicating existing studies on crime linking but using the methodologies on a previously unexamined type of burglar, the optimal forager, as to date, this has not been conducted. The purpose of doing this is simple. If the police are to use a theoretical framework (optimal forager) on which to base predictions of crime hotspots for patrolling, then it stands to reason that the preliminary crime linkage process should draw upon crime linkage research based on foraging offenders. Doing so will enable the assessment as to whether or not burglars who behave in a foraging manner display distinct characteristics that provide different linkage factors from previous literature that has drawn its data samples from randomly chosen pools of burglary crime data. To provide a good grounding so that the study's contribution can be appreciated, it is necessary to provide an overview of both optimal forager theory and crime linking literature. Optimal forager theory argues that offenders will act much the same way as a predator (Addis 2012; Fielding and Jones 2012; Johnson and Bowers 2004a, b). The optimal forager theory's roots lie within ecology, where it is found that foraging behaviour is driven by a need to find resources but is weighted against the risk of mortality (Pyke et al. 1977; Pyke 2019; Rodriguez et al. 2019). In an effort to reduce the risk, foraging animals return to the same known areas but switch between patches based upon complex factors such as abundance of prey, level of predation risk and energy expenditure (Charnov 1976). A criminal will consider the same issues when searching for a victim by calculating the travel time, hour of day or night, risk of detection or apprehension vs. the potential for criminal reward and the level of effort required to achieve it (Johnson 2014). When making these calculations, Johnson et al. (2008) theorised that both the animal and criminal will display similar foraging behaviour as they search for their prey or victim. It is suggested that the two foraging principles that they will adopt are the central place foraging approach (Johnson 2014), where the offender will conduct their searching based around a specific base or home and as such their prey will fall within their routine activity node (Orians and Pearson 1979), or the optimal patch, where the offender's behaviour will be dictated by the amount of time available to forage combined with their knowledge and previous success in a certain area (Pyke 1977, 2019). The theory of bounded rationality suggests such complex forms of decision-making within criminal offenders are unlikely (Johnson and Payne 1986; Opp 1997), suggesting instead that simple heuristics are at play. Studies of burglars have, however, argued that experienced offenders do make complex decisions regarding offending behaviour (Nee and Meenaghan 2006; Taylor 2014), even if it is conducted heuristically. Accepting this theoretical argument, it is worth noting that the criminogenic application of optimal forager theory does not seek to dissect the complexity, or otherwise, of the decision-making process, and only to predict the final outcome, namely the offending patch.
When considering the foraging patch, it is suggested that areas that are closer together are more likely to be similar in their abundance of prey or victims and as such a clustering effect emerges, similar to the theory underpinning the near-repeat approach, which provides the potential to predict future victims (Fortin and Dale 2005), but with a subtle distinction. Figure 1 shows a theoretical scenario of how a forager moves between areas which are defined as 'patches' (Charnov 1976). It can be seen in Fig. 1 that foraging patches rarely overlap and in fact are often distinct microgeographical areas. Identification of similar behaviour in the spatial distribution of domestic burglary crimes is indicative that the offender is behaving in a foraging manner, as opposed to simply targeting nearby victims. This is a subtle but important difference from near-repeat theory, which includes significant overlapping of targeted victim locations. Since its emergence, optimal foraging theory has gained momentum within policing and is now one of the most commonly used predictive policing approaches within the UK, used by as many as 9 police services (Halford 2018). As we have outlined, in the UK the optimal forager patch prediction is the final step in a 3-step process used to direct police resources into areas predicted as high risk for future domestic burglary, which is used as a crime reduction and prevention tactic. Police analysts presently identify the presence of a foraging burglary offender through the identification of previously committed, linked, serial offences that occur in clusters within non-overlapping geographic patches (Halford 2018). Using the aforementioned criteria, the prediction of the future foraging patch is then based upon the professional judgement of the police analyst (Halford 2018). However, before this stage can be reached, step 1 must be conducted and all burglary crimes within the localised area must be examined to identify those which are linked to a prospective forager. Similar to other forms of crime linkage, the linking of crimes which are attributed to a foraging offender is presently based upon the professional judgement of a police analyst (Burrell and Bull 2011; Martineau 2014; Pakkanen et al. 2012), which can be open to influence from bias, misinformation and inaccuracy (Woodhams and Davies 2019). Once analysed, linked and predicted, police analysts produce an 'optimal forager' briefing product which contains maps of areas predicted as being at heightened risk of burglary. These are then used to co-ordinate the deployment of police resources. This could be argued to be an overly simplistic and unscientific approach, and it stands to reason that the accuracy of such predictions is likely to increase if the accuracy of the crime linking step is improved. This is why this study seeks to examine the police-predicted forager crimes for potentially unique behavioural and physical characteristics to improve the process of linking used to underpin the police optimal forager analysis. If foraging offending can be confirmed with greater accuracy through improved crime linking, it will then be possible to more accurately predict the future foraging patch, theoretically enabling the police to more effectively target their predictive responses. To further the knowledge in this area this study replicates previous similar research completed on serial burglary offenders (Markson et al. 2010) which examined modus operandi, temporal and spatial factors.
This study's novel contribution is that it goes further than the previous study outlined and examines a wider range of offending behavioural and physical characteristics displayed by burglary offenders, and is focused on those specifically identified by the police as foraging burglars, to identify which provide the greatest accuracy in terms of crime linking. Traditionally the most beneficial method for linking crimes prior to conviction is through forensic evidence such as DNA, fingerprints or footwear impressions, and when available, CCTV evidence (Rossmo 2000). However, the fact that as little as 5.2% of all theft-related offences (including burglary offences) were detected by police services in the UK in the year ending September 2020 (ONS 2021) indicates that the presence of such evidence is unfortunately rarer than one may think. As Bennell and Jones astutely outline: "Without such physical evidence, linking crime scenes may hinge solely upon behavioural information revealed through examination of crime scene characteristics and offence locations" (Bennell and Jones 2005: 23). In practical application, crime linkage utilises the theoretical principle that when an offender commits a crime they repeat certain behaviours or target victims based on previously successful crimes. To link two crimes or identify a crime series, an individual must examine them in detail and identify factors which contribute to the conclusion that they are linked. This is fraught with danger if it is done without the backing of any empirical analysis, as it could lead to both false positives and missed opportunities (Tonkin 2012). If this occurred in an operational environment, the subsequent forager prediction produced, and the direction of police resources based on that information, would be less likely to be effective. Despite its importance, until recently only limited academic research had been conducted that examined the effectiveness of crime linkage through non-tangible information such as behavioural characteristics, target selection and offence location (Bennell and Jones 2005; Bennell and Canter 2002; Santtilla and Korpela et al. 2004; Tonkin et al. 2008). It is only within the last decade that the field of crime linkage has begun to forge its own corner within criminological study, and this has been enabled and supported by extensive further research (Albertetti et al.). Research Aim and Hypothesis This study continues the previous works and hypothesises that serial foraging criminals display distinct behavioural and/or physical characteristics within their offending. Its primary aim is to examine the offending behavioural and physical characteristics displayed by offenders identified by the police analyst as foraging burglars to identify which provide the greatest accuracy in terms of crime linking. Doing so provides three outcomes: (1) a methodology for police analysts to utilise to more accurately link offences committed by foraging offenders and other forms of burglary offender, (2) post-analysis, the results could provide decision-making thresholds to underpin the professional judgement-based linkage analysis that underpins the forager predictions, (3) finally, it continues to add to the knowledge base within crime linkage by providing an analysis of linked offences committed by suspected foraging burglary offenders to complement other existing studies. Data and Methods The data used in this study come from a total of 2916 recorded crime records. These crimes were extracted from 50 optimal forager briefing products.
A briefing product is a document used by analysts that contains maps of 'forager patch' locations predicted as being at risk of victimisation and includes additional information that describes the linkage rationale and key offence behaviours or characteristics. The briefing documents used were operational products that were all created by analysts within Lancashire Constabulary and were used to direct 'real-world' police patrols. The methodology used by the analysts for identifying the optimal forager patches is achieved through the identification of previously committed, linked, serial offences that occur in clusters within non-overlapping geographic patches (Halford 2018), which then informs the analyst's prediction of a future patch. All of the high-risk areas highlighted within the products had been specifically identified by the analyst as optimal forager burglary patches. From within the briefing products 2916 domestic burglary crimes were identified that were specifically committed inside the foraging patch area identified by the police analyst. From those crimes, only 874 had an offender identified. For a pair to be defined as linked the offender must have been convicted. By adopting this strict criterion this study was able to make conclusions on data that have already surpassed a high burden of proof. Of those crimes 152 different offenders were identified. A subcategory of only 53 offenders committed 8 or more offences and these made a natural sample group. As such, 3 offenders were weeded out at random, leaving 50 offenders who each accounted for 4 pairs from different patches (a total of 8 crimes per offender). This created a total of 200 linked pairs (400 crimes) of domestic burglary offences. The 400 crimes were then subjected to logistic regression and receiver operator characteristics analysis. To enable the analysis process, each set of linked crimes was coded using a binary format where 1-1 refers to the first crime committed by offender 1 and 1-2 refers to the second, for example: 1-1, 1-2, 2-1, 2-2, 3-1, 3-2, 4-1, 4-2 and so on for each of the identified crime pairs. Each pair of linked crimes then had the behavioural and physical characteristics present recorded against them as 0s and 1s, with 0 indicating absence and 1 indicating presence. These data were then analysed using a bespoke crime linkage software programme called B-Link, which has been used to conduct such crime linkage in other published studies (Bennell and Jones 2005; Bennell and Canter 2002; Markson et al. 2010; Tonkin 2012). The B-Link system calculated the coefficients used in this study by completing the following functions: (a) it creates all possible crime pairs, (b) it indicates whether the pairs that are constructed are linked (1) or not (0), and (c) it then calculates a variety of different crime similarity scores for each crime pair based on the data that were coded in the input file, for example property stolen. In total 79,800 possible permutations were analysed. On completion, the B-Link system then provides data based on a variety of statistical analyses including simple matching, Jaccard's coefficient, Yule's Q, Pearson's phi and Sorensen-Dice. For the purpose of this study, Jaccard's coefficient was selected for reporting results as this has been utilised in the studies the analysis replicates (e.g. Bennell et al. 2010a, b and Ellingwood et al. 2013). It has been suggested this is used due to the fact that it is simple and efficient (Melnyk et al. 2011). For example, Ellingwood et al.
(2013, p. 2) outline that "for a pair of crimes (A and B), J = a / (a + b + c), where a equals the number of behaviours common to both crimes, and b and c equal the number of behaviours unique to crimes A and B, respectively". Logistic Regression Analysis Because of the simplicity of the main question being asked in this study, 'are two crimes linked or not?', and the potential multiple outcomes, crime linkage lends itself well to forms of analysis that can handle several variables. Regression analysis has shown itself to be a valuable tool in analysing variable data (Chatterjee and Price 1991: 1; Peng et al. 2002). This is because it offers an easy and simple method for analysing the relationships between variables (Peng et al. 2002). As such, this method was used to assist in the identification of the probability of an outcome between several variable factors. The prediction of the final dichotomous outcome being sought, i.e. are these two crimes linked, is largely reliant on variables which can change dependent on the scenario (Chatterjee and Price 1991: 1; Peng et al. 2002). Scenarios where the relationship between the dependent variable and a number of independent variables is examined are referred to as multiple regression analysis (Chatterjee and Price 1991), and this is the depth of regression analysis that this study used. As it pertains to this research, the dependent variable or dichotomous outcome to be identified is simply whether or not two crimes are linked. The independent variables utilised to assist in identifying the positive likelihood of this factor are the characteristics of the crime scenes. These characteristics are both behavioural and physical and are outlined in detail in Table 1. Logistic regression analysis centres around the concept of the logit, which is an odds ratio that in its most basic form is derived from a 2 × 2 contingency table pitting predicted outcomes against actual outcomes (Peng et al. 2002: 3). In the scenario of linking two crimes together this would be (the prediction that the crime is linked or unlinked) × (the reality that it is linked or unlinked). Combining these two statements, as outlined by Bennell and Canter, provides four possible options with two being positive and two being negative (2002). Bennell and Canter outline that these positive and negative results are referred to as hits, correct rejections, false alarms and misses (2002) and are shown in Fig. 2. Firstly, the hit would indicate that two crimes have successfully been identified as being linked. The false alarm indicates that a hit has occurred when in fact it has not. A correct rejection simply means that the crimes being examined have been identified as being unlinked, and finally the miss means that two crimes have incorrectly been identified as being unlinked when in fact they are linked. By studying both the behavioural and physical characteristics of a crime as previously outlined, the study will seek to identify what is statistically the most likely conclusion, i.e. a hit or a correct rejection. Logistic regression analysis is well suited for this task as it provides an odds ratio that the user can base their decision-making upon. Receiver Operator Characteristics (ROC) Analysis Receiver operator characteristics analysis is used to identify information thresholds that can aid in diagnostic decision-making (Swets et al. 2000). It is in this context that the technique is used within this study.
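As a concrete illustration of the pairing and similarity step described above, the sketch below constructs every possible pair from a set of binary-coded crimes, marks same-offender pairs as linked, and computes a Jaccard coefficient for each pair. The data and feature layout are hypothetical and this is not the B-Link software itself; it only mirrors the steps described in the text (for 400 crimes there are C(400, 2) = 79,800 possible pairs).

```python
from itertools import combinations
import numpy as np

# Hypothetical binary coding: one row per crime, one 0/1 column per behaviour
# within a single domain (e.g. property stolen), plus an offender label used
# only to mark pairs as linked (same offender) or unlinked.
rng = np.random.default_rng(1)
n_crimes = 400
behaviours = rng.integers(0, 2, size=(n_crimes, 12))
offender = np.repeat(np.arange(50), 8)            # 50 offenders x 8 crimes each

def jaccard(x, y):
    """J = a / (a + b + c): shared behaviours over all behaviours present."""
    a = np.sum((x == 1) & (y == 1))
    b_plus_c = np.sum(x != y)
    return a / (a + b_plus_c) if (a + b_plus_c) > 0 else 0.0

pairs = list(combinations(range(n_crimes), 2))    # 400 * 399 / 2 = 79,800 pairs
scores = np.array([jaccard(behaviours[i], behaviours[j]) for i, j in pairs])
linked = np.array([offender[i] == offender[j] for i, j in pairs], dtype=int)

print(len(pairs), round(scores.mean(), 3), int(linked.sum()))
```

A separate score of this kind, one per behavioural domain, is what would then be fed into the logistic regression and ROC analysis described in the text.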
Several studies have been done to date using this technique to complement the regression analysis used to predict crime linkage and calibrate the validity of crime linking features. Combining these enables the production of decision-making thresholds (Bennell and Canter 2002; Bennell and Jones 2005). Bennell and Canter (2002) outlined that a decision-making threshold is important in the absence of any categorical linking criteria such as forensic evidence. This is because it provides a 'cut-off point' whereby a layman can deduce that any reading above that figure can imply a positive decision, which in this study would mean that two crimes are linked (Swets 1992). The threshold analysis is also important because although regression analysis may be able to provide us with some surety regarding the linkage likelihood of two crimes, what it cannot do is put this result into context. ROC analysis can provide this context by offering an easy-to-understand result that measures between 0 and 1. The key statistic in ROC analysis is the area under the curve (AUC). This represents the predictive accuracy of the data that give rise to the ROC curve; in this case, the criminal behaviours and characteristics used to deduce if crimes are linked (committed by the same person). For instance, an AUC of 0.0 indicates certainty that the behaviours and characteristics are not suitable as indicators that crimes are linked. An AUC of 0.5 indicates that the data do not perform any better than chance, whereas an AUC of 1.0 indicates perfect prediction that presence of these factors indicates the crimes are linked and provides 100% certainty they are committed by the same offender. In reality, only forensic evidence could achieve this high bar. However, lesser results, such as 0.7 out of 1 for example, are far above chance and would allow the decision maker to make a judgement (Bennell and Jones 2005). Based on this methodology, the desired outcome is a high AUC. The p value indicates whether the AUC result is statistically significant. Analysis Once compiled, the linked and unlinked crimes were then examined and key behavioural and physical characteristics extracted to be analysed using the B-Link and SPSS statistics programs. Prior to analysing the crimes, it was necessary to identify the categories of behaviours and physical characteristics within them. Previous studies have broadly defined and grouped these together as entry behaviour, property stolen and target selection, and included time of the offence and inter-crime distance (Bennell and Jones 2005; Bennell and Canter 2002; Tonkin et al. 2008), with the latter two being the most accurate. As these areas have been previously identified through a tested pool of studies, these headings were again used to collate the behavioural and physical characteristics of the offences. In addition to these categories a further category of 'offender behaviour' was created to further knowledge in this area. Although this broad category heading (offender behaviour) has been studied before, the novel contribution of this study is that it contains certain behaviours such as the use of gloves, a vehicle or violence (as outlined in Table 1) in the commission of the offence that have never previously been examined in this context. The individual behavioural and physical characteristics are outlined in Table 1. These particular behaviours and characteristics were grouped under these headings because this is how they are captured on the crime input computer system during the recording process.
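Purely as an illustration of how a similarity score can be turned into an AUC and a decision threshold (the study itself used SPSS for this step), here is a minimal self-contained sketch with simulated linkage data; the numbers of pairs and linked pairs echo those in the text but the scores are invented.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score, roc_curve

# Simulated linkage data: one Jaccard similarity score per crime pair for a
# single behavioural domain, and a 0/1 label marking whether the pair is linked.
rng = np.random.default_rng(2)
n_pairs = 79_800
linked = (rng.random(n_pairs) < 0.0025).astype(int)    # roughly 200 linked pairs
scores = rng.beta(2, 8, n_pairs) + 0.15 * linked       # linked pairs score higher on average
X, y = scores.reshape(-1, 1), linked

# Logistic regression turns the similarity score into a linkage probability.
model = LogisticRegression().fit(X, y)
p_linked = model.predict_proba(X)[:, 1]

# ROC analysis: AUC of 0.5 is chance level, 1.0 is perfect discrimination.
auc = roc_auc_score(y, p_linked)

# One simple decision threshold: the ROC point closest to the top-left corner.
fpr, tpr, thresholds = roc_curve(y, p_linked)
best = np.argmin(np.hypot(fpr, 1 - tpr))
print(f"AUC = {auc:.2f}, suggested probability threshold = {thresholds[best]:.2f}")
```

An AUC near 0.5 would indicate the domain performs no better than chance, while values around 0.7 to 0.9 would support its use as a moderate linkage indicator, mirroring the interpretation bands cited above.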
As such there was no requirement for further coding to identify which required grouping under which heading, as this was dictated by the recording process itself. There was additional information provided by the officers in a free text area, but this was not analysed within this study. To then assess how accurately one can discriminate between the linked and unlinked crime pairs provided in the B-Link output file, the data are analysed further using the regression analysis function within the software SPSS. This analysis shows how good each behavioural and physical characteristic is at distinguishing between linked and unlinked crime pairs. To complete this process, Jaccard's coefficient was selected as it was identified as the most useful tool for statistically assessing the similarity between binary attributes, in this case physical and behavioural characteristics of linked dwelling burglary offences committed by serial foraging offenders. A separate Jaccard's coefficient was calculated for entry behaviour, offender behaviour, property selection and target selection, and these were entered alongside inter-crime distance as variables in the logistic regression. SPSS subsequently provided an output of this analysis informing how useful each indicator is as a linking variable. To conclude the analysis, the logistic regression data were then re-analysed using SPSS to conduct the receiver operator characteristic analysis, resulting in the production of charts displaying the area under the curve (AUC) and accompanying confidence data for each behavioural and physical category. Results In relation to the logistic regression analysis, Table 2 shows that all of the examined behavioural and physical characteristics were found to have a high degree of predictive accuracy (as measured by Wald's statistic) and a satisfactory fit with the data. However, it was a combined model that included both target selection and inter-crime distance that produced the highest logit and predictive accuracy, closely followed by that of the individual models for target selection and inter-crime distance, which was the reason these two models were combined in an effort to produce an optimal model. Results from the ROC curve analysis are outlined in Table 3. The graphs relating to these findings can be seen in full in Appendix 1. The ROC curve charts that illustrate the area under the curve can be seen in Figs. 2, 3, 4, 5, 6, 7 and 8. An AUC of 0.5 indicates that the result is approximately the same as chance alone. An AUC of 1.0 indicates perfect discrimination and means that the larger the AUC, the higher the predictive accuracy (Woodhams et al. 2019). AUCs of between 0.5 and 0.7 are indicative of low levels of accuracy, 0.7 to 0.9 indicate moderate levels of accuracy and 0.9 to 1.0 high levels (Bennell and Jones 2005; Swets 1992). Inter-crime distance was shown to be the most effective predictor of crime linkage of foraging dwelling burglary offenders and provided the greatest AUC. This was in line with previous studies conducted in respect of randomly selected linked burglaries (Bennell and Canter 2002; Bennell and Jones 2005). However, the link in respect of predicted foraging offenders was slightly less (AUC = 0.89) than that identified in previous studies (Bennell and Jones 2005) of linked dwelling burglaries (AUC = 0.90). In respect of target selection, previous studies (Bennell and Jones 2005; and Tonkin et al.
2011) examining randomly selected linked burglary dwellings have identified widely varying degrees of prediction accuracy in respect of target selection, with AUCs of 0.58 and 0.73, respectively; however, both were below the results identified within this study (AUC = 0.76). In an effort to optimise the predictive accuracy, the two optimal characteristics of target selection and inter-crime distance were combined. The combined characteristics produced a strong logit result indicating increased predictive accuracy and suggesting that crimes can be more accurately predicted as linked when the inter-crime distance and target selection characteristics are considered together. When ROC analysis was conducted, a smaller standard error was achieved than that of target selection alone, and the confidence interval ranges also improved, but the result was still below that of inter-crime distance, as was the strength of the AUC. This was an unexpected result but one that has likely occurred due to the goodness of fit of the data, which was non-significant. The remaining models of entry behaviour, property selection and offender behaviour produced low levels of predictive accuracy. As has been outlined earlier in this section, other studies (Bennell and Jones 2005) have suggested that traditional entry behaviour characteristics are one of the lowest (AUC = 0.59) in terms of predicting linkage of dwelling burglaries. However, this study places it above property selection (AUC = 0.59) and offender behaviour (AUC = 0.58) with an AUC of 0.66; this is 0.07-0.08 above these indicators, respectively, and greater than other similar studies on randomly selected linked dwelling burglaries, but it still remains a low level of predictive accuracy. In respect of property selection, the type of stolen property that a foraging burglar aims for provided a very low level of predictive accuracy (AUC = 0.59). This is not a surprising result and is in line with other studies into randomly selected linked dwelling burglaries and commercial burglaries (Bennell and Jones 2005), which reported AUCs of 0.59 and 0.58, respectively. The results from this study suggest that out of all potential behavioural and physical characteristics researched, offender behaviour provides the lowest level of discriminatory prediction (AUC = 0.58). This is a level that is barely above that predicted by chance. Discussion This study has filled a small gap that existed within the literature, admittedly related to a niche area of criminology, but one that is presently widely used by police services in the UK to underpin their patrol decision-making. In doing so it has raised some interesting discussion points. The findings may prove useful for future research and operational use alike. Unsurprisingly, the study identified that inter-crime distance remains the most accurate indicator that crimes are linked, even for foraging offenders. This is in line with multiple previous studies (Bennell and Canter 2002; Bennell and Jones 2005) not focused on foraging offenders but provides a strong indicator that this form of dwelling burglary offender does not commit their burglary offences in geographical areas that overlap one another. If they did, then much lower scores of predictive accuracy would have been seen, but this was not the case. This suggests that foraging behaviour is occurring and is operating in a patch selection manner as described by Charnov (1976) and outlined in Fig. 1.
It also indicates that increased capable guardianship in the form of police patrols in predicted forager areas is likely to be an effective response that could reduce or prevent crimes committed by foraging offenders. Although the personal behaviour of foraging burglary offenders has previously been researched in isolation in respect of its ability to predict linkage between offences, previous studies have categorised these personal behaviours as 'entry behaviour' or 'target selection behaviour' to define what is commonly referred to as a modus operandi or MO (Bennell and Canter 2002; Tonkin et al. 2012 and Tonkin and Grant et al. 2008). This study disconnects the physical entry characteristics from other behaviours displayed by the offenders while committing the crime and as such provides new insights into behaviours displayed by burglars during the offence. Only one other study has attempted to do this by examining the internal search behaviour of offenders (Tonkin et al. 2011), in which a strong discriminatory prediction accuracy was identified (AUC = 0.66). This study, however, identified a very low predictive accuracy from analysing the personal behaviour of the foraging offenders. As a result, this suggests that as a decision-making threshold for practitioners conducting the linking process, the MO may not be as robust a linkage indicator as previously believed. That said, one possible explanation for this result is that all but one of the offender behaviours in this study are reliant on being identified by either eyewitness evidence, i.e. multiple offenders and use of a motor vehicle, or through forensic examination of the scene, i.e. the wearing of gloves to mask the presence of fingerprints. As such it is not possible to generalise as to whether these characteristics are unreliable linkage factors in respect of foraging burglary offenders or all burglary offenders alike, or whether the result is purely down to their absence due to a lack of witnesses or positive forensic results. This study also identified that entry behaviour was far more accurate in identifying linked cases than has previously been seen and, in this study, higher than both offender behaviour and property selection. This is a potentially important finding as it suggests that the type of burglary offending (in this study it is foraging) may have an impact on the linkage accuracy of behavioural and physical characteristics. In an operational context this means that if analysts spend greater time applying scrutiny to identifying the form of offending, i.e. foraging, commuting, random or organised burglary offending, it will better inform the linking process by providing them with a more evidence-based decision-making threshold, potentially increasing the accuracy of their linkage predictions. In respect of property selection, this study was firmly in line with previous findings and indicates that foraging burglary offenders do not display a stronger preference in property selection than any other form of offender. It is highly likely, as Bennell and Canter (2002) have previously argued, that indicators that offenders have least control over provide the lowest accuracy in terms of linkage prediction. Property stolen is one of these indicators, as ultimately what is stolen is controlled by the property available, which is highly likely to be incredibly similar between homes. Furthermore, they are often intrinsically linked together.
For instance, cash, vehicle keys and identification are all property items that are frequently located together within a handbag, purse or wallet and as such are difficult to distinguish as a behavioural characteristic, meaning their use as a decision-making threshold for linkage prediction may be limited. As alluded to above, this section suggests the potential that the form of burglary offender can be identified from their behaviours and subsequently used to inform linkage models and the spatial offending predictions made. This approach could possibly be used to identify an offender as a forager, who moves between patches seeking to maximise the cost-benefit return of offences as outlined by Johnson and Bowers (2004a, b), or the commuter, who travels extensively from outside of the attacked area, as identified by Rossmo (2000), the marauder, who moves uncoordinatedly and selects targets randomly, also identified by Rossmo (2000), and the organised burglary criminal. It may of course be possible for an offender to possess characteristics of more than one type of offender, and as such this study suggests that further research is required to assess whether this argument holds true. It could be argued that in reality, this is hard to conduct without first knowing where the offender lives. However, experience indicates that there are often elements of investigations that can enable distinction without successfully identifying the home of the offender; for example, if a vehicle is involved in the offences, tactics such as automated number plate reading could identify that the suspects have travelled an extensive distance and as such could be considered commuting offenders. Similarly, this concept could be tested retrospectively by examining detected crimes committed by commuting offenders if a defined parameter could be set, i.e. offenders who travel more than 10 miles could be classified as a commuter for the purpose of exploring this avenue further. The value in this further research is that if it does hold true, it can then enable researchers and practitioners alike to potentially distinguish between types of burglary offenders and, as a result, apply bespoke models of prediction and improve linkage decision-making. For example, boost near-repeat theory may be more appropriate as a predictive methodology for random-marauding offenders, and the optimal forager method may be more suitable for foraging burglars, the offender type being distinguished by low or high levels of linkage accuracy in respect of entry and target behaviour (boost near-repeat offenders displaying lower linkage accuracy in both, for example). The suggestion that different types of burglars display different behaviours when committing and selecting their crimes has been studied. Fox and Farrington (2012) examined burglary offenders in the context of behaviours that were organised, methodical, disorganised, chaotic, and opportunistic and found there to be distinct evidence supporting the position that not all burglars were created equally. What that study did not do is attempt to classify these within known methodologies for predicting burglary locations or in the context of spatial decision-making, as this study now proposes. To conclude, this study does possess several key limitations which must be taken into account when considering the findings.
Firstly, the data within this study were only obtained from within one police force that utilises the optimal forager predictive policing methodology, when there are believed to be as many as nine actively using the method (Halford 2018). Furthermore, free text data from within the recorded crimes analysed were not examined, which may have provided greater insight. More detailed analysis coding the criteria within each individual characteristic and behaviour could provide further insight than this study has achieved, as it only compared them as clustered groups. Significantly, the study was also heavily reliant on the work conducted beforehand by police analysts, who had complete autonomy in identifying offences as being conducted by a foraging burglar and in predicting the hotspot patches from which all of the analysed crimes were taken. As this was done using professional judgement, the data may have included false positives and omitted others as a result of a false-negative decision. Future research that encompasses data from more police forces, widens its collection to include free text data and/or enhanced coding of each specific characteristic would help confirm or refute findings within this study. Furthermore, research examining definable profiles of burglary offenders and their crime linkage characteristics would advance the suggestion that they display distinct characteristics that can be aligned to alternate predictive methodologies and continue to add to the body of knowledge in respect of foraging offenders. Funding I can confirm that no funding or grants have been received to support this study. Declarations Ethical Procedure The research meets all applicable standards with regard to the ethics of experimentation and research integrity, and the paper has been submitted following due ethical procedure, and there is no duplicate publication, fraud, plagiarism, or concerns about any of the included forms of experimentation. Informed Consent During the conduct of this study no identifiable personal data have been used. As such, no identifying details (names, dates of birth, identity numbers, biometrical characteristics (such as facial features, fingerprint, writing style, voice pattern, DNA, or other distinguishing characteristic), and other information) have been examined or utilised and are not contained within the study. As such, under these conditions consent was not required as the submission does not include any identifiable data and/or images that may identify any person. In addition, the ethical conduct of the study was assessed and confirmed prior to its conduct, and consent to publish was agreed by the data provider. Conflicts of Interest As the author of this paper I can confirm I have no financial or personal relationship with other people or organizations that could inappropriately influence or bias the content of the paper. In respect of professional conflicts, the only potential points of interest here are (1) that Dr Craig Bennell supplied me the computer software B-Link to assist with this study as part of my Ph.D and (2) Dr Ray Bull has previously seen a preliminary proposal presentation on elements of my study in the UK as part of a seminar he attended in approximately 2016 at Birmingham University. Research Involving Human and Animal Participants This article does not contain any studies with human participants or animals performed by any of the authors.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. Appendix 1 Receiver Operating Characteristics
Instantaneous frequency estimation using the discrete linear chirp transform and the Wigner distribution Osama A. Alkishriwo and Luis F. Chaparro Abstract-In this paper, we propose a new method to estimate instantaneous frequency using a combined approach based on the discrete linear chirp transform (DLCT) and the Wigner distribution (WD). The DLCT locally represents a signal as a superposition of linear chirps while the WD provides maximum energy concentration along the instantaneous frequency in the time-frequency domain for each of the chirps. The developed approach takes advantage of the separation of the linear chirps given by the DLCT, and that for each of them, the WD provides an ideal representation. Combining the WD of the linear chirp components, we obtain a time-frequency representation free of cross-terms that clearly displays the instantaneous frequency. Applying this procedure locally, we obtain an instantaneous frequency estimate of a non-stationary multicomponent signal. The proposed method is illustrated by simulation. The results indicate the method is efficient for the instantaneous frequency estimation of multicomponent signals embedded in noise, even in cases of low signal to noise ratio. Index Terms-Instantaneous frequency, discrete linear chirp transform, time-frequency analysis, Wigner distribution, estimation I. INTRODUCTION In many applications in biomedicine, speech processing, communications, radar, underwater acoustics, where non-stationary signals are present, it is typically necessary to estimate the instantaneous frequency of the signals [1]. Time-frequency distributions (TFDs) are widely used for IF estimation based on peak detection techniques [2], [3], [4]. The TFD most frequently used for linear chirps is the Wigner distribution (WD) due to its ideal representation for such signals. However, in the case of multicomponent signals, the Wigner distribution does not perform well because of the presence of extraneous cross-terms. Recently, the discrete linear chirp transform (DLCT) [5] was introduced as an instantaneous-frequency frequency transformation, capable of locally representing signals in terms of linear chirps.
It generalizes the discrete Fourier transform, has an instantaneous-frequency time dual transform and, very importantly, can be efficiently implemented using the fast Fourier transform (FFT). The approach of [6] to multicomponent signal IF estimation requires a TFD that has high resolution and is free of cross-terms. In [7], an iterative method is proposed for IF estimation using the evolutionary spectrum. In general, instantaneous frequency estimation requires signal separation, for multicomponent signals, and high-resolution time-frequency distributions. In this paper, we propose a new method that takes advantage of the DLCT for signal separation and of the WD for high resolution in the time-frequency space.

II. THE DISCRETE LINEAR CHIRP TRANSFORM (DLCT)

Given a discrete-time signal x(n), with finite support 0 ≤ n ≤ N − 1, its discrete linear chirp transform (DLCT) and its inverse are defined in [5]. The DLCT decomposes a signal using linear chirps φ_β,k(n) = exp(j 2π(βn² + kn)/N), characterized by the discrete frequency 2πk/N and a chirp rate β, a continuous variable connected with the instantaneous frequency of the chirp. Assuming a finite support for β, i.e., −Λ ≤ β < Λ, it is possible to construct an orthonormal basis {φ_β,k(n)} with respect to k in the supports of β and n. To obtain a discrete transformation, the chirp rate β is discretized over its finite support. The DLCT is a joint instantaneous-frequency frequency transform that generalizes the discrete Fourier transform (DFT); indeed, X(k, 0) is the DFT of x(n). Thus, the DLCT can be used to represent signals that locally are combinations of sinusoids, chirps, or both. It is important to remark that in a discrete chirp, obtained by sampling a continuous chirp satisfying the Nyquist criterion, the chirp rate cannot be an integer. Indeed, if a finite-support continuous chirp is sampled at a sampling frequency determined by the Nyquist criterion, the obtained discrete signal has chirp rate β̄ = αT_s² and discrete frequency ω₀ = Ω₀T_s; the resulting chirp rate is not an integer for M ≥ 2. Therefore, for chirps free of aliasing, we need |β̄| ≤ 0.25. For each value of β, it can be shown that the synthesis over k equals x(n), so that the inverse DLCT is the average over all values of β.

III. INSTANTANEOUS FREQUENCY ESTIMATION

In this section, we introduce a procedure that combines the DLCT and the WD to estimate the IF. Locally, the DLCT approximates the signal as a sum of linear chirps, for each of which the WD provides the best representation. Superposing these WDs, we obtain an estimate of the overall instantaneous frequency of the signal. The Wigner distribution of a signal x(t) is given by [8] W(t, Ω) = (1/2π) ∫ x(t + τ/2) x*(t − τ/2) e^(−jΩτ) dτ, and for a linear chirp x(t) = exp(j(αt²/2 + Ω₀t)), with instantaneous frequency Ω(t) = αt + Ω₀, it concentrates along that line. Thus the Wigner distribution of a linear chirp concentrates the energy exactly along the instantaneous frequency in an optimal way. However, the IF is only clearly seen when the signal is a single chirp; additional terms (cross-terms) appear when the signal is composed of more than one chirp. If the signal x(n) is the input to the system shown in Fig. 1, the output of the DLCT will be approximated by a sum of linear chirps. Therefore, we can find the WD of each of these linear chirps and synthesize them to obtain a WD free of cross-terms. Assume that x(n) is approximated using the DLCT as a sum of linear chirp components, where P is the number of chirp components.
The WD W_i(n, ω) of each chirp concentrates its energy along that chirp's instantaneous frequency. Adding the W_i(n, ω), we obtain an approximation of the Wigner distribution W(n, ω) corresponding to x(n), but free of cross-components. Since the Wigner distribution concentrates the energy along the instantaneous frequency, the IF is estimated by peak detection, ω̂(n) = arg max_ω W(n, ω). As indicated above, the instantaneous frequency is approximated locally by linear chirps; thus, in general, the signal is windowed before applying the above procedure locally. The estimated IF ω̂(n) is obtained from the peak detection approach applied to the high-resolution time-frequency distribution that results from combining the DLCT with the WD. The accuracy of the estimation is measured by the mean square error MSE = ⟨(ω(n) − ω̂(n))²⟩, where ⟨·⟩ denotes the average.

IV. SIMULATIONS

To evaluate the performance of the proposed instantaneous frequency estimation method, we consider multicomponent signals with linear, quadratic, and sinusoidal instantaneous frequencies. We also add noise to the signals and test our procedure for several signal-to-noise ratio (SNR) values.

Example 1. The first signal, x₁(n), is a multicomponent signal to which complex white Gaussian noise N(n) with total variance σ² is added. Figures 2(a) and (b) display the WD and the short-time Fourier transform (STFT) of x₁(n) for an SNR of −5 dB, while Fig. 2(c) shows the superposition of the WDs of the chirp components (synthesized WD). Notice that the WD does not clearly display the chirps, due to cross-components and the smearing of the noise over the time-frequency space, and that the STFT is not robust against noise. The estimated and original instantaneous frequencies of the signal x₁(n) at SNR = −5 dB are given in Figs. 2(d) and (e). The mean square error (MSE) of the instantaneous frequency is shown in Fig. 2(f). It shows that the IF estimated using the proposed method matches the original IF well, even at low SNRs.

Example 2. Let the signal x₂(n) be a multicomponent signal with two intersecting components in the time-frequency plane. Figures 3(d) and (e) illustrate the original IF, ω(n), as well as its estimate, ω̂(n). The MSE as a function of SNR is given in Fig. 3(f). Tables I and II summarize the MSE, measured in dB, of the IF estimated using the synthesized WD, the STFT, and the WD for the considered SNR values.

Example 3. In this example we show the potential of our algorithm in estimating the IF of actual multicomponent signals, such as a bat echolocation signal. The signal is given in Fig. 4(a), and its WD and STFT are also shown in Fig. 4. Fig. 4(d) shows the synthesized WD obtained with the proposed method, and the estimated IF of the bat signal is illustrated in Fig. 4(e). The proposed IF estimation method performs well, since it reveals five components in the time-frequency plane. In addition, we can observe that the bat signal suffers from aliasing in the third and fourth components, as can be seen in Figs. 4(d) and (e). Comparing our results with [10] for the same bat signal, our IF algorithm gives a better estimate because it shows more information about the signal in the time-frequency plane.

V. CONCLUSIONS

In this paper, we propose a new method for IF estimation based on the discrete linear chirp transform and the Wigner distribution. It is shown that a signal can be approximated locally by linear chirps using the DLCT. Separating them, finding the WD of each of these linear chirps, and superposing the results, a WD free of cross-terms is obtained for the signal under analysis. Simulations show that accurate IF estimates can be obtained with the proposed method even at low SNR levels.
Our procedure takes advantage of the maximum energy concentration of the Wigner distribution of the linear chirps obtained from the DLCT. Work is underway on the application of this procedure to biomedical signals.
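As a complement to the description in Section III, the following Python sketch illustrates the kind of procedure proposed here under simplifying assumptions: a brute-force grid over candidate chirp rates stands in for the DLCT implementation of [5], the strongest peaks of that map are taken as the linear chirp components, and the ideal (delta-like) Wigner distribution of each detected chirp is placed along its instantaneous frequency before peak detection. The toy signal, grid and naive peak picking are illustrative only, not the authors' implementation.

```python
# Minimal numerical sketch of a DLCT-plus-WD instantaneous-frequency estimator.
# Assumptions: de-chirping grid as a stand-in for the DLCT of [5], naive peak
# picking, and the ideal (delta-like) WD of each detected linear chirp.
import numpy as np

def dlct_magnitude(x, rates):
    """|DLCT|-like map: de-chirp x with each candidate rate beta and take an FFT.

    Rows index the chirp rate beta, columns the frequency bin k, following the
    decomposition onto chirps exp(j*2*pi*(beta*n**2 + k*n)/N).
    """
    N = len(x)
    n = np.arange(N)
    return np.array([np.abs(np.fft.fft(x * np.exp(-2j * np.pi * beta * n**2 / N)))
                     for beta in rates])

def estimate_if(x, rates, num_chirps):
    """Estimate the IF by superposing the ideal WDs of the strongest chirps.

    The WD of a linear chirp concentrates along its IF, so each detected
    (beta, k) pair contributes the line omega(n) = 2*pi*(2*beta*n + k)/N to a
    synthesized, cross-term-free time-frequency map; the IF is its peak.
    """
    N = len(x)
    n = np.arange(N)
    mag = dlct_magnitude(x, rates)
    tf_map = np.zeros((N, N))                       # rows: time n, cols: frequency bin
    strongest = np.argsort(mag.ravel())[::-1][:num_chirps]   # naive peak picking
    for idx in strongest:
        b_idx, k = np.unravel_index(idx, mag.shape)
        beta = rates[b_idx]
        bins = np.mod(np.round(2 * beta * n + k), N).astype(int)
        tf_map[n, bins] += mag[b_idx, k]            # ideal WD line of this chirp
    if_bins = np.argmax(tf_map, axis=1)             # peak detection over frequency
    return 2 * np.pi * if_bins / N, tf_map

# Toy usage: a linear chirp plus a sinusoid in complex white Gaussian noise.
N = 256
n = np.arange(N)
x = (np.exp(2j * np.pi * (0.1 * n**2 + 12 * n) / N)
     + np.exp(2j * np.pi * 77 * n / N)
     + 0.5 * (np.random.randn(N) + 1j * np.random.randn(N)) / np.sqrt(2))
omega_hat, _ = estimate_if(x, rates=np.linspace(-0.25, 0.25, 51), num_chirps=2)
```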
Antimicrobial Resistance Prediction in Intensive Care Unit for Pseudomonas Aeruginosa using Temporal Data-Driven Models

One threatening medical problem for human beings is the increasing antimicrobial resistance of some microorganisms. This problem is especially difficult in Intensive Care Units (ICUs) of hospitals due to the vulnerable state of patients. Knowing in advance whether a given bacterium is resistant or susceptible to an antibiotic is a crucial step for clinicians to determine an effective antibiotic treatment. The routine clinical procedure that provides this information, named antibiogram, takes approximately 48 hours. It tests the resistance of the bacterium to one or more antimicrobial families (six of them are considered in this work). This article focuses on cultures of the Pseudomonas aeruginosa bacterium because it is one of the most dangerous in the ICU. Several temporal data-driven models are proposed and analyzed to predict the resistance or susceptibility to a given antibiotic family before the antibiogram result is known, using only the past information available in a data set. This data set is formed by anonymized electronic health record data from more than 3300 ICU patients collected over 15 years. Several data-driven classification methods are used in combination with several temporal modeling approaches. The results show that our predictions are reasonably accurate for some antimicrobial families and could be used by clinicians to determine the best antibiotic therapy in advance. This early prediction can save valuable time in starting the appropriate treatment for an ICU patient. This study corroborates the results of a previous work indicating that the antimicrobial resistance of bacteria in the ICU is related to other recent resistance tests of ICU patients. This information is very valuable for making accurate antimicrobial resistance predictions.

I. Introduction

Antimicrobial resistance occurs when a germ develops the capacity not to respond to the drugs designed to combat it [1]. Nowadays, antimicrobial resistance is one of the greatest threats to the global health system [2]. Apart from the health consequences, the economic impact deriving from antimicrobial resistance is not a trivial issue, with an estimated 7% reduction in Gross Domestic Product by 2050 [3]. Indeed, it has become more acute in recent years due to the excessive use of antibiotics in many facets of daily life [4]. The acquisition of antimicrobial resistance is favoured in hospital environments and is even worse for patients admitted to the Intensive Care Unit (ICU). This could be motivated by the duration and intensity of the drug treatment, as well as by the use of life support devices. The critical health status of ICU patients motivates efforts to anticipate the result of the cultures provided by the microbiology laboratory, which usually takes 48 hours. A culture is a biological sample collected to isolate a bacterium, aiming to analyze its susceptibility to different antibiotics. The test used to measure this susceptibility is called an antibiogram, and its result (susceptible/resistant) is commonly used by clinicians to determine the antibiotic treatment [5]. It is interesting to note that several families of antibiotics may have similar susceptibility when tested on a given germ species [6].
There are several species with high prevalence, for example, Acinetobacter spp.; Enterococcus faecalis and Enterococcus faecium; Escherichia coli; Klebsiella pneumoniae; Pseudomonas aeruginosa; and Staphylococcus aureus, among others. In this paper, we focus on Pseudomonas aeruginosa for the following reasons: (1) its virulence, especially in the ICU; (2) its ability to cause chronic infectious diseases; and (3) its ability to develop multi-drug resistance [7], [8]. For all these reasons, anticipating the culture result in case of resistance is vital to isolate the patient and control the spread of antimicrobial resistance among other ICU patients. Computational tools based on data-driven models may support clinical decisions before the antibiogram result is available. The article [6] introduces the concept drift observed in antimicrobial resistance data sets, and it uses a windowing scheme together with dynamic classifiers to perform resistance prediction. It classifies cultures as susceptible or resistant to some antibiotics using a database of EHRs covering the years 2002 to 2004, considering cases of meningitis. A high number of state-of-the-art studies use whole genome sequencing [9]-[12]. Because of its considerable cost, in this study we propose to predict resistant bacteria based on Electronic Health Record (EHR) data from the ICU, together with historic antibiogram results. These data are already available in most hospitals, and therefore the methodology proposed in this paper can be straightforwardly extrapolated. Comparable approaches are studied in previous works [6], [13]-[18]. In [17], bacterial infection in the ICU is predicted from EHR data (a binary classification task) by applying a set of machine learning (ML) methods. The prediction is carried out at the patient level in order to determine which patients no longer require antimicrobial treatment. Longitudinal data from 2001 to 2012, extracted over the 24-hour, 48-hour or 72-hour window following the first antibiotic dose, are considered. No temporal modelling was explicitly taken into account. The work in [18] presents a study for predicting bacterial resistance, also using EHR data, from 2013 to 2015. An ensemble of ML methods is used to classify isolated bacterial cultures as susceptible or resistant to a particular antibiotic. The temporal relation among instances is considered here, with features indicating the proportion of past antibiotic-resistant infections identified as having the highest average impact. This study also concludes that the feature encoding the date of the culture has some effect on the prediction, probably due to the fluctuating resistance frequencies through time. Owing to the dynamics of antimicrobial resistance, we analyze in this paper electronic health records collected during 15 years, from 2004 to 2019, by the University Hospital of Fuenlabrada (UHF) in Madrid, Spain. These data have been partly considered in previous studies carried out by the authors [14], [15], [16], [19]. In particular, the authors in [14] used a reduced data set covering two years less (from 2004 to 2017) than the current work. All patients admitted to the ICU in this period were considered in [14], regardless of their length of stay. Additionally, the authors in [14] used ML to determine whether a Pseudomonas aeruginosa bacterium would be resistant or not (binary target) to different families of antimicrobials, without considering information about historic antibiogram results.
In [15], we analyzed for the first time the dynamics of Pseudomonas aeruginosa by considering incremental time windows over the period from 2004 to 2013, with two families of antibiotics. It was also our first incursion into the use of features taking into account the results provided by previous antibiograms of other ICU patients. The current paper extends the work in [15] while considering the predictive window length (one month) that provided the best results in [15]. Specifically, to carry out predictions, the Random Forest (RF) method has been added to the previously considered method, Logistic Regression (LR). We have increased both the number of years under study and the number of antimicrobial families (from 2 to 6). We have also considered as features the results provided by previous antibiograms of each patient, weighted by a factor depending on the time elapsed since the last antibiogram was tested. Furthermore, two approaches have been explored to analyze the dynamics of antimicrobial resistance by evaluating the models over several time horizons. The rest of the paper is organized as follows. In Section II, we describe the data set analyzed in this paper and provide a graphical exploration of it. Section III introduces the data preprocessing as well as the methods used for temporal modelling. Results and discussion are provided in Section IV. Finally, the conclusions are presented in Section V.

A. Data Set Description

The data considered in this work correspond to 3812 admissions of 3346 ICU patients, collected at the UHF during a period of 15 consecutive years (from July 2004 to May 2019). Note that, since the number of ICU admissions exceeds the number of patients, there are patients with more than one ICU admission during this period. A total of 43658 cultures were collected. Although there are more than 290 different types of bacteria and 27 antimicrobial families, we only take into account here the cultures where Pseudomonas has been detected, resulting in a total of 764 cultures. For this bacterium, the antibiograms considered in this work test the response (encoded as susceptible (s) or resistant (r)) against the following set of antibiotic families a = {amg, car, cf4, pap, pol, qui}. The elements in the set a refer to Aminoglycosides (AMG), Carbapenems (CAR), 4th Generation Cephalosporins (CF4), Extended-spectrum Penicillins (PAP), Polymyxins (POL) and Quinolones (QUI), respectively. Since data-driven models are based on learning from instances, we consider here the target c&a_i as the antibiogram result for a specific antibiotic family a_i, for every culture collected from any patient. The feature vector associated with each target is represented by the 40 features described in Table I. We define here an instance as the pair composed of the feature vector (input features to the data-driven models) and the target (outcome of the data-driven models). As for the input features, we first analyze demographic data: age, gender, group of illness A (cardiovascular events), B (kidney failure, arthritis), C (respiratory problems), D (pancreatitis, endocrine), E (epilepsy, dementia), F (diabetes, arteriosclerosis) and G (neoplasms), and pluripathology (indicating whether the patient has more than two comorbidities). The median age of patients admitted to the ICU was 64 years (interquartile range 55-73, range 18-87), with a majority of men (70%). Pluripathological patients account for 40.6% of the patients, with comorbidities mostly related to respiratory problems (33.4%), diabetes (26.3%) and neoplasms (33.1%).
We then focus on the information about the ICU admission: date of admission to the ICU, department of origin before ICU admission (surgery, internal medicine, urology, ...), reason for admission (serious infection, acute respiratory failure, hypovolaemia, ...) and patient category (medical or surgical). The most common clinical origins before ICU admission were surgery (31.1% of patients) and the emergency department (18.4%). The most common reasons for admission were serious infection (22.5% of patients) and acute respiratory failure (18.4% of patients). The most common patient category was medical (52.2%). This work also analyses the information related to the cultures. Specifically, we consider the culture type (exudate, drainage, biopsy, sputum, bronchoaspirate, etc.); the first-level grouping for the type of culture, which classifies the cultures into surface, liquids, respiratory, etc.; and the second-level grouping for the type of culture, used to identify a clinical sample or a surface culture. We also consider the date of the culture, the weekday the culture was collected, as well as the month and the year. Finally, to collect temporal information in each instance associated with patient p, the current study proposes to generate two kinds of features linked to previous resistant antibiograms. In particular, we consider: (1) previous resistant results of the same patient, and (2) previous resistant results of all patients who recently stayed in the ICU. Own past cultures features. The first kind of features is associated with the detection of resistant bacteria in previous antibiograms of a specific patient p, and aims to quantify the current "intensity" of these bacteria. These features consider the results of antibiograms of Pseudomonas aeruginosa during an interval between 21 days and 48 hours previous to the current culture c(p) being studied for patient p. The 48-hour limit is considered since it is usually the time the results of the antibiogram take to be available. Furthermore, cultures are gathered until 21 days before the date d of the current culture c because, if the antibiogram result is positive, from a clinical point of view it is kept as positive for the following 21 days. Thus, when a culture is collected, a total of six features, one per antimicrobial family, are generated: p&amg, p&car, p&cf4, p&pap, p&pol and p&qui. Each feature takes into account the antibiogram results for the corresponding antimicrobial family; e.g., p&pap only considers previous results associated with patient p for the family of antibiotics PAP. Because of that, the group of own past cultures of patient p, named C(p), is divided into six subsets, one per antimicrobial family. To illustrate how the value of each feature p&a_i, i = 1, 2, ..., 6, is obtained, consider the cultures in the corresponding subset. Each culture has associated: (1) the date when it was collected; and (2) a susceptibility test result, which is susceptible or resistant depending on whether the bacterium is susceptible or resistant to a_i. To calculate the potential contribution of a previous culture to the feature p&a_i, i = 1, 2, ..., 6, the Negative Exponential Function (NEF) of Equation (1) is applied, which decays exponentially with the number of days elapsed between that culture and the current one; the value of the parameter λ is experimentally set to 0.095. To compute the feature value p&a_i for the instance associated with culture c(p) of patient p, the maximum outcome of Equation (1) over these previous cultures is taken, according to Equation (2). ICU-patients past cultures features. The second kind of features are named r&amg, r&car, r&cf4, r&pap, r&pol and r&qui.
These features aim to encode the "intensity" of resistant bacteria in the ICU during the time previous to the date d of the current instance and culture. Unlike the previous set of six features p&a_i, the "intensity" now takes into account the number of patients (different from the current patient p) that were infected by a resistant bacterium and, for each of them, the time elapsed since the bacterium was detected. For a particular feature, a single value is calculated by considering the results of past susceptibility tests of Pseudomonas aeruginosa for the P patients, denoted as p_j with j = 1, ..., P, in the ICU during the time interval between 21 days and 48 hours previous to the date d of culture c(p) of patient p. An exponential decay is again considered to weight the result of each susceptibility test. The group C(p') of past cultures of other patients is also divided into six subsets, and each of these subsets is further split into n disjoint subsets, as many as patients, each composed of the antibiogram results for a_i in patient p_j. As previously mentioned, the set of cultures of patient p is excluded from these subsets. Since every culture has a susceptibility test result and a date, an NEF expression equivalent to that in Equation (1) is applied, using the dates and results of the cultures of each patient p_j. Then, each feature r&a_i is obtained by adding up, over the patients p_j, the maximum value of Equation (1) for each of them, as indicated in Equation (4).

B. Graphical Exploration

Owing to the high number of features, we start by identifying the most relevant features per family of antibiotics. For this purpose, we consider a filter approach with the Mutual Information (MI) score [20]. Thus, for each family of antibiotics, Fig. 1 shows the five features with the highest MI values, which include the date of the culture and the information about previous cultures, both for the patient and for the ICU environment. According to the mutual information score, the most relevant feature is date_culture for each of the antimicrobial families considered. This result supports the importance of the antimicrobial resistance dynamics, which is common to all families of antibiotics. To get deeper insight into this issue, Fig. 2 graphically illustrates the evolution over time of the number of susceptible antibiograms (a) and resistant antibiograms (b) for each family of antimicrobials tested on Pseudomonas. Not all families of antibiotics were tested during the whole period considered. Specifically, clinicians agreed to modify the range of antibiotics tested in the ICU of the UHF, first by including POL in 2007 and then by stopping susceptibility testing of QUI in 2018, due to its high resistance. Furthermore, there is a very noticeable fall in the number of resistant and susceptible antibiograms in 2013. This decrease is probably due to integration problems caused by a software update of the ICU health information system in 2013. As stated in the literature, the number of susceptible antibiograms tends to decrease in the most recent years. In this line, we also analyze the annual ratio of resistant antibiogram results for each family of antimicrobials. To obtain this ratio, the number of resistant cultures per year has been divided by the total number of cultures per year (both resistant and susceptible). The general trend is that, as time progresses (and therefore the value of date_culture increases), a higher percentage of instances tends to be resistant.
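The filter-based ranking used in this exploration can be reproduced with the mutual information estimator available in scikit-learn. The snippet below is a sketch under the assumption that each per-family data set is available as a pandas DataFrame with numerically encoded features, no missing values and a binary target column; the DataFrame and column names (df_amg, "c&amg") are illustrative, not taken from the study.

```python
# Sketch of the mutual-information feature ranking used in the graphical
# exploration.  "df_amg" is an assumed DataFrame: one row per culture, numeric
# or already-encoded features, and a binary target "c&amg" (1 = resistant).
import pandas as pd
from sklearn.feature_selection import mutual_info_classif

def top_mi_features(df: pd.DataFrame, target: str, k: int = 5) -> pd.Series:
    """Return the k features with the highest mutual information with the target."""
    X = df.drop(columns=[target])      # categorical features assumed encoded, NaNs imputed
    y = df[target]
    mi = mutual_info_classif(X, y, random_state=0)
    return pd.Series(mi, index=X.columns).sort_values(ascending=False).head(k)

# Example: ranking for the Aminoglycosides (AMG) data set.
# print(top_mi_features(df_amg, target="c&amg"))
```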
The second most relevant features for the antimicrobial families AMG, CAR and QUI are p&amg, p&car and p&qui, respectively. This shows the importance of the outcome of previous antibiograms of the same patient for the family under consideration. In the case of CF4, p&cf4 is the 4th most important feature. Though not presented in Fig. 1, p&pap is ranked in the 7th position for PAP, and p&pol in the 11th position for POL. It is interesting to remark here that, in all cases, the MI score for a particular family of antibiotics is higher for the p&a_i feature corresponding to that particular family than for any of the other five p&a_i features. This points out the relevance of considering the particular antimicrobial family when using results of previous antibiograms. Fig. 4 shows the boxplots for each of the six features named p&a_i, associated with the antibiogram results of the same patient for each family of antibiotics (in rows). Blue boxplots refer to p&a_i for resistant results, while black ones refer to p&a_i for susceptible results. In general, we observe that the median of p&a_i is higher when the culture c was resistant than when it was susceptible. The results shown in Fig. 4 for CAR and QUI are particularly interesting for susceptible cultures (black boxplots) for all the families, with most of the previous antibiogram results being susceptible. However, for CF4 and PAP, most antibiogram results are susceptible for p&cf4, p&pap and p&pol, whereas for POL this only happens for p&pol. Note that, regardless of the family of antibiotics tested, the boxplots of p&car and p&qui for resistant cultures (blue boxplots) are very similar to the boxplot associated with the corresponding family of antibiotics considered (e.g., see p&amg, p&car and p&qui in Fig. 4 for AMG, or p&pap, p&car and p&qui for PAP). The r&a_i features are also among the most relevant features according to the MI score. In this case there is no clear distinction in the ranking depending on the antimicrobial family. This supports the importance of taking into account the existence of any resistant germ in the ICU. The feature r&pol (not included among the top five features in Fig. 1) seems to be the one providing the least information, probably because of the low number of antibiograms with a resistant result for this family. Fig. 5 presents the boxplots for the r&a_i features. In comparison with the boxplots in Fig. 4, note that the boxplots of the r&a_i features are not limited to a maximum of one, since the number of patients contributing in Equation (4) is n (usually greater than 1). For each antibiotic family a_i, the median values of the r&a_i features for resistant and susceptible results are much closer to each other than those of the p&a_i features. It is also remarkable that the boxplots associated with r&pol show a median value very close to zero both for resistant and susceptible cultures, in line with previous comments. Furthermore, when analyzing POL, the median value is higher for susceptible than for resistant cultures, except for r&pol, showing the different behavior of this antibiotic. Finally, among the top five features with the highest MI score, we also find days_to_culture (for POL) and age (for QUI). Both features are also among the top ten for the rest of the antimicrobial families. From a clinical viewpoint, it is known that both age and a longer ICU stay are risk factors for infection [14].
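Before moving on to the methods, the sketch below illustrates how the two groups of temporal features introduced in Section II.A can be computed for one antimicrobial family. The decay parameter (λ = 0.095), the look-back interval (between 21 days and 48 hours before the culture), the maximum over the patient's own previous results (p&a_i) and the sum of per-patient maxima over the other ICU patients (r&a_i) follow the text; the record structure and the assumption that only resistant results contribute (susceptible results contributing zero) are illustrative simplifications, not the exact implementation of the study.

```python
# Sketch of the NEF-based temporal features p&a_i and r&a_i for one family a_i.
# "history" is an assumed list of previous antibiogram records: dicts with keys
# "patient", "date" (datetime) and "resistant" (bool); only resistant results
# are assumed to contribute to the features.
from datetime import datetime, timedelta
from math import exp

LAMBDA = 0.095                                            # decay parameter from the paper
WINDOW = (timedelta(days=21), timedelta(hours=48))        # look-back interval before the culture

def nef(days_elapsed: float, lam: float = LAMBDA) -> float:
    """Negative Exponential Function: recent resistant results weigh more."""
    return exp(-lam * days_elapsed)

def _in_window(culture_date: datetime, past_date: datetime) -> bool:
    delta = culture_date - past_date
    return WINDOW[1] <= delta <= WINDOW[0]

def p_feature(history, patient, culture_date):
    """p&a_i: maximum NEF value over the patient's own previous resistant results."""
    vals = [nef((culture_date - h["date"]).days)
            for h in history
            if h["patient"] == patient and h["resistant"]
            and _in_window(culture_date, h["date"])]
    return max(vals) if vals else 0.0                     # missing history encoded as 0 (Section III.A)

def r_feature(history, patient, culture_date):
    """r&a_i: sum over the other ICU patients of their maximum NEF value."""
    per_patient = {}
    for h in history:
        if h["patient"] == patient or not h["resistant"]:
            continue
        if not _in_window(culture_date, h["date"]):
            continue
        v = nef((culture_date - h["date"]).days)
        per_patient[h["patient"]] = max(per_patient.get(h["patient"], 0.0), v)
    return sum(per_patient.values())
```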
A. Data Preprocessing

Before using the data set to predict the result of the susceptibility test, a previous preprocessing stage is needed. The first aspect to be considered is that six binary classifiers are going to be built in order to predict whether a culture is susceptible or resistant to each of the six different antimicrobial families. A different approach to tackle this problem would be to train a multi-class classifier. However, generating different classifiers allows individually tuning the hyperparameters of each of them and also makes the interpretation and analysis of results easier. To train them, the main data set is divided into six smaller data sets, each of them considering just one binary target c&a_i. After that, all the instances representing cultures from patients who stayed less than 48 hours in the ICU are removed from each of the six data sets. As indicated in Table I, the number of features is 40 for every data set, considering the respective target feature. The numbers of instances are 755, 643, 749, 749, 483 and 708 for the AMG, CAR, CF4, PAP, POL and QUI data sets, respectively. Since instances represent cultures, and cultures have an intrinsic temporal ordering, instances are sorted in a temporal manner, with older instances at the beginning of the data set and the newer ones towards the end. The missing values of the data sets are found in the 12 generated features (r&a_i and p&a_i). The percentages of missing values for each of the data sets and features are detailed in Table II and Table III. It is remarkable that the percentages of missing values for the p&a_i features are higher than those of the r&a_i features. This happens because, in general, during the same time interval the number of cultures associated with a group of patients will be higher than the number of cultures associated with just one patient. It is also notable that, overall, p&pol and r&pol have a high percentage of missing values with respect to the rest of the features of their respective type. This is caused by the very few resistant instances for the POL family, probably because POL started to be tested in 2007 and the rest of the antimicrobial families in 2004. In the clinical setting, dealing with missing values is an interesting and challenging topic which may have different implications. In this study, missing values are replaced by zeros because of the clinical meaning of the p&a_i and r&a_i features. The reason for a p&a_i feature not having a value is that, for the particular patient and time interval considered, no resistance test result is found for the specific antimicrobial family studied. If that is the case, it means that clinicians probably considered that the patient was not infected by a bacterium resistant to that antimicrobial family. Therefore, it can be inferred that, in the time prior to the culture being analyzed, the patient was likely not infected with a resistant bacterium. It seems reasonable to assign a zero in this case, since the feature gets a higher value the more recently a resistant bacterium was detected. Regarding the r&a_i features, a similar reasoning applies. If, in the time interval observed, none of the patients in the ICU were tested for resistance to the particular antimicrobial family, it implies that clinicians considered it unlikely to find this kind of resistant bacterium.
Thus, it is probable that, prior to the culture, there were no patients infected with a bacterium resistant to the feature's antimicrobial family, so zero is an appropriate value. The categorical features in the data sets are converted into numerical features before using them with the machine learning methods considered in this work. The two features representing dates (date_culture and start_date) are categorical and ordered. Because of that, dates are encoded with integers, assigning lower values to older dates and higher values to recent dates, indicating in that way the ordering among them. The value of a particular date is calculated as the difference, in number of days, between the date to be encoded and the first date appearing in the data sets for the specific feature. With all features expressed numerically, the Pearson correlation is applied to detect the most correlated ones. If two features (both different from the target feature) are highly correlated, they add redundant information to the prediction, and therefore one of them should be removed. In this study, two features are considered highly correlated if their correlation coefficient is higher than 0.9 or lower than -0.9. In all six data sets, the same four features (date_culture, year_culture, start_date and year_admission) are highly correlated among themselves. Because of that, only date_culture is kept and the other three are removed from the data sets. After that, the number of features in every data set is 37, including the target feature.

B. Predictive Methods

In this section, we briefly describe the data-driven classifiers considered in this work. Specifically, LR is tested as a baseline method, as it was also used in our previous work [15]. In this study, RF has been added to carry out predictions because of its interpretability capabilities. The LR method, very common in the clinical literature, allows us to conduct a linear analysis when the dependent variable is binary. It was used in our previous study [15] because of its simplicity, to serve as a baseline and to evaluate the feasibility of learning from the data. In this work, it is again used to classify the instances, now with a greater amount of data and a higher number of antimicrobial families to be analyzed. This is done in order to gain a more solid insight into whether the target can be predicted with the available features and into the performance this method can provide. Before using LR, each feature is standardized by removing the mean and scaling to unit variance. The other data-driven method explored here is RF, a machine learning approach commonly used for regression and classification [21], [22]. It is an ensemble method; that is, an RF model is built from multiple decision trees, named estimators, which are able to generate individual predictions. RF combines the predictions of its decision trees (which, individually, tend to overfit the training set) to provide a better prediction and a better generalization to data not seen during training. The RF method is very robust, since it can handle data sets with an extensive number of features, high dimensionality and heterogeneous features, while having very few hyperparameters. Because of this, RF is often used as a first approach to develop machine learning systems, as it provides an overview of the achievable performance on a particular task.
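To make the use of the two classifiers concrete, the sketch below trains a standardized Logistic Regression and a Random Forest on the temporally older part of one per-family data set and evaluates them on the newer part, reporting the total, resistant and susceptible accuracies used later in the evaluation. The simple chronological split shown here stands in for the sliding and incremental windows described next, and the variable names are assumptions rather than the exact pipeline of the study.

```python
# Sketch: per-family binary classifiers (LR with standardization, RF) evaluated
# with a chronological split.  "X" and "y" are assumed to be a numeric feature
# matrix and binary target (1 = resistant) for one antimicrobial family, already
# preprocessed and sorted by culture date as described in Section III.A.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def figures_of_merit(y_true, y_pred):
    """Total, resistant and susceptible accuracy."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    a_tot = np.mean(y_true == y_pred)
    a_rst = np.mean(y_pred[y_true == 1] == 1) if np.any(y_true == 1) else np.nan
    a_scb = np.mean(y_pred[y_true == 0] == 0) if np.any(y_true == 0) else np.nan
    return a_tot, a_rst, a_scb

def evaluate(X, y, split_index):
    """Train on the past (before split_index), test on the future."""
    X_tr, X_te = X[:split_index], X[split_index:]
    y_tr, y_te = y[:split_index], y[split_index:]
    models = {
        "LR": make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000)),
        "RF": RandomForestClassifier(n_estimators=200, random_state=0),
    }
    return {name: figures_of_merit(y_te, m.fit(X_tr, y_tr).predict(X_te))
            for name, m in models.items()}
```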
C. Temporal Modeling

Analyzing the problem to be solved, some special characteristics have to be considered when designing the experiments. The first one is the temporal ordering among the instances of the data sets. Since instances are associated with cultures with a susceptibility test, they have an inherent order marked by the date when they were collected. This forces us to maintain this same order when predicting instances; that is, past instances cannot be predicted using instances from their respective future. This particularity arises from the fact that, in the real world, when predicting an antibiogram result, future results are not available. Antimicrobial resistance is a phenomenon that changes over time as bacteria mutate, allowing them to become more resistant to antibiotics as time progresses. As previously mentioned, the features considered include demographic data, information about the patient's admission, and information about the culture and antibiogram results. Since bacterial mutations are not among the available features, the feature values that tell one class apart from the other may change over time. This fact has been previously described as concept drift, in which the concept being studied depends on some hidden context not explicitly given in the form of predictive features [6]. An approach normally used to tackle this type of problem is so-called windowing, which uses a sliding window that moves over the data set instances and applies the knowledge gathered to predict only the immediate future. The other particularity is data scarcity. As previously mentioned, the maximum number of cultures (755) is observed for the AMG antimicrobial family. Over the time interval considered (15 years, from 2004 to 2019), this corresponds to an average of at most 50 cultures per year. Data scarcity is problematic when using windowing because, in this paradigm, usually just a small fraction of the data set (the one covered by the sliding window at each particular time) is used for training. A solution proposed in the previous work [15] was to build an incremental training window such as the one depicted in panel (d) of Fig. 6. This type of window, which grows in length, contains instances that are as temporally close as possible to the test instances. Thus, concept drift can be alleviated by predicting instances temporally close to the training set, while the window also contains instances far in the past, so that the number of instances available for training is higher than when using a sliding window. In addition to the incremental training window, this work considers a more commonly used sliding training window with fixed size, in order to compare their prediction performance. Below, we first describe the characteristics of the test window, which is the same for both types of training windows. After that, we present the characteristics of the two types of training windows considered in this work. The test window consists of a sliding window with a fixed size of just 1 month. By considering just a small amount of time, it is ensured that test instances are as close as possible to the training set. In the experiments of this study, this window initially covers just the first month (January) of 2016. After that, in each prediction step, the test window shifts one month towards later dates. In Fig. 6, steps are indicated at the end of each row as (1), (2), (3), ..., (N) for every approach. In the last step, this window considers the last month of the data set.
The test window, when shifted, does not overlap with its previous position, that is, in each step predicted instances are different from instances predicted in any other step. The incremental training window, as previously mentioned, is a window of increasing size. In the experiments, this window starts containing instances from 2004 to 2015. In the following steps, the window increases in size one month at a time. In the last step, the training window includes all the instances in the data set except the last month, which is the one considered by the test window. The sliding training window with a fixed size consists in a window just considering 4 years of instances. In every step, this window shifts 1 month towards last instances of the data set, in the same way as the test window does. Since the train and test windows always shift the same amount of time, the distance between them, if any, is always the same. The last step, as previously explained, is the one in which the test window considers the last month of the data set. This kind of window is tested with three different configurations, 0 years approach, 2 years approach and 4 years approach, which are represented in panels (a), (b) and (c) of Fig. 6. In the 0 years approach, the distance between the training and test windows is 0 years, that is, the training window is next to the test one. In this case, the training window considers years from 2012 to 2015 in the initial step. In the 2 years approach the distance among windows is 2 years, therefore taking into account that the test window initially contains the first month of year 2016, the training window includes years from 2010 to 2013, so that the desired distance is respected. Similarly, in the 4 years approach, the window starts considering years from 2008 to 2011, because of the same reason. These three different configurations are considered in order to observe how the prediction evolves as the windows move away from each other, and therefore, the concept drift is more noticeable. For both types of training windows, at each step, a classifier is trained, and the performance is evaluated on a test set with each of the two methods considered (LR and RF). It is relevant to take into account that patients from training and test windows are different. That is, when predicting a particular patient's susceptibility test result, it is ensured that there are not other susceptibility results of the same patient in the training set. Also, in the approaches where training and test windows are next to each other (as in the incremental training window and the 0 years approach), a margin of 48 hours is considered between them, since it is the time required for getting the antibiogram's results. As the windows traverse the data set, they encounter class imbalance, due to the temporal evolution of bacterial resistance. This causes that, in the time interval considered by test windows, there is a higher number of instances from one class. Because of that, in order to evaluate the prediction of the classifiers, is not enough to consider the global accuracy. To get a realistic approximation of the classifier performance, the success in susceptible instances and the success in resistant instances are also calculated. The names assigned to these figures of merit are Total Accuracy (A T ot ), Resistant Accuracy (A Rst ) and Susceptible Accuracy (A Scb ), respectively. 
For a test window with Ns susceptible instances and Nr resistant instances, if the method succeeds in predicting Ss susceptible instances and Sr resistant instances, these figures of merit are computed as A_Tot = (Ss + Sr)/(Ns + Nr), A_Rst = Sr/Nr and A_Scb = Ss/Ns. These three figures of merit are calculated for the test set of the particular approach considered. In order to get the mean value of these measurements, for every step the values of Ns, Nr, Ss and Sr are accumulated and, at the end, the three figures of merit are obtained. This accumulation is carried out because test windows may have a different number of instances, due to the fact that not all 1-month time intervals contain the same number of antibiograms. For that reason, an average would not be adequate, since some instances would have more weight than others depending on the number of instances in their test window. In addition to the experiments using the different windows, a series of experiments is carried out considering different aspects of the prediction. First, the contribution to the prediction of the most relevant features according to the MI score is analyzed. In particular, the features studied are date_culture and the two groups of features p&a_i and r&a_i. To assess their contribution, the target is predicted with and without considering these features, and the two outcomes are compared. Secondly, since the incremental training window considers a large number of instances (from the beginning of the data set), it is proposed to assign weights to its training instances. The purpose is to give a higher importance to the training instances that are temporally closer to the test set, which theoretically would have a distribution more similar to that of the test instances, and a lower importance to instances far from the test set. Equation (8) details how the weight is generated for each instance, w = exp(-λ(d_l - d_c)) (8), where d_l represents the date of the last culture in the training window and d_c is the culture date of the instance whose weight is being calculated. In the equation, the difference between these two dates is expressed in days. The parameter λ is empirically chosen for each experiment as the one providing the best results among the following: 0, 1e-05, 1e-04, 1e-03, 1e-02, 0.1 and 1. If λ is very small, all instances get a very similar weight, regardless of how far they are from the end of the training window. For instance, for λ = 0, all instances have a weight of 1. On the other hand, if the value of λ is high, only a few instances very close to the end of the training set get a weight close to 1, and the great majority of instances get a weight very close to 0. Note that, when the value of λ is zero, it is the same case as the incremental training window without weights. In the case of high values of λ, it is more similar to the 0 years approach of the sliding training window with a fixed size. So, in the end, these weights allow regulating the amount of past instances effectively considered for prediction. To encode the models obtained from different combinations of windowing and features, a number is assigned to each model, with the following description: M1. Sliding training window with a fixed size and following the 0 years approach. It uses neither r&a_i nor p&a_i features. M2. Sliding training window with a fixed size and following the 2 years approach. It uses neither r&a_i nor p&a_i features. M3. Sliding training window with a fixed size and following the 4 years approach. It uses neither r&a_i nor p&a_i features. M4.
Sliding training window with a fixed size and following the 0 years approach. It uses r&a_i features but not p&a_i features. M5. Sliding training window with a fixed size and following the 2 years approach. It uses r&a_i features but not p&a_i features. M6. Sliding training window with a fixed size and following the 4 years approach. It uses r&a_i features but not p&a_i features. M7. Sliding training window with a fixed size and following the 0 years approach. It uses both r&a_i and p&a_i features. M13. Incremental training window with instance weighting. It uses r&a_i features but not p&a_i features. M14. Incremental training window with instance weighting. It uses both r&a_i and p&a_i features. Each of the above kinds of models is designed with and without considering the date_culture feature, and with each of the two aforementioned machine learning methods, LR and RF. After studying the outcomes of the different experiments, the feature relevance is calculated again, now with an embedded method from the RF model. Also, date_culture and the p&a_i set of features are analyzed in more depth by making predictions with just one of these features at a time.

IV. Results and Discussion

This section is divided into two subsections. In Subsection A, the performance of the predictive methods is assessed by considering different experiments. In Subsection B, the features identified as the most relevant throughout the study are further analyzed.

A. Prediction

The prediction results are detailed in Tables IV, V, VI, VII, VIII and IX.

Sliding Training Windows with Temporal Distance Variation Among Training and Test Windows

The figures of merit provided by models considering the temporal distance between the training and test sets are in rows 1 to 9 of the M column of Tables IV to IX. In the case of the LR method when considering the feature date_culture, the evolution of the figures of merit is not consistent among antimicrobial families when analyzing the separation between training and test windows. In some families, the Total Accuracy increases as the training window approaches the test window, while the opposite happens for other families. The same is observed for Resistant Accuracy and Susceptible Accuracy: their behavior varies depending on the antimicrobial family being predicted. Predicting with RF and using the feature date_culture, the evolution of the figures of merit is more similar among the different antimicrobial families. In general, Total Accuracy increases, Resistant Accuracy increases and Susceptible Accuracy decreases as the training window approaches the test window. When this pattern is less evident, it is helpful to analyze the case in which both the r&a_i and p&a_i features are considered. Also, the general performance across the three figures of merit appears to be better when both the r&a_i and p&a_i features are used. For LR without the feature date_culture, the aforementioned pattern appears, in which Total Accuracy increases, Resistant Accuracy increases and Susceptible Accuracy decreases when reducing the distance between windows.
Comparing these results with those provided by LR and date_culture, two remarks deserve to be underscored: for the families in which this pattern was not previously evident (such as AMG, CAR and QUI), now windows 4 and 2 years apart have lower Total Accuracy and lower Resistant Accuracy, with similar figures of merit in the 0 years-apart windows; on the other hand, for the families where this pattern was reasonably evident (such as CF4, PAP and POL), the figures of merit usually improve, while maintaining the same pattern. Also using both the r&a i and p&a i features tend to improve the performance. Considering RF for prediction and not using the feature date_culture, the same behavior as in LR without date_culture, is observed for all antimicrobial families: note the same pattern for the evolution of the figures of merit (Total Accuracy increases, Resistant Accuracy increases and Susceptible Accuracy decreases as the distance between train and test windows decreases). Comparing these results to previous ones of RF using date_culture, it is noticed that now, for all families, windows of 4 and 2 years apart have lower Total Accuracy and lower Resistant Accuracy, with similar or improved figures of merit in the 0 years-apart windows. Furthermore, using both r&a i and p&a i features tend to provide a better performance. In the considered experiments (from model 1 to model 9), it is also noticeable how results change depending on the antimicrobial family. It is specially remarkable for the CAR and POL families. Considering CAR, it is observed that, for the majority of models, the values of Total Accuracy and Resistant Accuracy are very high, while Susceptible Accuracy values are very low, in most cases zero. On the other hand, for the POL family, Total Accuracy and Susceptible Accuracy are very high and Resistant Accuracy is low in general, with many zero values. These results suggest that the outcomes depend on the class distribution along time, for each antimicrobial family. In Fig. 3 it is noticed that CAR is the family with the highest ratio of resistant instances (almost 1 for the last years of the data set), and POL is the family with the lowest ratio of resistant instances. Although less obvious, the rest of the families also appear to be influenced by their respective class distribution. Firstly, it is interesting to discuss the common pattern observed in almost all families, which causes Total Accuracy to increase, Resistant Accuracy to increase and Susceptible Accuracy to decrease as the distance between train and test windows gets smaller. The reason of this behavior is the temporal class imbalance, that is, in the first years of the data set, the majority of instances belong to the susceptible class, but as time progresses, the majority of instances become resistant, as it is depicted in Fig. 3. Using sliding training windows with fixed size and the approach with 4 years of distance between windows, the training window has to shift towards the past since the test window starts in 2016 for all experiments, therefore containing years from 2008 to 2011 for the first step of the training window, as explained in Section III.C. Being in the past, it contains a higher number of susceptible instances compared to resistant ones, which causes to perform better in predicting susceptible instances (better Susceptible Accuracy) and worse in predicting resistant instances (worse Resistant Accuracy). The opposite happens when the distance between windows is 0 years. 
In this case the window is near the last years of the data set, therefore it contains more resistant instances (improving Resistant Accuracy) and less susceptible instances (decreasing Susceptible Accuracy). The Total Accuracy improves when the distance is small because in test window the majority of instances are, mostly, resistant. If the majority class is well predicted, the Total Accuracy is high. We conclude that not all the three figures of merit improve as expected when distance is diminishing, in fact one of them gets worse. Applying oversampling to the minority class in this kind of fixed-size temporal windows, in order to balance the number of the two kind of instances, could improve the accuracy in the minority class. Secondly, it is relevant the change in behavior of prediction when date_culture is not considered in both LR and RF methods. Overall, when using date_culture for prediction in the 4 years and 2 years approaches, the Resistant Accuracy increases and the Susceptible Accuracy decreases compared to models not using date_culture. This probably happens because date_culture is compensating the lack of resistant instances of training windows in 4 and 2 years approaches, by telling the classifier the most probable class in test years, which tend to be resistant, and hence Resistant Accuracy is high in most cases, causing Susceptible Accuracy to decrease. The disadvantage of using date_culture is that it causes the minority class to worsen its prediction, since it introduces bias towards classifying instances as the most probable class of the time interval. Since, in the 0 years approach, without considering the date_culture feature, the results are similar or better than when date_culture is taken into account, we conclude that it is convenient not to use this feature. Incremental Window The experiments concerning the results of prediction by using an incremental training window are in rows with numbers from 10 to 12 in the M column of Tables from IV to IX. In the case of using the LR method and including the feature date_culture, adding just features r&a i does not generally improve figures of merit. With the addition of both features r&a i and p&a i , half of the antimicrobial families (AMG, CF4 and PAP) improve their results, although this improvement is mild. With RF and using the date_culture feature, the inclusion of the r&a i features does not improve performance. Conversely, adding r&a i and p&a i features improves results in 5 out of the 6 families (AMG, CF4, PAP, POL and QUI), with no worsening of the figures of merit of the CAR family. For both LR and RF models without date_culture, it is noticed that including just the r&a i features does not provide an improvement in performance. However, taking into account both the r&a i and p&a i features, there is a significant improvement for almost all antimicrobial families. Total Accuracy and Resistant Accuracy are, in general, considerably lower when r&a i and p&a i features are not used together, in comparison with the results provided by including date_culture. Taking into account the results with sliding windows of fixed size of 4 years and the current ones with an incremental training window, it is observed that, in general, the best results are obtained with an incremental training window. 
Though for some antimicrobial families, a specific combination of sliding windows can outperform the results of the incremental training window, there is not a common approach of sliding windows with better results for all families. Furthermore, when the incremental training window outperforms, it is for very little. The exception is the POL antimicrobial family, which achieves clearly better results with the 0 years approach. With the incremental training window, best results are mostly achieved by not including date_ culture, and adding both the r&a i and p&a i features. This confirms that the use of incremental training window represents a useful temporal approach to tackle the task presented in this study. It is notable that, although MI suggested that the set of r&a i features contain relevant information to predict the targets, its use in conjunction with other features does not appear to improve performance. On the other hand, the p&a i features show a great potential to predict the result of the susceptibility test, since they improve performance in almost all cases. It is also worth to analyze the fact that, if date_culture is not used, Total Accuracy and Resistant Accuracy get a low value when the r&a i and p&a i features are not jointly used, in comparison with the results obtained by using date_culture. The reason of this behavior is similar as the one indicated in previous experiments when not using the date_culture feature. Without date_culture, classifiers tend to predict much of the test instances as susceptible, because it is usually the majority class in incremental training windows (windows starting at the beginning of the data set). The date_culture feature compensates this by introducing bias towards predicting the majority class in the time interval, which in test (near the end of the data set) is resistant. In any case, using date_culture worsens the Susceptible Accuracy. By adding the p&a i features, it is not necessary to count with date_culture to get a good performance. Moreover, results with p&a i features and without date_culture, improve both Resistant Accuracy and Susceptible Accuracy because this kind of features do not introduce a temporal bias towards one of the two classes. Incremental Window with Weights The prediction results using an incremental training window and instance weighting are in rows with numbers 13 and 14 in the M column of Tables from IV to IX. The λ values for each particular case are expressed in Table X. It is observed that, using instance weighting, results improve for most of the antimicrobial families. The following are the best figures of merit of A T ot -A Rst -A Scb provided by applying instance weighting: • AMG: 79.55%-75.76%-90.91%. Obtained using RF, without date_culture and with both the r&a i and p&a i sets of features. The weight hyperparameter is λ =1e-05. • CF4: 60.67%-58.62%-64.52%. Obtained using RF, without date_ culture and with both the r&a i and p&a i sets of features. The weight hyperparameter is λ =1e-05. Our results show that M13 and M14 performance, in the majority of families, improves or is maintained when the p&a i set of features is taken into account, confirming what was observed in the two previous groups of experiments. The only exception to that is the POL antimicrobial family. When the date_culture feature is used, just the POL family gets better results; in any other case, it is better to not consider this feature. 
The substantially different behavior of POL is probably due to the very small number of resistant instances for this family, which makes it very dependent on the date_culture feature. Besides that, for half of the families (CAR, PAP and POL), the best method is LR, while for the other half (AMG, CF4 and QUI), RF gets the best results. It is also important to analyze the hyperparameter λ used to assign weights to instances. As previously explained, when the value of λ is small, a greater number of instances get a similar high weight (close to 1); otherwise, when λ is high, just a few instances, temporally close to the test set, get a high weight and the rest of instances get very small weights. For AMG, CF4 and PAP, λ is very small and results are very similar to those of the respective incremental window without weights. This happens because almost all instances are being considered. On the other hand, families CAR, POL and QUI, with a greater λ, show results that are, mostly, more similar to the respective sliding training window with a fixed size than to the incremental window. Comparing the results of the incremental window with the performance for the rest of experiments, it is noticed that it improves the results for 3 of the 6 families, which are AMG, PAP and QUI. In the case of CAR, the whole incremental training window achieves better results than the version with weights. As before, the family CF4 gets better performance with a specific combination of sliding windows, probably because some particularity of its distribution; POL notably gets its best result with the 0 years approach windows, without date_culture and with neither the r&a i nor p&a i sets of features. B. Relevant Features Analysis Taking into account previous results, it seems that some features with high MI score, such as r&a i , do not help to predict the target feature. The feature date_culture, which has the highest MI score, increases the performance in some particular cases, but also introduces bias, and the best results in previous experiments are achieved when this feature is not used. On the other hand, the set of features p&a i , also with high MI scores, appears to improve performance in almost all antimicrobial families. Our analysis reveals the inconsistency between features ranked as relevant according to MI and those that actually increase prediction performance. In order to contrast feature relevance, they are now obtained with an embedded method. Since RF has been used as classifier, tree-based estimators have been selected to compute the new feature importance, with Fig. 7 showing the ranking in relevance. Now, the most relevant feature for AMG, CAR, CF4, PAP and QUI are p&amg, p&car, p&cf4, p&pap and p&qui, respectively. In the case of POL, p&pol is ranked on the 7th position. Regarding date_culture, it is still very important. In the case of POL, date_culture is the most important one. The set of features r&a i are not considered important overall. The new ranking in feature relevance agrees to a greater extent with the prediction performance observed. The set of p&a i features are the most important ones, except for the POL family, where the most relevant feature is date_culture. These results make sense, since date_culture was the only feature improving performance in the POL family, due to small number of resistant instances. Also, the r&a i features get low relevance values, as expected. 
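The contrast between the MI ranking and the embedded, tree-based ranking can be reproduced with a short scikit-learn sketch such as the one below; the feature matrix X and binary target y stand for the pre-processed data set and are placeholders.

import pandas as pd
from sklearn.feature_selection import mutual_info_classif
from sklearn.ensemble import RandomForestClassifier

def rank_features(X: pd.DataFrame, y):
    """Return the MI-based (filter) and random-forest-based (embedded) feature
    relevances side by side, sorted by the embedded importance."""
    mi = pd.Series(mutual_info_classif(X, y, random_state=0), index=X.columns)
    rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
    embedded = pd.Series(rf.feature_importances_, index=X.columns)
    ranking = pd.DataFrame({"mutual_info": mi, "rf_importance": embedded})
    return ranking.sort_values("rf_importance", ascending=False)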
The reason why this method provides more insightful results is probably because it takes into account all other features in the data set, while in MI the feature relevance is calculated separately for each feature. To further analyze the impact of the most relevant features, the antibiogram result has been predicted using just one feature. Two experiments have been carried out, each for one of the most important features in the data set (the p&a i features and date_ culture). Results with the respective p&a i features are detailed in similar and the figures of merit are relatively high for most of the families. This evidences the high prediction power of this kind of features, even when using for prediction just one of them. Table XII presents the results with just date_culture. We observe that the prediction is dramatically biased towards the majority class when the LR method is considered, which in most cases is resistant due to the fact that test instances are in the future with respect to training instances. In the case of the POL antimicrobial family, results are biased towards the susceptible class since it generally is the majority class. Using RF, prediction is also biased, although to a lesser extent. As expected, the only family improving its performance when using just date_culture feature is POL. V. Conclusions One important and increasing problem in daily operation of worldwide health systems, and in particular, of hospitals is antimicrobial resistance. This resistance in some microorganisms (bacterium, viruses, etc.) appears when these microorganisms become to be resistant to antimicrobial drugs to which they were susceptible before. This change is due to a mutation of the microorganism or to the acquisition of the resistance gen. This problem is even more difficult in hospital ICUs, due to the critical condition of those patients. Therefore, a reliable and anticipated prediction for a given bacterium of being resistant or not to one or more antimicrobial families in a patient culture would greatly help physicians in their fight against those microorganisms. In this study, a real anonymized data set with information about patients staying at the ICU in the University Hospital of Fuenlabrada (UHF) has been used. The data set is related to 3812 admissions of 3346 ICU patients, collected at the UHF during a period of 15 consecutive years (from July 2004 to May 2019). The collected data set from UHF was browsed to generate the final data set under study with the information regarding the patients and their different cultures. Originally there were 40 features, but after the application of some pre-processing techniques they were reduced to 37 to avoid the use of high correlated features. The analysis have been focused on the Pseudomonas Aeruginosa bacteria because is one of the most dangerous bacteria in the ICU and its proved ability to develop multi-drug resistance. Furthermore, six antimicrobial families were considered: Aminoglycosides (AMG), Carbapenems (CAR), 4th Generation Cephalosporins (CF4), Extendedspectrum Penicillins (PAP), Polymyxins (POL) and Quinolones (QUI). Logistic Regression and Random Forest models were tested. Different temporal modeling strategies were proposed based on different windowing schemes (sliding training window, incremental training window) to capture the concept drift phenomenon related to the resistance process of microorganisms. 
In addition, new temporally oriented features (the p&a i and r&a i features), which capture resistance/susceptibility information from past cultures of the same patient or of other patients, were proposed and evaluated to improve the prediction accuracy of the different models. A temporal weighting scheme for the instances was also proposed and improved the prediction accuracy. Fourteen models (M1 to M14) were tested, with and without some of the features ranked as important by the MI score, such as date_culture and the p&a i and r&a i feature sets. The results show that the Random Forest method with an incremental window approach, using temporal weighting of the instances and the temporally oriented features of past cultures, performs best, mainly because the accuracies for resistant and susceptible bacteria are better balanced. Compared with previous studies such as [6], [17] and [18], both similarities and differences are observed. There are many differences between [6] and our work, such as the time interval covered by the data set, the number of instances, the generation of new longitudinal features, and the methods used, but concept drift is observed in both works. It is even more noticeable in our work because of the long time interval considered, with the windowing approach showing clear benefits when applied to this problem. Unlike the work in [17], our study applies temporal modelling with windowing, including data from the 21 days preceding the antibiogram result to be predicted. Along the same lines, the authors in [18] also consider the date of culture and apply temporal modelling, but without windowing. Notable contributions of our study are the newly generated feature sets that exploit temporal information contained in the data set, namely the previous resistance of bacteria for the patient under study (p&a i) and the resistance of bacteria previously detected in the ICU (r&a i). In line with [18], our work also reveals that data from past cultures contain a relatively high amount of information for predicting antimicrobial resistance. In particular, the p&a i feature set proved to be the most useful for correct prediction when used in combination with other features or even, for some antimicrobial families, when used alone. Another relevant contribution of our study is the incremental training window scheme applied together with instance weighting. It allows cultures to be classified accurately when the underlying data distribution changes dramatically over time. Our method provides a more general and robust solution than those previously proposed, since it can be applied to heterogeneous data sets with either few or many years to be predicted, evolves over time, and tackles the data-scarcity problem. Furthermore, it achieves high performance for the majority of families, comparable to that of other studies, despite not using many of the most important risk factors identified in the literature, such as the antibiotics administered to patients. In addition, the thorough analysis of the relevance and interaction of the different features will be of considerable help in future work. Several challenges remain to be addressed. On the one hand, oversampling techniques on the training data can be tested to check their influence on model performance.
On the other hand, we also consider including other features that could influence the appearance of resistant bacteria in the ICU, such as additional details of the patients' admission, for example whether they required intubation or mechanical ventilation. It would also be interesting to include features encoding antibiotic usage in a temporal context, at both the patient and the ICU level. To properly address the different resistant phenotypes observed in this study, the non-uniform distribution of genotypic resistance mechanisms could also be considered. Finally, it would be relevant to treat in a different manner (for example, by assigning particular weights) cultures isolated from specific sites such as tracheostomies or environmental water sources, because of their ability to generate aerosols close to patients, which increases the probability of nosocomial bacterial transmission.
Piezoelectric-AlN resonators at two-dimensional flexural modes for the density and viscosity decoupled determination of liquids A micromachined resonator immersed in liquid provides valuable resonance parameters for determining the fluidic parameters. However, the liquid operating environment poses a challenge to maintaining a fine sensing performance, particularly through electrical characterization. This paper presents a piezoelectric micromachined cantilever with a stepped shape for liquid monitoring purposes. Multiple modes of the proposed cantilever are available with full electrical characterization for realizing self-actuated and self-sensing capabilities. The focus is on higher flexural resonances, which nonconventionally feature two-dimensional vibration modes. Modal analyses are conducted for the developed cantilever under flexural vibrations at different orders. Modeling explains not only the basic length-dominant mode but also higher modes that simultaneously depend on the length and width of the cantilever. This study determines that the analytical predictions for resonant frequency in liquid media exhibit good agreement with the experimental results. Furthermore, the experiments on cantilever resonators are performed in various test liquids, demonstrating that higher-order flexural modes allow for the decoupled measurements of density and viscosity. The measurement differences achieve 0.39% in density and 3.50% in viscosity, and the frequency instability is below 0.05‰. On the basis of these results, design guidelines for piezoelectric higher-mode resonators are proposed for liquid sensing. Introduction Monitoring the properties of a liquid has provided an important platform for resonant devices based on microand nanoelectromechanical system (MEMS and NEMS) technology to achieve miniaturization and portability 1,2 . Among these properties, density and viscosity are regarded as key quantities of a liquid in various industries, such as the process monitoring of (bio)chemical reactions 3,4 , the weight evaluation of active particles 5,6 , and the concentration control of solutions for extraction 7 . In general, dynamic mode cantilevers have received extensive attention in physical and chemical sensing. The fundamental out-of-plane bending is a typical vibrating mode of cantilevers that can be flexibly excited by electrostatic 8 , piezoelectric 9,10 , photothermal 11 , and magnetic 12,13 forces. Meanwhile, torsional vibration [14][15][16] is used as an alternative for enhancing the sensing behavior of cantilevers. These conventional vibrations of a cantilever-based resonator have been shown to present two separate dependencies of resonant frequency on density and Q-factor on viscosity. Cantilever-based resonators under liquid immersion have consistently proposed challenges for overcoming high viscous damping. This condition requires resonators to raise vibration orders or excite nonconventional vibrations because the hydrodynamic force 17,18 can be influenced by the vibrational mode, which then manages resonance behavior. For example, the in-plane mode 19,20 has been adopted to increase the Q-factor of a resonator by transferring the shear force rather than the compressive force to the fluid. A higher vibration mode [21][22][23] enhances the Q-factor by decreasing the vibration amplitude of cantilevers owing to a higher modal stiffness. 
However, an inherent problem for these vibrations is that resonant magnitude is a function of the product of the liquid density and viscosity 24 . This condition restricts a decoupled solution for these vibrations. Furthermore, the pure shear forces generated by a resonator operating in in-plane modes, such as in-plane bending and extensional modes, make determining density and viscosity independently impossible. This scenario can be described by the second Stokes problem 24,25 . With regard to measurement performance, Toledo et al. 26 presented a piezoelectric microresonator resonating at fourth-order vibrations, which addressed the density and viscosity by using four calibrated coefficients. The mean deviations are 0.38% for density (0.98-1.08 g/ml) and 7.36% for viscosity (1.71-1.97 cP). Bircher et al. 27 presented a nanomechanical resonator vibrating at the third mode. For the single calibrated coefficient, the mean deviations are 3.2% for density (998-1154 kg m −3 ) and 10.1% for viscosity (1-10.5 cP). While using three calibrations, the mean deviations are 0.8% for density and 3.2% for viscosity. By comparison, a commercial density-viscosity meter (Anton Paar 4101, Lovis 2000) reaches 0.03% for density (0-3 g/ml) and 0.5% for viscosity (0.3-10000 cP). Thus, the performance of MEMS resonant sensors remains a critical concern for liquid sensing, although this technology has been regarded as an essential solution due to its real-time and portable operation. When focusing on the output interface of resonators, optical characterization is still a common approach 14,27,28 ; it detects slight deflections even in the sub-Angstrom regime, but it is bulky and alignment-dependent. Thinfilm piezoelectric-on-silicon technology offers full electrical interfaces, including input and output transductions, for resonators to simplify their design by applying piezoelectric self-actuation and self-sensing methods. However, only a few studies have taken full advantage of this capability because resonance amplitude decreases dramatically in liquid media, making maintaining electrical access a challenge, particularly for microscale or nanoscale resonators vibrating in higher-order modes. Here, we present a wide-stepped microcantilever resonator for piezoelectrically actuating higher-order nonconventional vibrations. The resonator aims to simultaneously reduce viscous losses and circumvent the limited product function with the density and viscosity to establish a separate function for them by resonant parameters, which is used as a solution to enhance the performance of the density and viscosity decoupling determination. Higher-order vibrations feature moderate frequencies within the kHz range, allowing self-sensing by reading out the piezoelectrically induced voltage. To the best of our knowledge, these two-dimension modal dynamics based on a plate-structural cantilever have not been studied in detail, although a cantilever naturally satisfies the inviscid fluid condition and has been widely employed for liquid monitoring. This study analytically modeled the two-dimensional flexural modes to characterize the modal width effect of the plate cantilever, verifying it from theoretical and experimental results. Experiments were performed to determine the density and viscosity of the measured liquid by using separate estimation equations with only a single calibration. 
Finally, output characteristics were discussed and design methods were presented, particularly for piezoelectric cantilevers in higher-order flexural modes in liquid media. Resonator design and fabrication The micromachined cantilever is designed with a stepped shape, as shown in Fig. 1a. It consists of a support beam (Region I) and a cantilever plate (Region II). The actuation and sensing electrodes are separated by different beams. Two of the electrodes with an active piezoelectric layer width of B e are used for actuation, while the other electrodes with an piezoelectric layer width of b e are used for sensing. The electrodes with width b e are fabricated on four slender sensitive beams (length × width = 129 μm × 18 μm) for concentrating on the deformation strain. The support beam is designed with a length of L 1 = 318 μm and a width of W 1 = 246 μm. Meanwhile, the cantilever plate has a length of L 2 = 1089 μm and a width of W 2 = 1382 μm. The thickness T of the micromachined cantilever is 25 μm. Both B e and b e represent the widths of the top electrodes, which should be enlarged to enhance the self-actuation and self-sensing capabilities of the piezo-cantilever resonator. In addition, to compromise the tolerance width in the fabrication process, Be and be are designed with the corresponding values of 80 μm and 10 μm, respectively. The proposed cantilever with a plate structure and a wide step-change along the width is designed to generate the width effect. Thus, the out-of-plane mode is determined by the vibrations not only along the length but also along the width. In this manner, the two-dimensional mode can be nonconventionally excited with respect to the slender cantilever. The smaller width of Region I is used to adjust the higher-order modal stiffness with a moderate value to enhance resonance stability. The piezo-cantilever resonators are fabricated in 4-inch Si wafers, and the cross-sectional schematic is displayed in Fig. 1b. Both sides of the wafer are covered by 400 nm thermal silicon oxide layers as electrical isolations. Then, a 130 nm Mo layer is deposited by sputtering technology onto the silicon oxide layer as the bottom electrode of the cantilever. The Mo electrode without patterning contributes to increasing availability for flexural modes by guaranteeing the in-phase sinusoidal drive applied to a parallel connection of the electrode pairs. The piezoelectric aluminum-nitride (AlN) film with a thickness of 1 μm was deposited by reactive magnetron sputtering (RMS) technology. Then, the top electrodes consisting of chromium and gold films are fabricated with thicknesses of 20 nm and 200 nm, respectively. Thereafter, the wafer is covered with a SiO 2 layer via low-pressure chemical vapor deposition (LPCVD). This layer is used as a passivation layer for the resonator. Finally, the stepped piezo-cantilever is released by etching technology to obtain the proposed resonator. The fabrication process flow is shown in Fig. 2. Modal analyses The resonant frequency and mode shape in vacuum represent natural features of a piezo-cantilever, which are the foundation for expressing resonance behavior in liquid media. For the proposed piezo-cantilever resonator, Regions I and II of the cantilever play a dominant role in its resonance behavior, as shown in Fig. 1a. Meanwhile, the four slender sensitive beams have a considerably smaller volume, and thus they can be reasonably disregarded in the subsequent analyses. 
In addition, classic plate theory is adopted because the thickness is considerably less than length and width (including Regions I and II). The modeling analyses of the cantilever are based on the following assumptions 29,30 . (1) The material is linearly elastic and transversely isotropic; (2) the out-of-plane vibration mode is dominant and its deformation is small, so the shear and nonlinear deformation are neglected; (3) the piezoelectric strain coefficient d 31 (or d 32 ) is constant; and (4) the transverse normals remain perpendicular to the middle surface after deformation. The theoretical modeling is analyzed in the Supplementary Information, and Table 1 lists the main resonance parameters of the cantilever under the first six orders of flexural vibrations. The first six eigenvalues of Λ i = β i L, in which L is the whole length of the cantilever, have notable changes compared with the conventional values, where Λ 1 = 1.8751, Λ 2 = 4.6941, and Λ 3 = 7.8548 17,31 . This indicates that the stepped cantilever has the capability to excite distinctive mode shapes in both vacuum and liquid-phase environments. The preceding analyses require a more detailed correction in accordance with certain mode shapes for two reasons. (i) Considering that the wide-stepped design of the cantilever exerts a dramatic influence on its resonance response, the vibration mode must be analytically discussed further. (ii) The width effect induced by the plate structure cannot be disregarded because it exerts a significant influence on the resonance behavior of the cantilever. In this sense, Eq. S(5) displayed in Supplementary Information is sound for the two-dimensional modes where W(y) ≠ 1. The frequency responses of the piezo-cantilever in terms of its deflection and output voltage are measured using a Polytec MSA500 scanning laser Doppler vibrometer and a SR830 lock-in amplifier, respectively. Figure 3a shows the independent vibration modes of the cantilever within five orders. To clarify, only the vibration of Region II is described because Region I primarily acts on stiffness, while Region II plays a crucial role in the mode shape of the stepped piezo-cantilever. The first flexure is the fundamental mode, which emits a slender beam-like vibration. Other higher-order vibrations exhibit the mode shape along the large width in varying extents, which verifies two-dimensional modes. The higher-order flexural and torsional modes have completely different appearances with respect to typical cantilevers. When comparing the electrical output characteristic, the voltage peak outputted by the flexural mode is larger than the torsional peak, making the flexure more attractive to achieve precise sensing in liquid media. To verify the modal analysis, the 2nd and 3rd flexural modes are of concern. Meanwhile, the 1st flexural mode is used for comparison. Modal correction for the 2nd flexural mode For the 2nd flexural mode of the cantilever, deformation clearly occurs along the width, particularly along with the stepped interface width at O 2 Fig. 1a). The considerable difference between the widths of Regions I and II generates a stepped width effect acting on cantilever vibration. The correction factor is introduced to offset the deviations between the theoretical and experimental results. COMSOL Multiphysics software is used as the finite element method tool to illustrate the stepped width effect on cantilever vibration. 
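For reference, the conventional eigenvalues quoted above (Λ1 = 1.8751, Λ2 = 4.6941, Λ3 = 7.8548) follow from the characteristic equation of a uniform clamped-free Euler-Bernoulli beam, cos(Λ)cosh(Λ) + 1 = 0. The sketch below computes these roots numerically and converts them into natural frequencies; it describes only the textbook uniform beam and therefore serves as a baseline against the stepped geometry analyzed here, with the material and geometric inputs supplied by the user.

import numpy as np
from scipy.optimize import brentq

def clamped_free_eigenvalues(n_modes=3):
    """Roots of cos(x)*cosh(x) + 1 = 0 (uniform clamped-free beam)."""
    f = lambda x: np.cos(x) * np.cosh(x) + 1.0
    roots, x = [], 0.1
    while len(roots) < n_modes:
        if f(x) * f(x + 0.1) < 0:        # a sign change brackets a root
            roots.append(brentq(f, x, x + 0.1))
        x += 0.1
    return np.array(roots)

def uniform_beam_frequencies_hz(E, rho, L, W, T, n_modes=3):
    """Flexural natural frequencies f_i = Lambda_i^2/(2*pi*L^2) * sqrt(E*I/(rho*A))
    of a uniform rectangular cantilever (Euler-Bernoulli theory)."""
    I = W * T ** 3 / 12.0                # second moment of area
    A = W * T                            # cross-sectional area
    lam = clamped_free_eigenvalues(n_modes)
    return lam ** 2 / (2.0 * np.pi * L ** 2) * np.sqrt(E * I / (rho * A))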
The width (W 2 ) of Region II is constant, while the width (W 1 ) of Region I variably increases to taper the stepped width. Figure 3b depicts the analytical and simulated resonant frequencies along with their relative errors. The ratio of the resonant frequency between the theoretical and simulation results is shown in Fig. 3c. In the case of the relative errors below 10%, the analytical results are acceptable for the ratio ζ W = W 2 /W 1 reaching 1.25, indicating that the basic one-dimensional modeling can be reasonably approximated for the 2nd flexural mode. However, a higher ζ W (ζ W > 1.25) makes the stepped edge of Region II easier to generate additional vibration when the vibration of the cantilever has lower modal stiffness. Then, the frequency resonant under the 2nd flexural mode should be corrected by the factor C f , which depends on the ratio ζ W = W 2 /W 1 . In addition, its expression is concluded with a fitting algorithm over the calculated values (black points in Fig. 3c): Modal correction for the 3rd flexural mode It is noted that the 3rd flexural mode of the stepped piezo-cantilever is a combination of vibrations that occur simultaneously along the length and width of the cantilever plate (region II). It can be divided into the 1st length flexure and 1st width flexure modes but without interdependence. To model this novel mode shape, the free-free length vibration based on flexural mode of the cantilever should be coupled in the overall shape function W t , which can be presented in the following 32-34 : where the frequency parameter κ 1 is 3π/(2W 2 ). W 1 ðxÞ and W 1 ðyÞ represent the normalized displacements of Region II under the first-order vibrations in x-and y-directions, respectively. W 1 ðxÞ can be defined by above analytical results from Eqs. S(7) to (11) (in the Supplementary Information) and Table 1. The numerical method based on energy principle can be utilized for predicting eigenmodes of the stepped piezocantilever with the two spatial coordinates. Region II is still dominant in the 3 rd flexural mode of the stepped piezo-cantilever, and its peak kinetic energy Tmax and peak strain energy Umax can be expressed as: where ω11 is the resonant frequency in the 3rd flexure mode of the stepped piezo-cantilever. Additionally, the equation T max = U max is satisfied for the stepped piezo-cantilever at the resonance. The substitution of Eq. (2) into Eq. (3) and Eq. (4) brings about the optimal solution of the 3rd flexural resonant frequency. Modal verification The first three-order flexural vibrations of the stepped piezo-cantilever are analyzed, accompanying the modeling correction for the width effect under the specific order. The corrected natural resonant frequencies of the stepped piezo-cantilever are listed as f vac−1 = 7.964 kHz, f vac-2 = 57.194 kHz, and f vac-3 = 103.603 kHz. In parallel, the experimental resonant frequencies of the stepped piezo-cantilever resonator are measured in six different liquids, the densities and dynamic viscosities of which are obtained from the Reference Fluid Properties (REFPROP) software, as shown in Table 2. On the condition that the mode shape variation affected by liquid flow is sufficiently small to be neglected, the corresponding resonant frequency of the stepped piezo-cantilever resonator immersed in liquids can be derived, as shown in Fig. 3d. The resonant displacements of the stepped piezo-cantilever are simulated by COMSOL software and displayed by the inserted figures. 
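As a rough, generic cross-check of these in-liquid frequencies before the detailed comparison that follows, the textbook inviscid added-mass estimate for a rectangular cantilever (the Chu approximation commonly used in Sader-type models) can be applied as below. This first-order estimate ignores the stepped, plate-like geometry and the mode-dependent corrections developed in this work; the effective cantilever density and the example liquid density are assumed values.

import numpy as np

def in_liquid_frequency(f_vac_hz, rho_liquid, width, thickness, rho_cantilever):
    """Inviscid added-mass estimate: f_liq = f_vac / sqrt(1 + pi*rho_f*W/(4*rho_c*T))."""
    added_mass = np.pi * rho_liquid * width / (4.0 * rho_cantilever * thickness)
    return f_vac_hz / np.sqrt(1.0 + added_mass)

# Example with the reported 1st-flexure vacuum frequency (7.964 kHz), the plate
# width and thickness from the design section, and assumed densities
# (silicon ~2330 kg/m^3 for the cantilever, ~1000 kg/m^3 for the liquid).
f1_liquid = in_liquid_frequency(7.964e3, rho_liquid=1000.0,
                                width=1382e-6, thickness=25e-6,
                                rho_cantilever=2330.0)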
Compared with the experimental results, the simulated results achieve a maximum absolute relative error of 4.61% for the 1st flexure, 6.56% for the 2nd flexure, and 8.02% for the 3rd flexure. For analytical results, the maximum values of the absolute relative error are 3.54% for the 1st flexure, 3.11% for the 2nd flexure, and 2.14% for the 3rd flexure. The analytical results exhibit excellent agreement with the experimental results, verifying the modal modeling for basic and two-dimensional flexural modes. The undamped frequencies are verified as well, which is feasible to simplify the estimated equation for the experimental densities in subsequent chapters. Performance characterization The mode shape as well as its resonant frequency of the cantilever resonator are key characteristics for liquidphase sensing applications, such as the density and viscosity determination. The experiments with various liquid samples (listed in Table 2) are implemented to characterize the sensing behaviors of the stepped piezocantilever resonator. The stepped piezo-cantilever resonators with different shapes and vibration modes immersed in a small volume of liquid are investigated, including the stepped piezo-cantilever with a rectangular plate (as mentioned earlier) and trapezoidal plate, and their resonance at 2nd and 3rd flexural modes. The fabricated resonators are shown in Fig. 4a-c. Differentiating with the rectangular stepped piezo-cantilever, the trapezoidal stepped piezo-cantilever has a free side in the x-direction with a width of 900 μm. In Fig. 4d, the proposed sensing chip was fabricated with dimensions of only 3.4 mm × 4.2 mm, which is smaller than some reported density and viscosity chips 9,10,16 . This makes the sensing chip achieve an advantageous capacity in fluid monitoring, such as the embedded installation for miniaturized devices. In addition, the piezoelectrically induced voltage is used as the electric output of the chip but without the circuit compensation, which is available for chip-size reduction and enhancing the convenience of the sensor device. The experimental setup is shown in Fig. 4e. The experimental temperature is controlled within approximately 20°C ± 0.3°C. The actuation electrodes of the stepped piezo-cantilever resonator are powered by a 33220A Agilent signal generator with an alternating excitation voltage of 2 V, and large voltage outputs can be obtained from the sensing electrodes. The stepped piezocantilever resonator chip is bonded on a PCB board and mounted into a customized polymethyl methacrylate shell with a 5 mL liquid cavity for immersion measurement. The mode shape of the trapezoidal stepped piezocantilever resonator is also verified in air by a laser Doppler vibrometer and compared with the piezoelectric output, as shown in Fig. 4f. If the resonator under the 2nd and 3rd flexural modes has more outstanding amplitudes, it is more similar to the rectangular stepped piezo-cantilever. The frequency responses of the rectangular and trapezoidal stepped piezo-cantilever resonators in liquids are shown in Fig. 5a. The n-hexane and D4 medium are selected to validate the maximum change in output amplitude. Notably, the resonance peak of the cantilevers at higher orders is sufficiently distinct to be detected precisely. This condition can be attributed to the small parasitic component and leakage current. Thus, complex electronic circuits will have no requirement for signal compensation. 
Figure 5b-e show the sensing characteristics of different stepped piezo-cantilever resonators under the 2nd and 3rd flexural modes in liquids. The attractive linearities occur on both between the density and resonant frequency, and between the viscosity and Q-factor of the stepped piezo-cantilever. In addition, their linear correlation coefficients exceed 0.994. The liquid density contributes to the variation of the resonant frequency, while the viscosity (1/√μ) is the dominance in Q-factor reduction rather than the viscosity-density product (1/√ρμ) nonlinear functions. These critical behaviors enable the proposed resonator chip to determine the two liquid quantities separately. The variation slope of the resonant frequency with the density can be employed to estimate the mass change sensitivity (MCS). The MCS represents the minimum liquid mass change that can be discernable under a resonant frequency resolution, which can be expressed as: where V f is the liquid occupying volume that surrounds the stepped piezo-cantilever. This can be concretized as a circular cylinder 35,36 with a diameter and height equal to the dominant scale along the width and length directions of the stepped piezo-cantilever. The rectangular stepped piezo-cantilever resonator responds to an MCS of 268 ng/ Hz and 156 ng/Hz under the 2nd and 3rd flexural vibration modes, respectively. The trapezoidal stepped piezo-cantilever resonator responds to an MCS of 272 ng/ Hz and 91 ng/Hz under the 2nd and 3rd flexural vibration modes, respectively. Additionally, the frequency instability is evaluated by the standard deviations. For the rectangular cantilever resonator, the maximum frequency instability is 0.05‰ at the 2nd flexure and 0.03‰ at the 3rd flexure. For the trapezoidal cantilever resonator, the calculated values are 0.05‰ at the 2nd flexure and 0.02‰ at the 3rd flexure. These results indicate that the stepped piezo-cantilever resonator is a fine potential choice for (bio) chemical liquid sensing. Considering that the Q-factor can be linearized by the viscosity transformation of the liquid, the estimation of the measured viscosity μ f,e can be expressed as: where ρf,e denotes the measured density, χ1 is the calibration coefficient that can be determined by experimental and reference values, and QL and QA are the Q-factors of the stepped piezo-cantilever resonator in liquid and air immersion, respectively. The QA can be derived in Figs. 2a and 3f, and its values for the rectangular stepped piezo-cantilever resonator are 419 (2nd flexure) and 601 (3rd flexure); for the trapezoidal stepped piezo-cantilever resonator, they are 526 (2nd flexure) and 825 (3rd flexure). Based on the well-known inviscid theory, the relationship of fluid density and resonant frequency of the slender cantilever (whose length/width greatly exceeds unity) in vacuum and fluid immersions is presented as 17 : The empirical corrections should be developed for the cantilever that is affected by the finite aspect ratio (length/ width) with respect to the classic fluid-structure interaction. The resonant frequency of the cantilever in vacuum as well as corresponding eigenmodes have been verified early, whose width effect coupled with different vibrations have been optimized. Thus, we induce the calibration factor χ 2 to substitute for the other parameters, which are related to the specific cantilever material and dimensions. 
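The single-liquid calibration enabled by χ1 and χ2 can be sketched as follows. Because Eqs. (6) and (8) are not reproduced in the text, the functional forms in the code are assumptions chosen to match the stated behavior: a density estimate based on the inviscid added-mass frequency shift through the single geometry-dependent factor χ2, and a viscosity estimate based on the Q-factor reduction through the single factor χ1 (the dependence of Eq. (6) on the measured density is omitted here for simplicity). The sketch illustrates the calibration logic rather than the exact equations of this work.

def calibrate(f_vac, f_cal, q_air, q_cal, rho_cal, mu_cal):
    """Derive chi1 and chi2 from one calibration liquid of known density and
    viscosity (assumed estimation forms, see the text above)."""
    chi2 = rho_cal / ((f_vac / f_cal) ** 2 - 1.0)
    chi1 = mu_cal / (1.0 / q_cal - 1.0 / q_air) ** 2
    return chi1, chi2

def estimate(f_vac, f_liq, q_air, q_liq, chi1, chi2):
    """Decoupled density and viscosity estimates from one measured resonance:
       rho = chi2 * ((f_vac/f_liq)^2 - 1),  mu = chi1 * (1/Q_liq - 1/Q_air)^2."""
    rho = chi2 * ((f_vac / f_liq) ** 2 - 1.0)
    mu = chi1 * (1.0 / q_liq - 1.0 / q_air) ** 2
    return rho, mu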
Thus, the estimation equation of ρ f,e can be presented as: Only one calibration liquid can provide the two calibration coefficients χ 1 and χ 2 in this study to simplify the density and viscosity estimations for the rectangular stepped piezo-cantilever resonator. The cross-section factor S c in Eq. (8) is feasible for the cantilever plates with variable widths, such as the trapezoidal and triangular structure. In addition, the S c can be employed as a constant that depends on specific geometries 37 and vibrational modes of the trapezoidal stepped piezocantilever resonator. The values of f vac can be equal to the rectangular stepped piezo-cantilever, and another calibration liquid is used for S c determination. The calibration parameters are given in Table 3. The measurement values of density and viscosity by different resonators, along with their deviations between them and reference values, are presented in Tables 4 and 5. The rectangular and trapezoidal stepped piezocantilever resonators achieve attractive measurement accuracies for different liquid quantities. The density average deviations of the rectangular stepped piezocantilever resonator are 0.72% and 0.39% under the 2nd and 3 rd flexural modes, respectively. In addition, the viscosity average deviations are 6.19% and 4.95% under the two order modes, respectively. In parallel, the density average deviations of the trapezoidal stepped piezocantilever resonator are 0.63% and 0.45% under the 2nd and 3rd flexural modes, respectively. In addition, the average deviations for the viscosity are 4.74% and 3.50% under the two order modes, respectively. The higherorder flexural vibration obtains better measurement performance for the same-shape cantilever. The decoupled measurement equations for the density and viscosity are also important factors for enhancing the sensing precision, which confirms the advantages of the proposed stepped piezo-cantilever resonators. The performance comparison of this work and other reported resonator-based density and viscosity sensors 38,39 is shown in Table 1S of Supplementary Information. Compared with other studies, the proposed sensor with piezoelectric self-actuation and self-sensing capabilities has achieved comprehensively high performances in measurement accuracy, density sensitivity, and viscosity within the allowable range. The accuracy of the resonant device has a significant dependence on the resonator design, while a higher Q-factor allows for wider viscosity determination. The Q-factor of our sensing device can be improved by adopting a more complex fluid-structure interaction model, not only to mitigate fluid damping via dimensional alteration but also to reduce its decay rate, which is related to liquid viscosity. Optimization consideration The characteristics of the output amplitudes for the stepped piezo-cantilever resonators at the resonance are summarized in Fig. 6. The changes ratio of amplitude V to the Q-factor (VQR=ΔV/ΔQ) in liquids is a constant for a given stepped piezo-cantilever shape and resonance mode, which can be defined as the decay rate of the Q-factor of resonators. The stepped piezo-cantilever resonator with a lower VQR value is able to sense higher viscous liquid media if the Q-factor can be acceptably measured by electrical characterization. On the other hand, the VQR also requires a balance between the voltage output and the resonance peak. A higher-order mode generally leads to a weaker resonance output, though a high Q-factor can be maintained. 
The VQR consideration provides several optimization points to the piezoelectric resonator under higher-order vibrations. First, the parasitic effects, which mainly result from the parallel capacitance and serial resistance, should be reduced, and the piezoelectric strain coefficient d 31 should be increased. In this study, the trapezoidal stepped piezocantilever resonator under 3rd flexural mode has a lower Q-factor than the rectangle stepped piezo-cantilever resonator, whereas it has approximately 1.8 times higher resonant frequency than the rectangle stepped piezocantilever resonator. This result is possibly due to the unequal material parameters for the stepped piezo-cantilevers, especially the AlN piezoelectric layer. The doped AlN deposition 40 has been proven to increase the piezoelectric coefficient. In addition, a higher-order resonance is still preferred, and the actuated and sensing electrodes can be further optimized to fit the specific mode shape to realize high-precision operation for a microresonator immersed in viscous liquid media. Conclusion A stepped piezo-cantilever resonator with AlN material is presented to excite two-dimensional flexural modes for liquid quantity determination. The model is based on an analysis of the flexural vibrations of the resonator, and it consists of a modeling correction for the width effect Table 4 Density ρ f,e and viscosity μ f,e values estimated by the rectangular stepped piezo-cantilever resonator, where the deviations ε ρ and ε μ denote the absolute relative errors between the experimental and reference values. Medium Mode ρ f,e (kg m −3 ) μ f,e (cP) ε ρ (%) ε μ (%) Indicates the referenced viscosity and density for calibration procedure, respectively. Table 5 Density ρ f,e and viscosity μ f,e values estimated by the trapezoidal stepped piezo-cantilever resonator, where the deviations ε ρ and ε μ denote the absolute relative errors between the experimental and reference values. acting on the resonance. The resonant frequencies of the resonator under different flexural modes are predicted in vacuum and liquid immersion. The relative errors for the resonant frequency between the model prediction and experimental results are less than 4%. The experiment indicates that the density and viscosity of the liquid can be separately linearized by the sensing characteristic of the resonator, which allows for decoupled determination based on theoretical analyses. The experimental results indicate accuracies of 0.39% for density and 3.50% for viscosity in working ranges of 659.36-946.31 kg m −3 and 0.31-2.57 cP, respectively. In view of the output characteristic, the VQR is proposed for the piezo-cantilever resonator to pursue well-balanced electrical access at higher-order resonances. These results provide optimization guidance for the efficient design of self-actuated and self-sensing piezoelectric resonators, making them a powerful alternative in liquid monitoring.
Mechanism of ion permeation in skeletal muscle chloride channels. Voltage-gated Cl- channels belonging to the ClC family exhibit unique properties of ion permeation and gating. We functionally probed the conduction pathway of a recombinant human skeletal muscle Cl- channel (hClC-1) expressed both in Xenopus oocytes and in a mammalian cell line by investigating block by extracellular or intracellular I- and related anions. Extracellular and intracellular I- exert blocking actions on hClC-1 currents that are both concentration and voltage dependent. Similar actions were observed for a variety of other halide (Br-) and polyatomic (SCN-, NO3-, CH3SO3-) anions. In addition, I- block is accompanied by gating alterations that differ depending on which side of the membrane the blocker is applied. External I- causes a shift in the voltage-dependent probability that channels exist in three definable kinetic states (fast deactivating, slow deactivating, nondeactivating), while internal I- slows deactivation. These different effects on gating properties can be used to distinguish two functional ion binding sites within the hClC-1 pore. We determined KD values for I- block in three distinct kinetic states and found that binding of I- to hClC-1 is modulated by the gating state of the channel. Furthermore, estimates of electrical distance for I- binding suggest that conformational changes affecting the two ion binding sites occur during gating transitions. These results have implications for understanding mechanisms of ion selectivity in hClC-1, and for defining the intimate relationship between gating and permeation in ClC channels. i n t r o d u c t i o n The molecular cloning of a voltage-gated Cl Ϫ channel from the electric organ of Torpedo (Jentsch et al., 1990) and the subsequent characterization of a large number of mammalian homologs (Jentsch, 1994) has established a new gene family (ClC-family) lacking any structural similarity to other known ion channels. At present, the basic mechanisms responsible for ion permeation and gating in ClC channels are incompletely understood. We have focused on the human muscle ClC-isoform, hClC-1. This dimeric channel (Fahlke et al., 1997 b ) is important physiologically for the control of sarcolemmal excitability and it is the genetic locus in mouse, man, and goat for a specific form of inherited myotonia (Steinmeyer et al., 1991;Koch et al., 1992;George et al., 1993;Beck et al., 1996). We have previously characterized a recombinant human ClC-1 (hClC-1) expressed heterologously in both Xenopus oocytes and human embryonic kidney cells , and found that its functional attributes are identical to native skeletal muscle channels . Investigation of the dependence of gating properties on pH and chloride concentrations has helped us to develop a first gating model of this channel . Gating of hClC-1 appears to be mediated by two structurally distinct mechanisms: a fast voltage-dependent process and a slow voltage-independent process controlling opening and closing transitions through block of the pore by a probable cytoplasmic gate . The voltage-dependent process governs the distribution of open channels in three kinetically distinct states: fast deactivating, slow deactivating, and nondeactivating. More recently, we have characterized an hClC-1 mutation (G230E) that causes autosomal dominant myotonia congenita and confers altered ion selectivity on the channel (Fahlke et al., 1997 a ). 
In that report, examination of the effect of I Ϫ on both wildtype and mutant channels provided preliminary evidence for the existence of two distinct anion binding sites within the hClC-1 pore. To improve our understanding of the basic properties of the hClC-1 pore, and to learn more about the relationships between gating and permeation, we now investigate in more detail the interaction of I Ϫ and other analogs of the normal permeant ion with these binding sites. One specific goal of this investigation was to determine if the different conducting states defined by our gating model exhibit differences in ion binding characteristics. This work reveals clear differences in the characteristics of I Ϫ block exhibited by the three ki-netic states and implies that conformational changes within the conduction pathway are occurring during gating. We also define the hClC-1 conduction pathway as a multi-ion pore, and argue in favor of an ion-selectivity mechanism based on differential ion binding. These results provide much needed characterization of the ion permeation process and clarify a functional link between gating and ion conductance in hClC-1. Oocyte Preparation and Two-Electrode Voltage Clamp Isolation, maintenance, and cRNA injection of Xenopus oocytes were performed as previously described (Beck et al., 1996). Standard two-microelectrode voltage clamp was performed using an amplifier (OC-725B; Warner Instruments Corp., Hamden, CT). Microelectrodes were pulled from borosilicate glass to have a resistance between 0.7 and 1.3 M ⍀ when filled with 3 M KCl. The oocytes were bathed in ND-96 solution (Dascal et al., 1986) containing 96 mM NaCl, 4 mM KCl, 1.8 mM CaCl 2 , 1 mM MgCl 2 , 5 HEPES (adjusted to pH 7.4 with NaOH). To test the effect of various anions on hClC-1 currents, the bathing solution was changed to a modified ND-96 in which NaCl was replaced by an equimolar quantity of NaSCN, NaNO 3 , NaCH 3 SO 3 , Na-cyclamate, or Na-gluconate. For the calculation of relative current amplitudes, both instantaneous and late current amplitudes were divided by the instantaneous current amplitude measured at Ϫ 145 mV in the same cell using standard ND-96 solution. In general, endogenous oocyte chloride currents can be distinguished easily from hClC-1 currents by their different kinetics. Among the various reported types of endogenous oocyte chloride currents, calcium-activated chloride currents bear the closest resemblance to hClC-1. However, calcium-activated chloride currents display a clear activating phase upon voltage steps to positive potentials (Tokimasa and North, 1996) that is absent in hClC-1-expressing cells at similar test potentials . Therefore, oocytes exhibiting an activating component larger than 1 A at ϩ 55 mV were excluded from analysis. Whole-Cell Recording HEK-293 cells (CRL 1573; American Type Culture Collection, Rockville, MD) were stably transfected by the calcium phosphate precipitation method using the plasmid pRc/CMV-hClC-1 as described . Standard whole-cell recording (Hamill et al., 1981) was performed using an Axopatch 200A amplifier (Axon Instruments, Foster City, CA). Pipettes were pulled from borosilicate glass and had resistances of 0.6-1.0 M ⍀ . Cells with peak current amplitudes Ͻ 10 nA were used for analysis. More than 60% of the series resistance was compensated by an analog procedure. The calculated voltage error due to series resistance was always Ͻ 5 mV. No digital leakage or capacitive current subtraction was used. 
Currents were low pass filtered with an internal amplifier filter and digitized with sampling rates at least three times larger than the filtering frequency using pClamp (Axon Instruments). Cells were held at 0 mV for at least 15 s between test pulses. The standard bath solution contained (mM): 140 NaCl, 4 KCl, 2 CaCl 2 , 1 MgCl 2 , and HEPES, pH 7.4. In experiments testing the effect of external I Ϫ or other anions, the standard bath solution was modified by replacing variable amounts of NaCl with equimolar quantities of NaI or the corresponding sodium salt of other an-ions. For determination of I Ϫ dissociation constants ( K D ) for the extracellular ion binding site, measurements were initially made in an extracellular solution composed of (mM): 140 Na-gluconate, 4 KCl, 2 CaCl 2 , 1 MgCl 2 , 5 HEPES, pH 7.4, and then changed to a modified solution in which Na-gluconate was replaced by NaI. For experiments with I Ϫ -containing solutions, agar bridges (3 M KCl in 0.1% agar) were used to connect the bath solution to the amplifier. The standard pipette solution was (mM): 130 NaCl, 2 MgCl 2 , 5 EGTA, 10 HEPES, pH 7.4. In some experiments, CsCl was substituted for NaCl without appreciable differences in the results. For determination of K D for I Ϫ binding to the intracellular site, measurements were made in a pipette solution containing (mM): 50 NaCl, 80 Na-gluconate, 2 MgCl 2 , 5 EGTA, 10 HEPES, pH 7.4, or in solutions in which Na-gluconate was replaced by equimolar NaI. All solutions were adjusted to pH 7.4 with NaOH or CsOH. Unless otherwise stated, standard solutions were used. For experiments with I Ϫ -containing solutions, agar bridges (3 M KCl in 0.1% agar) placed inside the patch pipette were used to connect solutions with the amplifier. Excised Patch Recording For recording from inside-out excised patches (see Fig. 7), pipettes were pulled from borosilicate glass to have resistances between 1.2 and 2.0 M ⍀ , coated with Sylgard, and filled with standard extracellular solution. The bath solution was identical to the standard intracellular solution described above for whole-cell recording. Using a solution changing system (SF-77 Perfusion Fast-Step System; Warner Instrument Corp.), the intracellular membrane side of the patch was first exposed to a solution containing (mM): 50 NaCl, 80 Na-gluconate, 2 MgCl 2 , 5 EGTA, 10 HEPES, pH 7.4, and baseline recordings were made. Subsequently, the solution was changed to (mM): 50 NaCl, 50 NaI, 30 Na-gluconate, 2 MgCl 2 , 5 EGTA, 10 HEPES, pH 7.4, and the measurements were repeated. Data Analysis Current deactivation was tested after a 50-ms prepulse to ϩ 55 mV. The time course of current deactivation was fit with an equation containing a sum of two exponentials and a time-independent value as follows: I(t) ϭ a 1 exp( Ϫ t / 1 ) ϩ a 2 exp( Ϫ t / 2 ) ϩ d , where a 1 , a 2 , and d are amplitude terms, 1 and 2 are time constants for fast and slow deactivation, respectively. The fractional current amplitudes were calculated by dividing by the peak current amplitude ( I max ) as follows: A 1 ϭ a 1 / I max , A 2 ϭ a 2 / I max , C ϭ d / I max . For the calculation of I Ϫ dissociation constants ( K D ) for the external binding site, fractional current amplitudes (A 1 , A 2 , and C) determined at several test potentials were plotted versus the extracellular [I Ϫ ]. The K D values at given test potentials were obtained as described in results . 
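The deactivation analysis just described, a two-exponential fit plus a constant followed by normalization to the peak current, can be sketched as follows; the time base, the current trace, and the initial guesses are placeholders.

import numpy as np
from scipy.optimize import curve_fit

def deactivation(t, a1, tau1, a2, tau2, d):
    """I(t) = a1*exp(-t/tau1) + a2*exp(-t/tau2) + d."""
    return a1 * np.exp(-t / tau1) + a2 * np.exp(-t / tau2) + d

def fit_deactivation(t_ms, current, p0=(1.0, 10.0, 0.5, 50.0, 0.2)):
    """Fit the deactivating current after the +55 mV prepulse and return the
    time constants and fractional amplitudes A1, A2 and C."""
    (a1, tau1, a2, tau2, d), _ = curve_fit(deactivation, t_ms, current,
                                           p0=p0, maxfev=10000)
    i_max = np.max(np.abs(current))      # peak current amplitude
    return {"tau1": tau1, "tau2": tau2,
            "A1": a1 / i_max, "A2": a2 / i_max, "C": d / i_max}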
Calculation of K D for the intracellular binding site was performed by plotting reciprocal values of the deactivation time constants ( 1 Ϫ 1 , 2 Ϫ 1 ) versus the intracellular [I Ϫ ], and then deriving K D as a fit parameter by the method explained in results. r e s u l t s Block of hClC-1 by External I Ϫ To examine the effect of external I Ϫ on hClC-1, initial experiments were performed in Xenopus oocytes to permit current recording from the same cell in the presence of various external solutions. Fig. 1 illustrates current recordings made in oocytes expressing hClC-1 before ( Fig. 1 A) and after ( Fig. 1 B) substitution of 96 mM NaCl by equimolar NaI in the extracellular solution. External I Ϫ causes a significant reduction in the 'instantaneous' current amplitude measured 2 ms after hyperpolarizing voltage steps from a holding potential of Ϫ30 mV, and shifts the reversal potential to a more positive voltage (Fig. 1, C and D), indicating a greater permeability of hClC-1 for Cl Ϫ than I Ϫ . In addition, external I Ϫ affects hClC-1 gating, resulting in less complete deactivation (Fig. 1 B) and loss of the characteristic inverted bell shape of the steady state current-voltage relationship observed at negative test potentials ( Fig. 1 D). Similar effects of external I Ϫ on hClC-1 have been observed in mammalian cells by whole cell recording (Fahlke et al., 1997a). The effect of external I Ϫ on hClC-1 current amplitude is concentration and voltage dependent. Increasing the external I Ϫ from 16 to 96 mM causes a concentration-dependent reduction of instantaneous ( Fig. 1 C) normalized inward current amplitudes recorded at voltages negative to the reversal potential. At voltages positive to the reversal potential, reduction of normalized instantaneous outward current amplitude is near saturation at 16 mM I Ϫ . Thus, external I Ϫ reduces both inward and outward current, having its greatest effect on outward currents measured at positive potentials. These data are consistent with voltage-dependent block of hClC-1 by external I Ϫ . Block of hClC-1 by Other Extracellular Anions We investigated the effect of other extracellular anions on hClC-1 expressed in oocytes by substituting 48 mM Cl Ϫ in the external solution by equimolar amounts of Br Ϫ , SCN Ϫ , NO 3 Ϫ , CH 3 SO 3 Ϫ , cyclamate, or gluconate. Fig. 2 shows representative voltage-clamp recordings for control conditions (Fig. 2 A), 48 mM CH 3 SO 3 Ϫ (Fig. 2 B), 48 mM NO 3 Ϫ (Fig. 2 C), and 48 mM SCN Ϫ (Fig. 2 D). The equimolar substitution of Cl Ϫ with each of these anions causes a variable reduction of inward current amplitudes within the negative potential range. These effects vary depending on the replaced anion and therefore are not simply caused by reduction of the external Cl Ϫ concentration. Plots of normalized current amplitudes versus voltage for seven different ionic conditions are shown in Fig. 3. Fig. 3, A and B shows substitution experiments with anions believed to be permeant (Br Ϫ , SCN Ϫ , NO 3 Ϫ ), while Fig. 3, C and D illustrate data from experiments in which the substituted anions were suspected of being impermeant (CH 3 SO 3 Ϫ , cyclamate, and gluconate). All anion substitutions cause a shift of the reversal potential to more positive voltages, indicating lower permeability, relative to Cl Ϫ , for each of the tested anions. 
In addition, normalized inward and outward current amplitudes are blocked by all anions except gluconate (the effect of gluconate on outward current can be explained by the reduction of external Cl Ϫ concentration). Blocking anions also affect gating properties, as can be observed in the steady state current-voltage plots, especially at voltages negative to the reversal potential (Fig. 3, B and D). The blocking potency and the potency to change current kinetics are correlated (Fig. 3 E). Gating Effects of External I Ϫ on hClC-1 We next studied in more detail the effect of external I Ϫ on hClC-1 gating properties by using whole-cell recording of HEK-293 cells stably expressing the channel . Either in the absence or presence of extracellular I Ϫ , the time course of current deactivation upon hyperpolarizing voltage steps can be fit with a sum of two exponentials and a constant value. These fits provide two different data sets: the time constants of deactivation ( 1 and 2 for fast and slow deactivation, respectively), and the fractional amplitudes of two deactivating and one nondeactivating current components. We interpret the fractional current amplitudes as estimates of the proportion of channels existing in each of three different kinetic states: fast, slow, and nondeactivating . Fig. 4 illustrates the effect of external I Ϫ on these gating parameters for hClC-1 stably expressed in HEK-293 cells. Replacement of 40 mM NaCl by an equimolar concentration of NaI in the external solution has no effect on the time constants of deactivation ( Fig. 4 A), but there are dramatic changes in the voltage dependence of the fractional current amplitudes. In Fig. 4, B-D, a pronounced concentration-dependent leftward shift of the fractional amplitudes for fast deactivating ( Fig. 4 B, A 1 ), slow deactivating ( Fig. 4 C, A 2 ), and nondeactivating ( Fig. 4 D, C) current components can be seen. In the negative voltage range in which this effect was observed, there is an increase of the fraction of channels that either deactivate with a slow time constant or do not deactivate at all. This behavior of the fractional current amplitudes explains the less com- Figure 2. Effect of other extracellular anions on hClC-1 expressed in Xenopus oocytes. Current responses to voltage steps between Ϫ125 and ϩ35 mV in 40-mV steps from a holding potential of Ϫ30 mV from a single oocyte are shown. In A, the extracellular solution was ND-96. For the three other recordings, 48 mM NaCl was substituted by an equimolar concentration of NaCH 3 SO 3 (B), NaNO 3 (C), and NaSCN (D). plete deactivation observed in the presence of external I Ϫ (Fig. 1 B). Kinetic States Differ in Affinity for External I Ϫ Because hClC-1 is conducting in all three kinetic states and I Ϫ binds to a site within the conduction pathway (Fahlke et al., 1997a), it is reasonable to hypothesize that I Ϫ can bind to the channel whether it is in the fast, slow, or nondeactivating state (scheme i). In this scheme, the channel can exist in one of three states, A 1 (fast deactivating), A 2 (slow deactivating), and C (nondeactivating) in the absence of I Ϫ , and similarly it can exist in one of three states (A 1 -I Ϫ , A 2 -I Ϫ , and C-I Ϫ ) when I Ϫ is bound. The I Ϫ concentration dependence of the fractional current amplitudes (Fig. 4 A) indicates that the rate constants connecting the I Ϫ -bound states (A 1 -I Ϫ , A 2 -I Ϫ , and C-I Ϫ ) are different from those connecting the unbound states (A 1 , A 2 , C). 
At saturating I Ϫ concentration, all channels are occupied by I Ϫ , and the measured fractional current amplitudes thus represent the distribution of channels only in the three I Ϫ -bound kinetic states. At all voltages, the measured fractional amplitudes of the fast (A 1 ) and the slow (A 2 ) deactivating component reach limiting values of zero (Fig. 5, A and B) at high I Ϫ concentrations. Correspondingly, the constant fractional amplitude (C) approaches a value of one (Fig. 5 C). Therefore, reaction rate constants between the three different I Ϫ bound states (A 1 -I Ϫ , A 2 -I Ϫ , Figure 3. Normalized current-voltage relationships for hClC-1 in various extracellular solutions. For each anion tested, 48 mM NaCl was replaced by equimolar NaX, with X denoting different anions. For each anion, instantaneous (A and C) and steady state (B and D) currents were measured during voltage steps from a holding potential of Ϫ30 mV and normalized to the instantaneous current amplitude at Ϫ145 mV for the same oocyte in ND-96. Each point represents mean Ϯ SEM for at least three cells. (A) Voltage dependence of the instantaneous current amplitude for extracellular solutions containing 96 mM NaCl (᭹), 48 mM NaCl ϩ 48 mM NaBr (ᮀ), 48 mM NaCl ϩ 48 mM NaSCN (᭡), and 48 mM NaCl ϩ 48 mM NaNO 3 (᭞). (B) Voltage dependence of the steady state amplitudes from the same recordings as shown in A. (C) Voltage dependence of the instantaneous current amplitude for extracellular solutions containing 96 mM NaCl (᭜), 48 mM NaCl ϩ 48 mM Na-gluconate (᭺), 48 mM NaCl ϩ 48 mM Na-cyclamate (᭹), and 48 mM NaCl ϩ 48 mM NaCH 3 SO 3 (ᮀ). (D) Corresponding steady state values from the experiment shown in C. (E) Correlation between a blocking parameter (I peak in the presence of 48 mM anion divided by I peak in the presence of Cl Ϫ measured at Ϫ145 mV) and relative late current (I late divided by I peak measured at Ϫ145 mV) for various extracellular anions. (scheme i) and C-I Ϫ ) must be negligible, and we can simplify the state diagram (scheme ii). In this scheme, each I Ϫ bound state can be reached only from the corresponding unbound state. Transitions between corresponding bound and unbound states can be characterized by two rate constants: a first order dissociation constant (k off ) and a second order association constant (k on ) for I Ϫ . The proportion of channels in a particular kinetic state (i) occupied by I Ϫ is given by i is a dissociation constant equal to the ratio k off,i /k on,i . This expression assumes that binding of I Ϫ to the channel equilibrates much faster than the voltage-dependent gating process, and that only one external I Ϫ binds to the channel in each kinetic state. The latter assumption is tested below. The experimentally determined fractional current amplitude for a given kinetic state (A i ) in the presence of I Ϫ is a weighted mixture of two different probabilities: p I (i) for I Ϫ occupied channels, and p o (i) for unoccupied channels; these variables are related in the following equation: (1) which simplifies to: (2) in which i is a given kinetic state (fast, slow, or nondeactivating), A(i) is the probability for a given state at a specific I Ϫ concentration and test voltage, while p o denotes the probability without and p I with I Ϫ bound to the channel. This probability expression can provide information on the dissociation constant for I Ϫ in the three different kinetic states, and by analyzing data from several test potentials, we can determine the voltage dependence of K D . 
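The displayed bodies of Eqs. 1 and 2 did not survive conversion, but the weighted mixture described in the text amounts to a single-site binding curve in [I−]: the measured fractional amplitude moves from p_o (no I− bound) to p_I (I− bound) with dissociation constant K_D. The sketch below fits that assumed form at one test potential; repeating it across potentials gives the K_D(V), p_o(V) and p_I(V) plotted in Fig. 6. All numbers are invented.

```python
import numpy as np
from scipy.optimize import curve_fit

# Assumed form of the weighted mixture (Eqs. 1-2):
#   A([I]) = p_o * K_D/(K_D + [I]) + p_I * [I]/(K_D + [I])
# i.e. a hyperbolic transition from the I-free value p_o to the
# I-saturated value p_I, governed by a single dissociation constant K_D.

def frac_amp(iodide, p_o, p_i, K_D):
    occ = iodide / (iodide + K_D)          # fraction of channels with I- bound
    return p_o * (1.0 - occ) + p_i * occ

iodide = np.array([0.0, 2.0, 5.0, 10.0, 20.0, 40.0, 90.0])       # mM (invented)
A1_obs = np.array([0.55, 0.46, 0.37, 0.27, 0.17, 0.10, 0.05])    # fast component

popt, _ = curve_fit(frac_amp, iodide, A1_obs, p0=(0.5, 0.0, 10.0))
p_o, p_i, K_D = popt
print(f"p_o = {p_o:.2f}, p_I = {p_i:.2f}, K_D = {K_D:.1f} mM")
```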
This analysis does not exclude the possibility that more than one external binding site exerting similar effects on gating exist per channel pore. To test this possibility, we determined Hill coefficients (n) from plots of fractional current amplitude versus [I Ϫ ] by fitting a modified version of Eq. 2: (3) We determined the concentration dependence of the effect of external I Ϫ on the three fractional current amplitudes (Fig. 5). Fractional current amplitudes were determined from current recordings made during various test pulses between Ϫ75 and Ϫ165 mV after a max- imally activating prepulse (ϩ55 mV) in the presence of different I Ϫ concentrations. To avoid effects caused by changing concentrations of Cl Ϫ , the extracellular Cl Ϫ concentration was held constant (10 mM) and bath gluconate was varied inversely with changes in I Ϫ concentration. Gluconate has no effect on gating properties of hClC-1 (Fig. 3). We evaluated our model in two steps. First, we determined the Hill coefficient for each fractional current amplitude by fitting regression lines of the log (A/ (A max Ϫ A) vs. log([I Ϫ ]) relationship where A represents the different fractional amplitudes (Fig. 5, A-C, insets). All calculated slopes had values Յ1, consistent with a single external I Ϫ binding site. Next, we fitted Eq. 2 to the data (Fig. 5, A-C) to obtain the voltage dependence of the dissociation constants for I Ϫ in each kinetic state and the voltage dependence of p o and p I (these fit parameters are plotted in Fig. 6, see below). For the A 1 , A 2 , and C components, the data are well fit with a single hyperbola, although data for the slow de- activating component (A 2 ) are more scattered than for the other two components. This suggests that this model is a reasonable first approximation of the interaction between I Ϫ and hClC-1. This analysis gives us the dissociation constant, K D , for external I Ϫ binding to the channel in each kinetic state (Fig. 6, A and B). For the fast deactivating component (A 1 ), K D is nearly voltage independent. By contrast, for both the slow deactivating component (A 2 ) and the nondeactivating component, the K D is voltage dependent and can be well fit with the Woodhull formula (Woodhull, 1973), giving the K D (0 mV) and the electrical distance ␦ (Fig. 6, A and B, Table I). Furthermore, fits of the data in Fig. 5 with Eq. 2 provide limiting values of P at very high (p I ) (Fig. 6 C), or zero (p o ) (Fig. 6 D) I Ϫ concentration that can be used to describe the fractional current amplitudes when all external binding sites are either occupied or unoccupied by I Ϫ . The plot of the calculated p o for the three different kinetic states resembles the experimentally determined voltage-dependent behavior of the fractional current amplitudes measured in the absence of I Ϫ . By contrast, the plot of calculated p I for the three different states indicates that binding of external I Ϫ locks the channel in the nondeactivating state (Fig. 6 C). Block of hClC-1 by Internal I Ϫ To examine the effect of internal I Ϫ on hClC-1, we initially recorded currents from inside-out patches excised from cells stably expressing the channel. Currents were recorded from the same patch before and after application of 50 mM NaI to the cytoplasmic face of the membrane. In the presence of 50 mM I Ϫ , the inward current amplitude is greatly reduced, whereas the outward current is unchanged (Fig. 7, A and B). 
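The Woodhull (1973) fit mentioned above for the voltage dependence of the external-site K_D (A2 and C components; Fig. 6, Table I) has the simple exponential form sketched below. The sign convention, the electrical-distance definition, and every number are assumptions chosen only to show the shape of the fit, not the paper's data or exact parameterization.

```python
import numpy as np
from scipy.optimize import curve_fit

# Assumed Woodhull-type form for block by an external monovalent anion
# (valence z = -1) sensing a fraction delta of the membrane field:
#   K_D(V) = K_D(0 mV) * exp(z * delta * F * V / (R * T))

R, F, T = 8.314, 96485.0, 295.0
z = -1.0

def woodhull(V_mV, K0, delta):
    return K0 * np.exp(z * delta * F * (V_mV * 1e-3) / (R * T))

V_mV = np.array([-165.0, -145.0, -125.0, -105.0, -85.0, -75.0])   # invented
K_D  = np.array([51.0, 35.0, 23.0, 16.0, 11.0, 9.0])              # mM, invented

popt, _ = curve_fit(woodhull, V_mV, K_D, p0=(5.0, 0.4))
print(f"K_D(0 mV) = {popt[0]:.1f} mM, electrical distance delta = {popt[1]:.2f}")
```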
In addition to reduction of the inward current amplitude, there is an apparent slowing of the deactivation process ( Fig. 7 B), and the pronounced inward rectification of the instantaneous current-voltage relationship is abolished (Fig. 7 C). We next examined the concentration dependence of the effects of internal I Ϫ on hClC-1 by using whole cell recording (Fig. 8). Currents recorded from cells exposed to various intracellular I Ϫ concentrations were normalized to levels measured at the most positive test potential, a valid procedure in view of the demonstrated lack of effect of internal I Ϫ on outward current. Fig. 8, A and B illustrates the concentration-dependent reduction of normalized inward current amplitude by internal I Ϫ . The effects are consistent with intracellular I Ϫ block of hClC-1 currents. Figure 7. Effect of intracellular I Ϫ on hClC-1 currents in excised inside-out patches from HEK-293 cells. The patch pipette was filled with standard extracellular solution, and the bath contained standard intracellular solution. The intracellular side of the patch was exposed to a solution containing (mM): 50 NaCl, 80 Na-gluconate, 2 MgCl 2 , 5 EGTA, 10 HEPES, pH 7.4 (A); or (mM): 50 NaCl, 50 NaI, 30 Na-gluconate, 2 MgCl 2 , 5 EGTA, 10 HEPES, pH 7.4 (B). (A and B) Current responses to voltage steps between Ϫ145 and ϩ95 mV in 60-mV steps. Each voltage step was preceded by a 300-ms prepulse to ϩ50 mV, and followed by a fixed test pulse to Ϫ125 mV. (C and D) Voltage dependence of the instantaneous and late current amplitudes as indicated. Block of hClC-1 by Other Internal Anions Other anions exert similar effects on hClC-1 when present inside the cell. Fig. 9 A shows whole-cell recordings made from cells dialyzed intracellularly with 50 mM NaI (Fig. 9, A and B), 50 mM NaSCN (Fig. 9, C and D), or 50 mM NaNO 3 (Fig. 9, E and F). Compared with recordings made with standard pipette solutions, the deactivation process is much slower for all tested anions, but this kinetic effect is most pronounced for I Ϫ . Analysis of the voltage dependence of the instantaneous current amplitude (Fig. 9, B, D, and F) shows that the degree of inward rectification of the instantaneous current amplitude is also decreased. These results indicate that several anions are able to interact with an internal ion binding site. As observed for the external binding site, the blocking potency of internally applied anions is correlated with the potency to change current kinetics (Fig. 9 G). Kinetic Effects of Internal I Ϫ We further evaluated the kinetic changes caused by internal I Ϫ by examining the channel with whole cell recording in the presence of various concentrations of NaI in the pipette solution. The time course of current deactivation measured under these conditions could be well fit with a function consisting of two exponentials and a constant term as described in methods. Both fast and slow deactivation time constants are increased in a concentration-dependent manner by intracellular I Ϫ , and by contrast to the effect of external I Ϫ , both deactivation time constants become voltage dependent (Fig. 10, A and B). Interestingly, however, the two time con-stants behave in opposite directions in response to voltage. Whereas the fast time constant increases with more negative test potentials (Fig. 10 A), the slow time constant decreases with hyperpolarization ( Fig. 10 B). We have previously modeled hClC-1 deactivation as a first order process mediated by a cytoplasmic gate . 
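This concentration dependence is what allows K_D for the internal site to be extracted from the deactivation time constants, as formalized in Eq. 4 in the next paragraph. A minimal sketch of one plausible reading of that analysis follows (the measured deactivation rate is treated as an occupancy-weighted mixture of the rates of I−-free and I−-bound channels); the exact form of Eq. 4 is garbled above, and the numbers here are invented, so this is illustration only.

```python
import numpy as np
from scipy.optimize import curve_fit

# Assumed occupancy-weighted form (one reading of Eq. 4):
#   1/tau([I]) = (1/tau_min) * K_D/(K_D + [I]) + (1/tau_max) * [I]/(K_D + [I])
# A fitted 1/tau_max near zero corresponds to the paper's conclusion that a
# channel occupied by internal I- cannot close.

def inv_tau(iodide, inv_tau_min, inv_tau_max, K_D):
    occ = iodide / (iodide + K_D)
    return inv_tau_min * (1.0 - occ) + inv_tau_max * occ

iodide   = np.array([0.0, 5.0, 10.0, 25.0, 50.0])        # mM internal I- (invented)
tau_fast = np.array([10.0, 13.0, 16.0, 24.0, 35.0])      # ms at one test potential

popt, _ = curve_fit(inv_tau, iodide, 1.0 / tau_fast, p0=(0.1, 0.0, 20.0))
inv_min, inv_max, K_D = popt
print(f"tau_min = {1.0/inv_min:.1f} ms, 1/tau_max = {inv_max:.4f} /ms, K_D = {K_D:.1f} mM")
```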
The occurrence of fast deactivating, slow deactivating, and constant current components corresponds with the existence of three populations of channels differing in the affinity of the internal vestibule of the ionic pore for this blocking particle. For each kinetic state in the presence of internal I Ϫ , there will be a mixed population of channels whose internal binding site will be occupied or not occupied by I Ϫ (see Scheme 2). Therefore, both deactivation time constants in the presence of internal I Ϫ will be a weighted mixture resulting from these two channel populations. By analogy to Eq. 2, we can relate the I Ϫ dissociation constant (K D ) for the internal binding site to the deactivation time constants by the following formula: (4) where i is either the fast ( 1 ) or slow ( 2 ) deactivation time constant determined in the presence of internal I Ϫ , and min , max , and K D are fit parameters. We restricted this analysis to the fast-and slow-deactivating components because, in the presence of internal I Ϫ , we are unable to distinguish the constant component from an incomplete deactivation in the fast or slow mode. We assumed that intracellular Cl Ϫ equilibrates with the internal binding site much faster than the deactivation process. We determined K D values for these two kinetic states at different test potentials (Fig. 11, A and B). The two kinetic states analyzed in this manner exhibit differences in their affinities for I Ϫ and in the voltage dependence of the effect (Fig. 11 C, Table II). Whereas the fast deactivating state is characterized by a voltagedependent K D , the values for the slow deactivating component are nearly voltage independent. As expected, the derived values for min (Fig. 11 D) closely resemble the experimentally determined values measured in the absence of internal I Ϫ (see Fig. 10). For both the fast and slow processes, the fit parameter max Ϫ1 is zero, indicating that a channel internally occupied by I Ϫ cannot close. Multiple Occupancy of the hClC-1 Conduction Pathway The experiments described above clearly demonstrate the distinct effects of external versus internal I Ϫ on hClC-1, and suggest the presence of two separate ion binding sites within the ion conduction pathway of hClC-1. The binding site accessible to internal I Ϫ appears to interact more directly with the channel closing mechanism, whereas the externally accessible site has effects on the voltage-dependent distribution of channels in the aforementioned three kinetic states. In our previously described gating model of hClC-1, the latter observation would fit with an alteration in voltage sensing caused by external I Ϫ . , and E) Current responses to voltage steps from a holding potential of 0 mV to test potentials between Ϫ165 and ϩ75 mV in 80-mV steps. Each step is followed by a test potential of Ϫ85 mV. Cells were bathed in standard extracellular solution and perfused intracellularly with a solution containing (mM): 50 NaCl, 50 NaX, 30 Na-gluconate, 2 MgCl 2 , 5 EGTA, 10 HEPES where X denotes I Ϫ (A), NO 3 Ϫ (C), or SCN Ϫ (E). (B, D, and F) Voltage dependence of the instantaneous current amplitudes from recordings shown in A, C, and E. (G) Correlation of the potency to block Cl Ϫ currents from the intracellular site and the fast deactivation time constant 1 measured at a test potential of Ϫ145 mV. We defined a blocking parameter by dividing the current amplitude measured at ϩ55 mV (which is not affected by intracellular anions) by the amplitude at Ϫ145 mV for each cell. 
Data points represent mean Ϯ SEM from three cells. The evidence suggesting that hClC-1 has two distinct ion binding sites within its conduction pathway raises the question of whether these sites can be occupied simultaneously. One approach to address whether hClC-1 has a multi-ion pore is by testing for concentration dependence of the permeability ratio under biionic conditions (Hille, 1992). Therefore, we measured current reversal potentials with whole-cell patch clamp in HEK-293 cells stably expressing hClC-1 under conditions in which Cl Ϫ was the only extracellular permeant anion and I Ϫ was the only intracellular permeant anion. Measurements made with various concentrations of Cl Ϫ and I Ϫ in a fixed ratio revealed concentration dependence of the reversal potential and, by inference, of the P I /P Cl permeability ratio (Fig. 12 A). This finding is consistent with ion-ion interactions within a multi-ion pore. A second line of evidence supporting the idea that hClC-1 is a multiply occupied pore comes from the electrical distance as obtained from Woodhull fits to the voltage dependence of the I Ϫ dissociation constant to the slow deactivating component (Table I). This number is greater than one, a finding typical for multiion channels (Hille and Schwarz, 1978). Based on results demonstrated for ClC-0 (Pusch et al., 1995), we also tested for anomalous mole fraction behavior in hClC-1 expressed in oocytes using mixtures of Cl Ϫ with either I Ϫ or SCN Ϫ . Fig. 12, B and C shows plots of normalized peak instantaneous current versus the mole fraction of the tested anion. In these experiments, we observed no minimum value for normalized current at any tested mole fraction. We also tested mixtures of Cl Ϫ with NO 3 Ϫ and similarly did not observe a minimum value in current versus mole fraction plots. The absence of anomalous mole fraction behavior in hClC-1 does not exclude a multi-ion permeation mechanism (Hille, 1992). Functional Alterations of Ion Binding Sites by Voltage-dependent Gating In this paper, we extend our earlier observations suggesting the existence of two distinct ion binding sites in the hClC-1 conduction pathway (Fahlke et al., 1997a). Specifically, we have characterized in more detail the ability of I Ϫ and other anions to block hClC-1 when applied from both sides of the cell membrane. Examining the effects of external versus internal anion block of hClC-1 has helped us distinguish two fundamentally different ion-channel interactions by virtue of their distinct effects on the kinetics and voltage dependence of channel gating. The ion binding sites responsible for mediating block of Cl Ϫ current appear to be identical to those through which anions exert their effects on gating based upon the close correlation of the two phenomena (Figs. 3 and 9). Interactions between blocking anions and the channel pore depend upon the kinetic state, as illustrated by the effect of voltage-dependent gating events on the quantitative parameters of external and internal ion binding (Tables I and II). By examining the I Ϫ concentration dependence of two parameters, fractional current amplitudes (reflecting external I Ϫ effects) and the deactivation time constants (reflecting internal I Ϫ effects), we were able to discern remarkable changes in the apparent affinity of hClC-1 for I Ϫ during voltagedependent gating transitions. In addition, we observed significant alterations in the electrical distance of I Ϫ binding occurring with changes in the kinetic state. 
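The logic of the bi-ionic test described above can be made explicit with a short calculation: under the independence (GHK) assumption, the bi-ionic reversal potential depends only on the concentration ratio, so scaling both solutions by a common factor should not move E_rev at all. The observed concentration dependence (Fig. 12 A) is therefore evidence of ion-ion interaction. The permeability ratio and concentrations below are invented for illustration.

```python
import numpy as np

# Under GHK independence, with I- the only internal and Cl- the only external
# permeant anion, E_rev = (RT/F) * ln(P_I*[I]_i / (P_Cl*[Cl]_o)), which is
# unchanged when [I]_i and [Cl]_o are scaled together.

R, F, T = 8.314, 96485.0, 295.0
RT_F_mV = R * T / F * 1e3

def e_rev_biionic(P_I_over_P_Cl, i_in_mM, cl_out_mM):
    return RT_F_mV * np.log(P_I_over_P_Cl * i_in_mM / cl_out_mM)

for scale in (0.5, 1.0, 2.0):
    print(scale, round(e_rev_biionic(0.2, 100.0 * scale, 100.0 * scale), 2))
# Identical values at every scale; a measured concentration dependence of
# E_rev is therefore inconsistent with a singly occupied, independent pore.
```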
These state-dependent changes in ion binding reveal transitional alterations in the interactions between the open pore and blocking anions as a particular mechanistic feature of voltage-dependent gating in hClC-1. These observations suggest that voltage-dependent gating events may be accompanied by structural rearrangements within the pore that alter the location of the ion binding sites within the electrical field and affect ion binding affinity. These data support our previously published hypothesis of voltage-dependent transitions occurring between conducting states in hClC-1 . The observed differences in the electrical distances of the binding sites are quite large, and seem to indicate drastic structural rearrangements of the pore during gating. However, electrical distances are not comparable with physical distances because of the inadequacy of the constant field assumption. In multiply t a b l e i i occupied ion channels (Hille and Schwarz, 1978), measured electrical distances can be much greater than the physical distances and may even exceed unity. In these ion channels, even slight physical movements of ion binding sights can cause large differences in the measured electrical distances. Relationship between Ion Permeation and Gating in hClC-1 It is very apparent from this study of hClC-1 and previously published work on the Torpedo channel, ClC-0 (Pusch et al., 1995;Chen and Miller, 1996), that ion permeation and gating are functionally linked in ClC channels. At the present time, there are differing opinions regarding the explanation for this functional linkage. In ClC-0, evidence has been presented that translocation of the permeating ion through the conduction pathway confers the majority of the voltage dependence of activation, and that there is little or no contribution of intrinsic protein charge movement to this process (Chen and Miller, 1996). We have presented a different viewpoint on the mechanism of voltage-dependent gating in hClC-1 . Based upon macroscopic analysis of gating, we have proposed a model in which voltagedependent conformational changes modulate gating by altering the affinity of the channel for a cytoplasmic blocking particle. These voltage-dependent conformational changes result in three different kinetic states in hClC-1 distinguished by their time course of deactivation. This voltage-responsive phenomenon can be modulated by permeant ions. As presented in this paper, occupation of the external binding site by I Ϫ locks the channel in the nondeactivating state. Chen and Miller (1996) recently examined the Cl Ϫ dependence of ClC-0 activation using measurements of opening rate constants derived from single channel recording of purified channels reconstituted into planar lipid bilayers. They found that the external Cl Ϫ concentration giving a half-maximal opening rate was not affected by the membrane potential, and this observation led them to conclude that initial binding of Cl Ϫ to the closed channel is voltage independent. Unlike the closed channel, open ClC-0 channels in lipid bilayers exhibit two ion binding sites that can sense the membrane potential and are located at electrical distances of 0.35 and 0.65 from the cis side (White and Miller, 1981). Although it is not entirely clear, it seems logical that the ion binding site in the closed channel that mediates Cl Ϫ activation is the same as the site accessible from the external solution in the open channel. 
In hClC-1, it is interesting to note that the site accessible to the external solution is minimally sensitive to voltage when the channel exists in the fast deactivating mode (␦ ϭ 0.1; Table I), but the same site is located more deeply in the electric field in both the slow deactivating and constant current conformations. Mechanism of Ion Selectivity of hClC-1 Channels The observation that almost every tested anion is permeant and yet capable of blocking the channel from both sides of the membrane provides information about the mechanism of ion selectivity in hClC-1. The qualitative similarities of effects of the various tested anions on current kinetics suggests that all of the anions we tested interact with the same binding sites. This idea is reinforced by the observed correlation between the potency to block inward current with the ability to alter macroscopic gating properties (Figs. 3 and 9). The two sites differ slightly in ionic rank order blocking potency (external site: SCN Ͼ I Ͼ NO 3 Ͼ CH 3 SO 3 Ͼ Br; internal site: I Ͼ NO 3 Ͼ SCN). By conventional wisdom, anions that block Cl Ϫ current do so because of higher affinity for a binding site within the conduction pathway. For hClC-1, the permeability sequence determined by examining reversal potentials in the presence of different external anion composition (Cl Ͼ SCN Ͼ Br Ͼ NO 3 Ͼ I Ͼ CH 3 SO 3 ) (Fahlke et al., 1997a) correlates inversely with the blocking potency sequence of the internal site, but less well with that of the external site. In qualitative terms, all of the tested anions less permeant than Cl Ϫ exert a blocking action. This is consistent with a mechanism of ion selectivity in hClC-1 based on differential ion binding rather than repulsion or size exclusion. Binding of ions to sites within the conduction pathway requires replacement of ion-solvent with ion-channel interactions (Eisenman and Horn, 1983;Hille, 1992), a process in which hydration energy is spent and electrostatic energy is released. The binding of larger (i.e., I Ϫ ) or polyatomic (i.e., SCN Ϫ , NO 3 Ϫ , CH 3 SO 3 Ϫ ) anions more tightly than Cl Ϫ to the hClC-1 binding sites suggests that hydration forces dominate the ion-channel interaction consistent with weak binding sites in Eisenman terminology (Wright and Diamond, 1977;Eisenman and Horn, 1983). These ion binding sites could conceivably consist of either a fixed charge with a large radius, or a weak dipole (Wright and Diamond, 1977). The measurement of permeability for different-sized anions can provide information about the hClC-1 pore size. Among the larger anions tested, CH 3 SO 3 (ionic diameter Х 0.50 nm; see Halm and Frizzell, 1992) can traverse the pore, whereas gluconate (ionic diameter ϭ 0.59 nm) is impermeant, thus giving an estimate of the minimum pore diameter between 0.5 and 0.6 nm. The inability of gluconate to permeate or block the hClC-1 pore suggests that both ion binding sites are located in a narrow part of the conduction pathway. This is in clear contrast to voltage-gated sodium and potassium channels in which a variety of blockers can bind to a wide vestibule but are impermeant because of size exclusion (Hille, 1992). The minimum diameter of the hClC-1 pore appears to be considerably larger than those found in the conduction pathway of voltage-gated potassium channels, but smaller than that of the nicotinic acetylcholine receptor. 
This estimated minimum pore diameter for hClC-1 is similar to that of apical membrane Cl− channels of secretory epithelial cells (Halm and Frizzell, 1992) and of the skeletal muscle calcium channel (McCleskey and Almers, 1985). Conclusion In summary, the experimental data we present here provide new insights into the nature of the ion conduction pathway in hClC-1. Our studies suggest that hClC-1 has a rather wide ionic pore that is multiply occupied and is functionally characterized by two distinct ion binding sites. Both sites appear to be weak interacting sites in Eisenman terminology, and the mechanism of ion selectivity in hClC-1 involves differential ion binding. Lastly, we provide evidence that the hClC-1 pore is dynamic in that conformational changes within the conduction pathway underlie the functional link between gating and permeation. These results provide a framework for understanding ion permeation in hClC-1 and will facilitate future experiments aimed at defining the structure and function of ClC channels. We are grateful to Dr. Louis DeFelice and Dr. Richard Horn for their critical reviews of the manuscript.
2016-10-12T18:19:45.623Z
1997-11-01T00:00:00.000
{ "year": 1997, "sha1": "264cef73bc7da844f62e6b5ae18f373ffe1fe072", "oa_license": "CCBYNCSA", "oa_url": "http://jgp.rupress.org/content/110/5/551.full.pdf", "oa_status": "BRONZE", "pdf_src": "PubMedCentral", "pdf_hash": "a670437910d5d70cf993accab1429147560a38d7", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Medicine", "Chemistry" ] }
119155570
pes2o/s2orc
v3-fos-license
Problems with mean curvature-like operators and three-point boundary conditions In this paper we study the existence of solutions for a new class of nonlinear differential equations with three-point boundary conditions. Existence of solutions are obtained by using the Leray-Schauder degree. Several papers have been recently devoted to the study of nonlinear ordinary differential equations of the form (1.1), where l(u, u ) = 0 denotes the periodic, Neumann or Dirichlet boundary conditions. In particular, for ϕ(s) = s/ √ 1 + s 2 and Dirichlet conditions, one can consult [5,6,7,10]. In [2], the authors have studied the problem (1.1), where f : [0, T ]×R n ×R n → R n is a Carathéodory function, ϕ : R n → B 1 (0) ⊂ R n , and l(u, u ) = 0 denotes the periodic boundary conditions. They obtained the existence of solutions by means of the Leray-Schauder degree theory. The interest in this class of nonlinear operators u → (ϕ(u )) is mainly due to the fact that they include the mean curvature operator u →div ∇u √ 1+|∇u| 2 . If the function f satisfies the condition ∃ c > 0 such that |f (t, x, y)| ≤ c < a 2T , ∀(t, x, y) ∈ [0, T ] × R × R, the Dirichlet problem has at least one solution. Theorem 1.2. Let f be continuous. Assume that f satisfies the following conditions. . Then the Neumann problem has at least one solution. Inspired by those results, we study the problems (1.1) by using similar topological methods based upon Leray-Schauder degree. The main contribution of this paper is the extension of some results above cited to a more general type of boundary conditions. The paper is organized as follows. In Section 2, we establish the notation, terminology, and various lemmas which will be used throughout this paper. Section 3 is devoted to the study of existence of solutions for (1.1) with boundary conditions of type u(0) = u (0) = u (T ). In Section 4, for u(0) = u(T ) = u (T ) boundary conditions, we investigate the existence of at least one solution for (1.1). Such problems do not seem to have been studied in the literature. In the present paper generally we follow the ideas of Bereanu and Mawhin [1,4]. Notation and preliminaries We first introduce some notation. For fixed T , we denote the usual norm in We introduce the following applications: the Nemytskii operator N f : the following continuous linear applications: For u ∈ C, we write For the convenience of the reader we recall some results, which will be crucial in the proofs of our results. The following results are taken from [8](see also [3,11], respectively). The firs one is needed in the construction of the equivalent fixed point problem. Moreover, the function Q ϕ : B → R is continuous and sends bounded sets into bounded sets. The second one is an extension of the homotopy invariance property for Lerayschauder degree. Problems with bounded homeomorphisms In this section we are interested in boundary value problems of the type is a continuous function. In order to apply Leray-Schauder degree theory to show the existence of at least one solution of (3.2), we consider for λ ∈ [0, 1], the family of boundary value problems Notice that 3.3 coincide, for λ = 1, with (3.2). Now, we introduce the set where clearly Ω is an open set in [0, 1] × C 1 , and is nonempty because Introduce also the operator M : Ω → C 1 defined by Here ϕ −1 with an abuse of notation is understood as the operator . The symbol B a (0) denoting the open ball of center 0 and radius a in C. It is clear that ϕ −1 is continuous and sends bounded sets into bounded sets. 
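The displayed formulas defining the operators used below did not survive conversion. In the Bereanu-Mawhin framework that this section follows [1,4], the standard choices consistent with how N_f, P, Q and H are used later in the section are the ones recorded here; they should be read as a reconstruction, not as a verbatim copy of the original definitions.

```latex
% Standard operator definitions in the Bereanu--Mawhin setting [1,4]
% (reconstruction; the original display was lost):
\[
  N_f : C^1 \to C, \qquad N_f(u)(t) = f\bigl(t, u(t), u'(t)\bigr),
  \qquad
  P : C \to \mathbb{R}, \qquad P(u) = u(0),
\]
\[
  Q : C \to \mathbb{R}, \qquad Q(v) = \frac{1}{T}\int_0^T v(s)\,ds,
  \qquad
  H : C \to C, \qquad H(v)(t) = \int_0^t v(s)\,ds .
\]
```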
When the boundary conditions are periodic or Neumann, an operator has been considered by Bereanu and Mawhin [4]. The following lemma plays a pivotal role to study the solutions of the problem (3.3). is a straightforward consequence of the fact that this map is a composition of continuous maps. In addition That is, (M (λ, u)) is a composition of continuous operators and thus M (λ, u) ∈ C 1 . The continuity of M follows by the continuity of the operators which compose it M . Now suppose that (λ, u) ∈ Ω is such that M (λ, u) = u. It follows from (3.4) that for all t ∈ [0, T ]. Then, taking t = 0 we get Differentiating (3.5), we obtain that In particular, Applying ϕ to both of its members, differentiating again and using (3.6), we deduce that for all t ∈ [0, T ]. Thus, u satisfies problem (3.3). This completes the proof. The following lemma gives a priori bounds for the possible fixed points of M . . Assume that f satisfies the following conditions. Let ρ, κ ∈ R be such that L + 2 c − L 1 < κ < a, ρ > r(2 + T ) and consider the set Since the set {0}× u ∈ C 1 : u 1 < ρ, ϕ(P (u)) ∞ < κ ⊂ V , then we deduce that V is nonempty. Moreover, it is clear that V is open and bounded in [0, 1] × C 1 and V ⊂ Ω. On the other hand using an argument similar to the one introduced in the proof of Lemma 3.1, it is not difficult to see that M : V → C 1 is well defined and continuous. Furthermore, using Lemma 3.3, we have that u = M (λ, u) for all (λ, u) ∈ ∂V . Let us show that M (Λ) ⊂ C 1 is compact. To see this consider first a sequence (v n ) n of M (Λ) and let (λ n , u n ) n be a sequence in Λ such that v n = M (λ n , u n ). Using (3.9), we have that there exists a constant L 1 > 0 such that, for all n ∈ N, Because λ n H(N f (u n ) − Q(N f (u n ))) + ϕ(P (u n )) ∞ ≤ κ < a for all n ∈ N, it follows that the sequence (λ n H(N f (u n ) − Q(N f (u n ))) + ϕ(P (u n ))) n is bounded in C. Moreover, for any t, t 1 ∈ [0, T ] and for all n ∈ N we have which implies that (λ n H(N f (u n ) − Q(N f (u n ))) + ϕ(P (u n ))) n is equicontinuous. Thus, by the Arzelà-Ascoli theorem there is a subsequence of (λ n H(N f (u n )−Q(N f (u n )))+ ϕ(P (u n ))) n , which we call (λ n H(N f (u j ) − Q(N f (u j ))) + ϕ(P (u j ))) j , which is convergent in C. Using that ϕ −1 : B a (0) ⊂ C → C is continuous it follows from (M (λ n j , u n j )) = ϕ −1 [λ n H(N f (u j ) − Q(N f (u j ))) + ϕ(P (u j ))] that the sequence ((M (λ n j , u n j )) ) j is convergent in C. Then, passing to a subsequence if necessary, we obtain that (v n j ) j = (M (λ n j , u n j )) j is convergent in C 1 . Finally, let (v n ) n be a sequence in M (Λ). Let (z n ) n ⊆ M (Λ) be such that lim n→∞ z n − v n 1 = 0. Let in addition (z n j ) j be a subsequence of (z n ) n that converges to z. Therefore, z ∈ M (Λ) and (v n j ) j converge to z. This concludes the proof. Main result In this subsection, we present and prove an existence theorem for (3.2). We denote by deg B the Brouwer degree and for deg LS the Leray-Schauder degree, and define the mapping G : Remark 3.6. Using the family of boundary value problems which gives the completely continuous homotopy M defined by and similar a priori bounds as in the Lemma 3.3, it is not difficult to see that (3.10) has a solution for λ = 1. Let us give now an application of Theorem 3.5. where (0, 0) is a regular value of G and J G (x, y) =detG (x, y) is the Jacobian of G at (x, y). Therefore, the problem (3.11) has at least one solution. 
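The explicit definition of G and the displayed formula for problem (3.11) are incomplete above, so only the overall shape of the degree argument can be recorded here; the following sketch is the standard reduction used in [1,4] and is offered as a reading aid rather than as the paper's exact statement.

```latex
% Sketch of the degree argument (standard in [1,4]; details of G are assumed).
% If u \neq M(\lambda,u) for every (\lambda,u) with u on the relative boundary
% of V, the generalized homotopy invariance recalled in Section 2 gives
\[
  \deg_{LS}\bigl(I - M(1,\cdot),\, V_1,\, 0\bigr)
  \;=\;
  \deg_{LS}\bigl(I - M(0,\cdot),\, V_0,\, 0\bigr),
  \qquad V_\lambda := \{\, u \in C^1 : (\lambda,u) \in V \,\},
\]
% and, since M(0,\cdot) has finite-dimensional range, the right-hand side
% reduces to a Brouwer degree of the associated mapping G on a bounded open
% subset of \mathbb{R}^2. Whenever that Brouwer degree is nonzero, for
% instance when (0,0) is a regular value of G and the signs of the Jacobian
% J_G at its zeros do not cancel, M(1,\cdot) has a fixed point and problem
% (3.2) has at least one solution.
```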
Existence results for problems with bounded homeomorphisms In this section we study the existence of at least one solution for nonlinear problems of the form (ϕ(u′))′ = f(t, u, u′), u(T) = u(0) = u′(T), (4.12) where ϕ : R → (−a, a) is a homeomorphism with ϕ(0) = 0, and f : [0, T] × R × R → R is a continuous function satisfying condition (4.13). Now, using Lemma 2.1 and (4.13), we introduce the fixed point operator M_1 associated with problem (4.12). The following results are taken from [8]. If u ∈ C^1 is such that M_1(u) = u, then u is a solution of (4.12). Applying ϕ to both members and differentiating again, we deduce that (ϕ(u′(t)))′ = f(t, u(t), u′(t)) for all t ∈ [0, T]. Thus, u satisfies problem (4.12). This completes the proof.
2019-04-12T04:21:51.822Z
2016-10-08T00:00:00.000
{ "year": 2016, "sha1": "0a66d0be76b5baf58e46e6e014d900549ff72478", "oa_license": null, "oa_url": "http://arxiv.org/pdf/1610.02461", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "d9486a62bb179d431ffe21f13f38848e85ce719f", "s2fieldsofstudy": [ "Mathematics" ], "extfieldsofstudy": [ "Mathematics" ] }
119220203
pes2o/s2orc
v3-fos-license
Finite width induced modification to the electromagnetic form factors of spin-1 particles The inclusion of the unstable features of a spin-1 particle, without breaking the electromagnetic gauge invariance, can be properly accomplished by including higher order contributions as done in the so-called fermion loop scheme (for the W gauge boson), and the boson loop scheme (for vector mesons). This induces a non trivial modification to the electromagnetic vertex of the particle, which must be considered in addition to any other contribution computed as stable particles. Considering the modified electromagnetic vertex, we obtain general expressions for the corresponding corrections to the multipoles as a function of the mass of the particles in the loop. For the W gauge boson no substantial deviations from the stable case is observed. For the rho and K* mesons the mass of the particles in the loop makes a significant effect, and can be comparable with corrections of different nature . I. INTRODUCTION The electromagnetic properties of spin-1 particles (V ) can help us to understand the symmetry structure and interactions of the fundamental particles. For example, considering V as a stable state, the electromagnetic properties of the W gauge boson are predicted by the symmetry structure of the standard model, while for the ρ meson several predictions exist, based on effective models of the strong interaction binding the quarks [1][2][3][4][5], and from lattice-QCD [6,7]. However, they are not stable states. Therefore, in order to draw definite conclusions, a complete study of the additional effects due to their instability is mandatory. The proper theoretical description of such states requires to incorporate their unstable features (parameterized by their finite decay width, Γ) in an electromagnetic gauge invariant way. To do so, several schemes have been developed, we can mention, for example, the so called fermion loop scheme [8][9][10] and the boson loop scheme [11] (suitable for the W and ρ bosons respectively), which consider that Γ is naturally included in the calculations by taking into account the absortive contributions in the electromagnetic vertex and in the propagator. Under these schemes, the electromagnetic vertex V V γ is modified respect to the tree level form in a non-trivial way. This implies that the electromagnetic structure itself suffers modifications. In the present work we analyze the most general results for the electromagnetic vertices obtained in the loop schemes, which include the mass of the particles in the loops, and extract the expressions for the modified form factors. In order to exhibit the size of the corrections due to the unstable nature of the particles, we compare our results for the magnetic dipole moment (MDM) and electric quadrupole moments of the W , ρ and K * mesons with others computed in the literature for contributions from different nature. The gauge invariance requirement, along with the modification in the propagator inherent to the schemes, allows to identify the proper complex renormalization factor of the vector field, which keep the electric charge free of radiative correction contributions. II. 
FINITE WIDTH EFFECTS The schemes developed in [8][9][10][11] for the introduction of the finite width effects, while keeping electromagnetic gauge invariance, are based in two main observations: In quantum field theory the width is naturally included in the imaginary part of the self-energy of the particles and, the Ward identity is respected at all orders in perturbation theory. These facts are exploited in those schemes by including the resummation of the fermion/boson loops in the propagator and the corrections in the electromagnetic vertices. Then, the imaginary part of the fermion/boson loops introduces the tree level width in the gauge boson propagator and, the gauge invariance is not violated since the fermion/boson loops obey the Ward identity order by order. At tree level, the propagator for a vector boson of mass M V can be set as where T µν (q) ≡ g µν − q µ q ν /q 2 and L µν (q) ≡ q µ q ν /q 2 , are the transversal and longitudinal projectors, respectively. The vertex for the process V (q 1 ) → V (q 2 )γ(k) is defined from the electromagnetic current At tree level, the vertex can be set as that given by the standard model for the W boson: These expressions satisfy the Ward identity Upon the inclusion of the finite width of the boson, by considering the loop contributions, the propagator is modified in a generic form as: where ImΠ T (q 2 ) and ImΠ L (q 2 ), are the transverse and longitudinal part of the absortive contribution of the self-energy induced by the particles in the loop. Similarly, the vertex becomes ıeΓ µνλ = ıe(Γ µνλ where Γ µνλ 1 contains the loop corrections. The Ward identity relates the loop contributions by requiring to satisfy For a boson like the W , the scheme consider that such loops are produced by fermions, while for vector mesons, like the ρ, bosons are the natural particles in the loop. In general, the CP conserving electromagnetic vertex can be decomposed into the following Lorentz structure where the electromagnetic form factors can be identified as: |Q| ≡ α(k 2 ) is the electric charge form factor ( in e units), | µ| = β(k 2 ) ≡ 1 + κ + λ is the magnetic dipole moment form factor (in e/2M V units) and the electric quadrupole form factor is The parameters κ and λ are of common use in the literature to refer to the electromagnetic multipoles [12,13]. The static electromagnetic properties of a particle are defined for the case when the particle is on-shell and in the limit of k → 0. At tree level, for example, the standard model predicts for the W to have α(0) = 1, β(0) = 2 and γ(0) = 0 (κ = 1 and λ = 0), corresponding to |Q| = 1, | µ| = 2 and |X E | = 1. Deviations from these values are generically called anomalous and are produced by the inclusion of higher order contributions [14]. In the present case such contributions are exclusively those required to maintain the electromagnetic gauge invariance, upon the introduction of the finite decay width. A. W boson form factors Let us identify the modification to the W boson form factors introduced by the correction to the electromagnetic vertex. For that purpouse we consider the explicit expression of the vertex obtained in ref. [10], where the mass of the emitting particles in the loop (m) and its weak partner (m ′ ) have been considered. 
The Lorentz structure, transversality and onshell condition of the boson along with the proper limit for k → 0 leads to the following expressions for the form factors, defined in equation (8): • Electric charge where Σ 2 ≡ m 2 +m ′ 2 , ∆ 2 ≡ m 2 −m ′ 2 and Q i is the electric charge of the radiating particle in the loop, Γ i /M W ≡ g 2 i λ 3/2 /48π is the partial decay width for the modes corresponding to the particles in the loop, g i denotes the coupling and λ ≡ ( A sum over all the allowed flavors and color degeneracies is explicitly included. Since the schemes consider the particles in the loop to be on-shell, the flavors include all the leptons and the u, d, s and c quarks. • Magnetic dipole moment • Isospin limit Let us show, just for illustration, the above expressions in the case of m = m ′ . They become where • Magnetic dipole moment • Electric quadrupole moment • Isospin limit The neutral and charged pions are almost degenerated. Taking the isospin limit the form factors become where λ I = (M 4 ρ − 2Σ 2 M 2 ρ )/M 4 ρ . The corrections for the K * + meson follows from the results for the ρ meson by including the two possible channels for the loop contributions: K * + → K + π 0 and K * + → K 0 π + , with the corresponding masses and partial decay widths. C. Chiral limit The chiral limit correction to the vertex is known to be proportional to the tree level, in both the Fermion and boson loops corrections [11]. Therefore, in this limit we can write the modification to the form factors in a generic form as follows: III. NUMERICAL RESULTS The correction to the vertex seems to induce a modification to the the electric charge. However, since the Ward identity is fulfilled, the modification to the vertex is followed by a modification to the propagator, which produces an exact cancelation of the correction to the electric charge. Let us illustrate this point in more detail, for the sake of clarity we consider the expression in the chiral limit: The modified propagator can be set as [11,15]: , it can be seen as a renormalization of the vector field by the inclusion of the finite width where Z 1/2 = 1/(1 + iγ) 1/2 . Then, the electromagnetic current becomes Therefore, gauge invariance requires that Zα ′ (k 2 ) = 1, which is indeed the case, and the electric charge does not receive any correction. Note that the modification to the vertex given by equation 21 implies that none of the multipoles receive corrections in the chiral limit. An analysis for unstable spin-1/2 particles has also been performed in ref. [16], pointing out to complex renormalization factors as a requirement for properly defined physical quantities. Further considerations on the renormalizability of the wave function can be seen in ref. [17]. The proper values of the modifications to the form factors are then found by the expressions given in the previous section divided by α ′ (0). In Table I, we present For the ρ meson, pions are the only on-shell particles allowed in the loops. In this case the pion−to−ρ mass ratio (≈ 0.18) is not as small as the corresponding for the f ermion−to−W , and therefore a significant effect from the mass of the particles in the loop is expected. We can compare our results as shown in Table I with those shown in Table II become available [6] and, in particular, the dependence on the pion mass they are able to reproduce has been exhibited. 
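The mass and width ratios quoted in this discussion are easy to reproduce from standard particle-data values; the short check below uses approximate PDG-style numbers that are our assumption (the paper itself does not list them), with the tau taken as the heaviest fermion allowed on-shell in the W loops.

```python
# Quick check of the ratios quoted in the text, using approximate PDG-style
# values (assumed here, not taken from the paper).
particles = {
    #  name   : (M [MeV], Gamma [MeV], heaviest on-shell loop-particle mass [MeV])
    "W"       : (80379.0, 2085.0, 1777.0),   # tau lepton
    "rho"     : (775.3,   147.8,  139.6),    # charged pion
    "K*(892)" : (891.7,    50.8,  493.7),    # charged kaon
}

for name, (M, G, m_loop) in particles.items():
    print(f"{name:8s}  Gamma/M = {G / M:.3f}   m_loop/M = {m_loop / M:.3f}")

# Roughly: Gamma/M ~ 0.026, 0.19, 0.057 and m_loop/M ~ 0.022, 0.18, 0.55.
# For the W both ratios are small, while for the rho and especially the K*
# the loop-particle mass is a sizable fraction of the resonance mass, which
# is why the width-induced corrections to the multipoles are sensitive to it.
```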
In Figure 1, we compare our result for the MDM with lattice calculations as a function of the pion mass [6], we also include predictions from the models at the physical mass of the pion. We observe that the pion mass dependence of our results are mostly flat with a slightly tendency to rise for very large masses. Lattice results are also flat with a tendency to increase for low masses. The corrections for the K * + meson is dominated by the kaon−to−K * mass ratio (≈ 0.55) which is very large thus, although the width to mass ratio of the K * is only ≈ 0.056, the correction to the multipoles are important. Compared with the predictions listed in Table II, they can be about the same or one order of magnitude smaller. (defined for κ =1 and λ = 0), which can be compared, for example, with the one computed in reference [21], where they observe a correction of 0.06 f m, due to the inclusion of the pion contribution respect to a pure quark-antiquark state. For K * the deviation is −0.0005 f m. IV. CONCLUSIONS The inclusion of the unstable features of spin-1 particles, without breaking the electromagnetic gauge invariance, induces a non trivial modification to the electromagnetic vertex of the particle. In this work we have extracted the corresponding modifications to the multipole structure of the W and vector mesons. Our numerical results for the W gauge boson multipoles shows no substantial deviations from the stable case. For the ρ and K * mesons, the mass of the particles in the loop makes a significant effect, pointing out that the unstable nature of the vector mesons can be as relevant as other dynamical effects and should be con- The general grounds of the loop schemes for spin-1 particles, to account for the finite decay width in a gauge invariant way, have been invoked to study spin-3/2 particles [22]. Since in this case the mass ratio between the unstable particle and the ones in the loop can be very large, further studies are desirable to understand at which extend the finite decay width contributes to the multipoles.
2010-03-24T17:34:31.000Z
2010-01-06T00:00:00.000
{ "year": 2010, "sha1": "53c6ee3478d5d0151509c88f97e9cb9c5bdb4c6d", "oa_license": null, "oa_url": "http://arxiv.org/pdf/1001.0998", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "53c6ee3478d5d0151509c88f97e9cb9c5bdb4c6d", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
269942964
pes2o/s2orc
v3-fos-license
Enjoyment and Affective Responses to Moderate and High-Intensity Exercise: A Randomized Controlled Trial in Individuals with Subsyndromal PTSD This crossover randomized controlled trial examined the acute psychological effects of a bout of moderate-intensity continuous aerobic exercise (MICE) and a bout of high-intensity functional exercise (HIFE), relative to a no-exercise sedentary control (SED), in participants (N = 21; 15 f; 24.7 ± 9.3 years) with subsyndromal post-traumatic stress disorder (PTSD). Affective state (Energy, Tiredness, Tension, Calmness) was assessed before (Pre), immediately after (Post 0), 20-min after (Post 20), and 40-min after (Post 40) each condition. Affective valence was assessed during each condition, and exercise enjoyment was assessed at Post 0. Enjoyment was significantly greater following HIFE and MICE relative to SED. Energy was significantly increased Post 0 HIFE and MICE but decreased Post 0 SED. Tension was reduced following all conditions and was significantly lower at Post 40 relative to Pre for HIFE, MICE, and SED. Tiredness was significantly reduced at Post 40 relative to Pre following MICE only, while Calmness was significantly lower at Post 40 relative to Pre following MICE and SED. Overall, both exercise conditions were enjoyed to a greater extent than the control, but MICE may provide greater psychological benefits with respect to Calmness and Tiredness. This study is among the first to assess acute changes in affective states relative to various exercise modes in individuals living with subsyndromal PTSD. Introduction Post-traumatic stress disorder (PTSD) is a mental health condition brought on by one or more traumatic events and ranges in severity.The lifetime prevalence of PTSD in the United States has been estimated at 6.8% [1], however, other populations can experience elevated risk.Specifically, PTSD prevalence has been cited at 15% in combat veterans [2], and 26% of first responders report significant PTSD symptomology [3].Psychological health conditions may arise due to PTSD such as anxiety, depression, and sleep disorders [4,5].In addition to mental health, physical health is also often impaired.Physical impairments associated with PTSD include chronic pain [6], hypothalamic-pituitary-adrenal axis dysfunction [7], diabetes, hypertension, and hypercholesterolemia [8].Those with PTSD are at an increased risk for cardiovascular disease, metabolic syndrome [9], and pulmonary disease [4,10].Individuals with greater PTSD symptoms (i.e., number and/or severity) have an elevated risk for general health symptoms and medical conditions as well as poorer health-related quality of life [11,12].Overall, those with PTSD have both psychological and physical health impairments that put them at an increased risk for early biological aging [13]. 
Given the health concerns associated with PTSD, both pharmacological and psychological interventions have been explored with optimistic results [14].A recent metaanalysis identified positive effects for well-established, empirically supported psychological interventions to treat PTSD including cognitive therapy (g = 1.63), exposure therapy (g = 1.08), and eye movement desensitization and reprocessing [15].Furthermore, common pharmacological interventions to treat PTSD have been associated with an effect size of 0.74 for paroxetine, 0.41 for sertraline, 0.43 for fluoxetine, 0.41 for risperidone, 1.20 for topiramate, and 0.48 for venlafaxine [15].While these well-established, empirically supported treatments for PTSD are effective, there are some concerns regarding the participant retention and response rates.Treatment outcome data suggest dropout rates as high as 50% and nonresponse rates as high as 67% for exposure therapy, dropout rates as high as 32% and nonresponse rates as high as 71% for cognitive behavioral therapy, and dropout rates as high as 36% and nonresponse rates as high as 92 percent for eye movement desensitization and reprocessing [16].A recent study on veterans diagnosed with PTSD and prescribed medication indicated a 34.6% dropout rate within 30 days and a 71.8% dropout rate within 180 days [17].Other studies have highlighted nonresponse rates around 33% [18] and dropout rates between 19 and 27% [19] following empirically supported PTSD treatments.Additionally, psychological interventions often require one-on-one manual therapy by a highly trained therapist.Individuals may have limited access to this type of therapy due to the availability of therapists and insurance coverage [20].This level of treatment response suggests that individuals with PTSD may benefit from a combination of treatment lines. As above-mentioned, individuals living with PTSD experience an elevated risk for numerous health concerns and medical conditions.Regular physical activity provides physical and mental health benefits [21][22][23] that may reduce comorbid symptoms associated with PTSD.Specifically, physical activity improves blood pressure [24], hypothalamicpituitary-adrenal axis function [23], and cholesterol [24,25] as well as reduces the risk for cardiovascular disease [26], metabolic syndrome [27], and pulmonary disease [28].Physical activity also improves mental health conditions related to PTSD such as depression, anxiety, and sleep disorders.Following 3-weeks of regular exercise, individuals with multiple sclerosis displayed a significant reduction in depression, fatigue, and sleep complaints [29].Numerous meta-analyses on the anxiolytic effects of exercise on anxiety have revealed moderate effect sizes [30][31][32], highlighting the beneficial effect of exercise on anxiety.However, disengagement resulting from a mental illness may cause these individuals to lead a more sedentary lifestyle despite the health benefits associated with physical activity [33].While these studies were not conducted within a population living with PTSD, it follows that many of these benefits would translate to this special population. 
Applying exercise interventions for individuals with PTSD has the potential to reduce comorbid symptoms of PTSD and possibly reduce PTSD symptom severity.A recent systematic review and meta-analysis identified 11 studies assessing the exercise effects on PTSD.The results indicate that exercise has a beneficial effect (i.e., small to moderate effect sizes) on PTSD symptoms, depressive symptoms, sleep disturbances, and substance use disorder [4].Another review identified exercise as an effective treatment to reduce PSTD symptomology in individuals with subsyndromal PTSD and highlighted the beneficial effects of comorbid symptoms such as anxiety, depression, and sleep disturbances [34].Additionally, sport and game programs are commonly used as a strategy to reduce PTSD symptoms.However, in an attempt to systematically review randomly controlled trials that evaluated the effectiveness of sports and games on reducing PTSD symptoms, Lawrence, De Silva, and Henley [14] found no studies that met the criteria to be included in the review, thus there is a great need for randomized, controlled trials in this population.Rosenbaum, Sherrington, and Tiedemann [35] conducted a novel randomized controlled trial to help support this need.They found that patients diagnosed with PTSD in a 12-week exercise program that included a combination of resistance training and walking in addition to standard of care had significant reductions in PTSD symptoms and depressive symptoms compared to the control group that only received the standard of care [35].This study is an example of how exercise can be used in conjunction with usual care, as participants received a combination of psychotherapy, pharmacology, and group therapy interventions during the study [35]. Although previous research has demonstrated the positive effects of physical activity on mental health including PTSD, there is still a need for randomized controlled trials that provide outcomes on the intensity and mode of physical activity that may increase exercise adherence in a population living with PTSD or PTSD symptoms.According to hedonic theory [36,37], individual behaviors are highly motivated by the pursuit of pleasure and avoidance of displeasure/pain.Additionally, the dual-mode theory states that in-task valance remains positive at exercise intensities below the ventilatory threshold (VT) or lactate threshold (LT), becomes variable at the VT or LT, and steadily declines at intensities above the VT or LT [38][39][40][41].Therefore, exploring changes in affective states from pre-topost exercise and examining in-task valance could provide valuable information leading to increased exercise adherence [42][43][44].Furthermore, comparing post-exercise enjoyment may provide additional information on exercise adherence as enjoyment has been shown to increase exercise adherence and reduce dropout rates [45].While previous work has highlighted the beneficial effects of exercise at low, moderate, and high intensity [46], physical activity rates remain a concern with about 54% of U.S. adults meeting aerobic physical activity guidelines and about 24% meeting both aerobic and resistance exercise guidelines [47]. While research assessing affective state changes, in-task valance, and post-exercise enjoyment in individuals living with PTSD and PTSD symptoms is limited, previous research has highlighted these effects in other clinical and healthy populations.Brand et al. 
[48] reported improved mood following the completion of 40-60 min of various modes of exercise (i.e., ball sports, Nordic walking, and workout/gymnastics) in inpatients with a variety of mental disorders.Meyer et al. [49] assessed changes in depressed mood via the Profile of Mood States [50] and brain-derived neurotrophic factor (BDNF) responses via blood draws following the completion of 30-min of aerobic cycling at low, moderate, high, and a preferred intensity in adult females with a major depressive disorder.Results suggest that the imposed exercise intensities were more advantageous for improving depressed mood and increasing BDNF responses.Greene et al. [51] examined changes in affective state, in-task valance, and enjoyment following 15-min of walking, quiet rest, and a highintensity body weight interval exercise session in college students.Their results indicated an elevated enjoyment following both walking and body weight exercise conditions relative to quiet rest, increased in-task valance during walking, and decreased in-task valance during quiet rest and body weight exercise, and improved affective states following walking and body weight exercise relative to quiet rest [51].Moreover, the study by Jung et al. assessed the changes in in-task valance and enjoyment during and following a single bout of high-intensity interval exercise (HIT), moderate-intensity continuous exercise (MICE), and vigorous-intensity continuous exercise (VICE) in healthy men and women.Participants completed one min at 100% Wpeak followed by one min at 20% Wpeak for 20 min during HIT, 40 min at 40% Wpeak during MICE, and 20 min at 80% Wpeak during VICE [52].Results indicated greater enjoyment following HIT relative to MICE or VICE, and increased positive in-task valance during MICE relative to HIT and VICE [52].As above-mentioned, exercise at high intensity (i.e., above VT or LT) typically results in a decrease in positive affect and an increase in negative affect, however, these responses are less understood under the context of interval exercise.A recent review has explored this phenomenon and concluded that enjoyment following interval exercise (often of higher intensity) is equal to or greater than enjoyment following continuous exercise [53]. Given the significant benefits exercise has shown on PTSD symptoms [35], comorbid symptoms of PTSD [29], and improved mood [48], nonadherence to exercise programs, especially within this population, warrants investigation.As above-mentioned, only 54% of adults meet the aerobic physical activity guidelines [47], with individuals suffering from a mental health condition at an increased risk of inactivity.A recent systematic review and meta-analysis indicated that individuals with PTSD were 9% less likely to be physically active and 31% more likely to be obese than the general population [54].Furthermore, Assis et al. 
assessed the physical activity levels in individuals diagnosed with PTSD and concluded a significant decrease in activity following diagnosis.Specifically, the physical activity rates were about 52% in individuals without a PTSD diagnosis but dropped to 22% in individuals diagnosed with PTSD; participants identified time and lack of motivation as major reasons for nonadherence to exercise [33].Individuals living with PTSD often avoid social interaction and may need alternatives to traditional gym-based exercise programs.A recent survey of men diagnosed with PTSD identified exercising at home and alone as two of the most attractive exercise options [55].Additionally, Pebole et al. examined exercise preferences in 355 females with PTSD and reported 75% of participants preferred to exercise at home, 15% preferred a gym setting, and about 10% preferred exercising outside or associated with a medical provider.Furthermore, 75% of participants preferred to exercise alone or online, 23% preferred to exercise in a group, and about 1% indicated no preference [56].As such, it may be beneficial to explore the acute changes in affective states, in-task valance, and the enjoyment of exercise that can be carried out at home, with minimal equipment and time investment.High-intensity interval exercise has been proposed as a time effective alternative to traditional cardio, providing similar if not enhanced physiological outcomes [57], but much less is known about the acute psychological outcomes. Therefore, the present study aimed to explore three main purposes.The first purpose was to assess the post-exercise enjoyment following high-intensity functional exercise (HIFE), moderate-intensity continuous exercise (MICE), and a sedentary, no-exercise period (SED).The second purpose was to examine the changes in affective states from before, immediately after, and 20 and 40 min after each condition.The third was to examine the in-task valence during each condition.Based on previous research that assessed enjoyment and affective responses to high-intensity interval and moderate-intensity aerobic exercise [51][52][53], it was hypothesized that post-exercise enjoyment would be similar following HIFE and MICE, but that both conditions would result in greater enjoyment relative to SED.Second, it was hypothesized that participants would report significant increases in positive affect and decreases in negative affect after the completion of HIFE and MICE relative to SED.Finally, it was hypothesized that in-task valance would be significantly more positive during MICE relative to HIFE. Experimental Design A randomized, controlled, counterbalanced crossover design was used to compare different exercise intensities and modes on enjoyment and affective responses in participants with subsyndromal PTSD.The study consisted of four laboratory visits.The first visit included filling out the informed consent and health questionnaires in addition to measuring the VO2peak, which was used to determine the intensity levels for subsequent visits.The following three visits included exercise testing at one of three exercise intensities, in random order, with at least 48-h between visits (see Figure 1).All visits occurred in a temperaturecontrolled laboratory.This study was approved by the University's Institutional Review Board, and all procedures followed the institutional guidelines. 
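As a rough illustration of the counterbalanced assignment of condition orders described above, the following Python sketch generates balanced orderings of the three conditions across participants; the helper name and seed are illustrative, and this is not the randomization procedure actually used in the study.

import itertools
import random

def assign_counterbalanced_orders(participant_ids, conditions=("HIFE", "MICE", "SED"), seed=42):
    # Cycle through all permutations of the three conditions so that orders are
    # balanced across participants, then shuffle which participant receives which order.
    rng = random.Random(seed)
    orders = list(itertools.permutations(conditions))  # 6 possible orders for 3 conditions
    ids = list(participant_ids)
    rng.shuffle(ids)
    return {pid: orders[i % len(orders)] for i, pid in enumerate(ids)}

# Example: 21 participants, each completing all three conditions in a balanced order
schedule = assign_counterbalanced_orders(range(1, 22))
for pid, order in sorted(schedule.items()):
    print(pid, order)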
Participants Participants with subsyndromal PTSD were recruited across a large college campus. A total of 25 participants were assessed, but only 21 met the inclusion criteria for subsyndromal PTSD. Inclusion criteria were as follows: (1) ages 18-64; (2) self-reported exposure to a traumatic event (i.e., criterion A); (3) having at least one symptom for criterion B (i.e., re-experiencing), one symptom for criterion C (i.e., avoidance), two symptoms for criterion D (i.e., negative alterations in cognitions and mood), and two symptoms for criterion E (i.e., hyper-arousal; [58]); and (4) completion of the Physical Activity Readiness Questionnaire to determine if exercise was likely safe [59]. A symptom was defined when participants scored a 2 (i.e., moderately) or higher on the PCL-5 Likert scale. Furthermore, all participants signed the university-approved informed consent document and no participant reported any contraindications to physical activity. About half of the participants (43%) reported exercising vigorously on a regular basis [exercise frequency = 3.7 ± 1.5 days per week, duration = 65.6 ± 27.9 min per session, intensity = 4.3 ± 1.1 (5 = hard, 7 = very hard using the CR-10 RPE scale; [60])]. The final sample identified as predominantly Caucasian (i.e., 85.7%); other descriptive information of the participant sample is included in Table 1. The participants were asked to abstain from exercising and consuming alcohol 24 h before each testing session.
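The symptom-counting rule used for inclusion can be written out explicitly. The sketch below assumes the standard PCL-5 item-to-cluster mapping (items 1-5 for criterion B, 6-7 for C, 8-14 for D, 15-20 for E) and counts an item rated 2 ("moderately") or higher as a symptom; the function and data layout are illustrative rather than the authors' actual screening code.

CLUSTERS = {
    "B_reexperiencing": range(1, 6),   # PCL-5 items 1-5
    "C_avoidance": range(6, 8),        # items 6-7
    "D_cognition_mood": range(8, 15),  # items 8-14
    "E_hyperarousal": range(15, 21),   # items 15-20
}
REQUIRED = {"B_reexperiencing": 1, "C_avoidance": 1, "D_cognition_mood": 2, "E_hyperarousal": 2}

def meets_subsyndromal_criteria(item_scores, threshold=2):
    # item_scores maps PCL-5 item number (1-20) to its 0-4 rating;
    # an item counts as a symptom when rated >= threshold (2 = "moderately").
    for cluster, items in CLUSTERS.items():
        n_symptoms = sum(1 for i in items if item_scores.get(i, 0) >= threshold)
        if n_symptoms < REQUIRED[cluster]:
            return False
    return True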
Sample Size Calculation A post hoc calculation was conducted using prior work as a guide G*Power [61] to determine whether the sample size was sufficient.Based on previous results assessing changes in affective states following acute exercise [62], the following parameters were defined: Cohen's f: 0.47; alpha error probability: 0.05; 1-beta: 0.95; with three conditions and four time points; and an estimated correlation among repeated measures of 0.5, therefore, the sample size needed to detect a significant effect was determined to be 15.As such, the 21 participants that met our inclusion criteria were deemed satisfactory. Visit 1 Following the completion of all of the initial questionnaires (i.e., informed consent, PCL-5, PAR-Q), the participants' aerobic capacity was assessed.All participants completed a VO2peak test using a Bruce protocol.Starting at 1.7 mph and a 10% grade, participants completed 3-min stages until their heart rate reached ~85% of their age-predicted maximal heart rate (HR; calculated as 208 − 0.7 × age), a respiratory exchange ratio (RER) of 1.1, or the participant reached volitional exhaustion.After each completed stage, the grade was increased by 2% and speed increased to 2.5, 3.4, 4.2, 5.0, and 5.5 mph, respectively, with each stage [63].All participants were able to achieve either an RER of 1.1 or age-predicted maximal heart rate of 85% before reaching volitional exhaustion.Following completion of the VO2peak test, participants were monitored until baseline HR was reached. Visits 2-4 Visits 2 through 4 were randomized and counterbalanced.All participants completed each of the three conditions: high-intensity functional exercise (HIFE), moderate-intensity continuous exercise (MICE), and a no exercise control (SED) condition.Each condition was 35 min, followed by a 40-min monitoring period (i.e., 75 min total).Affective states were assessed at pre (Pre), immediately post (Post 0), 20 min post (Post 20), and 40 min post (Post 40) each condition.Affective states were assessed at various time points to minimize the possibility of missing transient and/or delayed onset changes.Enjoyment was assessed at Post 0 only.In-task valence was assessed every 5-min during each condition, with in-task valence being assessed immediately after each 3-min of activity during HIFE.In-task valence was included to capture how individuals responded during exercise, as this can have a large impact on enjoyment and adherence to exercise [51,53].Additionally, to maintain appropriate exercise intensities, HR (i.e., continuously monitored) and RPE (i.e., assessed every 5 min) were collected during each condition.For safety, there was a minimum of two research members present, and the participants were permitted to drink water ad libitum.All three conditions were completed within two weeks, with a minimum of 48 h between sessions to limit extraneous variables. 
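The termination rule for the graded test can be expressed compactly. The sketch below uses the age-predicted equation quoted above (208 − 0.7 × age) together with the ~85% HRmax and RER 1.1 cut-offs; the function and variable names are illustrative only.

def age_predicted_hr_max(age_years):
    # age-predicted maximal heart rate used above: 208 - 0.7 * age
    return 208 - 0.7 * age_years

def should_terminate_stage(age_years, heart_rate_bpm, rer, volitional_exhaustion=False,
                           hr_fraction=0.85, rer_cutoff=1.1):
    # True when any of the VO2peak termination criteria described above is met
    hr_target = hr_fraction * age_predicted_hr_max(age_years)
    return volitional_exhaustion or heart_rate_bpm >= hr_target or rer >= rer_cutoff

# Example: a 21-year-old at 172 bpm and RER 1.05
print(age_predicted_hr_max(21))               # about 193 bpm
print(should_terminate_stage(21, 172, 1.05))  # True, since 172 exceeds 85% of 193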
HIFE During HIFE, participants completed a 5 min warm-up, 25 min of active exercise, and a 5 min cool-down.Both the warm-up and cool-down were completed at 37-45% of their VO2peak (i.e., 57-63% HRmax), which is the recommended light intensity protocol for a warm-up and cool-down [64].Active exercise consisted of five circuits conducted at a ratio of 3 min of activity followed by 2 min of rest.Activity involved completing three blocks of resistance exercises and two blocks of aerobic exercises.During each block, the participants completed three specific exercises/movements for 30 seconds and repeated each exercise/movement twice per block (see Table 2).During each active circuit, the participants were encouraged to work as hard as they could.The HIFE exercise protocol was created to provide a complete full-body workout that the participants could complete with minimal or no equipment in the comfort of their homes.Individuals with PTSD have indicated a strong preference for exercising both at home and alone [55,56].Additionally, the American College of Sports Medicine recommends incorporating strength training 2 days per week for each major muscle group [65]. MICE During MICE, the participants completed the same warm-up and cool-down as HIFE, but active exercise was 25 min of moderate-intensity aerobic exercise on a treadmill.Treadmill speed and grade were manipulated to keep the RPE between 12 and 15 ("somewhat hard" to "hard" [60]) and HR between 64 and 76% HRmax (i.e., 46-63% VO2peak [64]).To keep a valid assessment of RPE, speed and grade were manipulated following the administration of the RPE scale (i.e., every 5 min when necessary). SED During SED, the participants remained seated in the research lab.All interactions were kept identical to both the HIFE and MICE including the HR and RPE assessments. The Physical Activity Enjoyment Scale (PACES) Post-exercise enjoyment was measured with the PACES [67].This 18-item self-report measure has demonstrated strong internal consistency (α = 0.93).Participants were instructed to pick the number that most closely matched how they felt about the activity they had just completed by using a 7-point Likert scale (e.g., "It's no fun at all" (1) to "It's a lot of fun"( 7)).PACES scores range from 18 to 126, with scores for the present study ranging from 50 to 126, 69 to 126, and 42 to 120 following the HIFE, MICE, and the SED conditions, respectively. Feeling Scale (FS) In-task valence was assessed using the FS [70].The FS is an 11-point, single-item, bipolar measure of pleasure-displeasure. Participants were instructed to indicate how they felt right now on a scale that ranged from +5 to −5, with an option for neutral.Keywords were provided for +5 (i.e., "Very Good"), −5 (i.e., "Very Bad"), 0 (i.e., Neutral), and at every odd integer. Rating of Perceived Exertion (RPE) Perceptions of effort were assessed with the 15-point RPE scale [60].The RPE is a self-report measure of effort that ranges from 6 (no exertion at all) to 20 (maximal exertion).The RPE was used during each condition, and the participants indicated how hard they felt they were working right now.The RPE scale has been validated in the literature as an appropriate method to assess exertion within an exercise setting (r = 0.884; [71]). Heart Rate (HR) Participants were fitted with a Polar© FT1 HR monitor and Polar WearLink Coded 31 transmitters (Polar Electro, Kempele, Finland).HR values were continuously monitored during all conditions. 
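Because MICE intensity was held between 64 and 76% of HRmax by adjusting the treadmill every 5 min, a simple helper such as the one below could flag when the heart rate drifts out of that band; the names are illustrative, and in the study the adjustment was made manually by the researchers.

def mice_hr_zone(hr_max_bpm, lower=0.64, upper=0.76):
    # moderate-intensity band used during MICE: 64-76% of HRmax
    return lower * hr_max_bpm, upper * hr_max_bpm

def intensity_adjustment(current_hr_bpm, hr_max_bpm):
    low, high = mice_hr_zone(hr_max_bpm)
    if current_hr_bpm < low:
        return "increase speed/grade"
    if current_hr_bpm > high:
        return "decrease speed/grade"
    return "hold"

# Example with a measured peak HR of 190 bpm
print(mice_hr_zone(190))               # (121.6, 144.4)
print(intensity_adjustment(150, 190))  # 'decrease speed/grade'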
Data Analysis Data analysis was conducted using SPSS 27.0.0 for Windows. Data were initially inspected for any unusual data points. As none were found, all participants were included in all analyses. The analysis of differences in the main outcome variable for enjoyment was conducted with a Condition (HIFE, MICE, SED) repeated measures ANOVA.

Affective State Pre-to-post affective state changes in Energy, Tiredness, Tension, and Calmness were assessed using a RM ANOVA [Condition (HIFE, MICE, SED) by Time (Pre, Post 0, Post 20, Post 40)]. The Condition main effect for Calmness [p = 0.049] was significant, but pairwise comparisons did not yield significant differences between any conditions. Calmness was significantly greater following MICE than following SED.

In-Task Valence The RM ANOVA for in-task valence revealed a significant Condition [p < 0.001] and Condition-by-Time interaction effect [p < 0.001], but the Time main effect was not significant [p = 0.220]. Specifically, in-task affective valence was more positive during MICE relative to HIFE [Mdiff ± SE: 1.4 ± 0.37].

Manipulation Check To determine whether participants met the appropriate exercise intensity for HIFE, MICE, and SED, the RPE and HR were assessed during each condition. The Condition main effect was significant for both HR and RPE, with both RPE and HR greater during HIFE than during MICE and SED.

Discussion This randomized, counterbalanced crossover, controlled trial was designed to examine the effects of 25-min bouts of MICE and HIFE, relative to SED, on acute enjoyment and affective responses in individuals living with subsyndromal PTSD. Overall, this study demonstrated that acute bouts of both MICE and HIFE resulted in greater levels of enjoyment than the SED condition. MICE also reduced feelings of tiredness and increased feelings of calmness compared to the SED condition. This suggests that both bouts of MICE and HIFE may be useful approaches to improve acute psychological well-being in this population.

The first purpose of the present manuscript was to assess changes in post-exercise enjoyment. It was hypothesized that enjoyment would be greater following HIFE and MICE relative to SED. Both the moderate- and high-intensity exercise conditions yielded significantly higher scores on the PACES relative to the no-exercise control. Furthermore, enjoyment was not different following HIFE relative to MICE. The results of the present study support the hypothesis and agree with previous research assessing enjoyment following low-moderate intensity continuous and high-intensity interval exercise, relative to a no-exercise condition [51]. In a scoping review of the literature, Stork et al. [53] found that most studies reported exercise enjoyment to be similar or greater after high-intensity interval exercise compared to moderate-intensity continuous exercise. Furthermore, the review found that most of the participants in the cited studies preferred high-intensity interval exercise over moderate-intensity continuous exercise. As exercise enjoyment has been identified as a strong indicator for continued exercise participation [42][43][44], it could be a valuable tool to increase exercise adherence.
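For readers who prefer an open-source route, the Condition-by-Time repeated measures ANOVA described in the Data Analysis section above can be reproduced in outline with statsmodels; the long-format layout and column names below are assumptions, since the original analysis was run in SPSS.

import pandas as pd
from statsmodels.stats.anova import AnovaRM

# long-format data: one row per participant x condition x time point, e.g. columns
# 'subject', 'condition' (HIFE/MICE/SED), 'time' (Pre/Post0/Post20/Post40), 'energy'
df = pd.read_csv("adacl_energy_long.csv")  # hypothetical file

model = AnovaRM(data=df, depvar="energy", subject="subject", within=["condition", "time"])
result = model.fit()
print(result)  # F and p values for the condition, time, and condition-by-time effects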
It was also hypothesized that participants would show an increase in positive affective states and a decrease in negative affective states following the completion of both exercise conditions.To assess the pleasure and displeasure of HIFE and MICE, participants completed the AD ACL immediately before, immediately after, 20-min post, and 40-min post each condition.Given the nature of the AD ACL, measures of pleasure are highlighted by increases in pleasant-activated affective states (i.e., Energy) and/or pleasant-deactivated affective states (i.e., Calmness).Similarly, measures of displeasure are noted by increases in unpleasant-activated affective states (i.e., Tension) and/or unpleasant-deactivated affective states (i.e., Tiredness).The results generally support this hypothesis.Specifically, the participants indicated significantly higher levels of Energy from pre-to immediate post-exercise for both the HIFE and MICE conditions, while the participants reported a decrease in Energy from the pre-to immediate post-SED condition.Calmness was significantly reduced from pre-to 40-min post-MICE and SED, but was not different from pre-at any time following HIFE.Furthermore, the participants indicated a decrease in Tiredness for MICE relative to SED, whereas the HIFE condition did not differ from either the MICE or SED conditions.Tiredness was significantly decreased from pre-to immediately post-, 20-min post-, and 40-min post-MICE only.Tiredness remained unchanged during HIFE and SED at all time points relative to pre-.There were no differences between conditions for the unpleasant-activated state of Tension. Overall, the results for affective state changes agreed with the previous literature.Greene et al. [51] found increased energy and decreased tiredness following 15 min of walking and body weight interval exercise relative to a quiet rest condition.A systematic review and meta-analysis reported a main effect of 0.47 for increased energy following acute exercise [62].Finally, Jung et al. [52] reported increased energetic arousal following moderate-intensity continuous exercise and an increase in tension arousal following highintensity interval exercise and moderate-intensity continuous exercise, but a larger increase following high-intensity interval exercise.Human behavior is often motivated by the pursuit of pleasure and the avoidance of displeasure (hedonic theory: [37]).Participants felt more energy after moderate-and high-intensity exercise relative to not exercising at all.Moderate-intensity exercise also decreased feelings of tiredness and increased calmness.While both exercise conditions resulted in increased affective states relative to no exercise, there appears to be some added benefits to moderate-intensity exercise with respect to calmness and tiredness. 
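The pre-to-post effect sizes reported in Tables 3 and 4 can be computed along the following lines; this assumes a paired Cohen's d (mean change divided by the SD of the change scores), which may differ from the exact formula the authors used, and the scores shown are hypothetical.

import numpy as np

def cohens_d_paired(pre_scores, post_scores):
    # paired Cohen's d: mean of the pre-to-post differences divided by their SD
    diff = np.asarray(post_scores, dtype=float) - np.asarray(pre_scores, dtype=float)
    return diff.mean() / diff.std(ddof=1)

# hypothetical AD ACL Energy scores before and immediately after one condition
pre = [10, 12, 9, 14, 11]
post = [14, 15, 12, 16, 13]
print(round(cohens_d_paired(pre, post), 2))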
Finally, it was hypothesized that in-task valance would be significantly more positive during MICE relative to HIFE.The results of the present study support this hypothesis, as in-task valence (i.e., feeling scale) was significantly more positive during MICE relative to HIFE.Previous literature has linked exercise intensity to in-task responses.Specifically, the higher the intensity, the less positive/more negative the in-task valence [42][43][44].This is important as in-task valance has been shown to predict post-exercise enjoyment [51].However, while the present study showed a significantly more positive in-task valance during MICE, the in-task valance remained positive during HIFE, and post-exercise enjoyment was not different between MICE and HIFE.This has practical implications on exercise adherence, as affective responses to interval exercise may not follow affective responses to traditional continuous exercise.Previous literature has highlighted an increase in negative affective responses to high-intensity exercise [41,73], possibly due to increases in physiological stressors such as lactate accumulation and increased respiration due to changes in oxygen and carbon dioxide levels, which stimulate the brain to elicit negative affective responses [41].It is suggested that these negative affective responses from highintensity exercise may reduce exercise adherence [44,74].While the present study reported more positive in-task valence during MICE, in-task valence during HIFE remained positive, indicating that affective responses during interval exercise may not follow well-established patterns associated with continuous exercise. The present study is novel as findings on exercise enjoyment, affective states, and intask valence from various exercise intensities in a population living with PTSD/PTSD symptoms in controlled, randomized trials are limited and greatly needed.Teixeira et al. [75] explored exercise affect in healthy young adults in a randomized controlled trial.Randomized controlled trials reduce bias and allow for a more accurate analysis of specific treatment effects [76].The trial may consist of a parallel design (i.e., receives an assigned treatment or condition throughout the study) or crossover design (i.e., receives all treatments or conditions in a randomized fashion).Our study utilized a crossover design that is valuable as the participants serve as their own controls, thereby reducing interindividual variability [77].To avoid carryover effects from the acute bouts of exercise, a wash-out period of at least 48 h was incorporated between sessions.The exercise treatment order was randomized for each participant to further reduce the carryover effects from the previous treatment in the overall results [76]. 
With regard to assessing the affective responses to high-intensity interval exercise relative to continuous exercise, significant attention needs to be given to the exercise intensity itself.While a considerable body of literature on continuous exercise suggests affective responses to exercise are generally pleasant at intensities below the ventilatory or lactate threshold, highly variable at the lactate or ventilatory threshold, and become less pleasant/more negative at intensities above the lactate or ventilatory threshold [38][39][40][41], much less is known about affective responses to interval exercise.Studies have shown the in-task valence to be less positive/more negative during high-intensity interval exercise relative to moderate-intensity continuous exercise [51,52], and enjoyment to be similar or higher following high-intensity interval exercise [53].However, longitudinal evidence linking affective responses during/after interval exercise to adherence should be considered.A recent systematic search identified eight studies that reported physical activity levels at least 12 months following high-intensity interval versus moderate-intensity continuous exercise.The results highlight a substantial decrease in performed exercise intensity during high-intensity interval exercise that was unsupervised, and ultimately led to the conclusion that high-intensity interval exercise was not advantageous for adherence [78].Thus, highintensity interval exercise may not be sustainable at imposed exercise intensities. Furthermore, one of the confounding variables within the literature assessing highintensity interval exercise is the operational definition used in various studies.Specifically, the work rest ratios and relative exercise intensities can vary drastically between studies.Additionally, it is difficult to classify interval exercise as high-intensity, vigorous-intensity, or moderate-intensity, as rest periods make reaching a physiological steady state impossible.As such, recent evidence has suggested using a percentage of peak workloads to assess the intensity during interval exercise [79].With regard to rest periods, Martinez et al. reported greater feelings of pleasure during, and higher enjoyment following interval exercise using shorter intervals after controlling for total work [80].Additionally, a recent study reported greater feelings of pleasure following resistance exercise that decreased intensity during exercise relative to resistance exercise that increased intensity during exercise [81], thus indicating that exercise intensity progression can impact affective responses.As high-intensity interval exercise was listed as the second most popular fitness trend in 2020 [82], there is a need to study affective responses to high-intensity interval exercise, but caution needs to be given when assessing the true exercise intensity and adherence to these programs. 
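As an illustration of prescribing interval work from a peak workload (the %Wpeak approach mentioned above [79]), the sketch below lays out minute-by-minute wattage targets using the 1-min 100%/20% Wpeak pattern described earlier for the Jung et al. protocol; the function is hypothetical and not taken from any of the cited studies.

def interval_prescription(w_peak_watts, work_pct=1.00, recovery_pct=0.20,
                          interval_min=1, total_min=20):
    # alternating work/recovery wattage targets defined as fractions of the peak workload
    session = []
    for minute in range(total_min):
        pct = work_pct if (minute // interval_min) % 2 == 0 else recovery_pct
        session.append(round(pct * w_peak_watts))
    return session

# Example: Wpeak = 250 W -> 1 min at 250 W alternating with 1 min at 50 W for 20 min
print(interval_prescription(250))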
In the present study, exercise intensity was assessed using subjective and objective measures.First, the participants' aerobic capacity was assessed via a VO2peak test.During MICE, their heart rate was monitored and maintained at 64-76% HRmax (i.e., 46-63% VO2peak).During HIFE, the intensity was not specifically controlled for, but the participants were instructed to give their "all out" effort.The VO2peak was reported instead of the VO2max as the criteria to meet VO2max was not met by all participants.Therefore, the true maximum heart rate was likely not reached, resulting in slightly elevated intensity percentages per condition.The average heart rate achieved during the active intervals of HIFE and average heart rate during MICE were 88% and 78% HRmax achieved during the VO2peak test, respectively.Using age-predicted maximal heart rate (i.e., 220-age), the participants achieved an average intensity level of 75.8% during MICE and 85.9% during HIFE [83,84].The American College of Sports Medicine defines moderate intensity as 64-76% of maximum heart rate and vigorous intensity as 77-95% of maximum heart rate [64].Therefore, with an increased maximum heart rate from a true VO2max test, and in agreement with intensity based on the age-predicted maximal heart rate, the HIFE condition and MICE condition meet the standards for vigorous and moderate intensity, respectively.Furthermore, subjective measures of intensity were assessed via the RPE scale.Immediately following the active intervals of HIFE, the average RPE was 14 ("somewhat hard to hard", which indicates vigorous intensity), the average RPE during MICE was 11.5 ("fairly light to somewhat hard", which is on the borderline of light to moderate intensity), and the average RPE during SED was 6.4 ("very, very light", which indicates a sedentary control [64]). There are several limitations to the present manuscript.First, as the present sample was a convenience sample, previous exercise history, sex, and ethnicity were not controlled for.While it would be of value to examine the affective responses and enjoyment in both regular exercisers and sedentary individuals, this was less of a concern for the present study as exercise history did not change the results when used as a covariate.Additionally, the present study included fifteen females and only six males; this gender imbalance was similar to the participant make-up in the repeated measures, randomized, and counter-balanced study by Jung et al. 
[52] examining high- and moderate-intensity exercise bouts on affective responses in healthy men and women. Thus, future studies should explore affective responses to exercise in a more balanced group of men and women or by comparing larger groups of men and women, especially in participants living with PTSD/PTSD symptoms, as women have been shown to have PTSD prevalence rates nearly double that of men [85]. Furthermore, the participants were not clinically diagnosed with PTSD prior to participation in the study. While this limits the reach of the present manuscript, this was less of a concern as the overall PCL-5 score was well above the recommended cut point for a probable PTSD diagnosis. According to the PCL-5 questionnaire, a validated measure of PTSD symptoms, a score between 33 and 80 suggests that the PTSD severity is above the clinical threshold [58,86]. The mean score for the 21 participants in the present study was 52.5 ± 12.2. While a PTSD diagnosis was not provided by a clinical psychologist, the results from the PCL-5 serve to verify the severity of PTSD symptoms of the participants. Therefore, it is likely that the present results would extend to include those with a clinical diagnosis, but further research would be needed to confirm this. Finally, it is important to note that symptoms of PTSD include avoidance and decreased mental and physical health. It is possible that the participants in the present sample are not representative of all individuals living with PTSD/PTSD symptoms, thus limiting the generalizability of the findings. However, it is feasible that individuals with greater health impairments would experience more significant improvements following exercise. Future studies are needed on clinical populations living with PTSD and other comorbidities.

Conclusions In conclusion, this crossover study demonstrated that an acute bout of both moderate-intensity continuous exercise and high-intensity interval-based exercise resulted in increased enjoyment with positive outcomes on affective states compared to a sedentary control in individuals with subsyndromal PTSD. Energy was increased in both HIFE and MICE compared to SED, while MICE also resulted in greater psychological benefits concerning Calmness and Tiredness. In-task valence was more positive during MICE relative to HIFE, but in-task valence remained positive during HIFE. Overall, an acute bout of MICE and HIFE were well-tolerated in this special population and both resulted in immediate psychological benefits. While future explorations need to address some of the study limitations and expand these findings to longitudinal effects on affective states and enjoyment, the present manuscript provides preliminary evidence that exercise may be used as an additional treatment line for individuals living with PTSD/PTSD symptoms. Furthermore, it is of value to continue to determine the most enjoyable form of exercise that may lead to high adherence rates in order to improve both the physical and mental health of those suffering from PTSD symptoms.

Informed Consent Statement: Informed consent was obtained from all subjects involved in the study.

Figure 2. Post-condition enjoyment. * Indicates a significant difference at the p = 0.05 level.

Table 1. Descriptive information of the sample participants.
bb = body bars; bw = body weight; each exercise performed for 30 s and repeated twice per block (i.e., 3 min blocks).

Table 3. Mean ± SD and effect sizes of Energy and Tiredness before and after each condition.

Table 4. Mean ± SD and effect sizes of Calmness and Tension before and after each condition.
Efficacy of Active Carbon towards the Absorption of Deoxynivalenol in Pigs In order to assess the in vivo efficacy of mycotoxin binders, specific toxicokinetic parameters should be measured according to European guidelines. For this purpose, an absorption model in pigs is described with emphasis on absorption kinetics. Pigs received a single oral bolus of the mycotoxin deoxynivalenol alone or in combination with active carbon (applied as mycotoxin binder). After administration of deoxynivalenol alone, significant plasma amounts of deoxynivalenol were detected and kinetic parameters were calculated using a one compartmental model. Activated carbon completely prevented the absorption of deoxynivalenol as no plasma amounts could be detected. Introduction The contamination of feed with mycotoxins is a continuing feed safety issue leading to economic losses in animal production [1]. Consequently, a variety of methods for the decontamination of feed have been developed, but mycotoxin detoxifying agents seem to be the most promising and are therefore most commonly used [2,3]. These detoxifying agents can be divided into two different classes, namely mycotoxin binders and mycotoxin modifiers. These two classes have different modes of action; mycotoxin binders adsorb the toxin in the gut, resulting in the excretion of complex toxin-binder in the faeces, whereas mycotoxin modifiers transform the toxin into non-toxic metabolites [4]. The extensive use of these additives led, in 2009 in the European Union, to the establishment of a new group of feed additives called mycotoxin detoxifiers. These compounds are specified as "substances for reduction of the contamination of feed by mycotoxins: substances that can suppress or reduce the absorption, promote the excretion of mycotoxins or modify their mode of action" [5]. The efficacy of these products has to be evaluated. In vivo efficacy trials are usually based on so-called unspecific parameters, evaluating animal performance, blood biochemical or hematological parameters, organ weight, effects on immune function, histological changes, etc. [6]. As these criteria are non-specific, differences obtained between treated and untreated animals cannot be solely attributed to the efficacy of the detoxifier. There may be confounding effects involved such as immuno-modulating activity of β-glucans and antioxidant action of other feed components. A possibility to distinguish between specific and unspecific effects is the inclusion of a group fed non-contaminated feed supplemented with the detoxifier. However, the European Food Safety Authority (EFSA) proposed other end-points based on specific toxicokinetic parameters [7]. As mycotoxin binders are deemed to adsorb mycotoxins in the gut, a lowered intestinal absorption is expected. According to the EFSA, the most relevant parameter to evaluate the efficacy of these products is the plasma concentration of these toxins or their main metabolites. The EFSA proposes short-term feeding trials where the mycotoxin and detoxifier is mixed in the feed [7]. The plasma concentrations of the mycotoxin, and the main metabolite(s), should be monitored over a period of at least 5 days with a presampling period of at least one week (steady-state design). Furthermore, unspecific parameters may be monitored as well. Such feeding trials are labor and cost intensive. 
In contrast, a toxicokinetic model where the mycotoxin is administered with or without mycotoxin detoxifier as a single oral administration would be less expensive and labor intensive to perform. The aim of present study was to evaluate a bolus absorption model in relation to the EFSA guidelines, to study the efficacy of mycotoxin detoxifiers towards the oral absorption of deoxynivalenol (DON) in pigs. Results and Discussion After single oral bolus administration of 0.05 mg DON/kg bw, quantifiable plasma amounts of DON were detected ( Figure 1). No statistical differences in absorption parameters between males and females were found (data not shown). The plasma concentration-time profile fitted a one compartmental model. The Tmax of 1.33 h is comparable to the value of 1.65 h reported by Goyaerts and Dä nicke (2006) [8]. The Cmax of 29.7 ng/mL on the other hand, was higher compared to [8] (15.1 ng/mL after oral dosing of 0.08 mg DON/kg bw). However, feed intake can influence the oral bioavailability of DON which explains the slightly higher Cmax in the present study with fasted pigs in comparison to fed pigs as used in [8]. Other kinetic parameters of DON (Table 1) were comparable to literature reports [8,9]. The major metabolite of DON, de-epoxydeoxynivalenol (DOM-1), was not detected in plasma in the present study. This correlates to previous literature reports where DOM-1 only accounted for 1.4%-1.7% of the total DON concentration in the systemic circulation of pigs [10]. To test the effectiveness of this model in pigs, DON was also administered in combination with active carbon (AC) as it was demonstrated that it strongly adsorbs DON in broiler chickens [11]. The absorption of DON was completely prevented by AC as no DON, above the limit of detection (LOD), could be detected in plasma. This demonstrates the suitability of the absorption kinetic model to evaluate the efficacy of mycotoxin binders towards the oral absorption of DON in pigs. As stated, AC was used as a positive control because it is a universal antidote which adsorbs various compounds, including mycotoxins such as DON [12,13]. However, the commercial use of AC in practice should be avoided in order to minimize the risk of a diminished nutrient absorption as well as the impairment of nutritional value of the feed [12]. Chemicals, Products and Reagents The standard of DON, used for the animal and analytical experiments, was obtained from Fermentek (Jerusalem, Israel). DOM-1 was purchased from Sigma-Aldrich (Bornem, Belgium). Internal standard (IS), 13 C15-DON, was purchased from Biopure (Tulln, Austria). The standards were stored at ≤−15 °C. Water, methanol and acetonitrile (ACN) were of LC-MS grade and were obtained from Biosolve (Valkenswaard, The Netherlands). Glacial acetic acid was of analytical grade and obtained from VWR (Leuven, Belgium). Millex ® -GV-PVDF filter units (0.22 µm) were obtained from Merck-Millipore (Diegem, Belgium). Animal Experiment Eight piglets (20.2 ± 1.4 kg bw) of mixed gender were purchased (Biocentre Agrivet, Merelbeke, Belgium) and housed in four different compartments (±4 m 2 /compartment, two animals/compartment). The temperature was kept between 18 and 25 °C. The relative humidity was between 40% and 80%. An ambient day-light scheme was applied. After a one week acclimatization period, the pigs were fasted for 12 h followed by administration of a single oral bolus of 0.05 mg DON/kg bw by oral gavage using an intragastric tube. This dose resembles a feed contamination amount of 1 mg DON/kg. 
For this bolus administration, DON was dissolved in ethanol (1 mg/mL) and further diluted with tap water up to a volume of 10 mL. Four of the eight pigs received the DON bolus in combination with activated carbon (AC) (0.1 g/kg bw, resembling an inclusion amount of 2 g/kg feed) (NORIT Carbomix ® , KELA Pharma, Sint-Niklaas, Belgium), suspended in 10 mL of tap water. Immediately after administration of the bolus, the intragastric tube was rinsed with 50 mL of tap water. Blood samples were drawn before (0 min) and at 0.33, 0.66, 1, 1.5, 2, 3, 4, 6, 8, 10 and 12 h post administration. Blood samples were taken in heparinized tubes and centrifugated (2851 × g, 10 min, 4 °C). Aliquots (250 µL) of plasma samples were stored at ≤−15 °C until analysis. This animal experiment was approved by the Ethical Committee of Ghent University (Case number EC 2011-13). Quantification of DON in Plasma Samples were analyzed as previously described by [14]. Briefly, 12.5 µL of IS and 750 µL of ACN were added to 250 µL of plasma, followed by vortex mixing (15 s) and centrifugation (8517 × g, 10 min, 4 °C). Next, the supernatant was transferred to another tube and evaporated using a gentle nitrogen (N2) stream (45 ± 5 °C). The dry residue was reconstituted in 200 µL of water/methanol (85/15, v/v). After vortex mixing (15 s), the sample was passed through a Millex ® GV-PVDF filter (0.22 µm) and transferred into an autosampler vial. An aliquot (5 µL) was injected onto the LC-MS/MS instrument. The LC system consisted of a quaternary, low-pressure mixing pump with vacuum degassing, type Surveyor MSpump Plus and an autosampler with temperature controlled tray and column oven, type Autosampler Plus, from ThermoFisher Scientific (Breda, The Netherlands). Chromatographic separation was achieved on a Hypersil ® Gold column (50 mm × 2.1 mm internal diameter, particle diameter: 1.9 µm) in combination with a guard column of the same type (10 mm × 2.1 mm internal diameter, particle diamter: 3 µm), both from ThermoFisher Scientific. A gradient elution program was performed with 0.1% glacial acetic acid in water and methanol as mobile phases. The LC column effluent was interfaced to a TSQ ® Quantum Ultra triple quadrupole mass spectrometer, equipped with a heated electrospray ionization (h-ESI) probe operating in the negative ionization mode (all from ThermoFisher Scientific). Absorption Parameters Parameter analysis was performed with WinNonlin 6.3. (Pharsight, St. Louis, MO, USA). The most important parameters of DON were calculated: maximal plasma concentration (Cmax), time to maximal plasma concentration (Tmax), area under the plasma concentration-time curve from time 0 to infinite (AUC0-inf), absorption rate constant (ka), absorption half-life (T1/2a), elimination rate constant (kel), elimination half-life (T1/2el), volume of distribution divided by the oral bioavailability (Vd/F) and clearance divided by the oral bioavailability (Cl/F). Conclusions For the first time, an in vivo model was applied to evaluate the efficacy of active carbon towards the oral absorption of deoxynivalenol in pigs, based on absorption kinetic characteristics. Activated carbon completely prevented the absorption of DON from the intestinal tract.
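The one-compartment profile and the absorption parameters listed above (ka, T1/2a, kel, T1/2el, Tmax, Cmax, AUC0-inf) can be reproduced in outline with scipy; the concentration values below are hypothetical, and this sketch is not the WinNonlin 6.3 analysis used in the study.

import numpy as np
from scipy.optimize import curve_fit

def one_compartment_oral(t, a, ka, kel):
    # C(t) = A * (exp(-kel*t) - exp(-ka*t)): first-order absorption and elimination
    return a * (np.exp(-kel * t) - np.exp(-ka * t))

# sampling times from the study design (h) and hypothetical plasma DON concentrations (ng/mL)
t = np.array([0.33, 0.66, 1.0, 1.5, 2.0, 3.0, 4.0, 6.0, 8.0, 10.0, 12.0])
c = np.array([18.0, 26.0, 29.0, 28.0, 25.0, 19.0, 14.0, 8.0, 4.0, 2.0, 1.0])

(a, ka, kel), _ = curve_fit(one_compartment_oral, t, c, p0=(40.0, 2.0, 0.3))
tmax = np.log(ka / kel) / (ka - kel)
cmax = one_compartment_oral(tmax, a, ka, kel)
auc_inf = a * (1.0 / kel - 1.0 / ka)  # analytical AUC(0-inf) for this model
print(f"ka={ka:.2f} 1/h, T1/2a={np.log(2)/ka:.2f} h, kel={kel:.2f} 1/h, "
      f"T1/2el={np.log(2)/kel:.2f} h, Tmax={tmax:.2f} h, Cmax={cmax:.1f} ng/mL, AUC={auc_inf:.0f} ng*h/mL")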
Effect of quantum fluctuations on even-odd energy difference in a Cooper-pair box We study the effect of quantum charge fluctuations on the even-odd energy difference for a small superconducting island (Cooper-pair box) connected to a large finite-size superconductor by a tunnel junction. Even-odd energy difference is important for understanding the quasiparticle"poisoning"effect, and determines the activation energy of a trapped quasiparticle in the Cooper-pair box. We find that renormalization of the activation energy due to quantum charge fluctuations depends on the dimensionless normal-state conductance of the junction g_T, and becomes strong at g_T>>1. Recently, superconducting quantum circuits have attracted considerable interest (see [1,2] and references therein). From the viewpoint of quantum many-body phenomena, these circuits are good systems to study the effect of quantum fluctuations of an environment on the discrete spectrum of charge states 3,4,5,6,7 (similar to the Lamb shift in a hydrogen atom). While most of the studies of superconducting nanostructures focus on smearing of the charge steps in the Coulomb staircase measurements 8 , here we consider another observable quantityeven-odd-electron energy difference δE in the Cooperpair box (CPB). This quantity is important for understanding the quasiparticle "poisoning" effect 9,10,11,12,13 , and it has been recently studied experimentally 14,15 . It was conjectured that δE may be reduced in the strong tunneling regime g T = R q /R N > 1 by quantum fluctuations of the charge 14 . Here R q and R N are the resistance quantum, R q = h/e 2 , and normal-state resistance of the tunnel junction, respectively. In this paper, we study the renormalization of the discrete spectrum of charge states of the Cooper-pair box by quantum charge fluctuations. We show that virtual tunneling of electrons across the tunnel junction may lead to a substantial reduction of the even-odd energy difference δE. We consider here the case of the tunnel junction with a large number of low transparency channels 16 . The dynamics of the system is described by the Hamiltonian Here H b BCS and H r BCS are BCS Hamiltonians for the CPB and superconducting reservoir; H C = E c (Q/e − N g ) 2 with E c , N g andQ being the charging energy, dimensionless gate voltage and charge of the CPB, respectively. The tunneling Hamiltonian H T is defined in the conventional way. We assume that the island and reservoir are isolated from the rest of the circuit; i.e. total number of electrons in the system is fixed. At low temperature T < T * , thermal quasiparticles are frozen out. (Here T * = ∆ ln(∆/δ) with ∆ and δ being superconducting gap and mean level spacing in the reservoir, respectively). If total number of electrons in the system is even, then the only relevant degree of freedom at low energies is the phase difference across the junction ϕ. In the case of an odd number of electrons a quasiparticle resides in the system even at zero temperature. The presence of 1e-charged carriers changes the periodicity of the CPB energy spectrum (see Fig. 1) since an unpaired electron can reside in the island or in the reservoir. Note that at N g = 1, a working point for the charge qubit, the odd-electron state of the CPB may be more favorable resulting in trapping of a quasiparticle in the island 14,15,17 . 
Figure 1. Here δE is the ground state energy difference between the even-charge state (no quasiparticles in the CPB) and the odd-charge state (an unpaired electron in the CPB) at Ng = 1. (We assume here equal gap energies in the box and reservoir.)

In order to understand the energetics of this trapping phenomenon, one has to look at the ground state energy difference δE between the even-charge state (no quasiparticles in the CPB) and the odd-charge state (with a quasiparticle in the CPB), Eq. (2). Note that tunneling of an unpaired electron into the island shifts the net charge of the island by 1e. Thus, one can find δE of Eq. (2) as the energy difference at two values of the induced charge, N g = 1 and N g = 0, on the even-electron branch of the spectrum (see Fig. 1), Eq. (3). Here we assumed that subgap conductance due to the presence of an unpaired electron is negligible 18.

In order to find the activation energy δE given by Eq. (3), we calculate the partition function Z(N g) for the system, island and reservoir, with an even number of electrons. For the present discussion it is convenient to calculate the partition function using the path integral description developed by Ambegaokar, Eckern and Schön 19. In this formalism the quadratic-in-Q interaction in Eq. (1) is decoupled with the help of a Hubbard-Stratonovich transformation by introducing an auxiliary field ϕ (conjugate to the excess number of Cooper pairs on the island). Then, the fermion degrees of freedom are traced out, and around the BCS saddle point the partition function becomes a path integral over ϕ summed over winding numbers, Eq. (4). Here the summation over winding numbers accounts for the discreteness of the charge 20, and the action S reads (ħ = 1) as in Eq. (5), with β being the inverse temperature, β = 1/T. Here C geom is the geometric capacitance of the CPB which determines the bare charging energy E c = e 2 /2C geom; and E J is the Josephson coupling given by the Ambegaokar-Baratoff relation. The last term in Eq. (5) accounts for single electron tunneling with a kernel α(τ) decaying exponentially at τ ≫ ∆ −1 [19]. For sufficiently large capacitance the evolution of the phase is slow in comparison with ∆ −1, and we can simplify the last term in Eq. (5) as in Eq. (6). It follows from here that virtual tunneling of electrons between the island and reservoir leads to the renormalization of the capacitance 19, Eq. (7). Within the approximation (6), the effective action acquires the simple form of Eq. (8). To calculate Z(N g) one can use the analogy between the present problem and that of a quantum particle moving in a periodic potential, and write the functional integral as a quantum mechanical propagator from ϕ i = ϕ 0 to ϕ f = ϕ 0 + 2πm during the (imaginary) "time" β, Eq. (9). The time-independent "Schrödinger equation" corresponding to such a problem has the form of Eq. (10) 21. Here Ẽ c denotes the renormalized charging energy, Eq. (11). One can notice that Eq. (10) corresponds to the well-known Mathieu equation, for which the eigenfunctions Ψ k,s (ϕ) are known 22. Here the quantum number s labels the Bloch band (s = 0, 1, 2, ...), and k corresponds to the "quasimomentum". By rewriting the propagator (9) in terms of the eigenfunctions of the Schrödinger equation (10) we obtain Eq. (12). Here E s (k) are the eigenvalues of Eq. (10). According to the Bloch theorem, the eigenfunctions should have the form Ψ k,s (ϕ) = e ikϕ/2 u k,s (ϕ) with u k,s (ϕ) being 2π-periodic functions, u k,s (ϕ) = u k,s (ϕ + 2π). We can now rewrite Eq. (4) as Eq. (13). The eigenvalues E s (N g) are given by the Mathieu characteristic functions M A (r, q) and M B (r, q) 23.

At N g = 0 and N g = 1, the exact solution for the lowest band is given by Eq. (14). The activation energy δE can be calculated from Eq. (13) by evaluating the free energy at T = 0. The plot of δE as a function of E J /2Ẽ c is shown in Fig. 2. The even-odd energy difference δE has the asymptotes given in Eq. (15); these asymptotes can also be obtained using perturbation theory and the WKB approximation, respectively. As one can see from Eq. (15), δE can be reduced by quantum charge fluctuations. For realistic experimental parameters 14, ∆ ≈ 2.5 K, E c ≈ 2 K and g T ≈ 2, we find that the even-odd energy difference δE is 15% smaller with respect to its bare value, i.e. δE ≈ 1.45 K and δE bare ≈ 1.7 K. Since the reduction of the activation energy by quantum fluctuations is much larger than the temperature, this effect can be observed experimentally. The renormalization of δE can be studied systematically by decreasing the gap energy ∆, which can be achieved by applying a magnetic field B [3]. The dependence of the activation energy δE on ∆(B) in Eq. (15) enters through the Josephson energy E J, which is given by the Ambegaokar-Baratoff relation, and the renormalized charging energy Ẽ c of Eq. (11).

The renormalization of the discrete spectrum of charge states in the CPB becomes more pronounced in the strong tunneling regime. However, the adiabatic approximation leading to the effective action S eff (8) is valid when the evolution of the phase is slow, i.e. the adiabatic parameter ω J /∆ is small. (Here ω J is the plasma frequency of the Josephson junction, ω J ∼ √(E c E J).) Thus, at large conductances g T the adiabatic approximation holds only when the geometric capacitance is large, C geom ≫ e 2 g T /∆. Under such conditions the renormalization effects lead to a small correction of the capacitance, see Eq. (7). If ω J /∆ > 1, the dynamics of the phase is described by the integral equation (5), and retardation effects have to be included. In a similar circuit corresponding to the Cooper-pair box qubit 1,2 it is possible to achieve the strong tunneling regime g T ≫ ∆C geom /e 2 and satisfy the requirements for the adiabatic approximation (ω J /∆ ≪ 1). In this circuit a single Josephson junction is replaced by two junctions in a loop configuration 1,2. This allows one to control the effective Josephson energy using an external flux Φ x. (For the CPB qubit the Josephson energy E J in Eq. (8) should be replaced with E J (Φ x) = 2E 0 J cos(πΦ x /Φ 0); here Φ 0 is the magnetic flux quantum, Φ 0 = h/2e, and E 0 J is the Josephson coupling per junction.) In this setup, even at large conductance g T ≫ ∆C geom /e 2, one can decrease ω J ∼ √(E c E J (Φ x)) by adjusting the external magnetic flux to satisfy ω J /∆ ≪ 1. Under such conditions the quantum contribution to the capacitance, C̃ (see Eq. (7)), becomes larger than the geometric one, while the dynamics of the phase is described by the simple action of Eq. (8). It would be interesting to study experimentally the renormalization of the discrete energy spectrum of the qubit in this regime. We propose to measure, for example, the even-odd energy difference δE. In this case δE is determined by the conductance of the junctions g T, the superconducting gap ∆, and the magnetic flux Φ x, and is given by Eq. (15) with Ẽ c ≈ 32∆/3g T, see Eq. (11), and E J = 2E 0 J cos(πΦ x /Φ 0). In conclusion, we studied the renormalization of the discrete spectrum of charge states of the Cooper-pair box by virtual tunneling of electrons across the junction.
In particular, we calculated the reduction of even-odd energy difference δE by quantum charge fluctuations. We showed that under certain conditions the contribution of quantum charge fluctuations to the capacitance of the Cooper-pair box may become larger than the geometric one. We propose to study this effect experimentally using the Cooper-pair box qubit.
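The ground-band energies E0(Ng) entering Eqs. (13)-(15) can also be checked numerically by diagonalizing the charging-plus-Josephson Hamiltonian in the charge basis, using the renormalized charging energy Ẽc; the Python sketch below is such an independent numerical check, not the Mathieu-function evaluation used in the paper, and the parameter values are illustrative.

import numpy as np

def ground_energy(ng, ec, ej, n_max=20):
    # lowest eigenvalue of H = Ec*(2m - ng)^2 - (Ej/2)*(|m><m+1| + h.c.)
    # in the Cooper-pair number basis m = -n_max..n_max (charge measured in units of e)
    m = np.arange(-n_max, n_max + 1)
    h = np.diag(ec * (2 * m - ng) ** 2).astype(float)
    h += np.diag(np.full(2 * n_max, -ej / 2.0), 1) + np.diag(np.full(2 * n_max, -ej / 2.0), -1)
    return np.linalg.eigvalsh(h)[0]

def even_odd_energy_difference(ec, ej):
    # delta E = E0(Ng=1) - E0(Ng=0), cf. Eq. (13) evaluated at T = 0
    return ground_energy(1.0, ec, ej) - ground_energy(0.0, ec, ej)

# delta E in units of the renormalized charging energy for a few EJ/(2*Ec) ratios
for ratio in (0.1, 0.5, 1.0, 2.0):
    print(ratio, even_odd_energy_difference(1.0, 2.0 * ratio))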
Construction of S-doped hierarchical porous carbons by acid-free treatment strategy for supercapacitors The pollution from acid treatment process in the preparation process of hierarchical porous carbons is a substantial challenge for industrial application of supercapacitors (SCs), which necessitates the development of green alternative technologies. In this work, S-doped hierarchical porous carbons (S-HPCs) are prepared from cheap coal tar pitch by a less harmful in situ KHCO3 activation strategy. The sample obtained at 800°C (S-HPC800) possesses 3D framework structure with hierarchical pores, large specific surface area (1485 m2 g-1) and O, S-containing functional groups. Due to these synergistic characteristics, SHPC800 as supercapacitor electrode exhibits high specific capacitance of 246 F g-1 at 0.1 A g-1 with a capacitance retention of 68.3% at 40 A g-1 and excellent cycle stability with 96.7% capacitance retention after 10, 000 charge-discharge cycles. This work provides an environmentally friendly approach to prepare advanced carbon-based electrode materials from industrial by-products for energy storage devices. Introduction Owing to the exhaustion of fossil fuels and the worsening environmental pollution, stable and renewable energy sources are highly desired in recent years [1]. Therefore, electrochemical energy storage (EES) devices which can efficiently utilize clean resources attract the researchers' attention. Supercapacitors (SCs) have been widely studied as a typical EES device due to their rapid charge-discharge rate, high power density and long cycle life [2][3][4]. According to the energy storage mechanism, SCs can be divided into electric double layer capacitors (EDLCs) and pseudocapacitors [5]. EDLCs and pseudocapacitors store energy through reversible ion adsorption and fast redox reactions at the surface of electrode materials, respectively. Therefore, electrode materials play a vital role in the performance of SCs. Carbon materials are considered as the most promising electrode materials in virtue of their good electrical conductivity and high specific surface area [6]. However, pure carbon electrode materials still have great limitations in terms of electrochemical performance. Structural optimization and surface modification of pure carbon materials suit the remedy to the case. In order to achieve high performance carbon electrode materials, an effective method is to design hierarchical porous carbons (HPCs) with micro-, meso-and macropores. Micropores can provide a large number of active sites for reversible ion adsorption [7]. Mesopores act as electrolyte container which is conducive to the spread and transmission of ions [8]. Macropores can be used as ion-buffering reservoirs which are benefit to mass transport of ions [9]. In addition, heteroatom doping (such as N, P, S and B) which can improve wettability and increase pseudocapacitance of HPCs is another efficient method to enhance electrochemical performance of electrode materials [10][11][12]. Generally, HPCs can be prepared by using hard template such as SiO 2 , nano-CuO, nano-ZnO and nano-Fe 2 O 3 [13]. However, the using of corrosive HF or volatile HCl is inevitable to remove hard templates, which results in environmental pollution. Therefore, the exploitation of a more environmentally friendly method is still a 'must-do' agenda for pushing commercialization of SCs. 
Coal tar pitch (CTP) is a residue fraction from the distillation of coal tar, which possesses various kinds of polycyclic aromatic hydrocarbons with viscous and thermoplastic characteristics [14]. In this work, we report an eco-friendly method to prepare S-doped hierarchical porous carbons (S-HPCs) from CTP. The sample obtained at 800°C (S-HPC 800 ), featuring a three-dimensional (3D) structure, is composed of interconnected carbon capsules and sheets with abundant active sites for fast ion adsorption. Moreover, S doping can further improve the electrical conductivity and boost the surface wettability of S-HPC 800 . Benefiting from such constructive characteristics, S-HPC 800 as the electrode material for a SC exhibits high specific capacitance, good rate capability and superior cycle stability.

CTP, KHCO 3 and Na 2 SO 4 were mixed with continuous stirring for 30 min. Then, the resulting mixture was heated in a horizontal tubular furnace. In a 20 mL min -1 N 2 atmosphere, the mixture was first heated to 150°C at 5°C min -1 for 30 min, followed by being heated to X°C (X stands for 750, 800 and 850) at 5°C min -1 for 1 h. After cooling down naturally, the as-prepared samples were purified merely by distilled water washing and dried at 100°C overnight. The final products were denoted as S-HPC X . HPC 800 without adding Na 2 SO 4 was prepared by a similar procedure as a control.

Characterization Field emission scanning electron microscopy (FESEM, NanoSEM430, USA) and transmission electron microscopy (TEM, JEOL-2100, Japan) were employed to investigate the morphology of HPCs. Energy dispersive spectroscopy (EDS) was used to test surface elements. Nitrogen adsorption-desorption isotherms were conducted at -196°C on an Autosorb-IQ system (Quantachrome, USA). The specific surface area (S BET ) was calculated by the Brunauer-Emmett-Teller (BET) method. The pore diameter was analysed from the adsorption branches of the isotherms using the density functional theory (DFT) method. Raman spectroscopy (RamHR800) was used to examine the defect degree of the samples. X-ray photoelectron spectroscopy (XPS, ThermoESCALAB250) was used to measure the contents of elements and functionalities.

Electrochemical measurement Firstly, HPCs (90 wt%) and polytetrafluoroethylene (10 wt%) were mixed together in deionized water. Secondly, the mixture was rolled and then made into circular films with a diameter of 12 mm. Thirdly, the as-prepared films were dried at 110 °C for 2 h in a vacuum oven, followed by being pressed onto nickel foams to obtain the electrodes. Before being assembled in supercapacitors, the electrodes were soaked in 6 M KOH electrolyte for 2 h under vacuum. Finally, the immersed electrodes were assembled into a symmetrical button supercapacitor. The cyclic voltammetry (CV) test was carried out on a CHI760E electrochemical workstation (Shanghai, China) and the galvanostatic charge-discharge (GCD) measurement was performed on a supercapacitance test system (SCTs, Arbin Instruments, USA). The electrochemical impedance spectroscopy (EIS) test was carried out on a Solartron impedance analyzer (Solartron Analytical, SI1260, UK). The specific capacitance (C, F g -1 ) of HPC electrodes was calculated from the GCD curves by Equation (1), and the energy density (E, Wh kg -1 ) and power density (P, W kg -1 ) of the assembled SC were calculated by Equations (2) and (3), respectively, with P = E/Δt (3), where V (V) and Δt (h) stand for the discharge voltage after the IR drop and the discharge time, respectively.

Results & Discussion The nitrogen adsorption-desorption isotherms of four HPC samples are shown in Fig. 1a.
All the nitrogen adsorption-desorption isotherms of the HPCs exhibit strong adsorption at P/P 0 < 0.01, small hysteresis loops at 0.4 < P/P 0 < 0.9 and little tail at P/P 0 > 0.9, which suggests the existence of a large number of micropores, a moderate number of mesopores and a small number of macropores [15]. It is well known that macropores, mesopores and micropores avail for ion-buffering, ion transport and ion adsorption, respectively [16]. Fig. 1b indicates that the pore size of HPCs is mainly concentrated in 0.5~4 nm. With the increasing of temperature, the S BET of S-HPCs first raised from 1199 m 2 g -1 to 1485 m 2 g -1 , and then fell to 1257 m 2 g -1 , while the average pore diameter (D ap ) increased from 2.53 nm to 2.86 nm ( Table 1). The reason for such changes of S BET and D ap is that the strong activation of KHCO 3 lead to the collapse of some micropores and mesopores under high temperature conditions. In addition, the S BET of S-HPC 800 (1485 m 2 g -1 ) is much higher than HPC 800 (998 m 2 g -1 ) ( Table 1), which demonstrates that Na 2 SO 4 can be used as an auxiliary activator and the doping of S is in favour of the formation of edge defects. The foregoing results suggest that the pore structure parameters of HPCs can be tuned by changing the activation temperature and introducing S atom. The FESEM images in Fig. 2a-e show that the 3D framework structure of HPC 800 and S-HPC 800 are consisted of interconnected carbon sheets and capsules with opened pores. The EDS mapping images of HPC 800 (Fig. 2f) display a relatively homogeneous distribution of C, O and N elements. The reason for the existence of N element is that there is a small amount of N-containing substance in CTP. The TEM images further prove that there are interconnected carbon sheets and capsules with large pores in S-HPC 800 structure (Fig. 3a, b, d, e). Besides, the HRTEM images show that S-HPC 800 possesses local graphitization stripes. The EDS mapping images of S-HPC 800 display a relatively homogeneous distribution of C, O, N and S elements (Fig. 1g), which confirms that S atom was successfully doped in S-HPC 800 . The Raman spectra of HPCs display two strong characteristic peaks at ca. 1343 cm -1 and ca. 1587 cm -1 , corresponding to D-band and G-band, respectively (Fig. 4a). The former is related to the defects and disorder structures, while the latter can be assigned to the graphitic structures [17]. The peak intensity ratio of the D-band to G-band (I D /I G ) of S-HPC 800 (1.01) is higher than that of S-HPC 750 (0.95), S-HPC 850 (0.97) and HPC 800 (0.93), which confirms that S-HPC 800 possesses the most defects among the four HPCs. The full XPS survey scan spectrum of S-HPC 800 suggests the presence of C (284.6 eV), O (532.1 eV), N (400.0 eV), and S (164.4 eV) (Fig. 4b). The highresolution O 1s spectrum of S-HPC 800 witnesses three peaks: O-H (535.5 eV), C-O (532.5 eV) and C=O (531.2 eV) (Fig. 4c) [18]. In addition, the high-resolution S 2p spectrum of S-HPC 800 was fitted into three peaks with the binding energies of 163.2 eV, 164.9 eV and 168.6 eV, corresponding to S 2p 3/2 , S 2p 1/2 and SO X , respectively (Fig. 4d) [19]. The doping of these heteroatoms can not only improve the wettability of S-HPC 800 but also provide additional pseudocapacitance, as a result, remarkably improve capacity performance. The electrochemical performance of HPC electrodes were evaluated in symmetric coin-type SCs using 6 M KOH aqueous solution as electrolyte. 
The CV curves of HPC electrodes show a rectangular shape at a scan rate of 5 mV s -1 . Fortunately, no obvious distortions are observed when the scan rate increases to 200 mV s -1 (Fig. 5a, b), displaying the ideal EDLC behaviour and good rate performance of HPC electrodes [20]. At the current density of 0.1 A g -1 , all the GCD curves of HPC electrodes show symmetrical triangles (Fig. 5c), which indicates ideal EDLC behaviour [21]. Fig. 5d shows that the specific capacitance of S-HPC 800 electrode is always higher than that of other three HPC electrodes at the same current density. The specific capacitance of S-HPC 800 is 246 F g -1 at 0.1 A g -1 with a capacitance retention of 68.3% at 40 A g -1 . The hierarchical porous structure and synergistic effect of S doping are of great significance for the improving specific capacitance and rate performance of S-HPC 800 . The energy densities of S-HPC 800 -based SC are 8.5 and 3.3 Wh kg -1 at the power density of 51.6 and 11520.7 W kg -1 , respectively (Fig. 6a), which confirms its potential application. In addition, S-HPC 800 -based SC exhibits a long-term cycle stability, whose capacitance retention stabilized at 96.7% even after 10,000 cycles at 5 A g -1 (Fig. 6b). Nyquist plots of HPC electrodes in 6 M KOH electrolyte are shown in Fig. 6c. It can be found that S-HPC 800 electrode presents a straight line paralleling to the Y axis at low-frequency part, manifesting the ideal EDLC behaviour [22]. At high frequency region, the small Xintercept and semicircle demonstrate that S-HPC 800 electrode has very low intrinsic ohmic resistance (R s ) and charge transfer resistance (R ct ), which further gives rise to the rapid transmission and diffusion of electrolyte ions. Conclusions In summary, simple and eco-friendly in situ KHCO 3 activation strategy is reported in this paper to prepare Sdoped hierarchical porous carbons (S-HPCs) from CTP for SCs application. Unlike the conventional activation method, this method avoids the complicated and harmful acid washing step. The sample obtained at 800°C (S-HPC 800 ) possesses 3D framework structure with hierarchical pores, large specific surface area and O, Scontaining functional groups. Consequently, S-HPC 800 electrode for supercapacitor exhibits high specific capacitance of 246 F g -1 at 0.1 A g -1 with a capacitance retention of 68.3% at 40 A g -1 and prominent cycle stability with 96.7% capacitance retention after 10,000 charge-discharge cycles. This work provides an environmentally friendly approach to prepare advanced carbon-based electrode materials from CTP for energy storage devices.
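To make the GCD-based evaluation described in the Electrochemical measurement section concrete, the short Python sketch below estimates the single-electrode specific capacitance, energy density and power density from one galvanostatic discharge branch of a symmetric cell. Because Equations (1) and (2) are not reproduced in full above, the sketch assumes the relations commonly used for symmetric two-electrode cells (C = 4IΔt/(mΔV), E = CΔV²/28.8, P = 3600E/Δt); the current, mass and discharge curve are illustrative values, not data from this work.

```python
import numpy as np

def gcd_metrics(t, v, current_a, m_total_g):
    """Single-electrode specific capacitance (F g-1), cell energy density
    (Wh kg-1) and power density (W kg-1) from one galvanostatic discharge
    branch of a symmetric two-electrode cell, using the commonly adopted
    relations C = 4*I*dt/(m*dV), E = C*dV**2/28.8 and P = 3600*E/dt."""
    dt = t[-1] - t[0]          # discharge time, s
    dv = v[0] - v[-1]          # voltage window after the IR drop, V
    c_sp = 4.0 * current_a * dt / (m_total_g * dv)   # F g-1 (one electrode)
    e_wh_kg = c_sp * dv ** 2 / 28.8                  # Wh kg-1 (total-mass basis)
    p_w_kg = 3600.0 * e_wh_kg / dt                   # W kg-1
    return c_sp, e_wh_kg, p_w_kg

# Illustrative, ideally triangular 1.0 V discharge lasting 615 s
# (roughly what 0.1 A g-1 on 20 mg of total active mass would give).
t = np.linspace(0.0, 615.0, 500)
v = 1.0 - t / 615.0
print(gcd_metrics(t, v, current_a=0.002, m_total_g=0.020))
# -> approximately 246 F g-1, 8.5 Wh kg-1 and 50 W kg-1
```

Under these assumed relations, an ideally triangular 1 V discharge of about 615 s at 0.1 A g-1 yields values of the same order as those reported for S-HPC 800 , which is only meant as a plausibility check of the sketch, not a re-analysis of the measured curves.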
2021-05-11T00:04:05.895Z
2021-01-13T00:00:00.000
{ "year": 2021, "sha1": "083bef4b926649fd38429274a943b7d03dd997c4", "oa_license": "CCBY", "oa_url": "https://www.e3s-conferences.org/articles/e3sconf/pdf/2021/04/e3sconf_ccgees2021_01007.pdf", "oa_status": "GOLD", "pdf_src": "Adhoc", "pdf_hash": "c0be6098bdb99bda1f560e98c06ac8dc8f3d167d", "s2fieldsofstudy": [ "Materials Science", "Engineering", "Environmental Science", "Chemistry" ], "extfieldsofstudy": [ "Materials Science" ] }
204755194
pes2o/s2orc
v3-fos-license
Pharmacological Profile of the Novel Antiepileptic Drug Candidate Padsevonil: Characterization in Rodent Seizure and Epilepsy Models The antiepileptic drug (AED) candidate, (4R)-4-(2-chloro-2,2-difluoroethyl)-1-{[2-(methoxymethyl)-6-(trifluoromethyl)imidazo[2,1-b][1,3,4]thiadiazol-5-yl]methyl}pyrrolidin-2-one (padsevonil), is the first in a novel class of drugs that bind to synaptic vesicle protein 2 (SV2) proteins and the GABAA receptor benzodiazepine site, allowing for pre- and postsynaptic activity, respectively. In acute seizure models, padsevonil provided potent, dose-dependent protection against seizures induced by administration of pilocarpine or 11-deoxycortisol, and those induced acoustically or through 6 Hz stimulation; it was less potent in the pentylenetetrazol, bicuculline, and maximal electroshock models. Padsevonil displayed dose-dependent protective effects in chronic epilepsy models, including the intrahippocampal kainate and Genetic Absence Epilepsy Rats from Strasbourg models, which represent human mesial temporal lobe and absence epilepsy, respectively. In the amygdala kindling model, which is predictive of efficacy against focal to bilateral tonic-clonic seizures, padsevonil provided significant protection in kindled rodents; in mice specifically, it was the most potent AED compared with nine others with different mechanisms of action. Its therapeutic index was also the highest, potentially translating into a favorable efficacy and tolerability profile in humans. Importantly, in contrast to diazepam, tolerance to padsevonil’s antiseizure effects was not observed in the pentylenetetrazol-induced clonic seizure threshold test. Further results in the 6 Hz model showed that padsevonil provided significantly greater protection than the combination of diazepam with either 2S-(2-oxo-1-pyrrolidinyl)butanamide (levetiracetam) or 2S-2-[(4R)-2-oxo-4-propylpyrrolidin-1-yl] butanamide (brivaracetam), both selective SV2A ligands. This observation suggests that padsevonil’s unique mechanism of action confers antiseizure properties beyond the combination of compounds targeting SV2A and the benzodiazepine site. Overall, padsevonil displayed robust efficacy across validated seizure and epilepsy models, including those considered to represent drug-resistant epilepsy. SIGNIFICANCE STATEMENT Padsevonil, a first-in-class antiepileptic drug candidate, targets SV2 proteins and the benzodiazepine site of GABAA receptors. It demonstrated robust efficacy across a broad range of rodent seizure and epilepsy models, several representing drug-resistant epilepsy. Furthermore, in one rodent model, its efficacy extended beyond the combination of drugs interacting separately with SV2 or the benzodiazepine site. Padsevonil displayed a high therapeutic index, potentially translating into a favorable safety profile in humans; tolerance to antiseizure effects was not observed. Introduction Epilepsy is one of the most common neurologic diseases worldwide, and is associated with a significant healthcare burden (Devinsky et al., 2018;Thijs et al., 2019). For most patients with epilepsy, antiepileptic drugs (AEDs) are the mainstay of therapy, which must be taken on a long-term, often lifelong basis (Trinka, 2012;Thijs et al., 2019). 
Antiepileptic drugs approved in the last decade display good safety and pharmacokinetic profiles; however, thus far improved efficacy over first-generation AEDs has not been demonstrated in clinical studies (Chen et al., 2018), and approximately one-third of patients with epilepsy continue to experience poorly controlled seizures despite treatment, i.e., drug-resistant epilepsy (Kwan et al., 2010;Kalilani et al., 2018;Chen et al., 2018). Most AEDs were discovered by initial demonstration of their antiseizure activity in simple, classic seizure models, such as the maximal electroshock (MES) and pentylenetetrazol (PTZ) tests, which All studies described in this report were funded by UCB Pharma. All authors are current or former employees of UCB Pharma. are highly predictive of clinical efficacy in epilepsy, but not drug-resistant epilepsy (Löscher et al., 2013). Polytherapy is a frequent treatment strategy for patients with drug-resistant epilepsy, since a substantial proportion will require more than one AED to reduce their seizure burden (French and Faught, 2009;Brodie and Sills, 2011). The combination of selected AEDs should allow for synergistic or additive efficacy without any detrimental impact on safety and tolerability (French and Faught, 2009;Brodie and Sills, 2011); however, a nonclinical mechanistic rationale for clinically used AED combinations is often lacking or has not yet translated into superior efficacy. 2S-(2-oxo-1-pyrrolidinyl)butanamide [levetiracetam (LEV)] is an AED that exerts its therapeutic activity primarily by binding to the synaptic vesicle protein 2 (SV2) A protein (Lynch et al., 2004) and shows a distinctive profile in nonclinical seizure models. While ineffective in standard models used traditionally in AED discovery, such as the MES and PTZ tests, LEV provided protection against seizures in models of acquired and genetic epilepsies (Klitgaard et al., 1998), subsequently translating to broad-spectrum clinical efficacy in humans (Klitgaard and Verdru, 2007). In the audiogenic seizure and amygdala kindling models, LEV increased the potency of several AEDs and experimental agents that interfere with ligand-gated ion channels, particularly those that enhance GABA-mediated inhibition (Kaminski et al., 2009). Importantly, the increase in potency was devoid of additional adverse effects (i.e., motor impairment) as assessed by the rotarod test; on the contrary, it was associated with an increase in the therapeutic index (Kaminski et al., 2009). Assuming a potential synergistic interaction between ligands that act via SV2 and GABA A receptors (GABA A Rs), a rational medicinal chemistry design program was initiated to develop a single molecular entity that could target both. The outcome of this discovery program was the identification of (4R)-4-(2-chloro-2,2-difluoroethyl)-1-{[2-(methoxymethyl)-6-(trifluoromethyl)imidazo[2,1-b][1,3,4]thiadiazol-5-yl] methyl}pyrrolidin-2-one [padsevonil (PSL)], the first rationally designed AED candidate that acts selectively on both pre-and postsynaptic targets. Presynaptically, as a SV2 ligand, PSL displays high affinity (nanomolars), not only for SV2A but also for the other two protein isoforms, SV2B and SV2C. The latter markedly distinguishes the profile of PSL from that of LEV and 2S-2-[(4R)-2-oxo-4-propylpyrrolidin-1yl] butanamide [brivaracetam (BRV)], which are selective SV2A ligands that, therefore, have no established postsynaptic activity. 
Postsynaptically, as a positive allosteric modulator of GABA A Rs, PSL displays low-to-moderate (in micromolars) binding affinity for the benzodiazepine (BZD) site in recombinant human GABA A Rs and human and rat brain membrane preparations, where it shows a partial agonist profile (Wolff et al., 2017). This profile was selected specifically to minimize central nervous system and respiratory adverse effects, tolerance development, and abuse potential typically associated with the use of BZDs that are full agonists (Rundfeldt and Löscher, 2014). The detailed pharmacological and mechanistic profile of PSL is described in the companion paper (Wood et al., 2019). In this report, we describe the activity of PSL in a variety of rodent seizure and epilepsy models and compare its activity with that of mechanistically diverse and clinically used AEDs. We also compare the potential of PSL for development of tolerance with that of the BZD, diazepam (DZP), after chronic dosing in mice. Animals All experiments were conducted in compliance with guidelines issued by the ethics committee for animal experimentation according to Belgian law. Those conducted as part of the murine intrahippocampal kainate model of mesial temporal lobe epilepsy were performed at Synapcell (Grenoble, France). The experiments were approved by the European Technology Platform for Global Animal Health and performed in accordance with the European Committee Council directive (2010/63/EU). All efforts were made to minimize animal suffering. Female, genetically sound-sensitive mice (20-24 g) were derived from a DBA strain from the Laboratory of Acoustic Physiology (Paris, France) and bred at Charles River Laboratories (Italy). Male NMRI mice weighing 20-35 g were used in all other acute electrically and chemically induced seizure tests, as well as in the rotarod and tolerance tests. Male C57BL/6J mice, weighing 25-34 g, were used for the murine model of amygdala kindling. For the rat model of amygdala kindling, male Sprague-Dawley rats weighing 300-350 g at the initiation of kindling were used. Male Wistar rats of the Genetic Absence Epilepsy Rats from Strasbourg (GAERS) strain were used at a body weight of 280-400 g. Male Sprague-Dawley rats (200-240 g) were used for the rotarod tests. Animals were obtained from Charles River Laboratories (France) and housed in a holding room under a 12-hour light-dark cycle with lights on at 6:00 AM. Temperature was maintained at 20-24°C, relative humidity was maintained at 40%-70%, and the rate of air replacement was at least 15 times an hour. Animals had ad libitum access to standard dry pellet food and tap water. For the intrahippocampal kainate model, male C57BL/6 mice (11 weeks of age) were obtained from Janvier (France) and housed in cages on wood litter for 8 days with free access to food and water until surgery. Animal housing was maintained under artificial lighting from 8:00 AM to 8:00 PM. Drugs and Chemicals PSL, LEV, and BRV were synthesized at UCB Pharma (Brainel'Alleud, Belgium). All other reagents were of analytical grade and were obtained from conventional commercial sources. PSL was dissolved in 10 mM citrate buffer, 1.5% methylcellulose, 0.1% Tween 80, and 0.1% silicone antifoam, and LEV and BRV were dissolved in saline. For the audiogenic seizures model, mice were placed (one at a time) in a sound-attenuated chamber, where audiogenic seizures were induced through application of an acoustic stimulus (85 dB at 10-20 kHz for 30 seconds). 
The proportion of mice protected against clonic seizures was used to determine antiseizure activity. This endpoint was chosen because a correlation between SV2A affinity and efficacy against clonic seizures has been previously demonstrated (Kaminski et al., 2008). For electrically induced seizures, the MES and 6 Hz models were used. In MES, 50 mA currents were delivered at a constant pulse frequency of 50 Hz and duration of 0.2 seconds. The proportion of mice protected against tonic hindlimb extension after stimulation was used to determine antiseizure activity, as well as dose-response curve. In the 6 Hz model, 44 mA currents were delivered with 0.2-millisecond monopolar pulses at 6 Hz for a duration of 3 seconds. After stimulation, mice were observed for 30 seconds and the duration of immobility (stunned posture) was noted. The proportion showing immobility for ,7 seconds was used as the endpoint for seizure protection, as previously described (Leclercq and Kaminski, 2015). For the chemically induced seizure models, PTZ (89 mg/kg) and bicuculline (3 mg/kg) were administered subcutaneously, and pilocarpine (373 mg/kg) was administered intraperitoneally. In the latter model, the peripheral cholinergic effect was blocked via administration of methylscopolamine (1 mg/kg, i.p.) 30 minutes before administration of pilocarpine. The proportion of mice protected against clonic seizures in all four extremities during a 60-minute observation period after drug administration was used to determine antiseizure activity. 11-Deoxycortisol (1.0-1.2 mmol/kg) was infused through the lateral tail vein, and protection against generalized seizures during the 60minute observation period after infusion was used to assess antiseizure activity. In all experiments, PSL was tested at doses ranging from 0.014 to 181.4 mg/kg, which was administered (10 ml/kg) 30 minutes before testing, except for audiogenic seizure testing, where the preadministration time was 15 minutes. Testing was initiated in the audiogenic model before having conducted thorough pharmacokinetics assessment; preadministration time was subsequently adapted for screening in other models. Each experiment consisted of independent groups of 10-14 mice, with one group receiving vehicle (control) and the others receiving different PSL doses. The experimenter was unaware of the nature of the compound administered. Comparative 6 Hz Study. The 6 Hz model was used to compare the protective effect of PSL with that of LEV, BRV, and DZP, as well as the combination of LEV or BRV with DZP. To allow for a direct, objective comparison, drugs were administered at doses to provide similar in vivo target occupancy. PSL was administered at a dose of 0.17 mg/kg, which is expected to provide 2% and 35% occupancy at the BZD site and SV2A, respectively, based on results of in vivo occupancy studies (Wood et al., 2019). Correspondingly, LEV and BRV were tested at 1.83 and 0.42 mg/kg, respectively, to provide 35% SV2A occupancy, and DZP was tested at 0.017 mg/kg to provide 2% occupancy at the BZD site. All drugs were administered intraperitoneally 30 minutes before testing except for LEV, which was administered 60 minutes before testing. Each experimental arm consisted of 15 or 16 mice. Amygdala Kindling. Protocols used for both mouse and rat amygdala kindling experiments have been described previously (Löscher et al., 1986). 
For the rat model, experiments consisted of five groups of eight fully kindled rats, each group receiving different doses of PSL (0.14-13.9 mg/kg) administered intraperitoneally (5 ml/kg) 30 minutes before stimulation with the same supra-maximal current (500 mA at 1 second) used for the induction of kindling. Similarly, six groups of eight to nine mice received different doses of PSL (0.014-13.85 mg/kg) administered intraperitoneally 30 minutes before testing with the same supra-maximal stimulation current (250 mA at 1 second) used for the induction of kindling. Additionally, similar experiments were conducted in groups of mice receiving BRV, carbamazepine (CBZ), DZP, LEV, lamotrigine, phenytoin (PHT), topiramate, retigabine, or valproate (VPA). The effects of drugs on three parameters were tested in fully kindled animals. First, as a measure of the drug's effect on seizure severity, the behavioral effects of the stimulation were scored according to the scale described by Racine (1972), where 0 5 no reaction, 1 5 blinking and/or mild facial twitches and chewing, 2 5 head nodding and/or severe facial clonus, 3 5 myoclonic jerks of the forelimbs, 4 5 clonic seizures of the forelimbs with rearing, and 5 5 generalized clonic seizures associated with loss of balance. Second, the proportion of animals protected against generalized seizures (scores 3-5) was used to determine the drugs' ED 50 values and antiseizure activity. Third, the electroencephalographic effect of the stimulation was determined by measuring the stimulationinduced afterdischarge duration (ADD), defined as electroencephalogram (EEG) activity with an amplitude at least twice that of the prestimulus recording and a frequency .1 Hz. Murine Intrahippocampal Kainate Mouse Model of Mesial Temporal Lobe Epilepsy. Experiments were performed as previously described (Riban et al., 2002;Duveau et al., 2016). Briefly, male C57BL/6 mice (n 5 20) were surgically injected with kainate (1 nmol) in the right dorsal hippocampus. Bipolar electroencephalography electrodes were implanted into the injected hippocampus, with additional monopolar surface electrodes placed over the frontoparietal cortex and cerebellum. After a 5-week period of epileptogenesis, mice (n 5 9) displaying hippocampal paroxysmal discharges [(HPDs); $20/h] without any generalized seizures were selected. Baseline EEG (20 minutes) was recorded before injection of vehicle (10 mM citrate buffer, 1.5% methylcellulose, 0.1% Tween 80, and 0.1% silicone antifoam) or PSL (1, 3, 10, or 30 mg/kg, i.p.) and recording continued for an additional 90 minutes. Stress induced by handling and drug administration caused a transient decrease in the number of HPDs, as observed reproducibly in vehicle-treated animals. Therefore, the number and duration of HPDs were measured and analyzed for 80 minutes, after discarding the first 10-minute postdrug administration. PSL doses were administered in a randomized crossover manner. Spike-Wave Discharges in Genetic Absence Epilepsy Rat from Strasbourg. Four platinum electrodes were implanted bilaterally in the frontal and occipital cortices as described previously . After a 2-week recovery period, rats were injected with either vehicle or PSL and the EEG was recorded continuously over consecutive 20-minute intervals starting 20 minutes before and up to 120 minutes after drug administration. The cumulative duration of spontaneous spike and wave discharges (SWDs) in each 20-minute interval was measured by a semiautomatic program. 
PSL was administered at doses equal to 0.14, 0.43, 1.38, and 4.33 mg/kg in a dose volume of 5 ml/kg b.wt. The control group received vehicle injection intraperitoneally (5 ml/kg b.wt.). Eight rats were used in these experiments with a crossover design in which each animal served as its own control after injection of vehicle. Tolerance. To determine whether mice developed tolerance to PSL's antiseizure effects, its impact on the PTZ-induced clonic seizure threshold was tested. For comparison, the tolerance potential of diazepam, a full agonist at the BDZ site, was also evaluated. This test is widely described as a nonclinical tool for assessment of tolerance-like effects of AEDs (Rundfeldt et al., 1995). Briefly, the test consists of two steps. In the first step, the PTZ threshold dose for inducing seizures and the ED 97 value of a given AED in providing protection against these PTZ-induced clonic seizures are determined. In the subsequent step, tolerance to the protective effect of the AED after repeated administration is determined. For the first step, an intravenous infusion of PTZ (5 mg/ml) was administered into the tail vein of freely moving mice and the time to the three stages of seizures (twitch, clonic, and tonic) was noted. Padsevonil, DZP, or vehicle was administered intraperitoneally (10 ml/kg) 30 minutes before PTZ infusion to determine the dose that increased the PTZ threshold dose by 97% (ED 97 ). Different treatments were randomly distributed within each group of mice (6, 8, or 10 mice per group for PSL, and 6, 10, or 11 mice per group for DZP experiments) with injections at 5-minute intervals. In the second step, mice were administered with the previously selected PSL/DZP dose (ED 97 ) or vehicle, twice daily, for four consecutive days (n 5 12 each group). On day 5, they were treated with PSL/DZP or vehicle 30 minutes before assessment of their respective seizure threshold, following intravenous infusion of PTZ. There were four experimental groups, as described in Table 1. Rotarod. The impact of PSL on motor activity was evaluated using the rotarod test in both mice and rats using previously described Padsevonil's Efficacy in Rodent Seizure and Epilepsy Models protocols (Klitgaard et al., 1998). Animals were trained and only those able to remain on the rod for at least 60 seconds in three consecutive trials were used in the tests. In mice, PSL was administered intraperitoneally (10 ml/kg) 30 minutes before testing; one group (control) received vehicle and the others received PSL doses of 4.3-77.9 mg/kg (n 5 10 each group). In rats, PSL was administered intraperitoneally (5 ml/kg) 30 minutes before testing; one group (control) received the vehicle and the others received PSL doses of 4.3-43.3 mg/kg (n 5 8 each group). The median tolerated dose, at which toxicity or impairment of motor coordination occurs in 50% of animals (TD 50 ) was calculated and used to determine the therapeutic index (TI) of PSL. The TI is defined as the ratio between doses producing motor impairment (TD 50 ) and doses providing protection against seizures (ED 50 ). To compare the TI of PSL with that of other AEDs in the amygdala kindling model, the TD 50 values of the following drugs were also determined in naive mice: BRV, CBZ, DZP, LEV, lamotrigine, PHT, topiramate, retigabine, and VPA. 
Data Analysis Unless otherwise noted, the ED 50 values and associated 95% confidence intervals were calculated using nonlinear fitting of the doseresponse curve with GraphPad Prism version 4 (GraphPad Software, San Diego, CA). In the 6 Hz comparative study, Fisher's exact test was used for statistical comparisons of the number of animals protected with PSL and with the combinations of LEV or BRV with DZP using GraphPad Prism (as previously described). Amygdala Kindling. Significant differences between compound and vehicle in the median behavioral seizure score, protection against generalized seizures, and the ADD were evaluated with the Wilcoxon signed rank test, Fisher's exact test, and Mann-Whitney U test, respectively. All statistical analyses were performed with GraphPad Prism (as previously described). Intrahippocampal Kainate Model. Statistical analyses were performed with GraphPad Prism version 7 using two-way ANOVA for repeated measures, with the factors of time and compound dose (with repeated measures applying only to the time factor), followed by Bonferroni's multiple comparisons test. Spike-Wave Discharges in GAERS. For each treatment, the mean cumulative duration of SWDs (6S.E.M.) was calculated for each 20-minute interval. The results for each 20-minute interval were compared with those of vehicle treatment using two-way ANOVA with repeated measures, followed by a post hoc Bonferroni multiple comparisons test (P , 0.05), using GraphPad Prism. Due to the high variability of the responses observed in each 20-minute interval for different rats, data were further analyzed using the cumulative duration of SWDs covering the total postdrug observation period (120 minutes). This allowed application of nonlinear regression curve fitting of the results and estimation of the protective ED 50 . Tolerance. The effective dose increasing the PTZ threshold by 97% (ED 97 ) was calculated using nonlinear fitting of individual values of the dose-response curve (SAS/STAT R Software version 9.1). Oneway ANOVA and Tukey's multiple comparisons test were performed with individual calculated doses of PTZ inducing clonic seizures in the four groups of mice. Statistically significant differences between chronic vehicle 1 test compound ED 97 dose and chronic test compound ED 97 dose 1 test compound ED 97 dose were used to assess development of tolerance. Results Murine Models of Acutely Induced Seizures. Administration of PSL provided potent, dose-dependent protection against seizures induced by 6 Hz stimulation, acoustic stimulus, and a bolus dose of pilocarpine (ED 50 values of 0.16, 0.17, and 0.19 mg/kg, respectively). The potency of PSL in these three models was greater than that of LEV and BRV (ineffective in the pilocarpine model) ( Table 2). PSL also provided dose-dependent protection against clonic seizures induced by a bolus dose of PTZ. Its potency in this model was higher than that of BRV, while LEV was ineffective. In the 11deoxycortisol model, PSL provided dose-dependent and almost complete protection against seizures; at the highest dose tested (43.3 mg/kg), 90% of animals were protected. Brivaracetam was ineffective in this model, while LEV provided only limited protection at the highest doses tested. PSL showed low potency against seizures induced by a bolus of bicuculline, while LEV and BRV were ineffective in this model. The lowest potency was seen in the MES model (ED 50 value of 92.8 mg/kg). The lack of activity or low potency in this model was also observed with LEV and BRV. 
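As a minimal illustration of how a protective ED 50 (and, from it, a therapeutic index) can be obtained by nonlinear fitting of a dose-response curve, the following Python sketch fits a two-parameter log-logistic (Hill) model to hypothetical protection data with scipy. The doses, protection fractions, TD 50 value and the simple Wald-type confidence interval are illustrative only; they are not the study's data, and the original analyses were performed with GraphPad Prism and SAS as described above.

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical dose-response data: fraction of animals protected per dose.
dose = np.array([0.1, 0.3, 1.0, 3.0, 10.0])            # mg/kg
protected = np.array([0.10, 0.25, 0.50, 0.80, 1.00])   # fraction protected

def hill(d, ed50, slope):
    """Two-parameter log-logistic (Hill) curve bounded between 0 and 1."""
    return d ** slope / (ed50 ** slope + d ** slope)

(ed50, slope), cov = curve_fit(hill, dose, protected, p0=[1.0, 1.0],
                               bounds=([1e-6, 0.1], [100.0, 10.0]))
se = np.sqrt(cov[0, 0])                      # asymptotic standard error of ED50
ci = (ed50 - 1.96 * se, ed50 + 1.96 * se)    # simple Wald-type 95% CI

td50 = 12.0   # hypothetical rotarod TD50, mg/kg
print(f"ED50 = {ed50:.2f} mg/kg (95% CI {ci[0]:.2f} to {ci[1]:.2f})")
print(f"Therapeutic index TI = TD50/ED50 = {td50 / ed50:.1f}")
```

In practice, quantal data such as proportions of protected animals are often analysed by probit or logit regression rather than least squares; the sketch is only meant to show the shape of the ED 50 and TI calculation.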
Comparative 6 Hz Study. The protective effect of PSL in the 6 Hz model was compared with that of LEV, BRV, and DZP alone, and with the combinations of LEV or BRV with DZP at doses expected to provide similar occupancy at SV2A (35%) or the BZD site (2%). PSL protected a greater proportion of mice than LEV, BRV, and DZP alone or in combination (Fig. 1). The difference in the protection offered by PSL and that of the LEV/DZP and BRV/DZP combinations was statistically significant (P 5 0.021 and P 5 0.0008, respectively; Fisher's exact test). The difference in the protection provided by BRV and the BRV/DZP combination or the LEV and LEV/DZP combination was not significant (P 5 0.4 and P 5 0.145, respectively). Amygdala Kindling. The protective effect of PSL against seizures was evaluated in fully kindled animals using three parameters. In rats, PSL provided dose-dependent and complete protection against focal to bilateral seizures (secondary generalized seizures). The reduction in the proportion of rats displaying generalized seizures at doses of 2.4, 4.3, and 13.9 mg/kg was statistically significant, with 100% of animals protected at the highest dose (Fig. 2, right panel). The ED 50 value was estimated to be 2.43 (2.41-2.46) mg/kg. Significant, dose-dependent reductions in the median seizure severity score and ADD were also observed with PSL, starting from a dose of 2.4 mg/kg. In mice, just as in rats, PSL significantly reduced the proportion of animals with focal to bilateral seizures and the median seizure severity score starting from a dose of 1.4 mg/kg (Fig. 2, left panel). Based on the proportion of mice protected from focal to bilateral seizures (secondary generalized seizures), the ED 50 value was estimated to be 1.2 (0.43-3.40) mg/kg. PSL also reduced the ADD, but only at the highest dose tested (13.9 mg/kg); at lower doses an increase was observed, with the increase (40%) at the 1.38 mg/kg dose being statistically significant. The TI of PSL in kindled mice was 9.8, which was relatively high compared with that of BRV and VPA, 2.8 and 1.2, respectively (Table 3). Other AEDs tested in this model displayed only partial protection against generalized seizures; therefore, it was not possible to calculate their TI. Intrahippocampal Kainate Model. PSL administration (1, 3, 10, or 30 mg/kg) resulted in dose-dependent and statistically significant reductions in the number of HPDs compared with vehicle or baseline, between 30 and 70 minutes after administration. PSL doses of 10 and 30 mg/kg were associated with significant reductions in the number of HPDs from 10 to 30 minutes after administration (Fig. 3, top panel). Dosedependent effects of PSL were also observed when the cumulated duration of HPDs was calculated, with all PSL doses associated with significant reductions compared with vehicle 50-70 minutes after administration. Maximal effects were observed with 10 and 30 mg/kg doses after 10-30 minutes (Fig. 3, bottom panel). Spike and Wave Discharges in GAERS. PSL (0.14-4.33 mg/kg) produced dose-related suppression in spontaneous SWDs, which was statistically significant from the 0.43 mg/kg dose-the suppression was almost complete at a dose of 4.33 mg/kg (Fig. 4). The effect was apparent in the first 20-minute test interval and persisted throughout the recording period (up to 120 minutes). Treatment with PSL also resulted in dose-dependent reduction in the cumulative duration of spontaneous SWDs recorded over the 120-minute postdrug period (ED 50 value of 0.87 mg/kg). Tolerance. 
Having established the PTZ threshold dose for inducing clonic seizures, PSL and DZP were tested. Both drugs increased the seizure threshold in a dose-dependent manner; the ED 97 of PSL was 15.9 mg/kg and that of DZP was 2.1 mg/kg. Animals that were treated twice daily for 4 days with vehicle, PSL, or DZP at the calculated ED 97 dose were injected again on day 5 with the same dose before assessment of the seizure threshold following intravenous PTZ infusion. Treatment with PSL (15.9 mg/kg) caused a significant increase in the PTZ threshold dose with a similar magnitude in both groups (mice chronically treated with vehicle or drug). The difference in the mean doses of PTZ that induced seizures in mice treated chronically with vehicle and those treated chronically with PSL was not statistically significant (Fig. 5). Diazepam (2.1 mg/kg) also caused a significant increase in the PTZ threshold dose for clonic seizures in both groups, but with a much lower magnitude in mice chronically treated with DZP, reflecting development of tolerance to its antiseizure effects. The mean dose of PTZ inducing clonic seizures in mice treated chronically with the vehicle was comparable to the mean dose calculated in mice treated chronically with DZP (Fig. 5). Rotarod. Administration of PSL resulted in dose-dependent impairment in the performance of both mice and rats in the rotarod test; the TD 50 values were 11.8 (9.2-15.2) and 24.4 (15.0-39.7) mg/kg, respectively. The TI of PSL was calculated using these values and the ED 50 values determined in the various models. PSL had a TI of 28 in the GAERS and 10 in the rat amygdala kindling models. In mice, the TI was calculated to be 69 in the audiogenic, 62 in the pilocarpine-induced, 74 in the 6 Hz-induced, and 2.5 in the PTZ-induced seizure tests. As noted previously, the TI in the murine amygdala kindling model was 9.8.

Fig. 1. Protective effect of padsevonil (0.17 mg/kg) in the 6 Hz model compared with that of levetiracetam (1.83 mg/kg), brivaracetam (0.42 mg/kg), and diazepam (0.017 mg/kg), as well as the combination of diazepam with levetiracetam or brivaracetam. The drugs were administered at doses associated with similar in vivo SV2A (35%) and benzodiazepine site (2%) occupancies (comparisons were made using Fisher's exact test).

Discussion PSL is the first in a novel class of drugs that bind to SV2 proteins and the BZD site on GABA A Rs. As shown in the studies reported here, this pre- and postsynaptic activity results in a distinct pharmacological profile across a wide range of seizure and epilepsy models representing focal and generalized epilepsy in humans. The MES and PTZ tests, considered gold standards for early detection of antiseizure activity, are used for screening candidate compounds (Klitgaard, 2005; Bialer and White, 2010). LEV is inactive in both models, while BRV, a more potent and selective SV2A ligand than LEV, shows weak activity in both models. Similarly, PSL showed activity in both models, but its potency, while greater than that of BRV, was also relatively weak. PSL's effect was greater in the PTZ than in the MES test, which is likely to be mediated partially via the BZD site, since BZDs show high potency in this model (Löscher, 2011). 
PSL also showed relatively low potency in the bicuculline test, where typical BZDs are active, but not abecarnil, a partial agonist at the BZD site (Turski et al., 1990); consequently, low activity was expected, since both LEV and BRV are inactive in this test and PSL shows a partial agonist profile. PSL provided potent, dose-dependent protection against seizures induced in sound-sensitive mice, a genetic model of generalized epilepsy. BRV is active in this model, while LEV shows lower potency, correlating with their SV2A binding affinity (Kaminski et al., 2008; Matagne et al., 2008). PSL also provided strong protection against pilocarpine-induced clonic seizures, where in contrast to the audiogenic model BRV is ineffective, while LEV shows relatively high potency.

Fig. 2. Effect of padsevonil on seizure parameters recorded after supra-threshold stimulation in fully kindled rats (right panel) and mice (left panel). Control recordings were performed 48 hours before testing with padsevonil. Values are mean ± S.E.M. for afterdischarge duration. Comparisons between drug and control in protection against generalized seizures, seizure severity score, and afterdischarge duration were evaluated with the Wilcoxon signed rank test, Fisher's exact test, and Mann-Whitney U test, respectively, with * indicating statistically significant differences (P < 0.05).

Among acute models, PSL displayed the highest potency in the 6 Hz model (ED 50 value of 0.16 mg/kg), used as a test for protection against drug-resistant focal seizures, since many older (e.g., CBZ, phenobarbital, and PHT) and newer (e.g., felbamate, lamotrigine, tiagabine, and topiramate) AEDs fail to fully protect animals (Barton et al., 2001; Detrait et al., 2008). This model was also used to compare the efficacy of PSL against LEV, BRV, and DZP, and the LEV/DZP and BRV/DZP combinations. Importantly, for this comparison, doses calculated to provide similar SV2A and BZD site occupancy were used for the SV2 ligands and DZP, 35% and 2%, respectively. Given that LEV and BRV require 80% SV2A occupancy for antiseizure activity in nonclinical models (Gillard et al., 2011), low-level occupancy was selected in these experiments to further differentiate PSL activity. The protection offered by PSL, even at 35% SV2A occupancy, was almost 70% and significantly greater than that provided by either LEV or BRV in combination with DZP. These observations suggest that PSL's antiseizure properties are due to a differentiated mode of action that provides greater protection than coadministration of an SV2A ligand and a BZD. Furthermore, the interaction of PSL with SV2B and SV2C may also contribute to enhanced antiseizure effects (Crèvecoeur et al., 2014). The 11-deoxycortisol model is also considered to represent drug-resistant seizures (Kaminski et al., 2011). LEV offers only partial protection at the highest doses, while PHT, CBZ, and VPA are ineffective; BRV has also proven to be ineffective. However, PSL demonstrated robust efficacy, providing dose-dependent protection with an ED 50 value of 10 mg/kg. 11-Deoxycortisol induces paroxysmal epileptiform network activity and seizures by significantly reducing GABAergic neurotransmission, which may explain why many AEDs, but not PSL, fail to suppress seizures (Kaminski et al., 2011). The intrahippocampal kainate model displays many features of human mesial temporal lobe epilepsy (Riban et al., 2002; Pernot et al., 2011). 
Unilateral injection of kainate in the dorsal hippocampus results in neuronal loss, mossy fiber sprouting, and dispersion of granule cells, followed by spontaneous and recurrent HPDs observed on the EEG (Suzuki et al., 1995; Mitsuya et al., 2009). Focal seizures remain frequent and stable during the animal's life, and importantly, resistant to most AEDs (Riban et al., 2002; Duveau et al., 2016), as in human mesial temporal lobe epilepsy (Engel et al., 1997). PSL displayed dose-dependent protective effects, with almost complete and long-lasting inhibition of HPDs at the highest dose (30 mg/kg). The GAERS model is considered predictive of human absence epilepsy (Danober et al., 1998; van Luijtelaar et al., 2002). LEV has a weak effect in this model, while BRV suppresses spontaneous SWDs with complete inhibition at the highest dose (67.9 mg/kg), which again correlates with their affinity for SV2A (Kaminski et al., 2008; Matagne et al., 2008). PSL showed higher potency than BRV and markedly suppressed spontaneous SWDs, with almost complete inhibition at the highest dose (4.33 mg/kg), providing further evidence for PSL's broad spectrum of activity against both focal and generalized seizures.

Fig. 4. PSL activity in the GAERS model: effect on the duration of spontaneous spike-and-wave discharges. Values are mean ± S.E.M. (n = 8 per group), with * indicating statistically significant differences versus the respective time points in the vehicle-treated group (P < 0.05; Bonferroni's multiple comparisons test).

Fig. 5. Effect of chronic treatment (4 days) with PSL (15.9 mg/kg) or DZP (2.1 mg/kg) on the PTZ-induced seizure threshold. On the fifth day, PSL increased the threshold to the same extent in animals that had been treated chronically with vehicle or PSL. In contrast, there was a significant decrease in the ability of DZP to increase the threshold in animals that had been treated chronically with DZP, indicating development of tolerance. Development of tolerance was assessed based on statistically significant differences between the chronic vehicle + test compound ED 97 dose and the chronic test compound ED 97 dose + test compound ED 97 dose groups, using one-way ANOVA followed by Tukey's multiple comparisons test.

AED activity in the amygdala kindling model is predictive of efficacy against focal to bilateral tonic-clonic seizures in the clinical setting (Löscher and Schmidt, 1988). Electrographic and behavioral symptoms of seizures are initially localized at the site of stimulation, but rapidly evolve to bilateral activity, with seizures increasing in length and severity upon repeated stimulation (White, 2003; Löscher, 2011). In the rat model, PSL significantly reduced the proportion of animals displaying seizures, with 100% of animals protected at the highest dose. PSL also reduced the seizure severity score and the ADD, indicating effects on both local seizure discharge and seizure spread, or evolution to bilateral seizures. PSL was substantially more potent than LEV and BRV; while BRV significantly reduces the ADD only at high doses, LEV has no effect. PSL's effects in the mouse kindling model mirrored those in the rat model, with one exception: it reduced the ADD only at the highest dose. The reduction in ADD at the 13.9 mg/kg dose and the increase at the 1.38 mg/kg dose were both statistically significant, somewhat similar to the effects of low BRV doses. In the mouse model, PSL was the most potent compared with nine other mechanistically different AEDs. 
It was only possible to determine the ED 50 values of BRV and VPA since the others failed to provide full protection at high doses. The results were also used to compare the TI values of AEDs, a measure of the margin between antiseizure and adverse effects, expressed by the ratio between doses producing adverse effects and seizure protection (TD 50 /ED 50 ); the greater the TI, the greater is the separation between toxic and therapeutic doses. In mice, the PSL TD 50 value was 12 mg/kg and the ED 50 value was 1.2 mg/kg, resulting in a TI of 10. In comparison, the TI values of BRV and VPA were 3 and 1, respectively. Since the protective ED 50 values of the remaining AEDs could not be determined, due to limited efficacy, their TI values could not be calculated. Overall, these findings indicate that PSL has full efficacy in the kindling model, displaying a high TI, potentially translating into higher efficacy and improved tolerability in humans. PSL was designed to exert its therapeutic activity via two distinct mechanisms: as a SV2 ligand and as a partial agonist at the BZD site of the GABA A R. Partial agonism was selected based on evidence suggesting that the likelihood of developing tolerance to therapeutic effects is lower compared with full agonists (Miller et al., 1990;Serra et al., 1994;Löscher et al., 1996;Rundfeldt and Löscher, 2014). Clinical evidence supports these observations. Clobazam, a BZD and partial agonist, has been used successfully for the treatment of patients with Lennox-Gastaut syndrome (Faulkner, 2015;Gauthier and Mattson, 2015). The results of a long-term trial demonstrated sustained seizure control at stable dosages over a 3-year period (Conry et al., 2014;Gidal et al., 2016). Another partial agonist, abecarnil, has shown efficacy in the treatment of patients with photosensitive epilepsy without development of tolerance (Kasteleijn-Nolst Trenité et al., 2016). To evaluate PSL's tolerance potential, the PTZinduced clonic seizure threshold test was used, where the ability of AEDs to increase the seizure threshold is assessed after acute and twice daily administration for 4 days at the ED 97 dose (Rundfeldt et al., 1995). Under both regimens, PSL increased the threshold for PTZ-induced seizures to the same extent, indicating that tolerance was not developed; in contrast, DZP showed significant loss in its ability to increase the threshold. The precise role of SV2A in synaptic transmission and how ligand binding translates into antiseizure activity remain to be fully elucidated, yet the strength of ligands' antiseizure activity correlates with their binding affinity-BRV's greater affinity for SV2A over that of LEV translated into superior antiseizure activity in animal models (Kaminski et al., 2008;Matagne et al., 2008;). In turn, PSL's affinity for SV2A has been shown to be greater than that of BRV (Wood et al., 2019). PSL's additional actions on SV2B and SV2C, and the GABA A R BZD site, have resulted in a nonclinical profile that differs substantially from that of other AEDs. Additional evidence from the present studies suggests that the pre-and postsynaptic mechanism of action confers enhanced antiseizure properties beyond the combination of compounds targeting SV2A and the BZD site. PSL's highly differentiated antiseizure profile suggests a robust therapeutic benefit, an observation supported by results of a phase IIb proof-of-concept trial (Muglia et al., 2017).
2019-10-18T14:14:12.254Z
2019-10-16T00:00:00.000
{ "year": 2020, "sha1": "fd3b10bfff74aff8f8f2f145e1d5df432fd49d42", "oa_license": "CCBY", "oa_url": "https://jpet.aspetjournals.org/content/jpet/372/1/11.full.pdf", "oa_status": "HYBRID", "pdf_src": "Highwire", "pdf_hash": "e7c6ccd26ea84dceacb5716a7f885cb91203cc0b", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
263091579
pes2o/s2orc
v3-fos-license
Trivalent inactivated influenza vaccine effective against influenza A(H3N2) variant viruses in children during the 2014/15 season, Japan The 2014/15 influenza season in Japan was characterised by predominant influenza A(H3N2) activity; 99% of influenza A viruses detected were A(H3N2). Subclade 3C.2a viruses were the major epidemic A(H3N2) viruses, and were genetically distinct from A/New York/39/2012(H3N2) of 2014/15 vaccine strain in Japan, which was classified as clade 3C.1. We assessed vaccine effectiveness (VE) of inactivated influenza vaccine (IIV) in children aged 6 months to 15 years by test-negative case–control design based on influenza rapid diagnostic test. Between November 2014 and March 2015, a total of 3,752 children were enrolled: 1,633 tested positive for influenza A and 42 for influenza B, and 2,077 tested negative. Adjusted VE was 38% (95% confidence intervals (CI): 28 to 46) against influenza virus infection overall, 37% (95% CI: 27 to 45) against influenza A, and 47% (95% CI: -2 to 73) against influenza B. However, IIV was not statistically significantly effective against influenza A in infants aged 6 to 11 months or adolescents aged 13 to 15 years. VE in preventing hospitalisation for influenza A infection was 55% (95% CI: 42 to 64). Trivalent IIV that included A/New York/39/2012(H3N2) was effective against drifted influenza A(H3N2) virus, although vaccine mismatch resulted in low VE. Introduction Influenza vaccination is the most effective method of preventing influenza virus infection and its potentially severe complications. Based on the results of randomised controlled trials [1,2] and observational studies [3,4] the vaccine effectiveness (VE) of inactivated influenza vaccine (IIV) in healthy children has been reported to be 40% to 70%. During the 2014/15 season, a variant strain of influenza A(H3N2) virus that was classified as phylogenetic clade 3C.2a and was genetically distinct from the 2014/15 A/Texas/50/2012(H3N2)-like clade 3C.1 vaccine reference strain appeared in the northern hemisphere. Consistent with the substantial vaccine mismatch, no or low VE against A(H3N2) was reported as interim estimates in Canada, the United Kingdom (UK), and the United States (US) [5][6][7]. There have been many reports of VE in studies conducted by a test-negative case-control (TNCC) design. Most of the subjects of the studies were adults and the elderly, and VE in children was not fully elucidated, especially the VE of IIV in children. In 2014, it was clearly recommended in the US that live attenuated influenza vaccine (LAIV) be used in healthy children from 2 to 8 years of age [8]. However, the effectiveness of LAIV against influenza A(H1N1)pdm09 in the 2013/14 season was found to be poor [9,10]. Moreover, although one large randomised trial reported superior relative efficacy of LAIV over IIV against antigenically drifted influenza A(H3N2) viruses [11], neither LAIV nor IIV provided significant protection against the drifted influenza A(H3N2) viruses in children in the 2014/15 season, and LAIV did not provide greater protection than IIV against these viruses [8]. Accordingly, LAIV is no longer recommended over IIV in children aged 2-8 years in the US [12]. In the past, Japan's strategy for controlling influenza was to vaccinate schoolchildren, based on the theory that this could reduce influenza epidemics in the community, and a special programme to vaccinate schoolchildren against influenza was begun in 1962. 
However, the programme was discontinued in 1994 because of lack of evidence that it had limited the spread of influenza in the community [13]. At present in Japan, influenza vaccination is officially recommended for elderly and high-risk patients with underlying conditions. However, ca 50% of children receive an influenza vaccination every year on their parents' initiative, paid for out of pocket [14]. Only trivalent IIV was approved for use in children in Japan until the 2014/15 season, and we have previously reported on the VE of IIV in children in Japan based on the results of influenza rapid diagnostic tests (IRDT) during the 2013/14 season [14], when influenza A(H1N1)pdm09 and B viruses were the main epidemic strains. VE was high against influenza A (63%, 95% CI: 56 to 69), and especially high (77%, 95% CI: 59 to 87) against influenza A(H1N1)pdm09, but was only 26% against influenza B (95% CI: 14 to 36). A large influenza epidemic caused by A(H3N2) occurred in the 2014/15 season, and that provided an excellent opportunity to test VE against A(H3N2) virus infection in children. Influenza A(H3N2) outbreaks were reported throughout Japan since week 44 of 2014. The epidemic peaked between week 51 of 2014 and the week 1 of 2015. The start and peak of the influenza epidemic in the 2014/15 season occurred 3 weeks earlier than in the average year [15]. The vaccine strain used in Japan for influenza A(H3N2) was A/New York/39/2012(H3N2), which is different from A/Texas/50/2012; however, it belongs to the same clade, 3C.1. We investigated the VE of trivalent IIV in children during the large epidemic caused by the drifted influenza A(H3N2) virus by conducting a study by using the TNCC design and based on IRDT results. Epidemiology According to FluNet [16], 5,070 influenza A(H3N2) viruses were detected in Japan from week 45 of 2014 to week 14 of 2015, but only 50 A(H1N1) pdm09 viruses and 598 influenza B viruses were detected during the same period. In the 2014/15 season, over 99% of the influenza A viruses detected were A(H3N2) viruses (5,070/5,120). Phylogenetic analysis Influenza A(H3N2) viruses were isolated by using MDCK or MDCK-AX4 cells at the Yokohama City Institute of Public Health, Yokohama, Kanagawa, Japan [17]. The nucleotide sequences of the haemagglutinin (HA) genes were subjected to phylogenetic analysis, and phylogenetic trees were constructed using MEGA 6 software (The Biodesign Institute, Arizona, USA) and the neighbour-joining method [18]. The viruses were isolated in the 2014/15 influenza seasons. The nucleotide sequences determined are available from the Global Initiative on Sharing All Influenza Data (GISAID) EpiFlu database. Accession numbers for the HA genes are EPI679784-EPI679834, respectively (Table 1). Study enrolment and location Children aged 6 months to 15 years with a fever of 38 °C or over and cough and/or rhinorrhoea and who had received an IRDT in an outpatient clinic of one of 20 hospitals between 10 November 2014 and 31 March 2015 were enrolled in this study. In Japan, the cost of IRDT is covered by public health insurance, and almost all children with a high fever of 38 °C or over receive an IRDT during an influenza epidemic. Our hospitals were located in six (Gunma, Tochigi, Saitama, Tokyo, Kanagawa, and Shizuoka prefectures) of the 47 prefectures in Japan, mainly in the Greater Tokyo Metropolitan area. Patients who met the symptom criteria were eligible if they had not received antiviral medication before enrolment. 
Patients who had been vaccinated against influenza less than 14 days before illness onset were excluded from this study. A TNCC design was used to estimate VE based on IRDT results, as previously described [14]. Diagnosis of influenza Nasopharyngeal swabs were obtained from all of the enrollees. Several different IRDT kits, including the Espline Influenza A and B-N kit (Fujirebio Inc., Tokyo, Japan), ImmunoAce FLU kit with LineJudge pdm kit (Tauns Laboratories, INC, Shizuoka, Japan), Quick Chaser Flu A, B kit (Mizuho Medy Co., Ltd., Saga, Japan), and QuickNavi-Flu kit (DENKA SEIKEN Co., Ltd., Tokyo, Japan), all of which are capable of differentiating between influenza A and influenza B, were used in the hospitals. Two of the 20 participating hospitals used the LineJudge pdm kit, which enables differentiation between influenza A, influenza B, and influenza A(H1N1)pdm09. According to their respective manuals, all of the IRDT kits used in this study have similar sensitivities (88-100%) and specificities (94-100%) [19].

[Figure: Phylogenetic analysis with sequences of the HA1 subunit of the haemagglutinin gene from reference viruses and influenza A(H3N2) sequences derived from children aged 6 months to 15 years.]

Case and control patient identification The IRDT-positive patients were enrolled as case patients and the IRDT-negative patients as control patients. Their medical charts were reviewed, and information regarding symptoms, influenza vaccination, number of vaccine doses (one or two), influenza complications and hospitalisations, sex, age, comorbidities, and treatment with neuraminidase inhibitors (NAIs) was collected and recorded. Children were excluded if definite information on influenza vaccination was found to be unavailable. When a child was brought to one of our clinics, the parents or guardians were asked about the child's influenza vaccination status; the status was then usually confirmed by consulting the Maternal and Child Health Handbook provided by local governments, in which all vaccinations are recorded by the doctors in charge. In Japan, two 0.25 ml doses of vaccine 2 to 4 weeks apart are recommended for children aged 6 months to 2 years, and two 0.5 ml doses of vaccine 2 to 4 weeks apart are recommended for children aged 3-12 years. Only one 0.5 ml dose of vaccine is recommended for children aged 13 years and over. Test-negative case-control design We estimated VE by the TNCC design. VE was defined as 1 - OR (odds ratio), and was calculated as described below. VE against hospitalisation We calculated the VE against hospitalisation using the TNCC design. The cases included patients with positive IRDT results who were admitted to hospital. These cases were divided into an in-patient group that had received the influenza vaccine and an in-patient group that had not received a vaccine. The control group included all patients who were not admitted to hospital, whether they received an influenza vaccine or not. Admitted patients with negative IRDT results were excluded from the analysis. Influenza A(H3N2) virus characterisation Characteristics of the enrollees A total of 3,896 children were enrolled in this study, of whom 144 were subsequently excluded from the analysis for the following reasons: 117 were < 6 months old or > 15 years old, or their age was unknown; two had a fever < 38 °C; 24 had an unclear influenza vaccination history; and the date of one patient's clinic visit had not been recorded. 
Of the remaining 3,752 patients who were eligible for inclusion in the analysis in this study, 1,633 had influenza A (1 had influenza A(H1N1)pdm09 infection, and the remaining 1,632 had influenza A, subtype unknown); and 42 patients had influenza B. Of the 3,752 patients included, 2,077 were IRDT-negative. Figure 2 shows the total numbers of cases of influenza diagnosed by week at the 20 hospitals as a whole. VE by age group was analysed only in regard to influenza A. Statistically significant adjusted VE was not demonstrated in the infant group aged 6 months to 11 months, in which it was -5% (95% CI: -139 to 54), but statistically significant adjusted VE was seen in the 1- to 12-year-old group. Moderate adjusted VE against influenza A was demonstrated in the 1- to 2-year-old group (40%, 95% CI: 18 to 56) and in the 3- to 5-year-old group (55%, 95% CI: 41 to 65). Adjusted VE against influenza A in the 6- to 12-year-old group was lower (25%, 95% CI: 6 to 41), and it was not statistically significant in the 13- to 15-year-old group (41%, 95% CI: -0.1 to 65). Crude VE against influenza A was 29% (95% CI: 11 to 43) in the 6- to 12-year-old group and was significantly lower than the 55% (95% CI: 42 to 65) in the 3- to 5-year-old group (p = 0.0089, Breslow-Day test). VE against influenza B was not analysed by age group because of the small number of cases. Protection against hospitalisation Patients admitted to the hospitals with influenza A were divided into an unvaccinated group (n = 231) and a vaccinated group (n = 104). Admitted patients with negative IRDT results (n = 143) were excluded from this analysis. Vaccine effectiveness by month of illness onset Crude VE against influenza A infection decreased markedly in the late phase of the influenza epidemic, from 46% (95% CI: 37 to 54) in the 3-month period November, December, and January to 13% (95% CI: -18 to 36) in the 2-month period February and March (Table 5). Weekly changes in vaccine effectiveness Crude VE against influenza A first became statistically significant in week 49, when it reached 69% (95% CI: 46 to 82). Number of doses of vaccine Two doses of influenza vaccine did not provide better protection against influenza A in children of 6 months to 12 years of age than a single dose, even though two doses of trivalent IIV were recommended for that age group. Discussion Estimations of the effectiveness of influenza vaccine by a TNCC design have been reported annually in recent years [20-22], and the TNCC design has become the standard design for assessing VE. In this study, we used the results of IRDTs as a basis for estimating VE using the TNCC design in children who had received trivalent IIV during the 2014/15 season, since almost all children with a fever receive an IRDT during an influenza epidemic [23], resulting in a large enrolment for this study. The A(H3N2) viruses circulating in Japan in the 2014/15 season had drifted from the vaccine strain [15]. Consequently there have been genetic and antigenic mismatches between most epidemic A(H3N2) strains in Japan and the vaccine strains that have been used, as has been reported in Canada [5], the UK [6], and the US [7]. The low VE in the 2014/15 season, when the dominant influenza virus was A(H3N2), was postulated to be attributable to mutations in the egg-adapted A(H3N2) vaccine strain [24] as well as to a mismatch due to antigenic drift of the virus. The US study [7] showed that the adjusted VE for all ages against influenza A(H3N2) was 13% (95% CI: 2 to 23). However, none of these recent reports [5,7,25] clearly demonstrated VE of IIV in children.
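For the hospitalisation analysis reported above, the same 1 - OR logic applies, but the 2 × 2 table is assembled differently: admitted IRDT-positive children form the cases and all non-admitted children form the controls, while admitted IRDT-negative children are excluded. The sketch below uses the reported case counts (104 vaccinated, 231 unvaccinated); the control counts are placeholders, not study data.

```python
# Sketch of the VE-against-hospitalisation analysis described above.
# Cases: admitted, IRDT-positive children (231 unvaccinated, 104 vaccinated,
# as reported in the text). Controls: all non-admitted children regardless of
# IRDT result; the control counts below are placeholders, not study data.
# Admitted IRDT-negative children are never entered into the table.
import math

admitted_pos_vacc, admitted_pos_unvacc = 104, 231       # cases (from the text)
not_admitted_vacc, not_admitted_unvacc = 1500, 1900     # controls (placeholders)

odds_ratio = (admitted_pos_vacc * not_admitted_unvacc) / \
             (admitted_pos_unvacc * not_admitted_vacc)
se = math.sqrt(1/admitted_pos_vacc + 1/admitted_pos_unvacc +
               1/not_admitted_vacc + 1/not_admitted_unvacc)
ve = 1 - odds_ratio
ci = (1 - math.exp(math.log(odds_ratio) + 1.96 * se),
      1 - math.exp(math.log(odds_ratio) - 1.96 * se))
print(f"VE against hospitalisation = {ve:.0%} "
      f"(95% CI: {ci[0]:.0%} to {ci[1]:.0%})")
```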
The results of our study showed that trivalent IIV provided low but significant protection against influenza A(H3N2) virus infection in children in the 2014/15 season in Japan, despite marked antigenic drift in the epidemic virus. In a previous paper, we reported having found that trivalent IIV was highly effective in protecting against influenza A(H3N2) virus infection irrespective of whether there had been marked antigenic drift [3]. The widespread circulation of influenza A(H3N2) viruses in the 2014/15 season provided an opportunity to compare VE according to age group. Although significant protection against influenza A(H3N2) illness was demonstrated in the 1- to 12-year-old group, VE was not statistically significant in the 6- to 11-month-old group or the 13- to 15-year-old group. Similarly low or no effectiveness was observed in both the 6- to 11-month-old group and the 13- to 15-year-old group in our study of VE in the 2013/14 season [14]. The results of the present study showed that the influenza vaccine was not effective against influenza A (-5%, 95% CI: -139 to 54) in 6- to 11-month-old infants. Similarly, no significant VE was shown against influenza A in infants in the 2013/14 season (21%, 95% CI: -87 to 67) [14]. Our studies in these two consecutive seasons showed that trivalent IIV was not effective against influenza A(H1N1)pdm09 or A(H3N2) in infants. However, the number of infants enrolled was relatively small, and further studies are needed. We unexpectedly found that VE was low in adolescents (the 13-15 years age group) in the two consecutive seasons 2013/14 and 2014/15. In the 2013/14 season, both influenza A(H3N2) and A(H1N1)pdm09 were circulating in Japan [26], and no statistically significant VE against influenza A was observed in the 13- to 15-year-old group [14]. (Table note: VE against any influenza and VE against influenza A were higher early in the season than late in the season; Breslow-Day test, p < 0.05.) VE against influenza B was not statistically significant either [14]. Although we cannot explain this low or absent VE in adolescents, similar results, including low VE of trivalent IIV against influenza A(H3N2) and B in adolescents, were reported during the 2012/13 season in the US [27]. A meta-analysis showed no convincing evidence that influenza vaccine reduces mortality, hospitalisations, or serious complications in children [28]. However, the results of our previous study demonstrated that influenza vaccination was highly effective in reducing hospitalisation of children infected with influenza A in the 2013/14 season. In the present study, which covered the period of the widespread epidemic caused by the drifted influenza A(H3N2), it reduced such admissions of children infected with influenza A by 55%. Although the criteria for hospitalisation vary from country to country, our studies conducted two years in a row demonstrated VE in reducing hospitalisation for influenza A in children in Japan, where over 90% of the children with influenza-like illness (ILI) enrolled in the present study were brought to clinics within 48 hours after the onset of illness and 96% were treated with NAIs if their IRDT was positive. There are recent reports from other countries showing that influenza vaccination was associated with reduced hospitalisations [29] and reduced clinical severity in children [30]. Our previous study showed that VE against influenza A and B decreased by ca 10% in the latter half of the epidemic [14].
The present study showed that VE against influenza A declined greatly over the course of the epidemic, from 46% in November, December, and January to 13% in February and March. Thus, persistence of VE depends on the type and subtype of influenza viruses and the match between vaccine strain and epidemic virus. The weekly changes in VE shown in this study demonstrated the major advantage of a TNCC design based on IRDT results: it is easy to calculate VE every week in Japan. VE against influenza A gradually declined every week from 69% in week 49 of 2014 to 42% in week 8 of 2015. Two doses of influenza vaccine have been reported to be necessary to provide sufficient protection in children [4,31-33], and our previous study [14] showed that two doses were needed to optimise protection against influenza A in children. However, the results of the present study show that a single dose of influenza vaccine was as effective as two doses of vaccine in protecting children aged 6 months to 12 years against influenza A. (Table 6: Effectiveness of trivalent inactivated influenza vaccine against influenza A in children aged 6 months to 15 years, cumulative data, by week, influenza vaccine effectiveness study, Japan, November 2014 to March 2015; n = 3,752.) Most TNCC studies of VE have been based on RT-PCR confirmation of influenza [36], and the VE results in our previous study were consistent with the results based on RT-PCR findings reported in another study [14]. VE estimates have been found to be much less influenced when the sensitivity of the diagnostic method used is over 80%, although low specificity has been found to cause greater bias in VE estimates [35]. The sensitivity of the IRDT kit used in this study (Espline Influenza A and B-N kit) is 85.1% to 92.4% for influenza A and 71.6% to 91.2% for influenza B, and its specificity is 97.6% to 100% [37]. Moreover, over 90% of the children with ILI were brought to our clinics within 48 hours of illness onset. By contrast, in most of the TNCC studies based on RT-PCR tests, the patients were enrolled within 7 days after illness onset, suggesting that in some patients influenza virus may no longer have been detectable even by RT-PCR [38,39]. A TNCC design based on IRDT results is limited from an epidemiological standpoint, since the VE against each subtype of influenza A or especially against each lineage of influenza B cannot be determined. However, from a clinical standpoint, a TNCC design based on IRDT results has various advantages. VE can be communicated easily to the Japanese population during the very early stages of an influenza epidemic, and more importantly, VE against hospitalisation can be easily calculated. In the near future, VE estimated by a TNCC assessment based on IRDT results will be reported weekly in many areas of Japan. The large number of patients in Japan who receive an IRDT makes it possible to estimate VE with considerable precision, and the most appropriate vaccination policy will be established based on the data obtained.
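The point made above about test accuracy (VE estimates tolerate sensitivity above roughly 80% but are more affected by imperfect specificity) can be illustrated with a small simulation. All counts below are invented; the snippet simply applies assumed sensitivity and specificity to a hypothetical population with a built-in VE of 50% and shows how the observed estimate shifts.

```python
# Illustration (with made-up numbers) of how imperfect IRDT sensitivity and
# specificity bias a test-negative VE estimate towards the null, and why low
# specificity matters more than moderately reduced sensitivity.
def observed_ve(true_pos_v, true_pos_u, true_neg_v, true_neg_u, sens, spec):
    # Misclassify true influenza cases and non-cases according to the test.
    obs_pos_v = sens * true_pos_v + (1 - spec) * true_neg_v
    obs_pos_u = sens * true_pos_u + (1 - spec) * true_neg_u
    obs_neg_v = (1 - sens) * true_pos_v + spec * true_neg_v
    obs_neg_u = (1 - sens) * true_pos_u + spec * true_neg_u
    return 1 - (obs_pos_v * obs_neg_u) / (obs_pos_u * obs_neg_v)

# Hypothetical "true" study: VE = 50% built in (OR of vaccination = 0.5).
truth = dict(true_pos_v=250, true_pos_u=500, true_neg_v=1000, true_neg_u=1000)

print(observed_ve(**truth, sens=1.00, spec=1.00))  # perfect test -> 0.50
print(observed_ve(**truth, sens=0.85, spec=1.00))  # lower sensitivity -> ~0.48
print(observed_ve(**truth, sens=1.00, spec=0.90))  # lower specificity -> ~0.42
```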
2017-09-26T23:16:53.492Z
2016-10-20T00:00:00.000
{ "year": 2016, "sha1": "8a661fb96f16740376da25e51cab0fe50e935429", "oa_license": "CCBY", "oa_url": "https://www.eurosurveillance.org/deliver/fulltext/eurosurveillance/21/42/eurosurv-21-30377-1.pdf?containerItemId=content/eurosurveillance&itemId=/content/10.2807/1560-7917.ES.2016.21.42.30377&mimeType=pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "b3671e07f0734024f589f878a47daa0cafd37e84", "s2fieldsofstudy": [ "Medicine", "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
7415918
pes2o/s2orc
v3-fos-license
The influence of stimulus format on drawing—a functional imaging study of decision making in portrait drawing To copy a natural visual image as a line drawing, visual identification and extraction of features in the image must be guided by top-down decisions, and is usually influenced by prior knowledge. In parallel with other behavioral studies testing the relationship between eye and hand movements when drawing, we report here a functional brain imaging study in which we compared drawing of faces and abstract objects: the former can be strongly guided by prior knowledge, the latter less so. To manipulate the difficulty in extracting features to be drawn, each original image was presented in four formats including high contrast line drawings and silhouettes, and as high and low contrast photographic images. We confirmed the detailed eye–hand interaction measures reported in our other behavioral studies by using in-scanner eye-tracking and recording of pen movements with a touch screen. We also show that the brain activation pattern reflects the changes in presentation formats. In particular, by identifying the ventral and lateral occipital areas that were more highly activated during drawing of faces than abstract objects, we found a systematic increase in differential activation for the face-drawing condition, as the presentation format made the decisions more challenging. This study therefore supports theoretical models of how prior knowledge may influence perception in untrained participants, and lead to experience-driven perceptual modulation by trained artists. Introduction When drawing pictures, whether from life or from memory, or when copying photographs or paintings, there are complex decisions to be made in order to allow the rendering of an original image into a discrete series of pencil or pen strokes on paper. Only if copying or tracing an existing line drawing are these decisions avoided. But when for example, making a line drawing of a face, or a photograph of a face, features of the face have graded light or color intensities-perhaps the graded border between the cheek and the nose-that need to be caught as singular lines. As Perdreau and Cavanagh (2013) have recently discussed, there is a long chain of neural transformations between the initial processing of the image that falls onto the retina, the perception of objects in the scene and spatial relationships between them, decisions about sub-features and boundaries within the objects, the selection of the line to be drawn, and ultimately, the sensory-motor control of the drawn line as the hand moves on the paper. It is the question of the decisions about features and boundaries and the selection of the line to be drawn that we focus on in this paper. We have previously tested the eye and hand movements as naïve and expert artists draw portraits (Gowen and Miall, 2007;Miall and Tchalenko, 2001;Miall et al., 2009;Tchalenko and Miall, 2009). We have argued that in the interval between observation and selection of a feature of an image to be drawn, the chosen line is more likely stored as a motor plan than in a visual short-term memory . However, those studies did not allow us to investigate the processes involved in the selection of the drawn feature. There is indeed a long but still active debate about the extent to which artists are able to isolate their perceptual judgments from both the perceptual distortions normally introduced into each sensory processing stream, and from existing knowledge about objects (e.g. 
Fry, 1909/1981; Ostrofsky et al., 2012; Perdreau and Cavanagh, 2011; Ruskin, 1857/1971). Thus our visual perception reduces the impact of perspective, illumination-induced color shifts, etc., and leads to biases in our judgments. Further, it is often difficult to avoid bringing prior knowledge about the observed objects to bear: naïve portraits typically distort the drawn face, over-emphasizing its canonical features, such as the two eyes being equidistant from the nose (Gombrich, 1960). Thus there is a strong interaction between cognitive and experience-driven prior knowledge and stimulus-driven perception. Seeley and Kozbelt (2008), for example, have argued that trained artists develop spatial schemata, and that premotor areas in the brain are responsible for the advantage in deploying these schemata as motor plans when drawing, thus overcoming the biases in visual perception. They propose that "artists' technical proficiency in a medium confers an advantage in visual analysis, which consists of the ability to focus attention on sets of stimulus features sufficient for adequate depiction". Their model builds on previous theories (Kosslyn, 1996; Schyns, 1998), linking prefrontal attention areas with lower level visual processing areas: the perceptual features are extracted based on an iterative, hypothesis-testing loop between these frontal and temporal-occipital regions. With this background, we aimed to study the neural underpinning of the act of drawing from photographic images, controlling both the level of stimulus ambiguity and the level of top-down knowledge of the images. We contrasted drawing a prescribed feature of a face with a similar feature in an abstract object. We also controlled the ambiguity of this feature by presenting each image in four formats, from a line drawing to be copied, a silhouette, a low contrast and a high contrast photograph. We hypothesized that the decisions about how to draw each stimulus would be affected by the stimulus type (faces versus abstract), as the former would allow prior knowledge of faces to influence both the low-level perceptual processes and the judgments about the ambiguous features. We therefore focus on the contrast faces vs. abstract, but briefly mention the reverse contrast. We also hypothesized an effect of stimulus format (line, silhouette, defined and undefined), with the a priori expectation that the order of difficulty in making judgments about the lines would increase across that sequence of formats. We further hypothesized that there would be an interaction between these two factors, such that decisions about drawing undefined faces might differentially engage cognitive processes compared to easy line drawing conditions of abstract objects. Finally, based on the model of Seeley and Kozbelt (2008), and on the underlying theories (Kosslyn, 1996; Schyns, 1998), we predicted that prefrontal/premotor areas would have increased activation in the faces trials, based on the greater top-down knowledge of that category, while extrastriate areas would show a modulation in activity driven by this categorical knowledge and the increasing perceptual demand across the four presentation formats. Participants Fourteen participants were included in this study (mean age 30, range 24-41; all right-handed; 8 females). One additional participant was tested but excluded from the analysis as a result of a technical failure during the experiment.
All participants gave written informed consent according to instructions and procedures approved by the University of Birmingham Ethics Committee. The participants were not artists, art students, or recruited according to their drawing ability, and did not report any unusual history of drawing. All had normal vision, or vision corrected to normal with contact lenses. Material & apparatus The task involved drawing the outline contour of images of faces or abstract objects (folded towels; Fig. 1). There were eight sets of faces, from eight individuals, and eight sets of abstract objects (different foldings of the towel). Each set included four different formats, referred to as line, silhouette, defined (high contrast), and undefined (low contrast). The undefined and defined stimuli were derived from high resolution color photographs, which were individually manipulated using digital software to reduce or increase the contrast from the original, to ensure a poorly or well defined contrast of the face outline against the plain background. The line and silhouette stimuli were also individually produced by hand tracing the outline of photographs, filling with black for the silhouette stimuli (Fig. 1). Note that the corresponding photographs are not identical, but are recognizably the same face (or towel), from the same viewpoint. To double the number of available stimuli without repetition, we presented each stimulus twice, once in the training session and once in the main experiment, in its original orientation or in a "flipped" version. To create the flipped version, the original faces were flipped horizontally, and the original abstracts were flipped vertically, to give a grand total of 128 face and abstract stimuli. There was no strong vertical orientation for the folded towel "abstract" images. Because of the fully balanced design of the experiment, in which every stimulus was used, these small variations in stimulus are expected to have no effect on the participants' decisions about the drawing task, but will maintain higher levels of attention and interest. Participants drew images in the MR scanner on a touch-sensitive panel using a stylus. The panel was constructed from a wooden frame (295 × 230 mm) housing an 8-wire resistive touch screen (255.0 × 190.0 mm, AMT/PN 9534). The participants looked at a visual display on a rear-projection screen positioned behind their head and viewed through a rear-view mirror with a viewing distance of approximately 600 mm. The visual display included the stimulus image frame (right side) and the drawing frame (left side) presented against a black background. The display subtended a horizontal and vertical visual angle of approximately 32 × 16 degrees (Fig. 2). Participants could see their drawings appear in real time as a black line in a light pink drawing frame, although they were not able to view their hand or the drawing panel. They held the drawing panel with their left hand supported by a pillow across their lap, and their knees were supported by a knee support. The Long Range EyeLink 1000 eye tracking system combined with two separate infrared illuminators was used to record gaze positions of both eyes. The sampling frequency of the eye tracker was 250 Hz with an average accuracy of 0.5° of visual angle. A centroid model was adopted to fit the pupil image and determine the pupil position. A built-in nine-point calibration procedure was used, with randomly ordered presentation of the points.
The calibration procedure was repeated until good calibration was indicated by the system. Calibration was also repeated before each 18 minute scanning run. A drift check and correction were carried out after calibration to maintain accuracy of the calibration parameters, and the gaze position accuracy was validated during fixations onto a central fixation cross displayed between trials. Design/Procedure Participants took part in both a training session and an experimental session. Before the main experiment, the training session was carried out while participants were lying within a mock scanner of the same bore size as the magnetic resonance (MR) scanner, and with an identical head coil, mirror and projection screen. They were given verbal instructions in the mock scanner while they were performing the task, and if necessary given verbal corrections or reminders. Before entering the real scanner they were reminded about the instructions. The experiment used a block design (or "slow" event-related design with single-trial blocks) with eight conditions, which comprised a 2 × 4 design (faces vs. abstract types, and line, silhouette, defined, undefined formats). A 'rest' condition was also included in the design as a baseline. Each scanning session was composed of two runs, and each run contained 32 drawing trials. In total 128 images (8 sets × 2 types [faces and abstracts] × 4 formats [line, silhouette, defined, undefined] × 2 versions [normal and flipped]) were pseudorandomized across the 128 trials of the practice and scanning sessions. The order of normal and flipped presentation of each set was randomized and allocated evenly for both training and experimental sessions. Each session then included four different formats of each set, of which half were normal versions and the other half flipped versions. Each run included one format of a normal version and another format of a flipped version; if participants saw a normal version of any format in the training session, they would only see the flipped version of the stimulus in the experimental session, and vice versa. Consequently, each run included two different presentations (one normal and one flipped) of each set of stimuli types, and each session included four different formats (half normal and half flipped) of each set of stimuli types. Thus, participants saw each stimulus only once during the whole experiment. Each run started and ended with a 'rest' trial in which participants were requested to keep their eyes open and relaxed, looking towards a fixation cross in the middle of the screen, with no attempt to suppress blinks. In each run, a 'rest' trial was followed by a set of four trials, in which line, silhouette, defined and undefined conditions were randomized, and of which two were faces and two abstracts. Each run contained 41 trials (32 drawing trials and 9 rest trials), and each trial lasted 24 s. The run duration was therefore 16.4 minutes in total.
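The counterbalancing just described (every set shown in all four formats per session, half normal and half flipped, with the complementary version reserved for the other session) can be sketched as follows. This is an illustrative reconstruction, not the authors' code, and it omits the run-level constraints (rest trials every four trials, one normal and one flipped format of each set per run).

```python
# Sketch of the stimulus counterbalancing described above: every set appears in
# all four formats per session, half normal and half flipped, and a stimulus
# seen normal in training is seen flipped in the scanning session (and vice
# versa). Structure and variable names are illustrative only.
import random

TYPES = ["face", "abstract"]
FORMATS = ["line", "silhouette", "defined", "undefined"]
SETS = range(1, 9)                      # 8 sets per stimulus type

def build_sessions(seed=0):
    rng = random.Random(seed)
    training, scanning = [], []
    for stim_type in TYPES:
        for stim_set in SETS:
            formats = FORMATS[:]
            rng.shuffle(formats)
            # Two formats shown normal in training (flipped in scanning),
            # the other two shown flipped in training (normal in scanning).
            for i, fmt in enumerate(formats):
                first, second = ("normal", "flipped") if i < 2 else ("flipped", "normal")
                training.append((stim_type, stim_set, fmt, first))
                scanning.append((stim_type, stim_set, fmt, second))
    rng.shuffle(training)
    rng.shuffle(scanning)
    return training, scanning

training, scanning = build_sessions()
print(len(training), len(scanning))     # 64 + 64 = 128 presentations in total
```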
At the start of each drawing trial, participants were presented with two static circles, a smaller flashing circle and a short written message ('ANTICLOCKWISE', 'CLOCKWISE', or 'REST'). The message indicated the required direction of drawing, or that it was a rest trial, and remained onscreen throughout the trial. The flashing circle represented the location of the drawing stylus. The static circle on the right panel showed the position on the original where they were to start the drawing from, while the corresponding circle on the left side was the start zone into which they were instructed to move the cursor to begin drawing. After 3 seconds of this initial display, a stimulus image was presented, and participants began drawing from the start zone. After a further 3 seconds, the static circles disappeared and the drawing was carried on for a further 18 seconds (Fig. 2). For all formats of drawing trials, participants were encouraged to draw details of the contour of the image (including the cheek, chin and jaw for the face trials) as accurately as possible at a comfortable pace, drawing throughout the trial, and they were informed that finishing the drawing was not of importance. For defined trials, they were instructed to draw the well-defined contour against the background. For undefined trials, they were encouraged to use their judgment to draw the indistinct contour and were instructed not to draw any clearly defined parts of the image. Participants drew anticlockwise in normal trials and clockwise in flipped trials, so that in all face trials they started to draw from near the hairline, down the cheek and towards the chin (Fig. 3). Fig. 2. Experiment procedure. A: Participants were presented with two static circles, a flashing circle and a short message. The message indicated a drawing direction (ANTICLOCKWISE or CLOCKWISE) or if it was a 'Rest' trial. They were instructed to move the flashing circle, which represented the location of the drawing stylus, into the static circle on the left side to begin drawing. The static circle on the right showed a location of the image where they were to start drawing. B: After 3 s, an image was presented, and participants began drawing. C: The static circles disappeared after 3 s, and participants continued drawing for a further 19 s. In a 'Rest' trial, only a short message ('REST') was presented for 3 s, and participants were instructed to look towards a fixation cross in the middle of the screen for 21 s. Behavioral data analysis Drawing stylus positional data were collected at a sampling rate of 60 Hz and interpolated to match the sampling rate of eye movement data (250 Hz); through the calibration procedure both eye and stylus positions were defined relative to the screen coordinates in mm. During active drawing, the participants' gaze would alternate between the right hand original (stimulus image) panel and the left drawing panel, and they typically made a few fixations in each panel before switching back to the other side. The number of eye shifts between the original panel and the drawing panel during the drawing task was calculated for each trial. In general, the vertical border between the original panel and the drawing panel was used to determine if the gaze crossed from one panel to the other. On occasion, for trials in which the start zones were close to the border, drift of the eyetracking calibration meant that the gaze position would be incorrectly labeled as being in one panel when in fact it was in the other, as was determined by visual inspection of the sequence of fixations during a trial. The border was manually adjusted in these instances, and either one eye's data or the average eye data were used, as appropriate.
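A minimal sketch of the gaze processing described in this section is given below: each gaze sample is assigned to the original or drawing panel by the vertical border, the gaze ratio is the time on the original divided by the time on the drawing, and panel changes are counted as gaze shifts. Variable names, the border coordinate and the toy data are placeholders, not the authors' pipeline.

```python
# Minimal sketch (not the authors' code) of the gaze analysis described above:
# each gaze sample is assigned to the original (right) or drawing (left) panel
# by its x coordinate relative to the vertical border, G is the ratio of gaze
# time on the original to gaze time on the drawing, and panel changes between
# successive samples are counted as gaze shifts.
import numpy as np

def gaze_metrics(gaze_x, border_x, sample_dt=1.0 / 250.0):
    gaze_x = np.asarray(gaze_x, dtype=float)
    on_original = gaze_x > border_x            # right panel = original image
    t_original = on_original.sum() * sample_dt
    t_drawing = (~on_original).sum() * sample_dt
    g_ratio = t_original / t_drawing if t_drawing > 0 else np.inf
    n_shifts = int(np.count_nonzero(np.diff(on_original.astype(int))))
    return g_ratio, n_shifts

# Toy trial: placeholder x positions (mm, screen coordinates), border at 0 mm.
x = np.concatenate([np.full(500, 80.0), np.full(250, -90.0), np.full(750, 75.0)])
print(gaze_metrics(x, border_x=0.0))           # -> (5.0, 2)
```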
After categorizing gaze as being on either the original (right) or on the drawing (left), the gaze ratio (G) was calculated for each trial as the total gaze duration during the trial on the original panel versus the total gaze duration on the drawing panel. No attempt was made to analyze individual fixation positions or durations, or to separate fixations, saccadic and smooth pursuit movements. During the drawing task, participants drew the outline contour of the images nearly continuously. However, from time to time they stopped drawing and did not move the drawing stylus. In order to investigate participants' eye-hand coordination while they were actively drawing, we defined periods of drawing as those in which the stylus speed was higher than 1.5 mm/s or 0.14°/s. The drawing ratio (D) was also computed, as the total of the gaze durations on the original panel, within all active drawing periods within each trial, versus the total gaze duration on the drawing panel during the same periods of active drawing. Differences between G and D are due to periods of 'blind' drawing, where the eyes are on the original while drawing takes place without central vision. Average drawing speeds (mm/s) were also computed for each trial. The total length of drawing was calculated for the entire duration from the beginning of active drawing until the end of active drawing. If participants lifted the drawing stylus and continued drawing after a short interval, the interval was deducted from the entire duration. Procrustes analysis (Kendall, 1989) was used to determine the accuracy of the drawings, conducted using Matlab (version 7.8). Procrustes determines a linear transformation (with translation, orthogonal rotation and scaling) to best conform one data set to another, and reports a goodness of fit (or dissimilarity score) based on the sum of the squared distances between the fitted data points. To conduct the Procrustes analysis, each original image outline was carefully digitized by hand, and the segment of original outline that best represented the section drawn by the participant from the start position to the end of the trial was estimated by eye. The drawn line and the corresponding segment of the original outline were then spatially resampled to have 100 data points each. The Procrustes function returned the goodness of fit between these two sets of data points. Procrustes also reports the rotation, translation and scaling used in the transformation; mirror reversal of the data during the transform was disallowed.
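The Procrustes fit was computed in Matlab; the numpy sketch below re-implements the same kind of similarity transform (translation, orthogonal rotation and scaling, with reflections disallowed) and returns a dissimilarity score, rotation angle and scale. The scale here is expressed so that values above 1 indicate a drawn line larger than the original, matching the convention used in the Results; inputs are assumed to be already resampled to 100 points each.

```python
# Minimal numpy sketch of the Procrustes fit described above (translation,
# orthogonal rotation and scaling, reflection disallowed). Inputs are two
# (n x 2) arrays already resampled to the same number of points.
import numpy as np

def procrustes_fit(original, drawn):
    X = original - original.mean(axis=0)
    Y = drawn - drawn.mean(axis=0)
    norm_x, norm_y = np.linalg.norm(X), np.linalg.norm(Y)
    X, Y = X / norm_x, Y / norm_y
    U, s, Vt = np.linalg.svd(X.T @ Y)
    T = Vt.T @ U.T                        # optimal rotation (2 x 2)
    if np.linalg.det(T) < 0:              # disallow mirror reflection
        Vt[-1, :] *= -1
        s[-1] *= -1
        T = Vt.T @ U.T
    trace_ta = s.sum()
    dissimilarity = 1 - trace_ta ** 2     # standardised sum of squared errors
    rotation_deg = np.degrees(np.arctan2(T[1, 0], T[0, 0]))
    # Reciprocal of the scaling applied to the drawn line when fitting it to
    # the original, so scale > 1 means the drawing is larger than the original.
    scale = norm_y / (trace_ta * norm_x)
    return dissimilarity, rotation_deg, scale

# Toy example: a drawn line that is a rotated (5 deg), enlarged (1.1x) copy.
theta = np.radians(5.0)
R = np.array([[np.cos(theta), -np.sin(theta)], [np.sin(theta), np.cos(theta)]])
outline = np.column_stack([np.linspace(0, 1, 100), np.sin(np.linspace(0, 3, 100))])
drawn = 1.1 * outline @ R.T + np.array([5.0, -2.0])
print(procrustes_fit(outline, drawn))     # ~ (0.0, 5.0, 1.1)
```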
Scanning protocol Functional MR imaging was carried out using a 3 T Philips Achieva with an eight-channel parallel head coil and a SENSE factor of two. 52 contiguous axial slices were obtained in an ascending order to cover the whole brain, using a gradient-echo echo planar imaging (EPI) sequence (80 × 80 acquisition matrix, field of view of 240 × 240 × 156 mm, 3 × 3 × 3 mm voxel size, and TE = 35 ms, TR = 3000 ms, flip angle = 85°). 357 volumes plus two dummy volumes were acquired for each run, which lasted 18 min. A high resolution T1-weighted structural image (1 × 1 × 1 mm, sagittal orientation) was obtained between the first and second runs with a 5 minute MPRAGE sequence. fMRI data analysis fMRI data were analyzed in FEAT v5.98 using the FMRIB Software Library package (FSL 4.1.8, FMRIB, Oxford University; Smith et al., 2004; Woolrich et al., 2009; see the FSL website for details: http://www.fmrib.ox.ac.uk/fsl). The Brain Extraction Tool (v2.1) was run on the structural images to extract the brain from the image of the skull and adjoining tissue before running FEAT. Each voxel's time series was corrected to the middle point of the TR, and motion correction was applied to remove the effect of participant's head motion using MCFLIRT (FMRIB's Linear Image Registration Tool). Each EPI image was registered to the middle image of the acquisition set applying a 6 DoF rigid-body spatial transformation. One of the two functional runs for one participant was removed from the analysis because of excessive head motion. A brain mask from the first volume in the fMRI data was made using BET brain extraction to remove invalid voxels in the fMRI data. Each volume of fMRI data was smoothed with a spatial low-pass filter using a 5 mm full width half maximum (FWHM) Gaussian kernel to lower noise without diminishing valid activation. Low frequency noise was also removed using a Gaussian-weighted high-pass temporal filter with a 180 sec cut-off. A GLM was constructed using FILM (FMRIB's Improved Linear Modeling) with prewhitening and with the translation and rotation motion correction parameters as covariates of no interest. These covariates were orthogonalized to one another and to all the main experimental conditions. Eight conditions were modeled, and the 'Rest' condition was left unmodeled as baseline. The 8 explanatory variables were: face-line (FL), face-silhouette (FS), face-defined (FD), face-undefined (FU), abstract-line (AL), abstract-silhouette (AS), abstract-defined (AD), abstract-undefined (AU). The first 3 s of each trial, during which participants moved the drawing stylus into the start zone, was also separately modeled. All the 9 modeled conditions and their temporal derivatives were convolved with a hemodynamic response function from a gamma function (phase of 0 s, SD of 3 s, mean lag of 6 s). Registration of each run to a standard space was conducted through a two-stage process. Initially, the motion corrected functional images were registered to the MPRAGE structural using a 6 degrees of freedom (DoF) affine transformation, and the structural image in turn was registered to the MNI standard brain image (MNI152 T1 2 mm) using a 12 DoF affine transformation. At the first level of the analysis, each contrast (e.g. faces versus abstracts) was calculated for each individual run of each participant. Contrasts were performed for faces-abstract, for abstract-faces, for undefined-defined trials, and for a linear trend across formats (line, silhouette, defined and undefined, with a contrast vector of −3, −1, +1, +3). At the second level of the analysis, the results from the first level analyses for each participant were combined to create a participant average for each contrast (e.g. faces versus abstracts), calculated using a fixed-effects analysis. At the third level, the participant mean contrast was combined across the group using FLAME (FMRIB's Local Analysis of Mixed Effects) stage 1 and stage 2 (Beckmann et al., 2003; Woolrich, 2008; Woolrich et al., 2004). Z (Gaussianised T/F) statistic images were generated and significant activity identified using clusters determined by Z > 2.3 and a (corrected) cluster defining threshold of p = 0.05 (Worsley, 2001). The clusters identified from the group analysis were used to create masks for a region of interest analysis.
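The GLM just described was built in FEAT; the following sketch only illustrates its building blocks under stated assumptions: a boxcar regressor convolved with a gamma HRF with the parameters given in the text (phase 0 s, SD 3 s, mean lag 6 s), sampled at the 3 s TR, plus example contrast vectors including the linear trend over formats. Trial onsets and the regressor ordering are placeholders, not values from the study.

```python
# Sketch of the GLM building blocks described above: a boxcar for one condition
# convolved with a gamma HRF (phase 0 s, SD 3 s, mean lag 6 s, as in the text),
# sampled at TR = 3 s, plus the linear-trend contrast over the four formats.
import numpy as np
from scipy.stats import gamma

TR, N_VOLS = 3.0, 357
t = np.arange(0.0, 30.0, TR)
# Gamma HRF with mean k*theta = 6 s and SD sqrt(k)*theta = 3 s -> k = 4, theta = 1.5.
hrf = gamma.pdf(t, a=4.0, scale=1.5)
hrf /= hrf.sum()

def regressor(onsets_s, duration_s=21.0):
    box = np.zeros(N_VOLS)
    for onset in onsets_s:
        start = int(round(onset / TR))
        box[start:start + int(round(duration_s / TR))] = 1.0
    return np.convolve(box, hrf)[:N_VOLS]

face_line = regressor(onsets_s=[24.0, 216.0, 408.0])   # placeholder onsets

# Contrast vectors over the 8 condition regressors, assumed ordered
# [FL, FS, FD, FU, AL, AS, AD, AU]:
faces_vs_abstract = np.array([1, 1, 1, 1, -1, -1, -1, -1])
linear_trend      = np.array([-3, -1, 1, 3, -3, -1, 1, 3])  # line < silh < def < undef
```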
Using the Featquery tool (FMRIB, Oxford; see the FSL website for details: http://www.fmrib.ox.ac.uk/fsl/feat5/featquery.html), the masks were applied to extract the mean % signal change of the conditions of interest (FL, FS, FD, FU, AL, AS, AD and AU) for each run across all participants. These mean % signal changes were calculated relative to the mean activation of the 'Rest' condition (baseline) in the mask area. The anatomical locations of clusters were identified using comparisons between a neuroanatomical atlas (Duvernoy, 1999), Harvard-Oxford atlases (Desikan et al., 2006) and the MNI structural atlas (Mazziotta et al., 2001). Neighboring coordinates of the local maxima were used to identify the labels for Brodmann areas in the Talairach Daemon Labels included in FSLView 3.1.8 (FMRIB, Oxford; see the FSL website for details: http://www.fmrib.ox.ac.uk/fsl/fslview; Lancaster et al., 2000). Drawing behavior-eye movements To quantify the change in eye movements across the conditions, we first calculated the gaze ratio (G) of gaze duration on the original to gaze duration on the drawing panel (Fig. 4A). If G is greater than 1, participants looked at the original panel longer than the drawing panel, and we hypothesized that this would correlate with the difficulty of the decision process. Thus, a 2-way repeated measures ANOVA with 2 (type: faces, abstracts) × 4 (format: line, silhouette, defined, undefined) levels was performed. Since Mauchly's test indicated that the assumption of sphericity had been violated, corrected values of degrees of freedom were calculated using Greenhouse-Geisser estimates of sphericity, in this and subsequent ANOVA tests. There was no significant difference between faces and abstracts (F(1, 13) = 0.01, p = .920, ηp² = .001). However, there was a significant main effect of format (F(1.43, 18.62) = 3.77, p < .001, ηp² = .66). Post-hoc tests confirmed that the G ratio in the line task was significantly lower than in both defined (p = .001) and undefined (p < .001), but not different from silhouette (p = .086). The G-ratio for silhouette was significantly lower than for both defined (p = .009) and undefined (p = .001), and for defined it was significantly lower than for undefined (p = .012). There was no significant interaction between type and format (F(3, 39) = 1.16, p = .338, ηp² = .09). These results confirm the hypothesis that the time spent on the original varied with the stimulus formats (Fig. 4A), and suggest that the difficulty of decisions about the drawing task rose from line to silhouette to defined to undefined.
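The 2 × 4 repeated-measures ANOVA structure used throughout these analyses can be reproduced with standard tools; the sketch below uses simulated data for 14 participants and statsmodels' AnovaRM as a stand-in for the authors' statistics package. Note that AnovaRM reports uncorrected F tests; the Greenhouse-Geisser correction mentioned above would require an additional package such as pingouin.

```python
# Sketch of the 2 (type) x 4 (format) repeated-measures ANOVA on the gaze
# ratio, using simulated data for 14 participants (not the study data).
import numpy as np
import pandas as pd
from statsmodels.stats.anova import AnovaRM

rng = np.random.default_rng(1)
rows = []
format_effect = {"line": 1.0, "silhouette": 1.1, "defined": 1.3, "undefined": 1.5}
for participant in range(1, 15):
    for stim_type in ("face", "abstract"):
        for fmt, mu in format_effect.items():
            rows.append({"participant": participant,
                         "stim_type": stim_type,
                         "fmt": fmt,
                         "g_ratio": mu + rng.normal(0, 0.15)})
df = pd.DataFrame(rows)

result = AnovaRM(df, depvar="g_ratio", subject="participant",
                 within=["stim_type", "fmt"]).fit()
print(result.anova_table)   # F tests for type, format and type x format
```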
The drawing ratio (D) of gaze duration on the original panel to gaze duration on the drawing panel during periods of active drawing (drawing speed > 1.5 mm/s or 0.14°/s) was also calculated (Fig. 4B). Just as for the G-ratio, if D was greater than 1, participants looked at the original panel longer than the drawing panel while actively drawing. We hypothesized from other work that this drawing ratio would be similar to G, as drawing rates would remain constant despite changes in time viewing the original, and increasing amounts of "blind drawing", i.e. active drawing while looking at the original, would take place as drawing decisions increased. A 2-way rmANOVA with 2 (type: faces, abstracts) × 4 (format: line, silhouette, defined, undefined) levels was conducted on the D-ratio. There was no significant difference between faces and abstracts (F(1, 13) = 0.03, p = .856, ηp² = .003). However, there was a significant main effect of format (F(1.50, 19.46) = 23.84, p < .001, ηp² = .65, Greenhouse-Geisser corrected). Post-hoc tests illustrated that the ratio was significantly lower in line than defined (p = .001) and undefined (p < .001), but not different from silhouette (p = .099). Silhouette was significantly lower than defined (p = .013) and undefined (p = .002), and defined was significantly lower than undefined (p = .016). There was no significant interaction between type and format (F(1.96, 25.50) = 1.25, p = .305, ηp² = .09, Greenhouse-Geisser corrected). Thus, these results also confirm our hypothesis: even during active drawing periods, greater time was spent with gaze on the original panel in those tasks assumed to require greater decisions, in ascending order of line, silhouette, defined and undefined (Fig. 4B). We next tested the number of eye shifts between the original panel and the drawing panel (Fig. 4C); as above, from our previous work we predicted fewer shifts for the more difficult drawing decisions, as longer time is spent in each dwell period. Significantly fewer shifts were made for faces compared to abstracts (F(1, 13) = 84.56, p < .001, ηp² = .87). There was also a significant main effect of format (F(3, 39) = 84.99, p < .001, ηp² = .87) and a significant interaction between type and format (F(3, 39) = 6.53, p = .001, ηp² = .33). A simple main effects analysis showed that the number of gaze shifts significantly decreased in the order of format (line > silhouette > defined > undefined) for both faces (p ≤ .027) and abstracts (p ≤ .044), except for between line and silhouette formats for abstract stimuli (p = 1.000). Thus, consistent with the G- and D-ratios, fewer gaze shifts were seen in the defined and undefined conditions, assumed to be the more difficult ones, as gaze dwelt longer on the original panel than on the drawing panel. In summary, both faces and abstracts induced a near-identical pattern of G- and D-ratios; both ratios were higher for defined and undefined formats than for line and silhouette formats, and the ratios for undefined were higher than for defined. Participants made almost the same number of gaze shifts for both faces and abstracts, and the number of gaze shifts decreased in the order of line, silhouette, defined and undefined, except for between line and silhouette of abstracts. Drawing behavior-hand movements We quantified the match between the drawn line and the original for every trial, and monitored the average speed of the drawing during all active drawing periods. We had no a priori hypotheses about how accuracy would be affected by stimulus format: difficult drawing decisions might be compensated by changes in gaze behavior, as described above, such that the final accuracy was equivalent despite increased difficulty. From the Procrustes analysis of drawn line shape, values of dissimilarity, rotation and scale of the transformation from the drawn line to the original contour were computed. Two-way rmANOVAs were then conducted on these parameters. We did not analyze the translational components because the start position of each drawn line was constrained by the fixed start circle, and thus translation was constrained. The value of dissimilarity was significantly smaller for faces compared to abstracts (F(1, 13) = 7.07, p = .020, ηp² = .35, Fig. 5A). High dissimilarity scores indicate a less accurate overall fit of the shape between the original and the drawn line. Thus overall there was a better match of shape in the faces condition than in the abstract condition.
There was also a significant main effect of format (F(3, 39) = 6.56, p = .001, ηp² = .34), but there was no significant interaction between type and format (F(3, 39) = 1.00, p = .403, ηp² = .07). Bonferroni-adjusted post-hoc t-tests illustrated that the value of the dissimilarity measure for silhouette was significantly smaller than line (p = .027, Cohen's d = 0.59) and undefined (p < .007, Cohen's d = 0.79). The dissimilarity of defined was significantly smaller than undefined (p = .013, Cohen's d = 0.63). The rotation error was significantly greater for abstracts compared to faces (F(1, 13) = 10.05, p = .007, ηp² = .44, Fig. 5B). There was also a significant main effect of format (F(3, 39) = 4.43, p = .009, ηp² = .25), but no significant interaction between type and format (F(3, 39) = 1.02, p = .394, ηp² = .07). Bonferroni-adjusted post-hoc tests illustrated that the rotation error for silhouette was significantly smaller than defined (p = .015) and undefined (p < .012). The scale component was greater than 1 if the drawn outline was bigger than the original outline (Fig. 5C). The results showed that faces were drawn at a larger scale than abstracts (F(1, 13) = 7.13, p = .019, ηp² = .35). There was also a significant effect of format (F(3, 39) = 44.70, p < .001, ηp² = .78) and a significant interaction between type and format (F(3, 39) = 20.13, p < .001, ηp² = .61). A simple main effects analysis illustrated that silhouette, defined and undefined faces were drawn at a significantly larger scale than silhouette, defined and undefined abstracts (t(13) ≥ 2.94, p ≤ .011, Bonferroni-adjusted). However, the scale of abstract line drawings was significantly larger than that of line faces (p = .036), and greater than all other conditions. The average drawing speed (mm/s) was significantly slower for faces compared to abstracts (F(1, 13) = 67.79, p < .001, ηp² = .84; Fig. 5D). There was a significant main effect of format (F(1.73, 22.50) = 11.08, p < .001, ηp² = .46, Greenhouse-Geisser corrected) and also a significant interaction between type and format (F(3, 39) = 17.08, p < .001, ηp² = .57). A simple main effects analysis showed that the average drawing speeds of each level of format for faces were significantly slower than for abstracts (t(13) ≥ 6.65, p ≤ .001). Post-hoc tests for the faces showed that drawing speed was higher in the line condition than in all other conditions (p < .015, Bonferroni-adjusted), and that undefined was faster than silhouette (p = .038). For abstracts, the only difference was that undefined was faster than defined (p = .036). In summary, there was no simple relationship between parameters describing the drawn lines and the assumed decision difficulty. The similarity of the drawn line shape to the original was greater for faces than for abstracts, indicating better shape matching. The rotation component varied across the four formats, but the mean was within about 3-7 degrees. The slightly greater mean rotation for the abstract stimuli may reflect that they have no obvious natural orientation (Fig. 1). In general the scale of the face drawing was greater than one, and also greater than for the abstracts, while the scale for the abstract line was bigger than for all other conditions. Note however that all mean scale values are within 5-10% of unity.
Finally, the average drawing speed was slower for faces than for abstracts for all formats, and showed differences between the different face formats that were not evident between the abstract formats. For the faces, drawing speed was lowest for the silhouette condition, and highest for the line condition. In relation to the interpretation of the functional activation data, the differences in hand actions across conditions (movement scale and drawing speed) were relatively small, under 10% in all cases except for a 15% difference in scale of drawing for the abstract line versus abstract undefined conditions. Even in this extreme case, there was no difference in drawing speed, a factor that might influence motor execution of the hand action. Functional activations We hypothesized that the decisions about how to draw each image would be affected by the stimulus type and also by the stimulus format. We therefore tested for a main effect of stimulus type (face vs. abstract) and secondarily for a linear trend across the four stimulus formats (in order of assumed difficulty, line, silhouette, defined and undefined). We further hypothesized that there would be an interaction between these two factors, such that decisions about difficult face drawing conditions might differentially engage cognitive processes compared to easier abstract drawing conditions. We therefore conducted a further ROI analysis based on the results of the primary analyses. Finally we performed a contrast of the undefined versus defined format conditions, to test for greater activity in the former, more challenging condition. Faces versus abstract objects We first performed a main contrast of face trials (FL, FS, FD, FU) versus abstract trials (AL, AS, AD, AU). Four clusters showing stronger activation for drawing faces relative to abstract objects were identified from the group analysis (Figs. 6A-D). Two of these clusters in the occipital and temporal lobes were composed of bilateral fusiform face area (FFA, both occipital and temporal areas), bilateral lateral occipital cortex (LO, both inferior and superior areas), bilateral inferior temporal gyrus (ITG), bilateral lingual gyrus, right superior parietal lobe (SPL) and right precuneus (Table 1, Fig. 6, clusters A, B). The remaining two clusters lay in the frontal lobe and were composed of bilateral frontal pole (FP), bilateral middle frontal gyrus (MFG) and bilateral inferior frontal gyrus (IFG) (Table 1, Fig. 6, clusters C, D). For the reversed contrast (abstracts versus faces), a number of small activations were found (Z > 2.3) in several cerebral areas across both hemispheres. Table 1E reports only the bilateral local maxima that survived at a raised threshold of Z > 3.0. We do not focus on this reverse contrast for two reasons. First, our hypothesis was driven by the idea that enhanced knowledge of the face category will provide additional input to the perceptual decision processes, thus arguing for a faces-abstract contrast. Second, the contrast revealed an extensive activation in a single cluster spanning much of the sensory-motor areas, involving both hemispheres, but also extending into the cuneus and precuneus (Table 1E). There was no a priori reason to expect such extensive activity, and we failed to see a clear pattern within the areas activated that leads to a functional interpretation.
To further explore the differences in activation levels in the main clusters (Table 1A-D), region of interest analyses were conducted using these four functionally identified clusters constrained by their overlap with anatomically derived masks, limiting the ROI to ventral temporo-occipital regions. One mask therefore included bilateral FFA, inferior LO and ITG (Fig. 6), but excluded the superior LO, parietal and lingual activations. Fig. 6E shows the mean activation level for the ventral region including bilateral FFA, inferior LO and ITG across the 8 conditions. A 2 (type: faces, abstracts) × 4 (format: line, silhouette, defined, undefined) repeated measures ANOVA was carried out on the mean % signal changes extracted from this first region (FFA, LO and ITG; Fig. 6E). As expected from the identification of these regions as more active in the face than in the abstract condition, face trials resulted in significantly stronger activation than abstracts (F(1, 13) = 71.17, p < .001, ηp² = .85). A simple main effects analysis showed that faces led to significant activation compared to abstracts under all four different formats (t(13) ≥ 2.94, p ≤ .012; Fig. 6E). There was also a significant main effect of format (F(3, 39) = 10.53, p < .001, ηp² = .45), with noticeably higher activation in the defined and undefined formats for faces (FD and FU) than for the other conditions. In addition, the interaction between type and format was significant (F(1.82, 23.67) = 4.70, p = .022, ηp² = .27, Greenhouse-Geisser corrected). The effect sizes reported here for stimulus category should be interpreted with caution, since the voxels for this analysis were selected using a linear contrast of stimulus category. Results for stimulus format, on the other hand, are unaffected by this potential bias, because stimulus format was completely orthogonal to the stimulus category variable. A one-way ANOVA conducted on the differences in mean percentage signal changes between faces and abstracts at each level of format (FL-AL, FS-AS, FD-AD, FU-AU) illustrated that there was a significant linear trend in the magnitudes of difference (F(1, 13) = 18.55, p = .001, ηp² = .59; Fig. 6F). In other words, the activation difference between faces and abstract stimuli increased, in order of assumed difficulty, from line, silhouette, defined to undefined formats (Fig. 6F). Post-hoc tests using a Bonferroni adjustment showed that the magnitude of difference between silhouette (FS-AS) and undefined (FU-AU) was significantly different (p = .013). The differences between line (FL-AL) and defined (FD-AD), p = .068, and between line (FL-AL) and undefined (FU-AU), p = .057, approached significance. Additional region of interest analyses within the network of areas more active in the faces condition, anatomically constrained by overlap with the right SPL and right precuneus (F(1, 13) = 26.84, p < .001, ηp² = .67), the bilateral lingual gyrus (F(1, 13) = 12.34, p = .004, ηp² = .49), and the bilateral FP, MFG and IFG (F(1, 13) = 40.30, p < .001, ηp² = .76), only confirmed that faces led to stronger activation relative to abstracts in those regions, but did not expose significant differences across formats, or interactions between type and format.
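The mean % signal changes analysed above were extracted with FSL's Featquery; the snippet below sketches the same idea in Python (mean ROI time course expressed as % change from the 'Rest' baseline). File names, the mask and the rest-volume indices are placeholders rather than details of the actual pipeline.

```python
# Featquery-style extraction sketch (not the authors' pipeline): mean % signal
# change within a binary ROI mask, expressed relative to the mean signal during
# 'Rest' volumes. File names and the rest-volume indices are placeholders.
import numpy as np
import nibabel as nib

func = nib.load("filtered_func_data.nii.gz").get_fdata()   # x, y, z, time
roi = nib.load("ventral_roi_mask.nii.gz").get_fdata() > 0  # boolean ROI mask

roi_timecourse = func[roi].mean(axis=0)        # mean signal across ROI voxels

rest_volumes = np.arange(0, 8)                 # placeholder 'Rest' volume indices
baseline = roi_timecourse[rest_volumes].mean()

percent_change = 100.0 * (roi_timecourse - baseline) / baseline
# Condition means would then be taken over the volumes belonging to each of the
# eight conditions (FL, FS, FD, FU, AL, AS, AD, AU) and averaged across runs.
print(percent_change[:10])
```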
The effect of presentation format One additional network of interest was identified from the group analysis after conducting a main contrast of a linear trend across the four formats (with contrast weights of −3, −1, +1 and +3 for line, silhouette, defined and undefined, respectively). It encompassed bilateral FFA (both occipital and temporal areas), bilateral occipital pole (OP), bilateral lingual gyrus and bilateral LO which was marginally extended from OP, very similar to the cluster shown in Fig. 6. The mean activity within this network, across the 8 conditions (Fig. 7A), had a very similar profile to that seen in Fig. 6E, although the difference in activity between the four faces formats was greater (Fig. 7A). (Table 1: Four clusters (A-D) identified from the contrast between faces and abstracts, based on Z > 2.3 and a corrected cluster defining threshold of p = .05.) Hence, while the aim of this contrast was to identify areas changing activity in a linear fashion, in fact the only regions that were found showed greater activation in the defined and undefined conditions, but actually also showed greater activation in the line format than in the silhouette format. A 2 (type: faces, abstracts) × 4 (format: line, silhouette, defined, undefined) rmANOVA was carried out on the mean BOLD signal change in these regions. Faces led to significantly stronger activation than abstracts (F(1, 13) = 8.43, p = .012, ηp² = .40). There was a significant main effect of format (F(3, 39) = 60.57, p < .001, ηp² = .82). There was also a significant interaction between type and format (F(3, 39) = 9.40, p < .001, ηp² = .42). A simple main effects analysis illustrated that for faces, both defined and undefined respectively led to greater activations compared to both line and silhouette (p ≤ .001), and line was greater than silhouette (p = .012). For abstracts, both defined and undefined resulted in greater activations compared to silhouette (p ≤ .009). However, unlike the results shown in Fig. 6F, the differences between the faces and abstracts conditions were driven mainly by a stronger signal in the two photographic formats (defined and undefined) compared to the other formats (line and silhouette). In other words, the mean signal in this region was almost identical for the face and abstract stimuli in line format, while the silhouette format of abstracts led to somewhat greater activation than that of faces. However, the mean signal was significantly greater for faces than for abstracts in both the defined and undefined formats (FL = AL, FS < AS, FD > AD, FU > AU). Defined vs. undefined formats Finally, in order to test for any additional differences in activation when we had experimentally manipulated the contrast of the two photographic image formats, to make the line decisions more ambiguous, we performed a contrast between the undefined versus defined conditions. Two clusters were identified as significant from this group analysis. One cluster in the right hemisphere included IFG, temporal pole (TP), MFG and insula (Fig. 7B), while the other cluster in the left hemisphere included FP and MFG (Fig. 7C). Both showed stronger activation in the undefined condition. A 2 (type: faces, abstracts) × 4 (format: line, silhouette, defined, undefined) rmANOVA was carried out on the mean activity for the regions in the right hemisphere (Fig. 7B). There was no significant difference between faces and abstracts (F(1, 13) = 2.88, p = .113, ηp² = .18), and there was no significant interaction between type and format (F(3, 39) = 0.41, p = .727, ηp² = .03). However, the significant main effect of format (F(3, 39) = 6.34, p = .001, ηp² = .33) revealed the difference between defined and undefined (p = .001), for which this cluster had been identified.
In both faces and abstract stimuli, there was a similar 0.1% increase in BOLD activation for the undefined format (FU > FD, AU > AD), a difference of about a quarter of the mean signal seen in the other conditions. The same 2 (type: faces, abstracts) × 4 (format: line, silhouette, defined, undefined) rmANOVA was also carried out on the mean BOLD from the regions in the left hemisphere (Fig. 7C). Here, faces led to significantly greater activation than abstracts (F(1, 13) = 12.32, p = .004, ηp² = .49). The significant main effect of format (F(3, 39) = 4.32, p = .010, ηp² = .25) showed the expected difference between defined and undefined (p = .007). However, there was no significant interaction between type and format (F(3, 39) = 0.75, p = .527, ηp² = .06). In this region the difference between defined and undefined formats represented about 80% of the mean signal in the other conditions (FU > FD, AU > AD). To explore potential relationships between behavioral measures and the BOLD contrast between drawing conditions, we performed post-hoc regressions between the differences in drawing scale and accuracy (as measured by the Procrustes analysis) and the mean activation differences between the faces and abstract conditions, for the four identified ROIs, across the 14 participants in the group. There were no significant relationships for any of the ROIs (p > 0.102). We did, however, find a significant and positive relationship between the difference in drawing speed and the activation difference for the frontal ROI (r = 0.3, p = 0.018, DOF = 54, for the ROI including the cluster shown in Figs. 6C,D) and for the lingual ROI (r = 0.36, p = 0.006, DOF = 54). There was a marginally significant relationship for the parietal ROI (r = 0.27, p = 0.045, DOF = 54); the regression was not significant for the ventral occipital ROI including the cluster shown in Figs. 6A,B. These relationships suggest a negative relationship between speed and BOLD activation, as the speed was higher in the abstract than in the faces condition, while the clusters reported were based on greater activation in faces compared to abstract. We are cautious in our interpretation, as our experimental design did not aim to induce great variation within the behavioral measures, although there was in fact a difference in speed between the different drawing trials. Our group size is also small for such within-group regression analyses. However, it is clear that this negative relationship does not explain the interaction effects reported in Fig. 6F: the relationship with the individual's speed was not significant for this region, and in fact the differences in group mean speed between conditions (Fig. 5D) do not conform to the BOLD activations reported. Discussion This experiment aimed to isolate the neural systems involved in making decisions about how to draw observed pictures. We recorded eye and pen movements during whole brain functional imaging, and found that presentation of images of faces or abstract objects to be drawn in four different picture formats led to significant differences in eye movements and drawing metrics. We worked from the assumption, based on our previous behavioral analyses, that the difficulty in deciding on a line to be drawn would increase across line, silhouette, defined and undefined formats. As predicted from this, both the gaze ratio and the drawing ratio increased in order across the four presentation formats. The number of eye shifts between the original and the drawing panels of the display showed the opposite pattern.
Thus when discrimination of the prescribed feature of the stimulus image was made most difficult (i.e. for the undefined photographs with their deliberately reduced contrast), participants spent longer (up to 40% more) viewing the original and shifted their eyes between the original and the drawing panel less frequently (Fig. 4A). This was the case even during periods of active drawing (as captured by the D-ratio, Fig. 4B, the ratio of active drawing time spent looking at the original image instead of the drawing). In contrast to these obvious and clearly ordered changes in ocular behavior, there were less orderly changes in the drawing behavior (Fig. 5). There was an overall trend for our quantitative measures of inaccuracy in drawing (the rising dissimilarity and rotation scores) to follow the same order of changes seen in the ocular metrics, especially across the silhouette, defined and undefined formats. The dissimilarity scores reported by the Procrustes analysis of the drawn lines appear small (below 0.015, or 1.5%, implying very high accuracy), but these scores closely match those seen in other work (Tchalenko et al., in preparation). For example, in that study, dissimilarity scores reached about 0.007 (0.7%) when participants copied complex lines with a separation between original and copied lines of 15 degrees, similar to the 16 degree separation between the original and picture panels in the present study. They found dissimilarities of 0.002 (0.2%) for direct tracing over a complex line: this is about 4 times smaller (more accurate) than seen here for the most accurate drawing, under silhouette conditions. This value may represent a fundamental accuracy limit in line-copying, governed by visuo-motor control of the pencil. We would expect, and did find, slightly larger errors under the conditions used here, when participants drew within the constrained environment of the scanner, with indirect visual feedback of the pen motion, and with the line drawing recorded with a relatively low-resolution touch panel. However, while these scores are low, the key point is that the relative accuracy between the presentation conditions changed by as much as 50% (Fig. 5A). Note also that the similar trend in the scale measure (Fig. 5C) for the face and abstract conditions, across three formats (silhouette, defined and undefined), actually implies opposite shifts in size: for faces, the scale measure approaches unity, whereas for abstracts, the scale gradually reduces below unity. Thus changes in scale may be independent of similarity and rotation, in terms of the overall accuracy of reproduction. Intuitively, the (dis)similarity measure may be the most important of these metrics. A drawing would be recognizable even if drawn at the wrong scale and orientation, whereas the reverse is not necessarily true. On this basis, considering (dis)similarity as the primary measure of drawing accuracy, drawings were more accurate for the silhouette than for the defined format, and more accurate for the defined than for the undefined stimuli. The high dissimilarity scores for the line format were unexpected. It is not clear why this condition leads to less accuracy, higher rotations and, for the abstract line format, a large scaling error. We speculate that in the viewing conditions in the scanner, with indirect cursor feedback of the drawn line, it is easier to correctly locate and scale one's rendition of the other, more complete formats. Further work would be needed to understand this aspect of the drawing performance.
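To make the Procrustes-based dissimilarity discussed above concrete, here is a minimal Python sketch. It assumes the original and drawn lines are available as 2-D coordinate arrays and are resampled to a common number of points; it is illustrative only and not the authors' exact implementation (their separate scale and rotation measures would come from the fitted Procrustes transformation rather than the residual disparity alone). All names and data are placeholders.

```python
# Minimal sketch (not the authors' exact pipeline): a Procrustes-style
# dissimilarity between an original line and a participant's drawn copy.
import numpy as np
from scipy.spatial import procrustes

def resample_polyline(points: np.ndarray, n: int = 100) -> np.ndarray:
    """Resample an (m, 2) polyline to n points equally spaced by arc length."""
    seg = np.linalg.norm(np.diff(points, axis=0), axis=1)
    s = np.concatenate([[0.0], np.cumsum(seg)])
    t = np.linspace(0.0, s[-1], n)
    x = np.interp(t, s, points[:, 0])
    y = np.interp(t, s, points[:, 1])
    return np.column_stack([x, y])

# Placeholder coordinates for an original stimulus line and a drawn copy.
original = np.column_stack([np.linspace(0, 10, 50),
                            np.sin(np.linspace(0, 3, 50))])
drawn = original * 0.95 + np.random.default_rng(1).normal(0, 0.05, original.shape)

a = resample_polyline(original)
b = resample_polyline(drawn)

# procrustes() removes translation, scale and rotation, then returns the
# residual sum of squared differences ("disparity"), i.e. a shape
# dissimilarity; small values indicate more similar shapes.
_, _, disparity = procrustes(a, b)
print(f"dissimilarity = {disparity:.4f}")
```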
Relating these drawing accuracy measures to the number of gaze shifts (Fig. 4C), we suggest that the gaze shift rate is more closely related to task difficulty (i.e. to the decisions about the original stimulus image) than to the accuracy of performing the drawn line. This last point is supported by the dissociation between the metrics of line drawing and drawing in the silhouette, defined and undefined formats. The line presentation format should be the easiest decision task, as there is no ambiguity about what line is to be drawn, and indeed the low G-ratio and the higher number of gaze shifts reflect less time, and shorter dwell times, spent with gaze on the original in this condition than in any other. In contrast, the line format appears to be the exception to the linear trend seen in each measure of drawing performance (Fig. 5). In particular, the abstract stimuli presented in line format were drawn with noticeably less accuracy than expected from the gaze behavior. We cannot yet determine why this should be the case; it is not simply a trade-off between speed and accuracy, because the changes in average drawing speed were not congruent with the changes in drawing dissimilarity. In fact, drawing speed was quite slow in all conditions (mean about 12 mm/s) and did not vary systematically across presentation formats, although it was higher in the abstract condition than in the face condition. We suggest therefore that the dissimilarity, scale and rotation metrics reflect planning errors in drawing, while the drawing speed reflects the relatively constant performance demand of executing the planned lines. The main focus of the experiment was of course to compare functional activation levels across the four presentation formats, driven by our a priori assumption that decisions about the line to be drawn would be modulated by the format. Our simultaneously recorded eye movement metrics confirmed that the participants indeed behaved as if they found the formats to increase in difficulty in the order line, silhouette, defined and undefined. Furthermore, the differences in hand drawing actions across the different conditions were relatively small, amounting to changes in scale of below 7% and differences in pen speed of less than 3 mm/s (Fig. 5D). Even the statistically significant differences in ocular control that we observed (the increase in the number of gaze shifts from original to drawing, for example) were on the order of 10% of the average. We were therefore largely successful in keeping overt ocular and manual motor execution parameters approximately equal, and we did not see sensorimotor or oculomotor areas in our main comparison of interest between stimulus conditions (faces vs. abstracts). However, the reverse contrast (abstracts vs. faces) did reveal significant bilateral S1 and M1 activation. This could be due to the approximately 10% greater drawing speed observed for the abstract conditions, despite the low overall speeds. However, this activation pattern did not reflect the differences in drawing speed seen between the four formats. We did, however, find a broad network of occipito-temporal, parietal and prefrontal regions that were differentially activated when the face drawing task was contrasted with drawing abstract objects. The object images were generated from photographs of folded white towels (Fig. 1). They superficially resembled the shape of a human head, and in the line and silhouette formats had close similarity to the face stimuli.
The order of presentation of the trials was carefully counterbalanced across formats and stimulus type, so there is some possibility that in some trials in which objects were presented, the class of stimulus (face versus object) might have been unknown to the participants. However, the category would have been clear in the vast majority of face stimulus trials, cued by the neck and shoulders, or by the hairline in the line format, and the category would have been unambiguous for all stimuli when presented in the photographic formats, even in the undefined cases with reduced contrast. It is noticeable, however, that only the photographic formats had internal features of the human faces, in particular eyes and mouth. Images with these canonical features are known to activate the face-sensitive regions in the fusiform and occipital areas of extrastriate visual cortex (Grill-Spector et al., 2004; Kanwisher et al., 1997; Kourtzi and Kanwisher, 2000). This difference in facial features is likely to be the cause of the increased activation in ventral areas in the photographic face conditions (Figs. 6E, A). In contrast, parietal and frontal areas were more engaged by the difficult undefined and defined conditions for both stimulus categories, but were not selectively driven by the facial features. Turning next to the hypothesized interaction between stimulus types and presentation formats, the temporo-occipital-parietal and frontal network that was identified only on the basis of its increased activation for faces (Fig. 6) did indeed show this strong interaction. The difference in activity across this region, especially in the ventral areas consisting of bilateral FFA, LO and ITG, showed a strong, monotonically increasing difference between faces and abstracts (Fig. 6F). This increasing difference parallels the increase in the ocular D- and G-ratios (Fig. 4), that is, the amount of time spent viewing the original image rather than with gaze on the drawing panel. We suggest that this reflects the time required to select the line to be drawn and to plan how to draw it, under the different formats. As the activation in this region is higher for faces than for abstracts across all four formats, the results support a domain-specific mechanism in these ventral areas. In other words, the pattern of ocular behavior is common across both stimulus categories (faces and abstracts, Fig. 4), and this alone cannot account for the interaction seen in Fig. 6F. But since the FFA has a strong preference for face stimuli, the monotonic increase in its activation over that during the abstract stimuli could be explained by top-down modulation of this region during the more challenging of the face processing conditions. We speculate therefore that this difference reflects an increasing influence of top-down areas onto the ventral cortex, priming these extrastriate visual areas to extract the visual features that are to be drawn. When the features are poorly defined, as in the undefined stimuli, prior knowledge of the stimulus category (faces) allows the participant to make the difficult judgments about the line to be drawn. In this study, we have been able to identify the influence of this top-down knowledge on the processing of lower areas only for the category of faces; other, more specific classes of stimuli might be needed to identify similar top-down influences on the processing of, for example, objects or scenes. We should acknowledge that our design could be improved.
Through our choice of folded towels with typically only one obvious edge or fold, the complexity of the photographic face stimuli is higher than that of the abstracts. By default, only the photographic defined and undefined faces had internal features (eyes, mouth, etc.), and this might lead to higher interest and attention. Thus the increase in complexity between the two photographic face formats and the other two formats (line and silhouette) is greater than that seen for the abstract stimuli, and might engage higher interest or attention. We do not believe that this can be the major cause of the activation pattern seen in the temporo-occipital-parietal areas, however, as an attentional account would suggest longer viewing times for the more engaging stimuli. In contrast, our behavioral measures show that gaze ratios were very closely matched between the two stimulus classes, and all of the ocular metrics showed a linear trend across the 4 formats, for both abstract and face stimuli. The domain-specific effect that we believe is driving these changes in extrastriate cortical activation levels is closely in line with the model of perceptual advantages for trained artists put forward by Seeley and Kozbelt (2008). They suggested that processing of image features within the visual stream (including V1, MT, V4 and TEO) would be primed by categorical knowledge from the DLPFC and from area TE, to allow extraction of visual features consistent with a current perceptual hypothesis, but also by motor working memory or action schemata from motor areas such as rostral SMA and PMC, that would constrain visual processing and direct visual attention consistent with planned motor acts. We did indeed find significant activation of middle and frontal gyrus, bilaterally. We also found activation of the frontal pole, BA10, and this may be consistent with Seeley and Kozbelt's (2008) hypothesis that high levels of the motor hierarchy constrain visual processing through their role in planned motor acts. This area can be activated by high-level decisions about the value of actions (Ramnani and Miall, 2003). In summary, we have shown that line drawing of facial features in visual images engages a number of lateral and ventral occipital areas, and the activation in these areas may reflect both their domain-specific sensitivity for faces and their top-down modulation by other cortical areas (perhaps prefrontal), in agreement with theory about perceptual ability in trained artists. Our study has only tested untrained participants, and while trained artists might be expected to have developed particular experience in making judgments on visual images, normal healthy adults would have significant domain-specific knowledge about faces. The temporo-occipital activation therefore may reflect top-down knowledge that the participants are using to disambiguate the drawings (knowledge that can distort perception at times). Solso (2001) has reported a pilot study comparing one skilled artist with untrained participants, and indeed showed higher frontal activation, and less occipital activity, in the artist. Interestingly, Chamberlain et al. (2014) have just reported an anatomical study with trained and untrained artists suggesting that increased grey matter in the right medial frontal gyrus correlated with observational drawing ability (cf. Table 1D), while the right precuneus correlated with observational drawing training (cf. Table 1E).
A full functional study comparing artists with non-artists would be a valuable extension to our work, to further clarify this issue.
Phase I/II Study of Stem-Cell Transplantation Using a Single Cord Blood Unit Expanded Ex Vivo With Nicotinamide Purpose Increasing the number of hematopoietic stem and progenitor cells within an umbilical cord blood (UCB) graft shortens the time to hematopoietic recovery after UCB transplantation. In this study, we assessed the safety and efficacy of a UCB graft that was expanded ex vivo in the presence of nicotinamide and transplanted after myeloablative conditioning as a stand-alone hematopoietic stem-cell graft. Methods Thirty-six patients with hematologic malignancies underwent transplantation at 11 sites. Results The cumulative incidence of neutrophil engraftment at day 42 was 94%. Two patients experienced secondary graft failure attributable to viral infections. Hematopoietic recovery was compared with that observed in recipients of standard UCB transplantation as reported to the Center for International Blood and Marrow Transplant Research (n = 146). The median time to neutrophil recovery was 11.5 days (95% CI, 9 to 14 days) for recipients of nicotinamide-expanded UCB and 21 days (95% CI, 20 to 23 days) for the comparator (P < .001). The median time to platelet recovery was 34 days (95% CI, 32 to 42 days) and 46 days (95% CI, 42 to 50 days) for the expanded and the comparator cohorts, respectively (P < .001). The cumulative incidence of grade 2 to 4 acute graft-versus-host disease (GVHD) at day 100 was 44%, and that of grade 3 and 4 acute GVHD at day 100 was 11%. The cumulative incidence at 2 years of all chronic GVHD was 40%, and that of moderate/severe chronic GVHD was 10%. The 2-year cumulative incidences of nonrelapse mortality and relapse were 24% and 33%, respectively. The 2-year probabilities of overall and disease-free survival were 51% and 43%, respectively. Conclusion UCB expanded ex vivo with nicotinamide shortens median neutrophil recovery by 9.5 days (95% CI, 7 to 12 days) and median platelet recovery by 12 days (95% CI, 3 to 16.5 days). This trial establishes the feasibility, safety, and efficacy of an ex vivo expanded UCB unit as a stand-alone graft. INTRODUCTION Despite remarkable improvement in outcomes of adult recipients of umbilical cord blood (UCB) transplantation, slow hematopoietic recovery continues to be the major limitation of this approach. Stemming from this delay in hematopoietic recovery are other disadvantages of UCB transplantation, such as increased risk for infection, prolonged hospitalization, and increased resource use. Early-phase, single-center studies have demonstrated that ex vivo expansion of UCB stem cells before transplantation has the potential to address this critical shortcoming. By expanding both hematopoietic stem and progenitor cells, the time to neutrophil recovery after myeloablative conditioning can be even more rapid than that after a mobilized peripheral blood stem-cell graft. [1-4] NiCord (Gamida Cell, Jerusalem, Israel) is an ex vivo expanded cell product derived from the CD133+ fraction of banked UCB that uses nicotinamide as the active agent that inhibits differentiation and enhances the functionality of cultured hematopoietic stem and progenitor cells. When nicotinamide is added to stimulatory hematopoietic cytokines, UCB-derived hematopoietic progenitor cell cultures demonstrate an increased frequency of phenotypically primitive CD34+CD38− cells and a substantial increase in the bone marrow homing and engraftment potential of ex vivo expanded CD34+ cells.5
The ability of nicotinamide to expand both committed and long-term repopulating hematopoietic stem cells was confirmed in a first-in-human pilot study of NiCord.3 In this study, a second unmanipulated UCB unit was coinfused with the NiCord-expanded unit to maintain patient safety. With long-term follow-up, stable NiCord-derived hematopoiesis has now been observed for more than 7 years. On the basis of these results, we conducted a multicenter, phase I/II study of NiCord transplanted as a single, expanded UCB graft after myeloablative conditioning. Patient Eligibility Eligible patients were 12 to 65 years of age with high-risk hematologic malignancies and no readily available matched sibling or matched unrelated adult donor. The Center for International Blood and Marrow Transplant Research (CIBMTR) provided historical data on 1,037 patients undergoing UCB transplantation between 2010 and 2013. A cohort of patients was selected with characteristics as similar as possible to the phase I/II patients; selections for myeloablative conditioning, disease status, age, graft size, HLA matching, and performance score criteria resulted in a CIBMTR sample size of 146 (Appendix Table A1, online only). Among the final cohort, 80% received a double cord blood graft and 20% received a single cord blood graft. Of the 58 patients who enrolled in the trial between 2013 and 2017, 10 became ineligible during the pretransplantation work-up and five withdrew because of logistical issues surrounding graft production. Forty-three patients were allocated to treatment in the study. Seven of the 43 patients were not evaluable because of NiCord production complications. These patients underwent UCB transplantation with either an unmanipulated cord blood graft or a combination of NiCord plus an unmanipulated cord blood graft (Appendix Fig A1, online only). The study was approved by the institutional review boards of all participating institutions and the national regulatory authorities. All patients provided written informed consent. The study was performed in accordance with the International Conference on Harmonization Guidelines and Good Clinical Practice (ClinicalTrials.gov identifier: NCT01816230). Graft Selection Protocol eligibility required patients to have a cord blood unit matched at 4 to 6/6 HLA class I (HLA-A and HLA-B, low resolution) and class II (HLA-DRB1, high resolution) loci (Data Supplement). The unit was required to have a precryopreserved dose of greater than or equal to 8.0 × 10⁶ CD34+ total cells as well as a precryopreserved total nucleated cell dose (TNC) of greater than or equal to 1.8 × 10⁹, delivering greater than or equal to 1.8 × 10⁷ TNC/kg. The UCB unit must have been volume reduced and red blood cell depleted before cryopreservation. UCB bank preference was not specified in the eligibility criteria. An additional partially HLA-matched cord blood unit of at least 2.5 × 10⁷ TNC/kg was reserved as a backup in case the expanded product did not pass the required quality control tests. NiCord Production The NiCord-designated unit was delivered from the cord blood bank to a Current Good Manufacturing Practice-compliant cell-processing facility (Lonza, MD, or Gamida Cell, Jerusalem, Israel). NiCord was manufactured as previously described.3 Briefly, the unit underwent immunomagnetic bead selection for CD133+ cells. The CD133−, T-cell-containing flow-through fraction was retained and recryopreserved. The CD133+ fraction was cultured for 21 ± 2 days and then recryopreserved.
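Purely as an illustration of the unit-selection arithmetic stated in the Graft Selection paragraph above (the HLA-matching criteria are omitted), the following hypothetical Python helper encodes the three pre-cryopreservation cell-dose thresholds; it is not part of any study software, and all names are made up for the example.

```python
# Illustrative only: encodes the pre-cryopreservation thresholds stated in
# the Graft Selection section; not part of any actual protocol software.
def unit_meets_protocol_minimums(cd34_total: float,
                                 tnc_total: float,
                                 recipient_weight_kg: float) -> bool:
    """Check the stated minimum cell doses for the NiCord-designated unit."""
    MIN_CD34_TOTAL = 8.0e6   # >= 8.0 x 10^6 CD34+ cells (pre-cryo)
    MIN_TNC_TOTAL = 1.8e9    # >= 1.8 x 10^9 total nucleated cells (pre-cryo)
    MIN_TNC_PER_KG = 1.8e7   # >= 1.8 x 10^7 TNC per kg recipient weight
    return (cd34_total >= MIN_CD34_TOTAL
            and tnc_total >= MIN_TNC_TOTAL
            and tnc_total / recipient_weight_kg >= MIN_TNC_PER_KG)

# Example: a unit with 1.9 x 10^9 TNC for an 80 kg recipient delivers
# about 2.4 x 10^7 TNC/kg, so all three thresholds are met.
print(unit_meets_protocol_minimums(cd34_total=9.0e6,
                                   tnc_total=1.9e9,
                                   recipient_weight_kg=80.0))
```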
Taking into account time for shipment of the UCB unit to the cell-processing facility and then to the transplant center, the total time for NiCord production is 24 ± 3 days. The CD34+ and CD3+ cell content of the graft, as reported in this article, was quantified before recryopreservation of the product. Conditioning Regimens and Graft-Versus-Host Disease Prophylaxis Three alternative myeloablative conditioning regimens were permitted for study participants (Table 1). All dosing was on the basis of 25% adjusted ideal body weight unless otherwise noted. Graft-versus-host disease (GVHD) prophylaxis was provided by a calcineurin inhibitor (tacrolimus or cyclosporine) and mycophenolate mofetil starting 4 days before transplantation. Mycophenolate mofetil was continued for a minimum of 60 days and the calcineurin inhibitor for a minimum of 6 months after transplantation. Supportive Care Granulocyte colony-stimulating factor (5 μg/kg recipient body weight) was administered daily starting on day +1 after transplantation until the absolute neutrophil count exceeded 1,000 cells/μL. Antiviral and antifungal prophylaxis were administered at the discretion of the transplantation center. Antibacterial prophylaxis for the first 100 days after transplantation was required by protocol. The agent used was left to the discretion of the transplant center. Laboratory and Clinical Assessments Donor chimerism analysis was performed by the local transplant center on whole blood, CD15+ myeloid cells, and CD3+ T cells using quantitative analysis of informative microsatellite DNA sequences. Quantitative assessment of CD3, CD4, CD8, natural killer, and B-cell recovery was performed on a subset of patients by the local transplantation center (or designated referral laboratory) 2 months, 3 months, 6 months, and 1 year after transplantation. The time to neutrophil and platelet engraftment was defined as per CIBMTR standards. Statistical Considerations Analysis was limited to the 36 patients undergoing transplantation with NiCord as a stand-alone graft. Database closure was on November 16, 2017. The primary end points were the cumulative incidence of neutrophil engraftment at 42 days with less than or equal to 10% host cells and the incidence of secondary graft failure. To facilitate comparison with CIBMTR data, engraftment without chimerism was evaluated here. Secondary end points evaluated here were the cumulative incidence of platelet engraftment, overall survival, nonrelapse mortality, disease relapse, acute and chronic GVHD, and time alive and out of hospital over the first 100 days. Competing risks for engraftment were death, progression/relapse, and second transplantation; for GVHD, competing risks were death, absolute neutrophil count recovery failure, second transplantation, secondary graft failure, and progression/relapse; for nonrelapse mortality, the competing risk was progression/relapse; and for progression/relapse, the competing risk was death. Because of differences in the age distribution between the phase I/II study and the retrospective cohort, unadjusted and age-adjusted cumulative incidence curves for engraftment were calculated; age-adjusted curves for the CIBMTR cohort were weighted by the proportion of patients in the phase I/II trial in the age strata 18 years or younger, 19 to 39 years, and 40 years or older. Comparison graphs for engraftment are provided showing the weighted cumulative incidence. Calculations of SEs for the unadjusted and adjusted cumulative incidence CIs were based on the Aalen and delta methods, respectively.
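As an illustration of the nonparametric bootstrap mentioned in the next paragraph for the CI of the difference between median times to engraftment, here is a simplified Python sketch. It ignores censoring, competing risks, and the age stratification used in the actual analysis, uses placeholder data, and is not the authors' SAS/Stata/R code.

```python
# Illustrative sketch only: percentile bootstrap 95% CI for the difference
# between median times to neutrophil engraftment in two cohorts,
# restricted (as in the paper) to patients who engrafted.
import numpy as np

rng = np.random.default_rng(42)
nicord_days = rng.normal(11.5, 3.0, size=35).round()       # hypothetical engrafters
comparator_days = rng.normal(21.0, 4.0, size=124).round()  # hypothetical engrafters

def bootstrap_median_diff_ci(a, b, n_boot=10_000, alpha=0.05, rng=rng):
    """Percentile bootstrap CI for median(a) - median(b)."""
    diffs = np.empty(n_boot)
    for i in range(n_boot):
        ra = rng.choice(a, size=a.size, replace=True)
        rb = rng.choice(b, size=b.size, replace=True)
        diffs[i] = np.median(ra) - np.median(rb)
    lo, hi = np.percentile(diffs, [100 * alpha / 2, 100 * (1 - alpha / 2)])
    return np.median(a) - np.median(b), (lo, hi)

diff, (lo, hi) = bootstrap_median_diff_ci(nicord_days, comparator_days)
print(f"median difference = {diff:.1f} days, 95% CI ({lo:.1f}, {hi:.1f})")
```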
Differences between times to engraftment were tested using a van Elteren test stratified on age groups. For these tests, patients not engrafting were assigned a time to event larger than that of any patient with an event. Median time to an event is calculated among those with an event, with 95% CIs on the basis of CI calculations for rank statistics. CIs (95%) for the difference between median times to engraftment were estimated using the nonparametric bootstrap. For secondary end points, unadjusted cumulative incidence or Kaplan-Meier survival probabilities are reported. For GVHD, comparisons were made using the Fine-Gray model with group and age group as covariates; for nonrelapse mortality and relapse, comparisons were made using the Fine-Gray model with group, age group, and disease risk index, and Cox models were also used to help interpret the results. For overall survival and disease-free survival, comparisons were made using the log-rank test and a Cox model with group, age group, and disease risk index. Time out of hospital was compared using the age-stratified van Elteren test. SAS (SAS/STAT User's Guide, Version 9.4, SAS Institute, Cary, NC), STATA 15 software (STATA, College Station, TX; Computing Resource Center, Santa Monica, CA), RStudio, and R 3.3.1 or higher were used for these analyses. Patient and Stem-Cell Transplantation Characteristics Patient characteristics are described in Table 1. (Table 1 notes: Abbreviations: ALL, acute lymphoblastic leukemia; AML, acute myeloid leukemia; CML, chronic myeloid leukemia; CMV, cytomegalovirus; MDS, myelodysplastic syndrome; TBI, total body irradiation. *TBI 13.5 Gy over eight or nine fractions on days −9 to −6 or −5, and either cyclophosphamide 60 mg/kg on days −4 and −3 or thiotepa 5 mg/kg administered on days −11 and −10.7,8,9 The third agent in regimen A was fludarabine 40 mg/m² on days −5 to −2 when paired with thiotepa, or 25 mg/m² on days −8 to −6 when paired with cyclophosphamide.) Eleven centers in the United States, Europe, and Asia (Singapore) enrolled patients on the study. Graft Characteristics Characteristics of the NiCord graft before and after expansion, including the median total CD34+ cell content, are shown in Figure 1. Hematopoietic Recovery The age-adjusted cumulative incidence of neutrophil engraftment at 42 days after transplantation was 94% for NiCord recipients and 85% for the CIBMTR comparator cohort (Fig 2A). By 21 days after transplantation, 89% of NiCord recipients had achieved neutrophil engraftment. Neutrophil engraftment was faster for NiCord recipients (P < .001). Among patients who engrafted, the median time to neutrophil recovery was 11.5 days (95% CI, 9 to 14 days) for NiCord recipients and 21 days (95% CI, 20 to 23 days) for the CIBMTR comparator cohort. The age-adjusted cumulative incidence of platelet engraftment at 100 days after transplantation was 81% for NiCord recipients and 63% for the CIBMTR comparator cohort (Fig 2B). Platelet engraftment was faster among NiCord recipients (P < .001). For patients who achieved platelet recovery, the median time to platelet recovery was 34 days (95% CI, 32 to 42 days) and 46 days (95% CI, 42 to 50 days) for the NiCord and CIBMTR comparator cohorts, respectively. Whole blood chimerism was available for 26 patients at 100 days after transplantation. Twenty-five patients (96%) had greater than or equal to 95% donor whole blood chimerism, and one had 57%. Lineage-specific myeloid and T-cell chimerism was available in a subset of patients (n = 22) at day 100.
Twenty patients had greater than 90% donor chimerism in both fractions. Two patients had mixed chimerism at day 100; one was 57% in the myeloid fraction and 3% in the T-cell fraction, and the other was 100% in the myeloid fraction and 10% in the T-cell fraction. One patient experienced primary graft failure. Two patients experienced secondary graft failure, one occurring at day 19, concurrent with high-titer human herpesvirus 6 viremia, and the second occurring at day 262, concurrent with a lethal adenovirus infection. Nonrelapse Mortality, Relapse, Disease-Free Survival, and Overall Survival The median follow-up of surviving NiCord recipients was 14 months (range, 5 to 36 months). The unadjusted 2-year cumulative incidence of nonrelapse mortality for NiCord recipients was 24% (95% CI, 11% to 39%). Using both the Fine-Gray and the Cox models, 2-year nonrelapse mortality hazard rates were lower for patients receiving a NiCord graft compared with the CIBMTR cohort (Table 2). The unadjusted 2-year cumulative incidence of relapse for NiCord recipients was 33% (95% CI, 16% to 52%). The cause-specific hazard for relapse for NiCord recipients was no different from that of the CIBMTR cohort when compared using the Cox model, but the subdistribution hazard was higher when compared using the Fine-Gray model (Table 2). The 2-year probability of disease-free survival was 43% (95% CI, 24% to 60%) for NiCord recipients and 45% (95% CI, 37% to 53%) for the CIBMTR comparator cohort (P = .77). The unadjusted 2-year probability of overall survival was 51% (95% CI, 33% to 67%) for NiCord recipients and 48% (95% CI, 40% to 56%) for the CIBMTR comparator cohort (P = .72; Fig 3). With adjustment for both age and disease risk index, there were no differences in disease-free and overall survival hazards between the two cohorts (Table 2). Transplantation Course and Toxicity Primary hospital discharge occurred at a median of 20 days (range, 0 to 61 days) after transplantation. Recipients of the NiCord graft spent a median of 73 days (range, 0 to 85 days), and CIBMTR standard cord blood recipients a median of 57 days (n = 141; range, 0 to 92 days), alive and out of the hospital during the first 100 days after UCB transplantation (P < .001). Hypertension was reported as the most common toxicity attributable to NiCord infusion. One grade 3 hypertension and one grade 2 hypersensitivity reaction were attributed to NiCord infusion. Of the 16 patients who died, eight deaths (50%) were attributable to relapsed disease, five (31%) to infection, two (13%) to GVHD, and one (6%) to organ failure. Immune Reconstitution Lymphoid immune recovery was monitored in a subset of 27 patients after transplantation of NiCord. Figure 4 demonstrates the CD3, CD4, CD8, CD19, and natural killer cell recovery during the first 12 months after transplantation. DISCUSSION NiCord is an ex vivo expanded UCB graft designed specifically to address the limitations arising from low hematopoietic stem and progenitor cell dose and the resultant delayed engraftment after adult UCB transplantation. We show that transplantation of NiCord is safe, is effective in reducing the time to hematopoietic recovery, and does not require coinfusion of a second unmanipulated UCB unit. The use of dual UCB grafts has vastly expanded the accessibility of UCB transplantation to adult patients who lack an adequately sized single UCB graft.7,8 However, the problem of delayed hematopoietic recovery was not addressed by this technique.
Ex vivo expansion of UCB stem and progenitor cells has been studied by a number of groups in an attempt to address the important limitation of UCB transplantation. 1,2,4,11,12 Delaney and colleagues 1 were the first to demonstrate that transplantation of UCB stem cells, expanded in the presence of Delta 1 Notch ligand, resulted in a median 10-day reduction in time to neutrophil recovery compared with conventional dual UCB transplantation. This strategy was designed as a bridge to long-term engraftment by a second, unmanipulated UCB graft. NiCord was designed to be a stand-alone graft and differed from the preceding expanded UCB products in that the T-cell fraction from the unit was retained and recryopreserved before culture. This important difference allowed NiCord the potential to become the dominant unit after coinfusion with an unmanipulated cord blood unit. 3 To our knowledge, this study is the first to show that an expanded UCB unit can be infused as a stand-alone graft and is capable of providing robust, durable hematopoiesis. One patient (3%) experienced primary graft failure, a rate well below the graft failure rate after stem-cell transplantation from bone marrow grafts. 13 Two patients experienced secondary graft failure. Although stem-cell exhaustion cannot be completely ruled out, high titer adenovirus and human herpesvirus 6 infections are the most plausible explanation for these events. The median time to neutrophil recovery is 20 days after myeloablative HLA-identical allogeneic bone marrow transplantation and 15 days after HLA-identical mobilized peripheral blood stem-cell transplantation. 13 This 5-day reduction in time to neutrophil recovery for peripheral blood stem-cell recipients translated into a significant reduction of bacterial infections during the first 100 days after transplantation. 14 The median time to neutrophil recovery after myeloablative haploidentical peripheral blood stemcell transplantation using post-transplantation cyclophosphamide as GVHD prophylaxis is 16 to 19 days. 15,16 Transplantation of NiCord as a single ex vivo expanded UCB graft results in an estimated median time to neutrophil recovery of 11.5 days (95% CI, 9 to 14 days). This marked reduction in time to neutrophil recovery explains why NiCord recipients spent less time in the hospital compared with the CIBMTR cohort and why the NiCord graft has been associated with a reduction in bacterial infections. 17 When compared with a retrospective cohort of patients who received standard myeloablative UCB transplantation, we observed that recipients of NiCord experienced a trend toward less severe acute GVHD, lower nonrelapse mortality, and higher relapse. The lower nonrelapse mortality in the NiCord cohort could confound the comparison of relapse. Results using the Cox instead of the Fine-Gray model indicated a smaller, statistically nonsignificant (P = .11), increase in the relapse rate among NiCord recipients. Overall, these findings need to be considered with caution. The small sample size of the NiCord cohort resulted in wide confidence intervals. Extreme disease heterogeneity among the cohorts could also confound the comparison of relapse. UCB transplantation has a 30-year track record of providing a hematopoietic stem-cell transplant option for patients without an available matched adult donor. 18 Many adult recipients require two UCB units to ensure reliable engraftment. 
However, the addition of a second unit significantly increases the expense of the transplantation and is associated with delayed platelet recovery and a higher incidence of chronic GVHD. 19 This study suggests that NiCord obviates the need for a second UCB graft. NiCord paves the way for use of smaller, better-matched units for adult patients that otherwise could not be used because of excessive risk of graft failure. Additional studies will be needed to determine whether the time required for graft production negatively affects patient outcome. The study demonstrates the feasibility of an ex vivo expanded hematopoietic stem-cell product manufactured in a centralized cell-processing facility and distributed internationally to three continents. It is hypothesized that an ongoing prospective, multicenter, phase III registration trial comparing NiCord to standard myeloablative UCB transplantation will provide confirmation of the findings presented in this study.
Aspects of Modelling Requirements in Very-Large Agile Systems Engineering Using models for requirements engineering (RE) is uncommon in systems engineering, despite the widespread use of model-based engineering in general. One reason for this lack of use is that formal models do not match well the trend to move towards agile developing methods. While there exists work that investigates challenges in the adoption of requirements modeling and agile methods in systems engineering, there is a lack of work studying successful approaches of using requirements modelling in agile systems engineering. To address this gap, we conducted a case study investigating the application of requirements models at Ericsson AB, a Swedish telecommunications company. We studied a department using requirements models to bridge agile development and plan-driven development aspects. We find that models are used to understand how requirements relate to each other, and to keep track with the product's evolution. To cope with the effort to maintain models over time, study participants suggest to rely on text-based notations that bring the models closer to developers and allow integration into existing software development workflows. This results in tool trade-offs, e.g., losing the possibility to control diagram layout. INTRODUCTION Driven by success stories in small-scale software development, agile development is increasingly adopted in large-scale software and systems engineering [6,12,15,31,45]. However, context factors such as long lead times [6], safety criticality [27], and the scale of development itself make this adoption challenging. In particular, challenges relate to Requirements Engineering (RE), such as building and maintaining a shared understanding of customer value and the system requirements [26,29]. To build and maintain system knowledge over time, models have been used as a suitable means of documentation [26]. Specifically, models are often cited as a way to deal with complexity that arises from the scale of systems [47]. However, while the use of models is common in systems engineering [33], using requirements models is uncommon in practice [34,35]. In the context of large-scale agile software and systems engineering, we are not aware of any work investigating the use of models in industry. Therefore, the goal of this paper is to better understand the potential of using requirements models in very large-scale (VLS) agile [14] systems engineering. To do so, we conducted a case study of a single department at Ericsson AB, a large Swedish telecommunications provider, which has long-ranging experience using requirements models in a VLS agile setting. We aim to answer the following research questions (RQs). RQ1 : What sentiments exist for and against the use of requirements models in VLS agile systems engineering? RQ2: How do different stakeholders use requirements models in VLS agile systems engineering? RQ3: What are the needs to support the intended use of requirements models in VLS agile systems engineering? To answer these questions, we collected survey data, followed up with a number of semistructured interviews to find answers to patterns observed in the survey. We find that the requirements models at the case department serve as a boundary object that relates the agile world in individual teams with the overall waterfall-like process that deals with product requirements and their long-term evolution. 
While engineers are positive regarding the use of models, many take a practical stance concerning the feasibility of continuously maintaining these models over time. To achieve an updated and maintained model, text-based modelling approaches such as PlantUML 1 with certain inherent limitations such as automatic layouting are seen as inevitable. Furthermore, to avoid deterioration of models over time, our study participants suggest generating simple artefacts from the models, e.g., documentation. This would encourage engineers to regularly update the models, as derived artefacts would otherwise become outdated. RELATED WORK There exists a broad body of work on the use of models in industry, and suggestions on how to use models for RE-related activities. In the following, we will discuss this work in detail. In a case study at Motorola, Baker et al. [3] discuss how Model-Based Engineering (MBE) is used at Motorola over a period of 20 years. The authors report several positive effects, such as defect reductions and increases in productivity, but also a lack of tools and tool interoperability, poor performance of generated code, and a lack of scalability of the modelling approach. Experiences from three European companies with MBE techniques and tools are presented by Mohagheghi et al. [37] in terms of a qualitative study. The authors find that simulation and testing opportunities are positive aspects of using MBE, while tool problems and the complexity of models are listed as drawbacks. Hutchinson et al. study the use and adoption of MBE in industry in a series of qualitative and mixed-methods studies [21][22][23]50]. The overall finding of this study series is that the organisation context and several non-technical topics need to be considered for MBE to succeed. For instance, the authors report that significant additional training is needed for the use of MBE. From their interviews, the authors conclude that especially people's ability to think abstractly seems to have significant impact on their ability to create models. In addition, several technical challenges such as tool shortcomings impede the use and success of MBE. In a case study at two automotive companies [34], we find that models are used in automotive RE to improve communication and to handle complexity. However, stakeholders prefer informal models and whiteboard sketches over formal modelling notations. Frameworks for Using Models During RE Several frameworks and methods have been suggested that include the use of models for or during RE. Pohl et al. [41] introduce the SPES 2020 Methodology for the development of embedded systems. During RE in particular, the framework suggests a separation between solution-independent and solution-oriented diagrams. Practical experiences with SPES are reported in [8] and [11]. In [8], Böhm et al. present their experiences with SPES in an industrial project at Siemens. The authors apply SPES to a mature, already running train control system, using a specification of "high quality". Findings are that "the high quality of input documents, and cooperation with product experts were considered the most influential success factors". Brings et al. [11] discuss experiences of using SPES in the area of cyber-physical systems. The authors report that they "identified problems resulting from an increased number of dependencies. " and "the need to cope with redundancies caused by properties which are system as well as context properties in a structured manner. ". Vogelsang et al. 
[49] propose to model requirements and architecture in parallel, and evaluate the approach with 15 master students. In particular, the authors propose the use of Message Sequence Charts. Brandstetter et al. [9] present a process to perform early validation of requirements by means of simulation, using the control software of a desalination plant as an industrial case. Experiences of using the approach are discussed, but details on the execution of the use case are largely missing. Resulting from a research project with academic and industrial partners, Braun et al. [10] propose the use of model-based documentation. For RE, these include goal models, scenario models and function models. To our knowledge, the approach has not been evaluated in terms of an empirical study. Berenbach, Schneider, and Naughton [5] list several requirements they consider essential for a requirements modelling language, such as distinction between process and use case modelling. The authors argue that using UML for requirements modelling has proven to be frustrating. URML is piloted in one commercial project at Siemens, showing that the proposed concepts are useful. Finally, the Model-Driven Requirements Engineering (MoDRE) workshop series that has taken place since 2011 contains many contributions on how models, in particular in the context of model-driven development, can be used for RE purposes. RE in Large-Scale Systems Engineering Initially, agile approaches were focused on small teams developing software [4,25,36]. The success of these approaches have led to their adoption at scale [13,31,46], where non-agile, plan-driven, and stage-gate based processes have been the norm [39]. Due to their iterative nature, agile approaches are suitable for building systems whose requirements may change; further, experience from early versions of a system can impact later versions [4,17,36]. Gren and Lenberg even argue that the main motivation for choosing agile methods is to be able to respond to changing requirements [17]. However, Heikkilä et al. [20] find in their mapping study that there is no universal definition of agile RE. Instead, they report that requirements-related agile practices such as the use of customer representatives, prioritization of requirements, or growing technical debt are particularly hard to manage. The same authors also present a case study at Ericsson, where they investigate the flow of requirements in large-scale agile [19]. They find that practitioner perceive benefits such as increased flexibility, increased planning efficiency, and improved communication effectiveness. However, the authors also report problems such as overcommitment, organizing system-level work, and growing technical debt. In their case study on the use of agile RE at scale, Bjarnason et al. [7] also report that agility can mitigate communication gaps, but at the same time may cause new challenges, such as ensuring sufficient competence in cross-functional teams. In a case study with 16 US-based companies, Ramesh et al. [42] identify risks with the use of agile RE such as neglecting non-functional requirements or customer inability. A systematic literature review on agile RE practices and challenges reports eight challenges posed by the use of agile RE [24], such as customer availability or minimal documentation. However, the authors also report 17 challenges from traditional RE that are overcome by the use of agile RE. The authors conclude that there is more empirical research needed on the topic of agile RE. 
Consequently, Kasauli et al. [26] report on RE challenges in scaled-agile system development that are neither addressed in contemporary RE literature nor by established frameworks for scaled-agile. Paetsch [38] suggest that agile methods and RE are pursuing similar goals in key areas like stakeholder involvement and therefore could be integrated in a good way. The major difference is the emphasis on the amount of documentation needed in an effective project. Meyer, in contrast, criticizes agile methods for limiting requirements engineering to functional requirements described through (exemplary) scenarios and discouraging upfront planning [36]. In fact, in practice such functional requirements are often described as user stories, e.g. formulated as boilerplate statements: "As a <role> I want <feature> so that <value>." [32]. The much more detailed requirements of plan-driven approaches are omitted; instead, agile methods push for a continuous dialogue (with customer representatives or product owners) and comprehensive sets of tests, which are ideally automated [36]. Given the set of challenges with managing requirements in scaled agile, it is unlikely that user stories and automated tests are enough to enable a shared understanding of requirements in agile at scale. It is therefore that we investigate the use of requirements models in agile. Models in Agile Development As a final area of related work, several authors have explored how models can be used in agile development, e.g., [2,18,43]. Ambler [2] argues that modelling and agile development can go hand in hand. The author describes important aspects to succeed with agile modelling, e.g., using as simple tools as possible, fostering effective communication, and building agile modelling teams. Similarly, Rumpe [43] argues that modelling can be used as a part of agile methodologies to further increase development efficiency. Concretely, the author suggests to use models for code and test case generation. A number of further approaches to use models during agile development have been proposed as a part of the Extreme Modeling (XM) workshop series. However, as noted by Hansson et al. [18], existing work on agile modelling suffers from a lack of empirical evidence on its application in industry. In summary, there is a large body of work on how models are used in industry, including benefits and challenges of using models. Additionally, challenges of agile development and agile RE at scale are studied in considerable depth. Finally, a substantial amount of solution proposals for using models for RE and during agile development exist. However, to our knowledge there are no detailed studies investigating industry cases of successful model use for RE activities. RESEARCH METHOD To address the RQs, we conducted a case study at a department in a large Swedish telecommunications company -in the following referred to as the case department/company. We embrace a constructivist world view, emphasising that different engineers at the case department have subjective views and opinions on the topic under investigation. The case study is both exploratory and confirmatory in nature. That is, we use a set of propositions we formulated initially and updated throughout the study. At the same time, we included a number of open questions to be investigated as part of the study. Case Description We conducted this study in one department at Ericsson AB, a large Swedish telecommunications provider. 
In that department, more than 30 Scrum teams develop a single product in parallel based on a scaled agile approach. Cross-functional teams independently work on backlog features all the way to delivery on the main branch. Specialised coordination roles exist, e.g., for integration or architecture tasks. Scrum sprints are based on a backlog and a hierarchy of product owners breaks down product requirements and customer visible features to backlog items. While these product owners represent the customer requirements towards the product development, system managers (SMs) represent a system requirements perspective. These SMs also interact with agile teams in providing the system-level knowledge. Further products are developed using a similar methodology. Hardware development at the company is largely decoupled from software development. New hardware becomes available with a regular, but low frequency. The studied case department is a department at Ericsson AB. At the time of the study, there were approximately 200 engineers working at the department. Development is closely aligned with existing standards that describe technical solutions in much detail, e.g., [1]. Requirements on system level are stored in the tool T-Reqs [30], which has been developed in house. T-Reqs allows storing text-based requirements and other artefacts together with code in version control systems such as git, thus bringing these artefacts closer to developers [30]. The tool has been used at the case company since 2017. Models are used to keep track of the system requirements and their relation. This is primarily done in the form of UML activity diagrams, where activities denote requirements and the flow between activities their relation and order. Models are created and maintained manually. More details on the used models are presented in Section 4 and Section 5. Study Scope and Propositions From previous research and initial scoping meetings with two contact persons at the case department, we formulated a number of propositions addressing the three research questions. These are depicted in Table 1. For each, we describe the origin of the proposition. The propositions can be summarised as follows. For RQ2, we envision an organisation in which few experts work with models (P1), while in particular the roles working with lower level of abstractions, testers and developers, do not use the models (P2), do not see the need for them (P4), and do not think that any relevant information is contained in the model (P6). People creating the models are not necessarily experts, resulting in an ad-hoc approach (P3). Few UML diagram types are in use (P5). The existing tool solution for modelling is restricting the employees in their work (P10). For RQ1, we expect that sentiments towards modelling roughly resemble the current use: a few "power users" of models , but a substantial amount of people not believing in the usefulness of models. For RQ3, we cover a few important tool decisions, including the need for only few diagram types (P5), the need for layouting capabilities (P7), automation support (P9) and navigation between diagrams (P12). Furthermore, we expected some insights from participants that would not use the models even with better tools (P11), since they might have additional input on what would be the preferred format. For the remaining feature space, we chose an exploratory approach asking several free-text questions to get additional input. 
Survey Design, Execution and Analysis To evaluate the propositions, we designed an online survey. Our contact persons reviewed the survey design. After review, our contacts sampled 54 people at the case department, all of whom they judged to have sufficient knowledge of the model to answer our questions. We received 33 answers, i.e., a return rate of 61.11%. The participants worked in 16 different areas of the case department, covering various tasks and product aspects, from both a functional and a non-functional perspective. However, SMs were over-represented among the participants (22 out of 33 participants had an SM role). Finally, the majority of participants had substantial work experience in the case department (depicted in blue bars in Figure 1) and modelling experience (depicted in yellow bars in Figure 1). We analysed the survey answers by creating summary statistics and evaluated the propositions in a qualitative manner, i.e., without employing statistical tests or related statistical methods. The first author summarised open-ended questions by assigning topic codes [44] to each stanza, then grouping related stanzas together and counting their frequency. The mapping of survey questions to propositions is depicted in Table 3 in Appendix B. As a form of member checking, we presented the results to our contact persons, who disseminated the findings in the department. Interview Follow-Up Following the questionnaire, we updated and refined our list of propositions and added some open questions (see Appendix C). The open questions relate in particular to contradictions in the survey data. For instance, while the majority endorsed using text-based models, the suggested solution does not support manual layouting, an important feature requested by the majority. We used the propositions and questions as input for the creation of the interview guide. Our contact persons recruited five engineers to be interviewed. We requested a varied set of roles and mindsets, to obtain diverse information. In particular, we also asked them to recruit participants who might be skeptics of requirements modelling or modelling in general. While this is a small sample, it nevertheless represents about 10% of the survey sample size, i.e., of the engineers who are knowledgeable enough in modelling to answer our questions. We analysed the interview transcripts using the following process. Both authors, Grischa (GL) and Eric (EK), coded all interviews. GL used a list of a-priori codes aligned with the propositions and questions, while EK used open coding. In both cases, the coding followed a content coding approach [44]. That is, we assigned codes that describe the content of the coded stanza, assigning codes on a per-answer basis. In cases where the interviewees clearly discussed different content, we separated the answer into multiple stanzas, which we coded separately. GL piloted the initial a-priori codes on one interview, then modified them according to the pilot. The final a-priori codebook is discussed in Appendix D. After the first round of coding, we discussed the resulting code distribution and decided to continue a parallel approach. That is, we jointly structured the existing codes in a second-cycle coding approach. We hierarchically grouped the codes obtained from EK's open coding into the different (and much more abstract) a-priori codes, then grouped the resulting clusters according to our three research questions.
We then extracted candidate themes, which we validated using all stanzas coded with at least one of the open codes for the theme. Simultaneously, GL analysed the interview data one more time and followed a holistic coding [44] approach, writing analytical memos while working through the data. Finally, we integrated the initial themes from the second-cycle coding approach with the themes extracted from the holistic coding and memoing. Validity Threats Given the constructivist nature of this case study, we present the threats to validity in terms of transferability, credibility and confirmability [40]. Transferability. Transferability describes to what extent results from the study can be transferred to cases that resemble the case under study [40]. Many of the reported aspects are specific to the case department, e.g., the role of the SM that connects agile teams with the system-level view. However, we know from previous work [26] that similar roles and situations exist in many systems engineering companies. Therefore, we expect that the findings apply in similar cases as well. One exception might be the large emphasis on software development at the case department, which is in contrast to many other systems engineering organisations, where hardware is developed in parallel and thus causes long lead times and longer feedback cycles. We used purposeful sampling to select interviewees that had diverse background and at the same time could comment on the use of requirements models. However, we did not reach saturation in all our themes. This means that there might be additional facets or themes, or contrasting ideas that we did not capture. This is a threat to the transferability of our findings. 3.5.2 Credibility. Credibility describes whether findings are reported truthfully, or have been distorted by the researchers [40]. All interviews were recorded, and data analysis performed on the verbatim transcripts. Additionally, we report quotes for all themes in our qualitative interview analysis. This should ensure credibility of the findings. We performed first-cycle coding and memo writing for both the free-text answers in the survey and the interview transcripts. This should avoid threats to credibility arising from long chains of interpretation in our analysis. Confirmability. Confirmability describes the extent to which conclusions made by researchers follow from the observed data [40]. To structure our study, we used propositions prior to the survey and in between the survey and interviews. We then evaluated them after each analysis step. Furthermore, survey and interview instruments, as well as the codebooks are available in the appendix to this paper. EXPLORATORY SURVEY In the following, we present the results of the exploratory survey in terms of descriptive statistics and relations to the propositions. We then discuss the implications of the survey findings. Survey Findings The resulting proposition evaluation is summarised in Table 2. Number Proposition Supported by Survey P1 Models are created by few experts, and mainly read by them. Yes P2 Access to models, and especially editing, is rare among testers and developers. P3 Model creators are not modelling experts. Therefore, use of modelling languages is ad-hoc and varies across the organisation. P4 Testers and developers do not see the need/use of modelling requirements. Partially P5 Only few diagram types (of the UML) are used. Yes P6 Testers and developers do not think that the present models carry important information. 
P7 Layouting of diagrams is important to the users. Yes P8 Stakeholders believe that modelling should be integrated with existing development tools (e.g., git). P9 Stakeholders do not believe that the requirements models should be used for automated tasks. They should instead be used as documentation only. P10 The current modelling solution is restricting employees in their work. P11 Even if a better/good modelling solution would be in place, most stakeholders would not update/maintain the model. P12 Navigating between different diagrams is an important feature. Yes Table 2. Evaluation of Propositions For P1, 5 participants state that they create/modify diagrams at least weekly (see the dark blue bars in Figure 2). Three of these participants are System Managers (SMs), one is a Developer and one is both a Developer and SM. When consulting the read access/use of diagrams (light yellow bars Figure 2), these five participants have weekly (4 answers) or daily (1 answer) read access to the diagrams. In the entire sample, only 2 more people stated that they read/use the diagrams on a weekly basis. This seems to confirm our proposition that it is indeed a small group responsible for modification and use of diagrams. Our data shows a mixed picture for P2 (testers and developers access and modify models rarely). Of the 8 people with development or testing roles (out of 33 participants, see Section 3), 5 state that they read models on a monthly basis, 2 weekly, and 1 yearly. Creation is less common, with 3 stating that they never create or modify models, 2 yearly, 1 monthly, and 2 weekly. This picture does not change significantly if we consider only those who have a pure testing/development role (without addition of Designer/Architect or System Manager). Overall, these figures do not allow a clear answer as to whether P2 is confirmed or not. The free text answers indicate that people not accessing the models are mainly concerned with the tooling (difficulty of tooling, access to the tool) and the effort it takes to comprehend the models (too much detail, information spread across model layers/navigation). P3 (model creators are not modeling experts) is not supported by our data and must be rejected. The participants who create/modify models at least weekly all have considerable experience with modelling (at least 5 years). We did however not ask whether they have a formal education, or proceed in an ad-hoc manner. For P4 (testers and developers do not see the need of modeling requirements), the survey data shows again a mixed picture (parts of Figure 3). Three survey participants agree or strongly agree that they would update the models regularly if they had a better tool. However, two participants strongly disagree and three do not know. There is again no noticeable difference between the pure developer/tester roles and others. Additionally, we do not see a pattern in the answers with respect to how the creation/modification patterns look like at the moment (e.g., "Participants who already modify/create diagrams often would not do it more often"). Interestingly, six people are generally positive towards modelling, and the remaining two neutral. No one opposes modelling per se. The free-text answers for this question do not give a clear justification of the pattern, either. However, one participant stated the concern that the current model is unreliable and therefore not useful, suggesting to assign someone to manage the model. 
As expected from the study preparations, due to the domain, behavioural models dominate at the case company ( Figure 4). Activity (18 answers) and sequence diagrams (13 answers) are used by the majority of the participants. However, state machine and use case diagrams follow closely with 9 and 10 participants. Class and component diagrams are used by 4 and 3 participants only. For P6 (developers and testers do not think that the present models carry important information), it is rather interesting to observe that our proposition must be rejected based on our data (parts of Figure 3). Indeed, 5 testers/developers out of 8 disagree or strongly disagree with the statement that existing models do not carry important information. Of the remaining 3 participants, only one agrees, with the other two being neutral or "don't know". P7, the importance of layout, is clearly confirmed by our participants (parts of Figure 3). 20 participants agree or strongly agree, 8 are neutral, and 5 (strongly) disagree. One participant disagreeing noted that requirements should be stated in text, and have as a maximum pictures/models to support its explanation. Therefore, it should overall be kept simple, explaining their answer that layouting is indeed not important. Regarding the integration of models into existing tools (P8), the picture is favourable (parts of Figure 3). 15 people (strongly) agree that this should happen, 8 are neutral, 5 against, and 5 don't know. Specifically, 19 participants agree that models should be integrated into the existing text-based requirements tool T-Reqs [30], with 4 disagree, 4 don't know, and 6 neutral answers. From earlier work, we expected that informal modelling without any automation would be favoured by most stakeholders (P9). Interestingly, our results show a different mindset (parts of Figure 3): 17 people agree that they should be used for automation, while 6 are neutral, and 5 each disagree or answered don't know. Regarding P10, there is a disagreement as to whether the current solution is restricting the participants in their work (parts of Figure 3): 13 each agree and disagree. 4 don't know and 3 are neutral. While we initially thought that some participants could state that they are not restricted since they don't use the models, this picture was not confirmed clearly by looking at the disagreeing group: Only 2 of the participants stating that they don't write/modify models are in that latter group. P11 again contradicted our impression from previous work -we expected that participants would state that they would not update their models even if the tool was better (parts of Figure 3). However, 15 participants stated that they indeed would update the model if the tool was better. 6 participants stated that they don't know, 5 were neutral and the remaining 7 disagreed. P12, that navigation between diagrams is an important feature, got the strongest support in our survey (parts of Figure 3). Indeed, 23 participants (strongly) agreed with the statement, 6 participants didn't know, 3 were neutral, and 1 disagreed. Survey Discussion Overall, we summarise the survey findings as follows. We find strong support both for working with models (RQ1) and the use of requirements (RQ2) among our participants. With respect to the needs to support the intended use of requirements models (RQ3), layouting and navigation, a focus on activity and sequence diagrams, and close integration into development tools and version control systems surfaced. 
We discuss these aspects in this section and revisit them in the second, interview-based part of this study with the aim to shed more light on these aspects. For RQ1 (Sentiments for and against models) we expected a diverse result, based on own experience and literature that propose a divide between model proponents and opponents. Instead, we find that most participants clearly see the value of models. Interestingly, there were a few voices mentioning that text-based requirements would be enough and that models are too complicated to handle. In particular, participants mentioned that at the case department, there is only little requirement work per team, which could easily be handled in text. Similarly, the results show that the picture for RQ2 (How do different stakeholders use requirements models?) is far from the negative one we expected. While it is true that models are created by few people, and also mainly accessed by them, the majority of our participants sees the value of models and also the information contained in existing models. This covers all roles, including testers and developers that do not have an SM role in parallel. Furthermore, the testing and developer roles are far from negative towards modelling. The model creators/maintainers have substantial experience, though we do not know their educational background in modelling. Indeed, the move from the existing modelling tool to an integrated solution is supported, with few exceptions. From free-text comments, we see that there are several factors hindering the use of models at the moment. These include lack of tool access and tool usability, the complicated nature of models, the amount of details and need for constant work related to models, and the outdated information in models. Finally, several statements related to process issues: Participants stated that there was currently no clear direction on whether the models should be kept updated, no process of doing so, and a lack of knowledge how to model and on which abstraction level. This means that the role of reading and understanding the model and then feed the information into the teams ends up in the hands of a few people (SMs). Participants suggested regular modelling courses for users, clear abstraction levels on what should/should not be in the models, and examples of models that are considered to be of high quality. Regarding needs to support modelling (RQ3), participants strongly supported the notion that layouting and navigation are key features. Models at the case department often contain multiple requirements in flow charts/activity diagrams, with one requirement per activity node and a text description of each. The entire diagram then gives the context of the requirement, i.e., what happens before and after, and how it relates to other requirements. Often, there are links to other diagrams as well. Therefore, both the layout and the navigation are required to understand how the system behaves as a whole. Our proposition was confirmed that only few diagram types are in active use, mainly activity and sequence diagram. However, there were minor usages of several further types. Free text answers clearly pointed to the fact that any modelling tool needs to be integrated into daily work (e.g., into git), by using the same tools developers use and by being able to integrate the models with (text-based) version control such as git. While pictures are helpful, the models should theoretically be readable in text, in particular changes to models. 
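One way to read this integration wish is that a change to a model should surface as an ordinary, reviewable text change. The sketch below illustrates the idea with Python's standard difflib on two versions of an invented textual model snippet; the snippet contents and file name are assumptions for illustration, not the format used at the case company.

```python
# Sketch of why text-based models fit the existing workflow: a change to a
# textual model shows up as an ordinary line diff that can be reviewed like
# code. The model snippets below are invented for illustration.
import difflib

old = """REQ-102: Subscriber identity is validated
REQ-102 -> REQ-103
""".splitlines(keepends=True)

new = """REQ-102: Subscriber identity is validated against the HSS
REQ-102 -> REQ-103
REQ-102 -> REQ-110
""".splitlines(keepends=True)

print("".join(difflib.unified_diff(old, new, fromfile="flow.req", tofile="flow.req")))
```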
Finally, a large share of the participants stated ease of use as one of the main success factors for a modelling tool. CONFIRMATORY INTERVIEWS After the survey, several gaps in our understanding remained, in addition to new questions that arose. These gaps directly follow from the survey findings in relation to the propositions in Tables 1 and 2. Since there is a general willingness to work with requirements and models, combined with a sense that current support is lacking, and a clear indication of specific needs, concrete questions for follow-up in-depth interviews emerged (see Appendix C). In the following, we discuss the findings relating to our three RQs. Given the open nature of interviews, themes in the data can relate to more than one research question. First, we discuss how interviewees see the role of requirements in VLS agile systems engineering in Section 5.1, the role of models in VLS agile systems engineering in Section 5.2, and the use cases arising therefrom in Section 5.3. All these topics relate to RQ1 and RQ2. Finally, we discuss the consequences for tooling (RQ3) in Section 5.4. RQ1/RQ2: Requirements in VLS Agile Systems Engineering Requirements and VLS Agile (RQ1/2) • System-level requirements are available too late in the process. • Requirements are an asset when changes are made, but often need to be updated first. • Importance of requirements: interviewees agree, but have doubts about general sentiment in organization. Conclusion: Role of requirements in VLS agile is conceptually unclear. At the case department, requirements used to be written prior to development. Now, due to the agile transformation at the company, only vague requirements are developed prior to the sprints, which are then shaped and refined in parallel to the development and testing. Some interviewees perceive this as documentation work only, while others see it as a crucial step in invention and in preparing for future maintainability. That is, the role of requirements in large-scale agile development is perceived very differently in the case department. Several interviewees take the standpoint that there are too few requirements, and that those are written too late in the process. They take the traditional development point of view in which upfront requirements analysis guides the development later on, and in which requirements provide the system knowledge. The lack of such requirements is therefore seen as an issue. "And we have them always too late in the chain. That's my view of it. " -Interviewee 1 "[..] someone updates the implementation and suddenly things don't work anymore. And then the problem is you have to determine why. Because a lot of the behavior of the product is not really based on requirements. We don't have requirements on exactly everything. " -Interviewee 4 None of our interviewees stated that they considered requirements unimportant. However, several of them did express that this was a common belief within the company. That is, that sources other than written requirements are sufficient to obtain system knowledge, e.g., test cases, or annotations to standards (compliance declarations). "I got the feeling that some people think it's very important and some think this...we don't need requirements at all. We can do the coding and then we check at the end if it works OK, if the customer doesn't complain it's ok. " -Interviewee 1 We therefore conclude that the notion of requirements in VLS agile systems engineering is conceptually unclear and individual opinions of practitioners differ. 
RQ1/RQ2: Requirements Models in VLS Agile Systems Engineering Requirements Models and VLS Agile (RQ1/2) • Requirements models are important to understand the big picture. • Requirements models are hard to keep up to date. • Some models are increasingly outdated, thus losing value. • Changes can break a model and require re-design. • Different modelling styles make shared modelling difficult. Conclusion: While requirements models provide substantial value, using them successfully in practice is challenging. Given that the role of requirements is conceptually unclear or at least different from the original, plan-driven process in which requirements were written up front, the role of using models to convey requirements information is also debated at the case company. Several of our interviewees valued the existing requirements models. They reported that the models serve primarily as a boundary object between different agile islands and the overall system, providing the long-term knowledge [28]. A common issue in VLS agile systems engineering is that individual methodological islands exist in a company that are disconnected [26,29], e.g., individual Scrum teams and an overall plan-driven process. Having a model that relates systemlevel requirements to each other can help building bridges between the islands and keep knowledge over a long time. For example, the models can help engineers understand how isolated user stories connect to the overall system behavior. Furthermore, incoming change requests can be understood better in relation to the current system-level behaviour. However, we also have several interviewees that reject the requirements models, for several reasons. First, while they consider requirements models useful in principle, they differ whether it is worth spending the required effort to create and maintain the models over time. Just as with other forms of documentation, maintenance is essential. If the model becomes outdated, it loses its value to the engineers. "The problem is that the information gets outdated and there are not enough resources to make sure they are correct. And the focus probably lies on other things. " -Interviewee 1 "We have definitely the knowledge to do the model right. The question is if we want to spend the time and effort. Because it would require many people many months to go through them all and update it. " -Interviewee 4 In fact, an interviewee stated that several models at the company are outdated at the moment, and would require a substantial effort to be updated. "The requirement model itself has degraded in many cases to the point where it's useless, totally inaccurate and not up to date. " -Interviewee 4 Also in relation to maintenance effort, one interviewee stated that changes to the requirements can be orthogonal to the way the requirements models are designed, thus leading to substantial maintenance effort up to entire re-designs of a model. For instance, changes that lead to a modification in the system structure could require moving requirements between models or entirely re-designing the information flow in models that depict behaviour or interactions. While only one interviewee mentioned this issue, we considered it critical enough to list it here. In addition to maintenance and model creation effort, several interviewees highlighted that there is no common way of modelling. Currently, engineers do not get any instructions on how to create a model, how to use it, and how to maintain it. 
This leads to a multitude of different modelling styles, reluctance to modify a model, and in many cases to teams abandoning the model altogether. We further hypothesise that this also leads to a higher overall effort, since a person used to one model might need additional training to use or modify another model, as it might be modelled following a completely different style. Finally, several interviewees have reservations towards requirements models due to tooling issues, and integration of the tools into the process. Most of these reservations are similar to tool challenges known from related work, e.g., [21,23,33,34,50]. For instance, the interviewees mention outdated, heavy-weight tools, and the risk of vendor lock-in. "I have logged into Rhapsody just a few times, but in general that's very slow and so on. So that's not an option to log into it to get information. " -Interviewee 1 "Someone checks out the requirement document and only that person is allowed to make changes until it checked in. And hopefully that person will check it in before leaving for vacation. " -Interviewee 4 Based on these themes, we conclude that requirements models provide substantial value in VLS agile systems engineering. However, practitioners struggle to use them successfully due to challenges in maintaining them and modelling in a consistent style that allows engineers to work on shared models. Use Cases for Requirements Modelling in VLS agile (RQ1/2) • Models provide an overview of the requirements and their relationships. • Models provide valuable information to developers and testers. -Many read, few write -Potential imbalance (effort/benefit) -Potential lack of awareness and appreciation of models Conclusion: A lightweight approach to requirements models that exposes models to many stakeholders is seen most favourable by the interviewees. As a third theme in our analysis, we discuss the different use cases for requirements models that our interviewees report or discuss, and the roles that relate to these use cases. These are either already in place at the case company today, or the interviewees raised them as desirable or promising. Not all interviewees had a good overview of all stakeholders that actually interact with the models, yet implicit assumptions on which roles should interact with the models existed. The requirements models at the case department are primarily a collection of flow charts/activity diagrams. Activities are used as containers for textual requirements and their connections depict the connections/traces between requirements. There is typically a main flow and potentially multiple alternative flows, describing error cases. The main value of the model lies in the overview it provides, primarily obtained through the relationships between requirements. Several interviewees state that this overview is something that is hard to achieve with a text-only representation. "I think that's very good. Because it's easy to follow, compared to when it's text based. " -Interviewee 1 "It's impossible to read all of them and understand what the total requirement mass is. " -Interviewee 2 The primary use case for the existing requirements models at the studied department is read-only access, to provide valuable information to developers to inform their activities. However, in many cases this information is provided by other roles. There is the widespread idea that mainly the SM reads the model in order to then inform other roles and to provide an overview. 
SMs use the model as a source of information to answer questions regarding the overall system functionality, to investigate how changes affect the system, and to understand if change requests are due to misunderstood requirements, bugs, or actual changed needs. This restricted use of the requirements models has the advantage that other team members do not need to be experts in modelling. However, the disadvantage is that they might not be aware of the models' value and purpose, leading them to believe that updating the model is a waste of time -they do not see that the SM uses the model as a core element in their work. The degree to which different SMs use the requirements models depends on their personal preferences and the state of the model. As discussed earlier, some models are outdated and therefore no longer used by the SMs. When discussing future use cases, most interviewees mentioned that all team members should read the model, but not necessarily write. "Everybody should at least have read access. I cannot see any reason why you should not have read access. " -Interviewee 1 Relatively few stakeholders currently modify the model. These are primarily the SMs, who create and update requirements models according to changes in the system, e.g., newly-implemented user stories. Testers currently benefit from the requirements models to understand which requirements relate to a given object under test. Again, the degree to which they use the models varies. Interviewees also expressed that the model should allow testers better linking of test-related information, such as individual test executions. This is currently not possible, but would enable better integration of work the testers currently have to do in other tools. "And what we as a tester see is lacking, is the way you work with requirements and the actual test executions, and how you follow up that your requirements are fulfilled." -Interviewee 1 Needs for Requirements Modelling in VLS Agile (RQ3) • Models need to be navigable and searchable. • Broad and easy access to the models is key to adoption. • Education and guidance for modelling need to be provided. • Review mechanisms similar to code reviews can foster adoption. • Generation artefacts from models can serve as a maintenance incentive. • Heterogeneity of teams and tools needs to be supported. • Automation of model layout is a trade-off. Based on the evidence from our interviews, several hypothetical ways to use requirements models exist. We extrapolate the features necessary in tooling for requirements models, and the information content those models need to have. Note that this section is only partially based on evidence from our interviews, and partially a logical extrapolation based on our expertise in the field. For each theme, we discuss to what extent we do have evidence for the discussion points. We distinguish four hypothetical scenarios based on our data. These are: (1) Entirely abandoning requirements modelling in favour of using other artefacts as sources of information. (2) Using requirements models as sources of information for the SMs only. (3) Using requirements models as sources of information for developers, with SMs maintaining the model. (4) Use and maintenance of the models by the entire development organisation. Scenario 1 (no requirements models) makes modelling tools unnecessary. Hence, the tools do not need to be discussed. 
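Returning to the testers' wish mentioned above, namely linking individual test executions to requirements, a minimal sketch of such trace records and a simple coverage report is shown below. The requirement IDs, test names and verdicts are hypothetical and do not represent data from T-Reqs or the case company.

```python
# Hypothetical trace records: each test execution points at the requirement
# it verifies, together with its latest verdict. A small report then shows
# which requirements are covered, failing, or not exercised at all.
executions = [
    {"test": "attach_basic",    "requirement": "REQ-101", "verdict": "pass"},
    {"test": "attach_bad_imsi", "requirement": "REQ-102", "verdict": "fail"},
]
requirements = ["REQ-101", "REQ-102", "REQ-103"]

covered = {e["requirement"] for e in executions}
failing = {e["requirement"] for e in executions if e["verdict"] == "fail"}

for req in requirements:
    if req in failing:
        status = "FAILING"
    elif req in covered:
        status = "covered"
    else:
        status = "no test execution"
    print(req, status)
```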
Documenting Knowledge: Instead, abandoning models raises the question where the information should reside instead at the case department. That is, information on how the overall requirements relate to each other, e.g., in terms of main and alternative flows/scenarios. In our survey and during the interviews, we found several statements that existing documentation such as the user manuals could serve this purpose. "Yes, you could use [customer documentation] as a requirement if it works, but it has not always worked. " -Interviewee 5 Similarly, tests are often raised as a potential source of knowledge that could replace written requirements, both in our data and in related work. "Yes, actually I think it is an interesting idea because we have spent over the years quite a lot of time and effort on doing this requirements modelling. And there are alternatives which are tempting. Some have proposed that we should use [..] test cases as such, so instead we spend more time on reviewing the test cases and whatever ever changes we do to test cases, to see that this is still the wanted behavior. " -Interviewee 4 However, our interviewees also raise concerns that tests might not be sufficient. That is, each test expresses exactly one scenario, which means that the overall system behaviour arises from the combination of the entire test suite. Therefore, this overall behaviour is not easily visible. "If you only have the test case, it's not clear really what parts that the test case verifies that our requirements can...and what is just a behavior. That's a risk. " -Interviewee 4 If requirements models are used in some capacity, several important needs arise. Some of these are already present in the current tool solution at the case department, others are lacking according to the interviewees. Supporting Traceability: In addition to the information being present, requirements need to exist so that testers know what to test, and have a target they can trace to. Currently, the tool T-Reqs [30] fulfils this purpose at the case department, even though one interviewee expressed that the possibilities for tracing are limited. For instance, test executions could not be traced in T-Reqs and could therefore not be addressed in the tracing. "I would like to say that this test execution, I will map to that requirement for our work package. To indicate that we have delivered what we are supposed to do and we are fulfilling this requirement. And then, two weeks later, the test execution fails. But [..] it means that somebody else maybe has destroyed, or we have delivered something new. " -Interviewee 1 Hence, while traceability capabilities exist in T-Reqs, improvements are necessary. Navigable and Searchable Information: In the context of VLS systems engineering, requirements and their relations quickly become complex. Hence, it is important that they can be navigated and searched efficiently. "We often have flows. So we have a number of requirements that are a part of a flow. So when something happens we follow a flow. But those flows are often broken down into sub-flows and the sub-flows might be re-used from other flows and things like that. So you want to have some way to kind of link it all together and make it easy to navigate. [..] you should be able to easily to search there and navigate. " -Interviewee 4 "there is the practical thing of it. And for our requirements to be useful it has to be first of all easy to find and navigate. 
Because it's a complex model, you can't just...read one model, one short requirement out of context. [..] So you need an easy way to navigate the model. " -Interviewee 4 For instance, links between requirements could be made navigatable by using hypermedia with hyperlinks between requirements, as is standard in most RE tools. Currently, this is supported by T-Reqs for textual artefacts. For models, several modelling tools allow for hierarchical models that support hiding information in sub-models, or distributing models and diagrams over several files. However, extracting relevant information from models is difficult [34], e.g., in the form of a search. Broad Information Access: Several of our interviewees stated that access to requirements information needs to be open to everyone. If only selected roles have access to the information, requirements easily become an abstract concept that many engineers are not aware of or do not consider important. This lowers the overall acceptance of requirements as an important source of knowledge in the organisation. "Everybody should at least have read access. I cannot see any reason why you should not have read access. " -Interviewee 1 "I think it's important to be used and to be...maybe have good qualities, I think it's good if it can be [..] easily accessed. That seems like a crucial point I think. " -Interviewee 2 While easy tool access can help acceptance of the models in any case, especially for Scenario 3 and 4 (developers use the models at least as a source of information) this feature is crucial. Previously, the case department used IBM Rhapsody, which required engineers to set up a remote environment to open the tool. This turned out to be a large obstacle and only few engineers accessed the model, effectively limiting the access to the SMs. "R: Because like two thirds of the people will not even try or . . . Rhapsody. I don't have that [remote] environment setup and the Rhapsody tool. I have never seen that tool [laughter]. " -Interviewee 2 "Well, first of all it has to be accessible, both for the people who need to do updates of the model, and also for the people then who are supposed to read the model, read the requirements, the designers and testers. I mean if it's very easy to access, people will do it. If it's hard to access, people will not. " -Interviewee 4 Need for Education and Guidance: Similar to coding guidelines that exist in most organisations, modelling and requirements guidelines need to be in place that ensure a common approach to modelling and requirements. While important for any kind of scenario in which requirements and models play a role, this guidance becomes more important when many people are supposed to edit models, i.e., for Scenario 4 in particular. Several of our interviewees raise this point. "Yeah, it is back again to this if we want to use Rhapsody and the modelling, in that sense that we call it modelling, then we have to use the modelling guidelines. And keep it. Not just know it and just abuse the model. " -Interviewee 3 As a variant of guidelines, several interviewees also suggest mentors at the company that can support others in modelling-related questions. "If they don't know it, double-check with someone that knows it. " -Interviewee 3 "So need to be one or a couple of people that really know how to model that you can ask for 'Okay, can we have an hour and come to a conclusion how we should model this, my problem?'. 
" -Interviewee 2 Need for Review Mechanisms: While guidance and education can improve the quality of models, enforcing standards could become difficult. Hence, mechanisms are required to do so. Drawing from experience both in RE and in programming, we believe that both reviews (similar to code reviews) and automated analyses (similar to requirements heuristics or static code analyses) have the potential to enforce model quality. Reviews are also brought up several times by interviewees as a reason to rely on textual modelling tools like PlantUML. "[..] have like reviews in the tool or it would be like very much be a good way to get people into also use it, the others. Of course then you can easily ask them to 'Okay, can we have a review of this?' or 'Can you review this?'. " -Interviewee 2 "But if you instead did it in a pure text, then you can use your standard merge tools. You can do standard diff to see what has been updated. If someone wants to do a review, I mean we have today code review tools we use for everything else. " -Interviewee 4 Code Generation: Carrot and Stick: Existing models at the case company are in many cases outdated or of low quality. While guidelines, mentors, and model reviews could help addressing this, several interviewees suggested that artefacts could be generated from the models in a form of lightweight model-driven engineering process. This would encourage people to read and update the models, as they serve as a ground truth. In turn, generation could help enforcing guidelines. For instance, using elements that are semantically wrong would lead to incorrect artefact generation or errors during the generation process. Due to the use of the requirements model as a read-only source of information, the model is disconnected from the final product. Nothing is generated from the model that is used further in development. This means that the end product could potentially be completely contradicting the model. This bears the risk that the model deteriorates. If code or other important artefacts would directly be generated from it, this would not happen. However, it is important to note that generation of artefacts requires a well-balanced approach: None of the interviewees expressed the desire to follow a strict generation approach, where, e.g., the entire code is generated based on models. Hence, generation should remain a tool that allows engineers to see some direct benefit of the models, and to obtain quick feedback of sorts, without dictating their entire workflow. Supporting Heterogeneity: In VLS systems engineering, development work is organised in a number of different ways. For instance, component teams are common, where different teams focus on their individual components. The case department instead structures work by expertise, e.g., having teams that focus on quality attributes such as availability. This heterogeneity leads to different approaches, and to different needs. Furthermore, heterogeneity and independence of teams is encouraged by the use of agile practices. In turn, this heterogeneity requires highly flexible tools and notations, or independence within the teams to choose their own tools and notations for requirements and requirements modelling. Forcing a single tool/style/approach on all teams will likely lead to resistance. Nevertheless, advertising success stories from individual teams might pave the way for others to adopt similar approaches. "We have different areas with different needs, but my area is very functional. 
" -Interviewee 4 With respect to requirements modelling, supporting heterogeneity might also mean accepting that some teams choose to abandon modelling entirely, either due to preference, or due to a mismatch with their way of working. Abstraction Level: The case company needs to be compliant with specifications from 3GPP [1]. This standard contains many technical details, that often need to be discussed or referenced in the requirements, e.g., to discuss required additions. "[..] somewhere I would say between 75% and maybe up to 90% depending on a bit what area we are in, the requirements are specified by 3GPP, we implement the standards. And then we don't need to re-specify this. But sometimes we need to clarify this because the standard might not be very clear on certain details. So we might need to annotate it and say 'okay, in this case the value should be this or we do like that', when the standard is not clear enough. " -Interviewee 4 This leads to requirements or requirements models on a low level of abstraction. Any notation or specification format used at the case company needs to support this level of abstraction. For example, to make a requirements model understandable, it might be necessary to use hierarchical models to hide details. A modelling tool would thus need to offer sophisticated hierarchy and decomposition features. However, given that the requirements at the case department are closely-aligned with the standard, this also means that engineers have an easier time choosing the right level of abstraction, something that is otherwise challenging [34]. Importance of Layout: Finally, several of the interviewees raise the advantage of textual modelling languages like PlantUML, as they can be integrated into traditional text-based environments such as git or diff. However, they come at the cost of relying on automated layouting, e.g., through GraphViz 2 in the case of PlantUML. The layout of a graphical model can contain important information, and reflect the intent of the modeller [48]. Expressing this information is therefore no longer possible with automated layouting. Our interviewees have differing views as to whether this is a problem or not. "I don't think that will be any problem. [..] It could be better like that. I see it, as long as I can show the flow, you have space to write, you could show the relation of the different parts, I think it should be okay. " -Interviewee 5 "I think that's crap [automatic layout]. You must be able to...an automatic thing is good, but then you should click on a button 'I want an automatic suggestion'. And then you should be able to fine-tune it and it should stay that way. [..] Because the tool will never know what my intention was. " -Interviewee 1 One interviewee suggested that, while automatic layout might not work for complex models, conventions or adjustments to automatic layout could be possible. "[..] it's not a black and white question here. In many cases it doesn't matter. If you do modeling as diagrams or activity flows or something like that...if you have small enough flows the layout doesn't matter, because it will not be that bad. [..] When the model becomes larger and more complex layout becomes more important. [..] it might be good to do some hinting. For example exceptional flows maybe should go to the right while main flows on the left or something like that. " -Interviewee 4 This last point clearly shows the complex trade-off between the simplicity of tools, and features that might be considered essential by some. 
DISCUSSION AND CONCLUSION We conducted a case study in one department at Ericsson AB, a large Swedish telecommunications provider, investigating the use of models for RE purposes. We conducted a survey with 33 participants, followed by 5 semi-structured interviews. With respect to RQ1 (What sentiments exist for and against the use of requirements models in VLS agile systems engineering?), we find that our study participants consider requirements models useful and valuable. While several interviewees mentioned that sentiments against these models exist in the case department, we did not directly interact with anyone supporting this view. However, we also find that creating and maintaining requirements models at a sufficiently high level of quality is challenging. Several participants maintain that many existing models have deteriorated over time and are no longer useful. Additional point worth highlighting is that different modelling styles make it difficult to jointly work on models, something that might be harder to unify compared to, e.g., writing style in textual requirements. Finally, one interviewee mentioned that the nature of some changes to requirements models can require entire models to be re-drawn. This either leads to substantial overhead, or it causes resistance to make changes in the models, especially if changes are made by other engineers than the model creator. The use of existing requirements models at the case department (RQ2, How do different stakeholders use requirements models in VLS agile systems engineering?) is primarily by SMs, in their role as providers of system-level knowledge and as a boundary between incoming change requests, system requirements, and work in individual agile teams. While several engineers in other roles use the models as well, mainly in a read-only fashion to answer their questions on intended system requirements, complex tooling used in the case department in the past has prevented a broader consumption of the models. Interviewees expressed the desire that all engineers should at least read the models. They further express confidence that their tool T-Reqs supports this, in which models are stored in textual format alongside code and textual requirements in git repositories. This allows easy access through tools engineers use on a daily basis, as well as easier review in terms of textual diff. Our findings with respect to RQ1 and RQ2 allow for reasoning about the information content and tooling needs (RQ3, What are the needs to support the intended use of requirements models in VLS agile systems engineering?) regarding requirements modelling. The value of the models at the case department is primarily in providing an overview of the system requirements and especially their connections, something that is difficult to express in a suitable manner in text. However, in order for the models to reach their full potential, the contained information needs to be up to date and of high quality. This, in turn, requires a broad access to the models by all stakeholders, at least in read-only fashion. Furthermore, education and guidance in how to create and maintain the models is essential, potentially also in the form of mentors at the case department. In terms of tool features, navigation and search are essential. Furthermore, interviewees expressed the desire to incorporate the requirements models in their regular code review workflow, e.g., by adding them to the git repository in textual form. 
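Picking up the navigation and search features named above, the following sketch walks the relations of a small requirements-flow graph, including a link into a re-used sub-flow, and performs a simple text search over requirement descriptions. The graph, IDs and texts are invented for illustration.

```python
# Sketch of the navigation/search need: list everything reachable from one
# requirement (following links into sub-flows) and search requirement texts.
from collections import deque

edges = {
    "REQ-101": ["REQ-102"],
    "REQ-102": ["REQ-103", "SUBFLOW-ERROR"],
    "SUBFLOW-ERROR": ["REQ-201"],
}
texts = {
    "REQ-103": "Bearer is established",
    "REQ-201": "Reject the attach request and log the failure cause",
}

def reachable(start):
    seen, queue = set(), deque([start])
    while queue:
        node = queue.popleft()
        for nxt in edges.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen

print(sorted(reachable("REQ-101")))
print([rid for rid, text in texts.items() if "reject" in text.lower()])
```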
Using generation of artefacts from models could be a way to further incentivise their use. However, no interviewee expressed the desire to generate substantial code from the models. Using textual models is considered an advantage by most study participants, due to easier tooling and the possibility to integrate the models into existing tools. However, a number of issues arise due to the reliance on textual modelling, most notably the loss of manual layout capabilities at the case department. Interviewees suggested workarounds such as a standard layout, where the main flow is always displayed to the left, while alternative flows are drawn to the right of the main flow. Finally, we note the importance of supporting heterogeneity at the case department, with different needs and preferences in the agile teams. Our study clearly shows the usefulness of models during RE, if used for well-motivated use cases. Furthermore, the study shows that simple modelling tools that are close to the engineers in terms of workflow and tooling have the potential to be successful, while heavy-weight modelling tools do not reach their full potential due to difficulties in accessing and using the tools, and resistance to do so regularly. Finally, we find several trade-offs that exist when tailoring models and modelling tools to an organisation, e.g., sophisticated modelling tool features such as hierarchy and manual layout vs. simple, text-based modelling tools. ACKNOWLEDGMENTS We would like to thank our contacts at Ericsson AB for the fruitful collaboration and constructive input at all stages of this work, and the study participants for their valuable contributions. Furthermore, we express our gratitude to João Araujo for feedback on the manuscript draft. Parts of this work were supported by Software Center Project 27 on RE for Large-Scale Agile System Development. • Requirements models should only be used for documentation. • The current modelling tool is restricting me in my work. • I would update/maintain the requirements models more frequently if I had a better tool. • Navigation between diagrams is an important feature. (15) Do you have any additional comments on this page? (Optional, free text) Additionally, we displayed the following two questions iff a participant stated in question 6 (model creation) that he/she created/used models at least monthly: (16) For what purposes do you create models of requirements? (Optional, free text) (17) Who is looking at the models of requirements you create? (19) Currently, keeping the models/diagrams of requirements up to date is challenging. Do you have any suggestions for how engineers could be motivated to maintain these models/diagrams more frequently? (Optional, free text) (20) How would the ideal situation regarding requirements modelling look in the future? (Optional, free text) (21) Do you have any other comments (e.g., alternative ideas to modelling, tool suggestion)? (Optional, free text) Additionally, we displayed the following question iff a participant stated "yes" in question 9 (use of IBM Rhapsody): (22) Which features in IBM Rational Rhapsody are important to you? A.5 Page 5 Thank you for completing this questionnaire! We would like to thank you very much for helping us. Your answers were transmitted, you may close the browser window or tab now. C.1 Propositions • The current ad-hoc use of models is insufficient. 
Either modelling should be abandoned, or a clear process (with clear stakeholders, tasks and abstraction levels) and guidelines (including courses on modelling) are needed. (What could such a process look like?) • Information in models is outdated in many areas. This needs a centralised effort to be fixed; replacing the tool only treats the symptoms. (How could the case company proceed? What would the first steps look like?) • A number of stakeholders/tasks have been forgotten when considering the use of models and the tool integration. (Who are these stakeholders? What are their needs?) • Potential users need a clearer motivation for using (and in particular updating) models. (How could we motivate them?) • A lightweight modelling approach is sufficient for the case company. They require only very few model elements of activity diagrams (activity nodes with text, relations between them) and few model capabilities. (Which of the features of modern modelling tools are still required?) C.2 Questions • Using PlantUML allows only automated layouting. How do stakeholders view this trade-off between text-based integration into T-Reqs and losing the capability to modify the layout? How does the simple approach relate to other modelling capabilities? • Automation using existing models was supported in the survey. What should this automation look like? What aspects could/should be automated? • What kind of information is needed in the models? What is a suitable level of abstraction? • Working with models is currently cumbersome. How can the experience be improved? What does it mean for a modelling tool to be easy to use? • Information needs clearly differ between stakeholders. What information needs exist for specific stakeholders?
2022-09-07T01:16:09.194Z
2022-09-05T00:00:00.000
{ "year": 2022, "sha1": "3a316ced3470e89f0fad7f44f3c78a1e62e641a4", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "3a316ced3470e89f0fad7f44f3c78a1e62e641a4", "s2fieldsofstudy": [ "Computer Science", "Engineering" ], "extfieldsofstudy": [ "Computer Science" ] }
45317348
pes2o/s2orc
v3-fos-license
Chemical composition and antibacterial activity of essential oil of Nepeta graciliflora Benth. (Lamiaceae) Abstract The chemical composition of the essential oil obtained from aerial parts of Nepeta graciliflora was analysed, for the first time, by GC–FID and GC–MS. A total of 27 compounds were identified, constituting over 91.44% of the oil composition. The oil was strongly characterised by sesquiterpenes (86.72%), with β-sesquiphellandrene (28.75%), caryophyllene oxide (12.15%), α-bisabolol (8.97%), α-bergamotene (8.51%), β-bisabolene (6.33%) and β-caryophyllene (5.34%) as the main constituents. The in vitro activity of the essential oil was determined against four micro-organisms in comparison with chloramphenicol by the agar well diffusion and broth dilution methods. The oil exhibited good activity against all tested organisms. Introduction The genus Nepeta, which belongs to the family Lamiaceae, comprises about 300 species of annual herbs spread over a large part of Europe, Asia and Africa (Rechinger 1982; Formisano et al. 2011). In India about 30 species are found, widely distributed in the temperate Himalayas and on foothills and plains (Hooker 1975; Rechinger 1982). The genus Nepeta shows diverse biological behaviour, acting as a feline attractant, canine attractant, insect repellent and arthropod defence (Wagner & Wolf 1977; Bottini et al. 1987; Gkinis et al. 2003). Nepeta graciliflora (common name: Uprya ghas) is reported as an ethno-medico-botanical herb of Uttarakhand (Bisht, Rana, et al. 2012). Based on the existing literature on N. graciliflora and the ethnobotanical properties of essential oils from species of the genus Nepeta, aerial parts of N. graciliflora have been investigated for the first time. As such, the objective of the present work was to evaluate the chemical composition and antibacterial activity of the essential oil from aerial parts of N. graciliflora. Antibacterial activities The antibacterial activity of the essential oil was tested against four bacteria by evaluating the inhibition zone diameter and the minimum inhibitory concentration (MIC), as presented in Table S2. The oil inhibited the growth of all tested micro-organisms, with inhibition zone diameters from 16 to 33 mm and MIC values ranging from 114 to 260 μg mL−1 (Table S2). Among all tested micro-organisms, the oil showed the best activity against Bacillus cereus, with an MIC of 114 μg mL−1, followed by Staphylococcus aureus, Pseudomonas aeruginosa and Klebsiella pneumoniae with MIC values of 123, 212 and 260 μg mL−1, respectively. These results suggest that the extracted oil of N. graciliflora has the capacity to inhibit the growth of the selected bacterial strains. Disclosure statement No potential conflict of interest was reported by the authors. Supplementary Data and research material The experimental details relating to this article are available online at http://dx.doi.org/10.1080/14786419.2015.1055489.
2018-04-03T06:02:40.212Z
2016-06-01T00:00:00.000
{ "year": 2016, "sha1": "79dbcfcc939896596f93da5e26ad0567f1e71967", "oa_license": "CCBY", "oa_url": "https://figshare.com/articles/journal_contribution/Chemical_composition_and_antibacterial_activity_of_essential_oil_of_i_Nepeta_graciliflora_i_Benth_Lamiaceae_/1472922/1/files/2161581.pdf", "oa_status": "GREEN", "pdf_src": "TaylorAndFrancis", "pdf_hash": "5551e6d1aff8f1bbed09a8dac38de3973c0b4b41", "s2fieldsofstudy": [ "Agricultural And Food Sciences" ], "extfieldsofstudy": [ "Chemistry", "Medicine" ] }
266517533
pes2o/s2orc
v3-fos-license
Differential contribution of TrkB and p75NTR to BDNF-dependent self-renewal, proliferation, and differentiation of adult neural stem cells Alterations in adult neurogenesis are a common hallmark of neurodegenerative diseases. Therefore, understanding the molecular mechanisms that control this process is an indispensable requirement for designing therapeutic interventions addressing neurodegeneration. Neurotrophins have been implicated in multiple functions including proliferation, survival, and differentiation of the neural stem cells (NSCs), thereby being good candidates for therapeutic intervention. Brain-derived neurotrophic factor (BDNF) belongs to the neurotrophin family and has been proven to promote neurogenesis in the subgranular zone. However, the effects of BDNF in the adult subventricular zone (SVZ) still remain unclear due to contradictory results. Using in vitro cultures of adult NSCs isolated from the mouse SVZ, we show that low concentrations of BDNF are able to promote self-renewal and proliferation in these cells by activating the tropomyosin-related kinase B (TrkB) receptor. However, higher concentrations of BDNF that can bind the p75 neurotrophin receptor (p75NTR) potentiate TrkB-dependent self-renewal and proliferation and promote differentiation of the adult NSCs, suggesting different molecular mechanisms in BDNF-promoting proliferation and differentiation. The use of an antagonist for p75NTR reduces the increment in NSC proliferation and commitment to the oligodendrocyte lineage. Our data support a fundamental role for both receptors, TrkB and p75NTR, in the regulation of NSC behavior. Introduction Two main regions maintain the potential to generate new neurons in the adult mammalian brain: the subventricular zone (SVZ) in the wall of the lateral ventricles and the subgranular zone (SGZ) in the dentate gyrus of the hippocampus (Taupin and Gage, 2002;Chaker et al., 2016;Gonçalves et al., 2016).The neural stem cells (NSCs) are not only responsible for carrying out this process of neurogenesis but also contribute to generating new astrocytes and oligodendrocytes throughout life (Taupin and Gage, 2002;Sohn et al., 2015), thus becoming potential agents for brain repair.Under homeostatic conditions, a careful interchange between cellular and molecular processes in the microenvironment constantly regulates the activity of NSCs (Fuentealba et al., 2012).In order to avoid loss or excess of the stem cell (SC) population, self-renewal and proliferation must be acutely regulated in coordination with differentiation processes.Thus, the differentiation of NSCs requires an intermediate state in which these cells become committed [i.e., neural progenitor cells (NPCs)] although they still show proliferative potential (Llorente et al., 2022). 
Brain-derived neurotrophic factor (BDNF) is the most widely distributed member of the neurotrophin (NT) family in the central nervous system (Leibrock et al., 1989), with important implications in neuronal survival and differentiation (Eide et al., 1993).BDNF interacts with two receptors, the tropomyosinrelated kinase B (TrkB) receptor (Klein et al., 1989) and the p75 neurotrophin receptor (p75 NTR ), known to interact with all NTs (Rodriguez-Tebar et al., 1992).Classically, NTs promote survival, proliferation, and correct maturation by Trk receptor signaling through its associated kinase activity (Mitra et al., 1987), whereas p75 NTR has been more involved in apoptosis (Frade et al., 1996) and in other cellular pathways depending on the intracellular complexes it constitutes (Roux and Barker, 2002).A recent study has begun to clarify the complexity of p75 NTR signaling.This includes proteolytic processing through γsecretase to release its intracellular domain (Vicario et al., 2015) that translocates to the nucleus (Parkhurst et al., 2010) and the conformational rearrangement of disulfide-linked receptor dimers (Klein et al., 1990) that allows the access of intracellular effectors to the receptor (Lin et al., 2015).BDNF, TrkB, and its truncated form TrkB.T1, known to lack the kinase domain (Klein et al., 1990), are all expressed in the murine SVZ (Vilar and Mira, 2016) as well as throughout the migratory pathway (Chiaramello et al., 2007).p75 NTR is also expressed by cycling cells of the SVZ (Okano et al., 1996;Giuliani et al., 2004), including intermediate progenitors (Galvão et al., 2008).In addition, p75 NTR can be detected in neuroblasts of the SVZ/RMS (Galvão et al., 2008), and genetic depletion of p75 NTR reduces the migration capacity of the neuroprogenitors in the SVZ both in physiological conditions and after cortical injury (Young et al., 2007;Deshpande et al., 2022).The complexity of NT signaling is increased due to the known association of p75 NTR with members of the Trk family (Hempstead et al., 1991;Zanin et al., 2019).This is also the case for BDNF as the treatment with BDNF in embryonic hippocampal neurons elicits the association of TrkB and p75 NTR , facilitating the TrkB signaling and promoting neuronal survival and function (Zanin et al., 2019). The activity of BDNF by the high-affinity binding to TrkB has been widely described in the hippocampal neurogenic niche (Bartkowska et al., 2007;Li et al., 2008;Vilar and Mira, 2016); however, its role in the NSCs located at the SVZ is not fully understood (Bath et al., 2012;Vilar and Mira, 2016).Although both BDNF receptors, TrkB and p75 NTR , are present in the adult SVZ (Tervonen et al., 2006;Galvão et al., 2008;Bath et al., 2012;Vilar and Mira, 2016), the implication of these receptors in NSC decision-making remains to be established.BDNF/TrkB participates in the proliferation and differentiation of the neuroprogenitors, and in the survival and maturation of the new neurons (Berghuis et al., 2006;Bath et al., 2012;Chen et al., 2013).BDNF/p75 NTR seems to regulate cell proliferation and migration of neuroblasts to the olfactory bulb (OB) (Snapyan et al., 2009;Bath et al., 2012;Deshpande et al., 2022). 
Alterations in the niche environment as a consequence of stroke or neurodegenerative diseases, among others, drive a disorder in the amount of BDNF and its receptors (Holsinger et al., 2000;Jiao et al., 2016;Deshpande et al., 2022).These changes in BDNF concentration might imply the activation of different signaling pathways and, thus, the different context-dependent effects observed in previous studies (Bath et al., 2012).Investigating the function of BDNF and the molecular mechanisms implicated in the regulation of adult NSCs is essential to understand the potential contribution of adult NSCs to brain repair and as a therapeutic tool.Here, we analyzed the effect of low and high concentrations of BDNF in the self-renewal, proliferation, and differentiation capacity of NSCs isolated from the adult SVZ and the contribution of TrkB and p75 NTR receptors in the adult NSCs response. . NSCs cultures NSCs were obtained from mice with a C57BL6 background.Mice were maintained in a 12-h light/dark cycle with free access to food and water ad libitum according to the Animal Care and Ethics Committee of the CSIC.Adult NSCs were isolated from 3-monthold mice after cervical dislocation.The brains were dissected out, and both SVZs from each hemisphere were extracted and cut into small fragments.The pieces were incubated with 0.025% Trypsin-EDTA (Gibco; Cat #25300054) for 30 min at 37 • C. The tissue was then transferred to Dulbecco's modified Eagle's medium (DMEM)/F12 (1:1 v/v; Life Technologies, Cat #21331020) and carefully triturated with a fire-polished Pasteur pipette to a single cell suspension.Isolated cells were collected by centrifugation, resuspended in the NSC medium based on DMEM/F12 containing 2 mM Glutamax (Gibco; Cat #35050038), 1X B27 without vitamin A (Gibco; Cat #11500446), 2X antibiotic-antimycotic (Gibco; Cat #15240062), 2 µg/ml heparin (Sigma; Cat #H3393), supplemented with 20 ng/ml epidermal growth factor (EGF; Peprotech, Cat #AF-100-15), and 10 ng/ml fibroblast growth factor (FGF; Peprotech; cat# 100-18B), and maintained in a 95% air−5% CO 2 humidified atmosphere at 37 • C (Bizy and Ferron, 2015;Belenguer et al., 2016).Neurospheres were allowed to develop for 7-10 days in these conditions.Each culture was generated using both SVZs from one adult mouse.Thus, each experimental point in the graphs represents the mean value of the replicates of a single independent animal.For culture expansion, primary neurospheres were disaggregated with Accutase (0.5 mM; Sigma; Cat #A6964) for 10 min at room temperature and washed with the NSC medium without mitogens to generate single-cell suspension.Then, 62.5 cells/µl were plated in the fresh mitogen-completed medium in a 95% air−5% CO 2 humidified atmosphere at 37 • C and maintained for 6-7 passages maximum.In order to determine the self-renewal capacity of the NSCs, secondary neurospheres were disaggregated, NSCs were plated at low density (5 cells/µl) in the fresh mitogencompleted medium, and the number of neurospheres was counted 5 days later.In the self-renewal experiment, four replicates for each culture were used, and the average value was estimated.All these experiments were repeated four times with different cultures.Images of the neurospheres were taken using the PAULA Smart Cell Imager (Leica), and the diameters of the spheres were estimated by ImageJ. . 
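The plating densities quoted above (62.5 cells/µl for culture expansion and 5 cells/µl for the clonal self-renewal assay) translate into simple seeding arithmetic. The helper below is a hypothetical sketch, not part of the study's protocol; the 500 µl working volume per well is an illustrative assumption.

```python
# Hypothetical seeding helper for the suspension densities quoted above
# (62.5 cells/ul for expansion, 5 cells/ul for clonal self-renewal assays).
# The 500 ul working volume per well is an illustrative assumption.

def cells_to_seed(density_cells_per_ul, volume_ul):
    """Total cells needed to reach a given density in a given volume."""
    return density_cells_per_ul * volume_ul

for label, density in (("expansion/proliferation", 62.5), ("self-renewal", 5)):
    print(f"{label}: {cells_to_seed(density, 500):,.0f} cells per 500 ul well")
# expansion/proliferation: 31,250 cells; self-renewal: 2,500 cells
```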
Proliferation and di erentiation assays To estimate proliferation, 62.5 cells/µl were plated after Accutase disaggregation in the fresh mitogen-completed medium in a 95% air−5% CO 2 humidified atmosphere at 37 • C.After 3 days, neurospheres were plated onto cover glasses coated with 1X Matrigel (Corning, Cat #356234) for 15 min, allowing NSC attachment and fixed for staining with 2% paraformaldehyde (PFA) 0.1M phosphate buffer saline pH 7.4 (PBS) for 15 min at 37 • C. For the differentiation assay, 80,000 cells/cm 2 were seeded in Matrigelcoated coverslips and incubated for 2 days (2 DIV) in the NSC culture medium without EGF.The medium was then changed with the fresh medium without FGF and supplemented with 2% fetal bovine serum (FBS; Gibco; Cat #10438-026) for 5 more days (7 DIV) to allow terminal differentiation.Cultures were fixed for staining at 7 days of differentiation with 2% PFA 0.1M PBS for 15 min at 37 • C. The BDNF treatment was performed by incubating the NSCs with either 10 ng/ml (low concentration) or 50 ng/ml (high concentration) of Recombinant Human/Murine/Rat BDNF (PeproTech; Cat #450-02) since the single cell suspension is plated.When indicated, NSC cultures were treated with 10 µM ANA-12 (MedChemExpress; Cat #HY-12497) (hereafter referred to as TrkB-i) or 10 µM THX-B (MedChemExpress; Cat #HY-137322) (hereafter referred to as p75-i) at the time of plating to inhibit TrkB or p75 NTR , respectively.The specificity and selectivity of both antagonists have been previously evaluated (Bai et al., 2010;Cazorla et al., 2011).Control cultures were exposed to 1:1,000 of DMSO (Sigma; Cat.# D5879).In both proliferation and differentiation assays, 10 random images were taken with ∼400 cells analyzed for each culture.These experiments were performed four times with independent cultures. . Immunocytochemical procedures For immunocytochemical staining, fixed cells were permeabilized and blocked with PBS 0.2% Triton X-100 (Sigma; Cat.#X100) containing 10% normal goat serum and 1% glycine (Thermo Scientific; Cat #A13816.36)for 1 h at RT, incubated with primary antibodies, and prepared in the same blocking solution overnight at 4 • C. Cells were washed three times with PBS 1X and incubated with secondary antibodies for 1 h at RT.Primary and secondary antibodies and dilutions used are listed in Tables 1, 2, respectively.DAPI (1 µg/ml) was used to counterstain DNA.The samples were washed three times with PBS 1X and mounted with the ImmunoSelect antifading mounting medium (Dianova; Cat #038447).Images were acquired at 20x or 40x magnification with a Leica SP5 confocal microscope.For fluorescence intensity quantification, maximal projection images were generated, and the mean gray intensities of p-TrkB, TrkB, and p75 NTR were measured with ImageJ/Fiji software and recorded as arbitrary fluorescence units (a.u.).p-TrkB data were normalized to TrkB intensity. . 
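A minimal sketch of the normalization step just described: per-ROI mean gray values exported from ImageJ/Fiji for p-TrkB and total TrkB are combined into a p-TrkB/TrkB ratio. This is not the authors' code; the data layout and the example values are assumptions for illustration only.

```python
# Sketch of p-TrkB normalization to total TrkB from ImageJ/Fiji mean gray
# values (arbitrary units). Values below are invented for illustration.

def ptrkb_ratio(measurements):
    """Return p-TrkB/TrkB ratios from paired mean gray values (a.u.)."""
    return [m["ptrkb"] / m["trkb"] for m in measurements]

# Illustrative ROIs from an untreated and a BDNF-treated neurosphere
untreated = [{"ptrkb": 11.0, "trkb": 40.0}, {"ptrkb": 13.0, "trkb": 42.0}]
bdnf_50 = [{"ptrkb": 29.0, "trkb": 41.0}, {"ptrkb": 31.0, "trkb": 43.0}]

for label, rois in (("untreated", untreated), ("50 ng/ml BDNF", bdnf_50)):
    ratios = ptrkb_ratio(rois)
    print(label, sum(ratios) / len(ratios))
```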
Gene expression analysis
RNAs were extracted with the RNeasy mini kit (Qiagen; Cat. #74104) including DNase treatment, following the manufacturer's guidelines. For quantitative PCR (qPCR), 1 µg of total RNA was reverse transcribed using random primers and SuperScript IV Reverse Transcriptase (ThermoFisher Scientific; Cat #15317696), following standard procedures. Thermocycling was performed in a final volume of 15 µl, containing 1 µl of cDNA sample (diluted 1:7), and the reverse-transcribed RNA was amplified by PCR with appropriate primers from the PrimePCR SYBR Green Assay (Cultek; Cat. PB20.11) (see Table 3). qPCR was used to measure gene expression levels normalized to Rpl27, the expression of which did not differ between the groups. qPCR reactions were performed in a 7500 real-time PCR system (Applied Biosystems). Raw data from this analysis are shown in Supplementary Table 1.
Statistical analysis
All statistical tests were performed using GraphPad Prism software, version 7.00 for Windows. The significance of the differences between groups was evaluated by the two-tailed paired Student's t-test or one-way ANOVA followed by a Tukey post-hoc test. The presence of outlier values was evaluated by Grubbs' test. A p-value of <0.05 was considered statistically significant. Data are presented as the mean ± standard error of the mean (SEM) and the number of independent cultures (n), and p-values are indicated in the figures.
10 ng/ml BDNF is sufficient to induce self-renewal and proliferation of adult NSCs
BDNF has been shown to act as a pro-neurogenic factor promoting the proliferation and differentiation of NSCs (Lee et al., 2002; Islam et al., 2009; Chen et al., 2013; Liu et al., 2014; Langhnoja et al., 2021). BDNF activity is mediated by high-affinity binding to the TrkB receptor (Naylor et al., 2002), and this neurotrophic factor is also able to interact with low affinity with p75 NTR (Rodriguez-Tebar et al., 1990). Both receptors are expressed in adult NSCs (Young et al., 2007; Islam et al., 2009; Bath et al., 2012; Faigle and Song, 2013; Vilar and Mira, 2016). A clear positive role for the TrkB pathway has been described in the function of BDNF on embryonic or P0 NSC proliferation (Islam et al., 2009; Chen et al., 2013), whereas the proliferative role of p75 NTR in the NSCs located in the adult SVZ (Young et al., 2007) remains to be established. To understand the mechanism behind BDNF's effects on the neurogenic population, adult NSCs were treated with two different doses of this neurotrophic factor (10 and 50 ng/ml). We chose these concentrations as the former mainly activates TrkB, while the latter also activates p75 NTR, since the Kd of the interaction of BDNF with p75 NTR is approximately 10−9 M (∼25 ng/ml) (Rodriguez-Tebar et al., 1990). First, self-renewal capacity was tested by determining the number of neurospheres after 5 days of NSCs cultured at low density with low (10 ng/ml) or high (50 ng/ml) concentrations of BDNF (Figure 1A). The presence of BDNF at 10 ng/ml in the NSC cultures significantly increased the number of neurospheres compared to untreated cultures, an effect that was potentiated by the addition of BDNF at 50 ng/ml (Figure 1A). This suggests that p75 NTR facilitates NSC self-renewal. Moreover, the diameter of these neurospheres was significantly higher in BDNF-treated NSCs (Figure 1B), suggesting an enhancement of NSC proliferation capacity. Both exposures to 10 and 50 ng/ml of BDNF showed a significant increment in the diameter of the neurospheres compared with untreated cultures, whereas no differences were detected between the two concentrations of BDNF (Figure 1B). The proliferative capacity of adult NSCs was analyzed by measuring the percentage of cells positive for the cell cycle marker Ki67 (Figure 1C). Both concentrations of BDNF showed a significant increase in the proliferation ratio compared with untreated NSCs. Again, no differences in the percentage of Ki67+ cells were detected between 10 and 50 ng/ml treated cultures (Figure 1C), indicating that the lowest concentration of the neurotrophic factor was sufficient to activate the proliferation pathway.
50 ng/ml BDNF potentiates oligodendrocytic and neuronal differentiation of adult NSCs
Several studies have shown that BDNF exerts a positive effect on the differentiation of NSCs into neurons (Chen et al., 2013; Liu et al., 2014) and oligodendrocytes (Chen et al., 2013; Langhnoja et al., 2021). Accordingly, the mRNA levels of relevant differentiation markers were analyzed by qPCR in cDNAs obtained from adult NSCs. This analysis indicated that the expression of the neuronal marker Dcx showed a tendency to increase and the oligodendrocyte marker Olig2 was significantly upregulated in the NSCs treated with 50 ng/ml of BDNF, suggesting that treatment with a high dose of BDNF predisposes NSCs toward a more committed state. In contrast, the presence of 10 ng/ml of BDNF in the medium was not sufficient to increase the mRNA levels of these lineage genes (Figure 2A). Neither the expression of the mRNA encoding the astrocytic marker S100β (S100b) nor the neural precursor gene Nestin (Nes) showed differences between untreated and BDNF-treated NSCs (Figures 2A, B). To test whether the upregulation of the neuronal and oligodendrocytic genes in the adult NSCs after 50 ng/ml BDNF treatment drove an increment in the percentage of neurons and oligodendrocytes in differentiating conditions, the numbers of TUJ1+, O4+, and GFAP+ cells, representing neurons, oligodendrocytes, and astrocytes, respectively, were estimated after seven DIV in NSCs maintained in differentiation conditions. The percentages of neurons and oligodendrocytes were increased in the 50 ng/ml BDNF-treated cultures, at the expense of astrocyte generation, which decreased in this condition compared with untreated cells (Figures 2C–F). Moreover, treatment with the low dose of BDNF (10 ng/ml) did not alter the differentiation capacity of adult NSCs relative to untreated cultures (Figures 2C–F), indicating that a higher concentration of BDNF is required to activate the differentiation pathway. These data, together with those from the proliferation analysis shown above, suggest different mechanisms for BDNF to promote proliferation or differentiation in a dose-dependent manner.
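The marker comparisons above rely on qPCR values normalized to Rpl27 and expressed relative to untreated cultures. A common way to compute such relative expression from raw Ct values is the 2^(-ΔΔCt) method; whether this exact formula was used here is not stated, so the sketch below, with invented Ct values, is only an illustration of the normalization.

```python
# Illustrative 2^(-ddCt) calculation for expression normalized to Rpl27 and
# expressed relative to untreated cultures. The Ct values are invented for
# the example; the paper does not report the exact formula it used.

def relative_expression(ct_target, ct_rpl27, ct_target_ctrl, ct_rpl27_ctrl):
    """Fold change of a target gene vs. control, normalized to Rpl27."""
    delta_ct_sample = ct_target - ct_rpl27
    delta_ct_control = ct_target_ctrl - ct_rpl27_ctrl
    return 2 ** -(delta_ct_sample - delta_ct_control)

# e.g., Olig2 in 50 ng/ml BDNF-treated NSCs relative to untreated NSCs
fold = relative_expression(ct_target=24.0, ct_rpl27=18.0,
                           ct_target_ctrl=25.5, ct_rpl27_ctrl=18.2)
print(round(fold, 2))   # ~2.46-fold upregulation in this made-up example
```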
BDNF promotes the expression of TrkB-and p NTR -specific mRNAs, the phosphorylation of TrkB, and the upregulation of p NTR The previous results of proliferation and differentiation of the adult NSCs in the presence of low or high doses of BDNF could be explained by the use of different signaling mechanisms to activate each cellular process.Precisely, BDNF presents high-affinity binding to TrkB (Naylor et al., 2002) and low-affinity binding to p75 NTR (Rodriguez-Tebar et al., 1990), two receptors that are expressed by NSCs, showing a dynamic pattern of expression during proliferation and differentiation of these cells (Figure 3A).To understand if the different cellular response of BDNF in a dosedependent manner could be due to the intervention of different receptors/pathways, adult NSCs were treated with 10 or 50 ng/ml of BDNF, and the gene expression of both receptors, Ntrk2 (TrkB) and Ngfr (p75 NTR ), was measured by qPCR (Figures 3B, C).To this aim, BDNF was added after neurosphere disaggregation, and the expression of these receptors was analyzed in the newly formed neurospheres after 5 days in the presence of the neurotrophin.The Ntrk2 gene encodes three receptor isoforms generated by alternative splicing, the full-length isoform (TrkB FL), and two truncated versions of the protein lacking the kinase domain, with TrkB.T1 being the most expressed in the NSCs from the SVZ (Islam et al., 2009;Vilar and Mira, 2016).Thus, the expression of the transcripts encoding both TrkB FL and TrkB.T1 (TrkB FL and TrkB.T1, respectively) was analyzed in adult NSCs grown in the absence or presence of 10 or 50 ng/ml BDNF (Figure 3B).The presence of BDNF in the culture medium resulted in a significant increment of both TrkB FL and TrkB.T1 expressions, regardless of the BDNF concentration (Figure 3B), suggesting that its expression is regulated by the activation of TrkB.As previously shown (Islam et al., 2009), the expression of TrkB.T1 was higher than that of TrkB FL (Figure 3B).In contrast to its mRNA levels, the expression of the TrkB protein using an antibody recognizing the extracellular domain (i.e., recognizing all TrkB isoforms) was not observed to show an increased response to BDNF (Figure 3D), suggesting that post-transcriptional mechanisms regulate TrkB protein expression.As expected, exposure of neurospheres to BDNF resulted in the increase of TrkB phosphorylation in Y516 (Figure 3D), a residue that becomes phosphorylated upon TrkB activation (Mazzaro et al., 2016).This activation of TrkB signaling in NSCs confirms previous published data suggesting TrkB activation in NSCs (Chen et al., 2017).Moreover, the application of the selective TrkB antagonist ANA-12 (TrkB-i) (Cazorla et al., 2011) to neurospheres treated with 10 ng/ml BDNF resulted in the reduction of Y516 TrkB phosphorylation to basal levels (Figure 3D).In contrast to TrkB FL and TrkB.T1 expressions, the expression of Ngfr was significantly upregulated in the NSC cultures only after high-dose exposure to BDNF (Figure 3C), indicating that the presence of high levels of BDNF promotes the activation of a signaling pathway resulting in the expression of Ngfr.The requirement for the dose of BDNF suggests that the upregulation of p75 NTR is modulated by its own activation.To confirm this hypothesis, the expression of Ngfr was measured in NSCs treated with 50 ng/ml BDNF in the presence of TrkB-i or the selective p75 NTR antagonist THX-B (Bai et al., 2010) (p75-i) (Figure 3E).The presence of TrkB-i did not change the Ngfr mRNA levels when NSCs 
were treated with 50 ng/ml of BDNF, and the expression of Ngfr was not upregulated after 50 ng/ml BDNF treatment in the presence of p75-i (Figure 3E), showing that the increment in the expression of the p75 NTR receptor was regulated by the interaction of BDNF with this receptor. The increment in Ngfr mRNA at 50 ng/ml of BDNF treatment was confirmed at the protein level by Western blot (Figure 3F) and immunocytochemistry (Figure 3G), using a previously characterized antibody (Huber and Chao, 1995).
TrkB and p75 NTR are required for BDNF-mediated self-renewal and proliferation of adult NSCs
The expression data of TrkB and p75 NTR in proliferating and differentiating conditions suggest that both receptors are involved in NSC behavior. To determine the implications of TrkB and p75 NTR in these processes, NSCs were treated with TrkB-i and p75-i, respectively (Figure 4A). NSCs were cultured at low density to evaluate self-renewal capacity in the absence or presence of 10 or 50 ng/ml of BDNF as above, using 10 µM of TrkB-i or 10 µM of p75-i to inhibit TrkB or p75 NTR specifically (Figure 4B). Control NSCs were treated with DMSO. The presence of TrkB-i in the medium revealed that the TrkB pathway is essential for NSCs to self-renew, independently of the presence of exogenous BDNF, a finding consistent with the expression of Bdnf by the adult NSCs (Figure 4C). Blocking this receptor significantly decreased the number of neurospheres in the 0, 10, and 50 ng/ml BDNF treatments (Figure 4B). These data were consistent with previous results showing a decrease of newly born neurons in the OB of TrkB heterozygous mice (Bath et al., 2008). In contrast, treatment of NSCs with p75-i in the absence of BDNF showed no effect on the self-renewal capacity of the NSCs (Figure 4B). The presence of 10 ng/ml of BDNF together with this antagonist did not alter this ability either (Figure 4B), indicating that lower concentrations of BDNF act through the TrkB pathway. However, treatment with 50 ng/ml of BDNF in the presence of the p75 NTR antagonist resulted in a decrease in the number of neurospheres (Figure 4B), indicating that the higher concentration of BDNF activated a TrkB/p75 NTR-dependent pathway that becomes necessary to control NSC self-renewal. Previous studies demonstrated that TrkA formed complexes with p75 NTR, increasing the affinity and selectivity of NGF binding (Hempstead et al., 1991). Another study showed that BDNF induces TrkB association with p75 NTR in embryonic hippocampal neurons after TrkB activation (Zanin et al., 2019). Importantly, this latter study demonstrated that p75 NTR is necessary for optimal TrkB signaling and function through the PI3K pathway in embryonic neurons (Zanin et al., 2019). In contrast to these studies, where p75 NTR optimizes the signaling capacity of the Trk family receptors, our observation suggests that a novel functional interaction between p75 NTR and TrkB exists in the adult NSCs, as the blockade of p75 NTR prevents TrkB function.
The proliferation capacity of adult NSCs was also analyzed in the presence of the receptor antagonists (Figure 4D). NSCs were plated in proliferation-promoting conditions and treated with different doses of BDNF. The percentage of proliferating cells was determined by the number of Ki67+ cells. Treatment with either antagonist in the absence of exogenous BDNF showed no alterations in the percentage of proliferative NSCs (Figure 4D). The presence of TrkB-i in NSCs treated with low or high concentrations of BDNF prevented the increase in the percentage of Ki67+ cells induced by this neurotrophin, reaching untreated-culture levels (Figure 4D). However, the presence of p75-i decreased the Ki67 percentage to untreated-culture levels only in NSCs treated with 50 ng/ml BDNF (Figure 4D), demonstrating activation of the p75 NTR pathway when BDNF levels are high, leading to increased proliferation.
p75 NTR is required for BDNF-mediated differentiation of adult NSCs into oligodendrocytes
Since BDNF-mediated differentiation requires high levels of BDNF (Figures 2C–F), we investigated whether the p75 NTR activation observed under proliferative conditions was also required to achieve terminal differentiation of adult NSCs. Thus, NSCs were differentiated in the presence of a high concentration of BDNF and either of the antagonists TrkB-i or p75-i (Figure 4E). In the absence of BDNF, no alterations were detected in the percentages of neurons, oligodendrocytes, and astrocytes after receptor blockade (Figure 4E). The differentiation of NSCs with 50 ng/ml of BDNF increased the percentage of neurons and oligodendrocytes at the expense of astrocytes, as previously demonstrated. However, only p75 NTR inhibition with p75-i was able to rescue the proportion of oligodendrocytes observed in the control cultures with statistical significance (Figure 4E). No statistically significant alterations were observed in the percentages of astrocytes and neurons with the TrkB-i and p75-i antagonists. However, a decrease in the proportion of oligodendrocytes with the TrkB-i antagonist in 50 ng/ml BDNF-treated cultures was detected, not reaching statistical significance (Figure 4E).
Discussion
We have shown in this study that BDNF facilitates self-renewal and cell cycle progression in NSCs isolated from the SVZ of adult mice. These processes are mediated by the TrkB/TrkB.T1 receptors as they can be blocked by ANA-12 (TrkB-i), an inhibitor that interacts with the binding domain of BDNF in the extracellular domain of these receptors (Cazorla et al., 2011). Interestingly, both self-renewal and cell cycle progression become dependent on p75 NTR when the concentration of BDNF is high enough to activate this latter receptor. Under this condition, BDNF does not exert proliferative effects if the p75 NTR function is pharmacologically blocked. In addition, we have demonstrated that BDNF induces the differentiation of NSCs into oligodendrocytes through a p75 NTR-dependent mechanism, as it requires a BDNF concentration above its Kd for the binding to p75 NTR and can be pharmacologically blocked with a p75 NTR-specific inhibitor. We have also shown that BDNF triggers neuronal differentiation when applied at a high dose (Figure 5).
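The dose argument running through these results (10 ng/ml engaging mainly TrkB, 50 ng/ml also engaging p75 NTR) can be made concrete with a simple single-site binding isotherm. The p75 NTR Kd of ~10−9 M (~25 ng/ml) comes from the text; the TrkB Kd used below (~10−11 M) and the use of the ~27 kDa BDNF dimer mass for the ng/ml-to-nM conversion are illustrative assumptions, not values stated by the authors.

```python
# Rough occupancy estimates from theta = [L] / ([L] + Kd). The p75NTR Kd
# (~1 nM, ~25 ng/ml) is taken from the text; the TrkB Kd (~0.01 nM) and the
# 27 kDa BDNF dimer mass used for unit conversion are assumptions.

BDNF_DIMER_KDA = 27.0

def ng_per_ml_to_nm(conc_ng_ml):
    # ng/ml divided by mass in kDa gives nM
    return conc_ng_ml / BDNF_DIMER_KDA

def occupancy(conc_nm, kd_nm):
    return conc_nm / (conc_nm + kd_nm)

for dose in (10.0, 50.0):                       # ng/ml
    c = ng_per_ml_to_nm(dose)
    print(f"{dose:>4} ng/ml -> TrkB ~{occupancy(c, 0.01):.0%}, "
          f"p75NTR ~{occupancy(c, 1.0):.0%}")
# ~97% vs ~27% at 10 ng/ml and ~99% vs ~65% at 50 ng/ml: TrkB is close to
# saturation at both doses, whereas p75NTR engagement rises substantially
# only at the higher dose.
```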
Our results indicate that BDNF is required for NSC self-renewal. This effect is dose-dependent since a significantly higher number of neurospheres can be observed in the presence of 50 ng/ml BDNF when compared to 10 ng/ml. This facilitation has been previously described for the BDNF-dependent survival of rat hippocampal neurons (Zanin et al., 2019). The mechanism by which 50 ng/ml BDNF potentiates NSC self-renewal might depend on the observed upregulation of p75 NTR expression at this high BDNF concentration. This increase in p75 NTR at a high BDNF dose is reminiscent of the effect of NGF in astrocytes, which can also upregulate p75 NTR expression in these cells (Kumar et al., 1993). In the adult SVZ, the p75 NTR-positive population contains all of the neurosphere-producing precursor cells (Young et al., 2007). Therefore, we suggest that the observed increase of p75 NTR in our cultures when treated with 50 ng/ml BDNF is likely due to the activation of p75 NTR and the upregulation of its levels in all the neurosphere-constituting cells. In fact, the upregulation of p75 NTR mRNA and protein levels was blocked in the presence of p75-i, indicating activation of this receptor after high-dose BDNF treatment to promote its own expression. In this study, we have demonstrated that BDNF facilitates the proliferation of adult mouse NSCs in vitro. This observation is consistent with the finding that BDNF stimulates the proliferation of newborn NSCs (Chen et al., 2013), human iPSC-derived NPCs (Pansri et al., 2021), and embryonic neural precursors (Bartkowska et al., 2007). The stimulation of proliferation triggered by BDNF likely depends on TrkB since the use of TrkB-i prevents it. This is consistent with the known activation by TrkB of the Ras-Raf-MEK-ERK signaling pathway (Reichardt, 2006), which favors cell cycle progression when ERK translocates to the nucleus and phosphorylates transcription factor substrates that are responsible for the mitogenic response (Mebratu and Tesfaigzi, 2009). This is also consistent with our observation that BDNF induces the phosphorylation of TrkB at Y516, a residue known to participate in the latter signaling pathway (Fan et al., 2020). Nevertheless, TrkB.T1 may also participate in the facilitation of NSC proliferation by BDNF, as the truncated form of TrkB has been suggested to induce BDNF-dependent proliferative effects on both embryonic NSCs (Islam et al., 2009) and embryonic neural progenitors (Tervonen et al., 2006). Our results indicate that the intrinsic ability of TrkB to confer both self-renewal and proliferative capacity to NSCs (i.e., the proliferative capacity that would be observed upon pharmacological inhibition of p75 NTR) becomes unexpectedly abolished when BDNF is added at 50 ng/ml. We explain this result
in terms of the differential capacity of BDNF to activate p75 NTR depending on its concentration (Rodriguez-Tebar et al., 1990).We propose that the activation of p75 NTR with 50 ng/ml BDNF seems to permanently modify the proliferative signaling of TrkB.We refer to this effect as "co-receptor dependence for TrkB signaling."The mechanism of acquisition of this novel co-receptor dependence is currently unknown.However, it should not derive from a different mode of TrkB activation by the higher BDNF concentration since the binding capacity of BDNF to the high-affinity TrkB receptor has already reached a plateau at the range of 10-50 ng/ml (Rodriguez-Tebar et al., 1990).This co-receptor dependence for TrkB signaling might be physiologically relevant in vivo, in neurogenic regions where local enrichment of BDNF results in the upregulation of p75 NTR and the modulation of TrkB/p75 NTR signaling.Our results are consistent with previous studies in postnatal hippocampal NSCs demonstrating the implication of p75 NTR in the proliferation capacity of these cells since the p75 NTR -ligand proNGF inhibits proliferation of the NCSs (Guo et al., 2013).As proNGF cannot interact with TrkB, it likely prevents the functional interaction of p75 NTR with the latter in response to endogenously produced BDNF.This effect was also abolished in p75 NTR knock-out mice (Guo et al., 2013), thus providing genetic evidence that this receptor is involved in the proliferation of NSCs. Our results also indicate that the pharmacological inhibition of TrkB results in a dramatic reduction in the number of neurospheres even in the absence of added BDNF, suggesting that low levels of this neurotrophin may be released by the NSCs facilitating their self-renewal.Indeed, previous studies have shown BDNF expression in the SVZ (Galvão et al., 2008) and embryonic NSCs (Blurton-Jones et al., 2009).We have shown that Bdnf -specific mRNA is transcribed by adult NSCs, a finding consistent with a previous study by Goldberg et al. (2015).In contrast, TrkB inhibition in the Ki67 proliferation assay without exogenous BDNF does not lead to a significant reduction in cell cycle progression.The main difference between both results is the density of the NSCs that were used.In the proliferative assay, high NSC density was employed, while in the self-renewal assay, NSCs were plated at low density.Therefore, one explanation for this discrepancy may derive from a hypothetical capacity of TrkB to stimulate the expression or function of the cell adhesion molecules involved in the generation of the neurospheres (Zhou et al., 1997).Consequently, NSCs would not adhere to each other to generate multicellular structures in the presence of TrkB-i. In this study, we have demonstrated that BDNF induces the differentiation of adult NSCs in vitro into oligodendrocytes and neurons, as previously shown to take place in newborn NSCs (Chen et al., 2013;Langhnoja et al., 2021).This is consistent with the capacity of BDNF to promote the progression of oligodendrocyte lineage and to enhance myelination through the p75 NTR receptor (Cosgaya et al., 2002).The studies by Chen et al. (2013) and Langhnoja et al. 
(2021) mentioned above did not compare the roles of TrkB and p75 NTR in this process.Nevertheless, we note that high concentrations of BDNF were used by these authors to detect a potent differentiative effect on newborn NSCs (25 and 50 ng/ml BDNF, respectively).We therefore decided to explore which BDNF receptor is responsible for the differentiative effect of BDNF.Our results indicate that BDNF induces differentiation through p75 NTRdependent signaling based on two lines of evidence.On the one hand, this effect could not be observed with 10 ng/ml BDNF, a concentration that is insufficient to activate p75 NTR (Rodriguez-Tebar et al., 1990).On the other hand, the use of p75-i, in contrast to TrkB-i, significantly blocked BDNF-dependent oligodendrocyte differentiation.These results agree with the known inhibition of oligodendrogenesis in a p75 NTR -dependent manner since this process was blocked in the presence of proNGF and p75 NTR knockout mice (Guo et al., 2013). We have observed that the p75 NTR -specific inhibitor was not able to prevent neuronal differentiation in vitro, which is consistent with the observation that p75 NTR null mice had nearly identical levels of surviving BrdU-positive cells in the OB relative to wildtype mice 28 days after DNA labeling with this nucleotide analog (Bath et al., 2008).This contrasts with our observation that 50 ng/ml BDNF, but not 10 ng/ml BDNF, is required to induce neuronal differentiation in our cultures.In this regard, we note that a great statistical error can be observed in the increase of TUJ1positive cells when the NSCs are treated with 50 ng/ml BDNF under differentiative conditions (Figures 2C, 4E).Therefore, it cannot be strongly concluded that BDNF triggers a clear effect on neuronal differentiation through p75 NTR . Our results are consistent with the observation that intraventricular administration of BDNF increases the number of newly generated neurons in the adult rat olfactory bulb (Zigova et al., 1998;Benraiss et al., 2001;Henry et al., 2007).They are also consistent with the reduction in the number of newborn neurons that is observed in the OB of mice lacking one copy of the Bdnf gene (Bath et al., 2008).They are also consistent with the claim that TrkB is not essential for adult SVZ neurogenesis (Galvão et al., 2008).Mechanistically, the observation that neurotrophin binding to p75 NTR modulates Rho activity and axonal outgrowth (Yamashita et al., 1999) and that developmental biology is one of the enriched pathways associated with p75 NTR function (Sajanti et al., 2020) may explain the differentiative effect of BDNF-dependent activation of p75 NTR in adult NSCs. Taken together, our results provide the basis to understand the role of BDNF in the homeostasis of SVZ-derived adult NSCs and the implications of this relevant neurotrophin in pathological conditions as we have clarified the differential contribution of TrkB and p75 NTR to BDNF-dependent self-renewal, proliferation, and differentiation of adult NSCs.Furthermore, our results reveal an undescribed mechanism based on a co-receptor dependence for TrkB signaling in the regulation of self-renewal and proliferation of adult NSCs that may be a clue to understand BDNF effects in the neurogenic niche. 
FIGURE 1. BDNF promotes NSC self-renewal and proliferation: number and diameter of neurospheres formed at low (clonal) density and percentage of Ki67+ (Nestin+) cells at high density in untreated cultures versus cultures treated with 10 or 50 ng/ml BDNF.
FIGURE 2. A higher dose of BDNF is required to favor neuronal and oligodendroglial differentiation: qPCR for Dcx, Olig2, S100b and Nes (normalized to Rpl27) and percentages of TUJ1+ neurons, O4+ oligodendrocytes and GFAP+ astrocytes after 7 DIV under differentiation-promoting conditions with 0, 10 or 50 ng/ml BDNF.
FIGURE 3. BDNF induces the expression of functional TrkB and p75 NTR receptors: expression of TrkB FL, TrkB.T1 and Ngfr transcripts, p-TrkB (Y516)/TrkB immunostaining with or without the TrkB antagonist (TrkB-i), and p75 NTR protein levels (Western blot and immunostaining) with or without the p75 NTR antagonist (p75-i); Rpl27 was used as a housekeeping gene and DAPI to counterstain DNA.
FIGURE 4. p75 NTR regulates adult NSC proliferation and differentiation in the higher-dose BDNF context: neurosphere numbers, Bdnf and Olig2 expression in untreated NSCs, percentage of Ki67+ cells, and percentages of TUJ1+, O4+ and GFAP+ cells after 7 DIV of differentiation in 0, 10 or 50 ng/ml BDNF with TrkB-i or p75-i (DMSO as vehicle control).
TABLE 1. List of primary antibodies used for immunocytochemistry (ICC) and Western blot (WB).
Advances and Challenges in Studying Hepatitis B Virus In Vitro Hepatitis B virus (HBV) is a small DNA virus that infects the liver. Current anti-HBV drugs efficiently suppress viral replication but do not eradicate the virus due to the persistence of its episomal DNA. Efforts to develop reliable in vitro systems to model HBV infection, an imperative tool for studying HBV biology and its interactions with the host, have been hampered by major limitations at the level of the virus, the host and infection readouts. This review summarizes major milestones in the development of in vitro systems to study HBV. Recent advances in our understanding of HBV biology, such as the discovery of the bile-acid pump sodium-taurocholate cotransporting polypeptide (NTCP) as a receptor for HBV, enabled the establishment of NTCP expressing hepatoma cell lines permissive for HBV infection. Furthermore, advanced tissue engineering techniques facilitate now the establishment of HBV infection systems based on primary human hepatocytes that maintain their phenotype and permissiveness for infection over time. The ability to differentiate inducible pluripotent stem cells into hepatocyte-like cells opens the door for studying HBV in a more isogenic background, as well. Thus, the recent advances in in vitro models for HBV infection holds promise for a better understanding of virus-host interactions and for future development of more definitive anti-viral drugs. Introduction Hepatitis B virus (HBV) is a small DNA virus that infects the liver and is a major cause for end stage liver disease and liver cancer [1]. The small viral genome is 3.2 kb in length with an overlapping gene organization. Following binding to its receptor, the bile-acid pump sodium-taurocholate cotransporting polypeptide NTCP [2,3], HBV enters into the cell and establishes a nuclear pool of episomal DNA in the form of covalently closed circular DNA (cccDNA). The cccDNA molecules reside in the infected cells' nuclei and serve as the template for viral transcription. HBV is dependent for its replication on viral-encoded polymerase that reverse transcribes the pre-genomic RNA to form the partially double-stranded DNA in the mature virion [4]. Nucleot(s)ide analogues, currently the standard of care for chronically infected patients, effectively block the viral polymerase activity and hence viral replication. However, those drugs do not affect the cccDNA pool and, therefore, complete viral eradication resulting in chronically infected patients' cure is still a major challenge [5,6]. Since its discovery in the late 1960s and throughout the years, HBV research has been hampered by the lack of robust and reproducible cell culture systems that reliably mimic the viral life cycle [7]. This review summarizes major milestones in the development of cell culture systems for HBV, focusing on recent technological and methodological advances enabling the development of more robust and physiologically relevant infection systems based on immortalized cell lines as well as on primary human hepatocytes. In vitro Systems Based on Primary Non-Human Hepatocytes To study the HBV life cycle and its interactions with the host in vitro, one should ideally incorporate a constant and authentic viral pool, a genuine host cell, and a reliable and relatively easy to perform readout(s) for viral infection. Yet, none of these seem to be easy to achieve in the case of HBV. Obviously, primary human hepatocytes are considered as the gold standard host cells for HBV infection. 
However, those cells are phenotypically unstable in vitro, losing their permissiveness for HBV infection soon after isolation and plating on culture dishes [8,9]. Earlier attempts to infect primary human hepatocytes with infectious inoculums of HBV were encountered with large variability among hepatocyte donors as well as low rates and short durability of infection, even upon supplementation of dimethyl sulfoxide (DMSO) to support the differentiation state of the cells [10,11]. The woodchuck hepatitis virus (WHV) was the first of the mammalian and avian hepadnaviruses described following the discovery of HBV [12]. Primary cultures of woodchuck hepatocytes proved to be susceptible to infection with WHV, resulting in cccDNA formation and active viral replication and were therefore used as a platform to study the effect of anti-viral drugs on cccDNA persistence [13,14]. However, only few in vitro studies using the WHV system have been published, most probably due to difficulties in reproducing conditions to achieve productive infection. Nevertheless, the major utility of the WHV system remained in the context of in vivo studies on infected animals. These were pivotal for anti-viral drug studies [15] as well as for elucidating molecular pathways in HBV-associated carcinogenesis [16,17] and the interactions between the virus and the anti-viral immune response [18,19]. As opposed to both primary human hepatocytes and WHV hepatocytes, primary duck hepatocytes infected with duck hepatitis B virus (DHBV) have been found to be much easier to handle and very useful for studying basic questions in viral life cycle and especially in cccDNA formation and amplification [20,21]. However, despite being a member of the Hepadnaviridae family and sharing a similar life cycle to human HBV, DHBV still differs from HBV in several properties, including its shorter genome and the absence of the functional HBV X (HBx) protein [22]. Therefore, conclusions derived from DHBV system regarding cccDNA amplification and maintenance [23] as well as viral entry [24] might not necessarily hold true for HBV and are, therefore, clinically irrelevant. This emphasizes the need for using a system incorporating authentic HBV for studying the virus and its interactions with the host. Tupaia Belangeri (treeshrew), on the other hand, is the only species susceptible for HBV infection besides humans and chimpanzees. Primary Tupaia hepatocytes have been shown to support HBV infection in vitro, although the magnitudes of infection efficiency and viral spread in this system are not entirely clear [25]. Importantly, primary Tupaia hepatocytes have been used as the target cells for photo-cross-linking experiments with a synthetic pre-S1 peptide that were key in identification of NTCP as the receptor for HBV and hepatitis D virus (HDV) [3], a major milestone in HBV research in recent years (for further discussion see Section 3.4.). Stably HBV-Transfected Cell Lines Immortalized hepatoma cell lines, such as HepG2 and Huh7 cells, are very convenient to work with but are normally not permissive for HBV infection. To circumvent this problem, Sells and colleagues transfected hepatoma cells with a cloned head to tail HBV dimer, resulting in viral gene expression and replication as well as the formation of infectious viral particles that can readily infect naïve chimpanzees [26,27]. 
The so-called HBV-expressing HepG2.2.15 clone has been extensively used since then for studying basic questions in HBV biology as well as a platform for testing anti-viral drugs [28,29]. This system, as well as other similar systems based on stably integrated HBV DNA [30], has the obvious advantage of stably expressing viral gene products and maintaining continuous HBV replication, and is therefore also used as a source of tissue culture-derived virions for infection experiments. However, unlike the situation in vivo, viral production is mainly derived from the integrated rather than from the episomal DNA, which is hard to detect by conventional methods in this cell line. The introduction of hepatoma cells stably expressing HBV from a Tet-on/Tet-off system, the HepAD38 cell line, not only allowed for a better and more tightly controlled system to study HBV, but also resulted in a more robust production of virions and enhanced cccDNA accumulation in the cells [31,32]. More recently, a newer version of HepG2 cells stably transfected with a Tet-inducible HBV genome has been introduced, designated the HepDE19 cell line. In this system, the 1.1-mer over-length HBV transgene is mutated in its 5′ pre-core ATG, whereas the 3′ pre-core ATG remains intact. As a result, the HBV e antigen (HBeAg) is expressed from the episomal DNA (cccDNA) but not from the integrated genome. By analyzing secreted HBeAg as a surrogate marker for cccDNA abundance, this system has been used as a platform for large-scale screening for cccDNA-targeting drugs [33,34]. Delivery Vector Systems of the HBV Genome Although the aforementioned cell lines are based on a functional, integrated HBV genome, HBV integration is not obligatory for the HBV life cycle and does not produce infectious viruses in vivo. Therefore, with the entry machinery of HBV into the cells still remaining much of a black box, efforts have been made over the years to find alternative ways to deliver the HBV genome to the cells in a more physiological manner. The development of a recombinant HBV baculovirus system, produced in insect cells, enabled the delivery of a functional HBV genome into hepatoma cells, resulting in productive HBV replication, formation of infectious viruses and establishment of a detectable intracellular cccDNA pool [35][36][37]. This system has been used for a variety of in vitro studies, such as testing the efficiency of novel anti-HBV drugs [38] as well as for drug resistance studies [39]. Another potential delivery system for the HBV genome is the adenovirus vector [40]. Adenovirus vectors carrying the HBV genome (Ad-HBV) have been shown to infect a wide range of hepatocytes irrespective of species barriers, resulting in episomal DNA formation and robust HBV replication [41,42]. Delivery of the HBV genome using a lentiviral vector has also been used experimentally in vitro [43]. However, despite having several advantages over the traditional HepG2.2.15 cell line and its derivatives, these delivery vector systems still suffered from significant limitations; first, delivery of the HBV genome by a viral vector completely bypassed the natural entry stage of HBV, thereby precluding studies of this crucial step in the HBV life cycle. Second, part of the host response to HBV infection could have been largely masked by the non-specific response to the viral vector used for HBV delivery, making it hard to interpret data regarding the innate immune response to HBV infection [44], for example. 
Third, safety issues, especially regarding work with HBV-harboring lentiviral vectors, are of major concern and are therefore a major obstacle to wide usage of this delivery system. Differentiated Hepatoma Cell Lines Given their easy handling, low cost and reproducibility, ongoing efforts have been made to achieve authentic infection in the traditionally non-permissive hepatoma cell lines by their further differentiation into cells better resembling primary human hepatocytes. In one report, Shaul's group showed that upon supplementation of DMSO (and, even more so, the combination of DMSO and 5-aza-2′-deoxycytidine), HepG2 cells become permissive to HBV infection [45]. A big leap forward was the introduction of a novel hepatoma cell line, designated HepaRG, that presents morphological and functional features similar to primary hepatocytes and that is susceptible to HBV infection upon supplementation of corticoids and DMSO to maintain the cells' differentiation state [46]. The ability of this system to recapitulate the whole life cycle of HBV in the context of authentic infection established its role as an experimental platform for studies addressing key questions in HBV biology, such as the role of the innate immune response in counteracting HBV infection [44,47,48], cccDNA regulation [23,49], and mechanisms of viral entry [50,51]. However, this infection system still suffered from substantial limitations, such as the need for polyethylene glycol (PEG) supplementation to achieve infection, relatively low infection efficiency, and stringent conditions to maintain those cells' state of differentiation. NTCP Expressing Hepatoma Cell Lines It was not until a decade later that the bile acid pump NTCP was shown to serve as a receptor for both HBV and HDV [2,3]. This revolutionary discovery, together with the realization that NTCP expression on the plasma membrane of hepatoma cells is much lower than in primary hepatocytes, opened the door to establishing HepG2- and Huh7-based cell lines in which NTCP is overexpressed and that can be readily infected with HBV [52]. More recently, a novel system based on NTCP-expressing hepatoma cells co-cultured with HBV-specific CD8+ T cells has been suggested as a platform for studying the immunobiology of HBV in a tissue-culture format, as well [53]. However, although improved techniques, such as spinoculation during HBV inoculation, greatly enhance the infection efficiency of NTCP-expressing cells [54], the system still has its limitations; first, the multiplicity of infection (MOI) needed to achieve substantial infection is extremely high (in the range of hundreds, and even thousands) and, in most instances, PEG is needed to enhance infection. Second, in contrast to the situation in vivo, infection is short-lived, does not result in substantial viral spread, and the amount of cccDNA detected is modest. This suggests that other factors essential for productive HBV infection are probably impaired or even missing in those cancerous cell lines. Last but not least, despite their flexibility and easy handling, hepatoma cells are physiologically impaired in many intracellular pathways and functions, limiting their use as a platform for studying virus-host interactions. Therefore, despite the great advance the NTCP-hepatoma cell lines have provided, there is still a need for more robust and physiologically authentic systems that mimic more reliably the situation in vivo. 
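To give a sense of the scale implied by the MOIs mentioned above, the sketch below computes the inoculum volume needed for a given MOI expressed as genome equivalents (GE) per cell. The cell number and virus titer are hypothetical values chosen for illustration, not figures from any of the cited studies.

```python
# Minimal sketch of the MOI arithmetic: how much virus stock an inoculum must
# contain for a given MOI (GE/cell). Cell number and titer are illustrative
# assumptions, not values reported in the studies discussed here.

def inoculum_volume_ml(moi, n_cells, titer_ge_per_ml):
    """Volume of virus stock needed so that GE per cell equals the MOI."""
    return (moi * n_cells) / titer_ge_per_ml

n_cells = 2.0e5   # e.g., one well of NTCP-expressing hepatoma cells (assumed)
titer = 1.0e9     # GE/ml of a concentrated HBV stock (assumed)
for moi in (100, 1000):
    vol_ul = inoculum_volume_ml(moi, n_cells, titer) * 1000
    print(f"MOI {moi}: {vol_ul:.0f} ul of stock")
# MOI 100 -> 20 ul; MOI 1000 -> 200 ul, illustrating why high-MOI infections
# quickly consume large amounts of virus stock.
```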
Interestingly, the expression of human NTCP in mouse hepatocyte cell lines confers them susceptible to HDV, but not to HBV infection [55][56][57]. An early study suggested that at least one major intracellular block to HBV infection in mouse hepatocytes is at the level of cccDNA formation [58]. A recent study found that HBV cccDNA can be formed in an immortalized mouse hepatocyte cell line and this can be correlated with the instability of HBV mature nucleocapsids in these immortalized mouse hepatocytes, suggesting that nucleocapsid uncoating may be a major intracellular determinant in the susceptibility of hepatocytes to HBV infection [59]. Another recent study indicated that it is a dependency factor, rather than a restriction factor, that is missing in mouse hepatocytes and prevents infection [60]. The identification of this critical factor, or factors, will facilitate the development of HBV infection systems based on murine hepatocytes, a requisite for establishing a mouse model for HBV infection [61] (further discussed in Section 6). In vitro Systems Based on Primary Human Hepatocytes As previously discussed, earlier attempts to establish HBV infection systems based on primary human hepatocytes have been hampered by the phenotypic instability of the cells in vitro, reflected by a rapid loss of their authentic hepatocyte function soon after plating accompanied by loss of permissiveness for HBV infection. This has been a major obstacle for using primary human hepatocytes to study a slow growing virus like HBV. Human Fetal Hepatocytes Several studies have attempted to use fetal human hepatocytes as a platform for HBV infection system. Ochya and colleagues have infected highly confluent cultured fetal human hepatocytes with hepatitis B virions produced by hepatoma cell line and were able to show a 12% infection efficiency with active replication that started two days after infection and accumulated during 16 days post infection [62]. The limited infection efficiency and the apparent absence of viral spreading were explained by the relative narrow window of time in which cells remained susceptible for infection, and therefore virions released into medium from infected cells possibly could not infect adjacent cells any more. Another study has similarly shown that fetal human hepatocytes could be infected with HBV infectious serum but that productive infection could remain for a limited period of time (up to 16-18 days) concomitant with the maintenance of normal hepatocytic phenotype [63]. Interestingly, the addition of DMSO appeared to enhance viral replication in this system. Another recent study has shown that co-culturing of fetal human hepatocytes with hepatic non-parenchymal cells and the subsequent addition of 2% DMSO leads to the formation of hepatocyte islands, resulting in prolonged phenotypic maintenance of those cells and susceptibility for HBV infection for up to 10 weeks [64]. However, although the above studies suggest fetal human hepatocyte as a possible platform for in vitro HBV studies, the limited availability of fetal hepatocytes and the large donor-to-donor variations are major limitations of this system. 
Micro Patterned Co-Cultured Cells Realizing that the in vitro maintenance of phenotypically stable primary human hepatocytes over time is a major goal not only for studying hepatotropic viruses, but also for drug screening as well as for metabolic and toxicity studies [9], Bhatia's lab integrated tissue engineering with microtechnology techniques to create a miniature system of phenotypically stable primary human hepatocytes designated the micro-patterned co-cultured (MPCC) system [65]. This system is based on micro-patterning human hepatocytes in small islands of 200-400 cells each and co-culturing the cells with mouse fibroblasts, thereby providing the cells with the necessary homotypic and heterotypic cell-cell interactions to preserve their long-term viability and function. Initial studies showed that the MPCC system preserves hepatocyte functions over weeks following plating, as measured by the level of albumin secretion, urea synthesis, phase I and phase II enzyme activity, and phase III transporter activity. Furthermore, the MPCC system has been shown to serve as a platform for drug toxicity and drug interaction studies. Last but not least, cryopreserved, and not solely fresh, hepatocytes could be micro-patterned and maintain their functionality over time, making the system much more practical to use. MPCCs have been shown to express all the factors required for hepatitis C virus (HCV) entry and to support HCV infection for several weeks [66]. This system has also been successfully used to support the hepatic stage of both Plasmodium falciparum and Plasmodium vivax, and was validated as a platform for medium-throughput anti-malarial drug screening [67]. More recently, MPCCs have been shown to support HBV infection as well [68]. Following inoculation of co-cultured, micro-patterned, cryopreserved human hepatocytes derived from different donors with HBV-infected plasma, cells were first screened for their permissiveness for HBV infection. Quantification of cccDNA and HBV surface antigen (HBsAg) as surrogate markers for productive infection revealed a wide variability between donors in terms of HBV permissiveness. This variability could not be explained by hepatocyte phenotypic differences, since albumin secretion, urea production and CYP3A4 activity did not differ significantly between donors. Importantly, to achieve productive infection, inhibition of the JAK-STAT pathway by Janus kinase (JAK) or TANK-binding kinase 1 (TBK1) inhibitors was required, although in several of the screened hepatocyte donors even JAK-STAT inhibition could not rescue HBV infection. Interestingly, the HBV receptor NTCP was expressed much more robustly on the plasma membrane of human hepatocytes seeded in the micro-patterned format than on human hepatocytes co-cultured with mouse fibroblasts but seeded in a random manner. This suggests that micro-patterning of human hepatocytes and the resulting differentiated phenotype may secure the proper expression of host factors essential for productive infection. However, some major limitations of the system are worth mentioning: first, no measurable spread of infection was noted and, as judged by immunostaining for HBV core protein, infection efficiency was in the range of 30%.
Second, although measurable amount of cccDNA was detected from around day 10 post infection and seemed to be rising, other measures of active gene expression and replication, such as pre-genomic RNA level, HBeAg as well HBsAg levels peaked at around day 16 post infection and declined rapidly thereafter. Third, medium collected from infected cells was not able to re-infect naïve cells, suggesting that viral production by the system was not robust enough to produce substantial infectious viral inoculum. However, despite these limitations, the system provided some important information regarding the activation of the innate immune response following HBV infection. Specifically, in addition to a detectable amount of both interferon (IFN)α and IFNβ, several anti-viral interferon-stimulated genes (ISGs) products, such as viperin, cGAS, and ISG15 among others have been induced in a temporal manner following HBV infection. This implies that, despite the general belief supported by few in vivo observations [69] of HBV being a "stealth virus" [70,71], this might not hold true at least in the context of the MPCC system. Interestingly, this observation is in line with other studies mainly performed in HepaRG cell line, suggesting that HBV infection is implicated in activation of the innate immune response [72,73]. However, further studies should better address this issue by focusing on inherent differences between various infection systems and their impact on the ability of HBV to induce the innate immune response. In addition, it will be interesting to test HBV infection of the MPCC system in the context of a 3D, rather than 2D, culture since liver architecture and the position of the hepatocytes relative to their neighboring parenchymal and non-parenchymal cells may play an important role in their permissiveness to HBV [74]. In vitro Systems Based on Induced Pluripotent Stem (iPS) Cell-Derived Human Hepatocytes Induced pluripotent stem cells were first introduced by Yamanaka and colleagues, who forced the expression of a set of transcription factors in adult-derived cells [75,76]. The resulting pluripotent cells can remain genetically stable and self-renew in culture with the potential to be differentiated into cell lineages of all three germ layers including hepatocyte-like cells (HLCs) [77][78][79]. Notably, HLCs are typically similar to fetal hepatocytes and do not represent the full phenotypic spectrum of primary adult human hepatocytes [80]. Despite this caveat, iPS-derived HLCs have been shown to support the whole life cycle of HCV [81] and the hepatic stage of plasmodium infection, the causative agent of malaria [82]. Recently, HLCs have been also shown to support HBV infection [79]. Specifically, iPS cells were cultured and differentiated to HLCs over a 20-day differentiation process according to a well-defined protocol. A time-course experiment coupled to the differentiation process of those cells demonstrated that both a full activation of the transcription machinery and a robust expression of NTCP on the cells' surface are essential to achieve a productive infection, reflected by cccDNA production and HBsAg secretion. Interestingly, the shift point for the cells to become HBV permissive was at around days 18-20 of differentiation, which is the time of phenotypic switch from hepatoblast-like to fetal hepatocyte like cells. 
Given the recent discovery of small molecules that can further differentiate HLCs into a more mature phenotype that resembles adult human hepatocytes [83], it will be interesting to test whether HBV infection would be more robust under those conditions. Of note, and similar to the case of MPCCs, the susceptibility of HLCs to infection was largely dependent on silencing the type I interferon response by using a JAK inhibitor prior to and following infection. In addition, HBV infection of HLCs resulted in the induction of a set of anti-viral ISGs in a pattern similar to that observed with HBV-infected MPCCs. The establishment of an HBV infection system based on HLCs can serve as a platform to dissect important host factors essential for HBV infection and replication. This can be done by comparing the gene expression profile of cells just prior to and following the tipping point of HBV permissiveness, followed by the establishment of HLCs derived from iPS cell lines knocked down or knocked out for specific candidate genes. In accordance with this, the HBV receptor NTCP, known as one of the central factors induced during the late stages of HLC differentiation, provides at least a partial explanation for the late stage of differentiation at which cells become permissive for HBV infection. Recently, the p.Ser267Phe NTCP variant has been shown to confer resistance to HBV infection following genetic and epidemiologic analyses of a Han Chinese cohort [84]. The production of iPS cells from those patients' fibroblasts and their differentiation to HLCs could serve as an elegant platform to definitively prove the resistance of this variant to HBV infection in vitro. Chimeric Mice Models for HBV Infection Based on Human Hepatocytes In contrast to human hepatocytes, murine hepatocytes are not permissive for HBV infection even upon over-expression of the human homologue of NTCP and, therefore, the creation of a small animal model for HBV infection is a challenge. Long-standing in vivo models such as the HBV transgenic mice [85] are severely limited by their inability to tackle basic issues in HBV biology, such as viral entry as well as cccDNA formation and maintenance. The technical and conceptual progress made with the isolation and maintenance of primary human hepatocytes paved the way towards using these cells to create chimeric mouse models that can recapitulate the whole life cycle of HBV, a much-needed tool for studying HBV in vivo (reviewed in [86,87]). The basic idea is to implant freshly isolated primary human hepatocytes that can be stably integrated into the animal's liver parenchyma. For this, one should use immune-compromised animals to avoid an immunological response against the transplanted xenogenic hepatocytes and, at the same time, initiate limited liver damage to create a proper niche for the engraftment and propagation of the transplanted hepatocytes. The two best characterized models are the Alb-urokinase type plasminogen activator (uPA) transgenic mouse [88,89], in which sub-acute liver failure is induced by the uPA transgene, and the knockout fumarylacetoacetate hydrolase (FAH) mouse model [90][91][92], in which hypertyrosinemia and liver failure ensue unless the animals are protected by consuming the drug NTBC. Following their intra-splenic injection, the successful engraftment of human hepatocytes in the animals' livers usually takes several weeks, during which time measurement of serum human albumin levels can be used as an indicator of the magnitude of engraftment [93].
Both the uPA and the FAH-deficient mouse models have been shown to support HBV infection. Following inoculation, the virus gradually spreads in the engrafted human hepatocytes to ultimately infect the vast majority of engrafted cells [88,91,92,94]. The ability to use cryopreserved hepatocytes for liver engraftment not only made the system much more flexible and technically feasible for routine use, but also opened the door to studying viral biology and the effect of anti-viral drugs in the context of different genetic backgrounds. Furthermore, the human chimeric mouse models make it possible to use natural viruses derived from various sources, as well as recombinant mutated viruses, for infection experiments to study basic questions in HBV biology and in virus-host interactions. For example, a study performed in uPA-SCID mice showed that, following HBV infection, binding of the viral pre-S1 motif to the bile-acid pump NTCP results in major alterations in the expression of genes implicated in lipid metabolism and bile-acid synthesis [95]. Those findings emphasize the intimate link between HBV infection and liver metabolism [96]. Another recent study performed using the same animal model demonstrated that HBV and HDV co-infection results in a much more robust induction of ISGs as compared to HBV mono-infection [97]. These findings may provide a mechanism for the more severe liver damage frequently observed in co-infected patients and for the well-known phenomenon of HBV suppression by HDV among those patients. However, in vivo systems for HBV infection based on engrafted human hepatocytes still have their limitations; the experiments are expensive due to the high costs of both the human hepatocytes and the animals, the engraftment process is long and cumbersome, and the animals are deficient in most components of their immune system, precluding studies addressing interactions between the virus and the host adaptive immune system. Reconstitution of a functional immune system by taking both hepatocytes and immune cells from the same donor, to avoid an immunological response against the xenograft [98], is one example of current efforts made to create a humanized immune-competent mouse model to study hepatotropic viruses. Conclusions and Future Perspectives Although emerging in vivo infection systems for HBV hold much promise, there is still a crucial need for in vitro systems mimicking HBV infection to address key questions in HBV biology. However, in sharp contrast to the situation in vivo, a robust and long-lived HBV infection is extremely difficult to achieve in vitro in almost any cell culture system developed so far (summarized in Table 1). The reason for this discrepancy is not clear and is a subject for speculation and hypotheses, but recent technological as well as conceptual progress has advanced the development of more robust infection systems. The development of hepatoma cell-based cultures overexpressing the HBV receptor, NTCP, seems to have broken the long-standing barrier to infecting those easy-to-handle cell lines. Novel co-culturing techniques with immune cells hold promise for applying this system to studies of HBV immunobiology as well. Concomitantly, there is increasing effort to improve our ability to maintain primary human hepatocytes phenotypically stable for long periods of time or, alternatively, to produce hepatocyte-like cells using the powerful iPS cell technology.
Those systems hold promise to serve as a platform for HBV infection on a more physiologically authentic background. It is conceivable that many conclusions regarding HBV immunobiology and its interactions with the host, previously derived from artificial over-expression systems, will not prove to hold true following experiments involving more reliable in vitro infection systems. Thus, with the advent of more physiological and robust HBV infection systems, one can expect renewed discoveries alongside the fall of old concepts regarding this fascinating virus.
Table 1. Summary of cell culture and delivery systems for HBV infection (the original Advantages/Disadvantages/Comments column structure was flattened during extraction; entries are grouped here by system).
NTCP-expressing HepG2 and Huh7-based cell lines: Flexibility and easy handling. In most instances PEG is needed to enhance infection. Upon co-culturing with HBV-specific CD8 cells (trans-well system) the system can be used for immunobiology studies. No substantial viral spreading following infection; infection is short-lived and only a small amount of cccDNA is detected. Hepatoma cells are physiologically impaired in many intracellular pathways and functions, limiting their use as a platform for studying virus-host interactions.
Primary human hepatocytes: The gold standard host cell for HBV infection experiments. Phenotypically unstable in vitro and rapidly lose permissiveness for HBV infection. Large variability among hepatocyte donors. Short durability of infection.
Fetal human hepatocytes: Phenotypically close (but not equal) to primary adult human hepatocytes. Limited infection efficiency and apparent absence of viral spreading. The addition of DMSO may enhance viral replication. Co-culturing with hepatic non-parenchymal cells and subsequent addition of 2% DMSO leads to the formation of hepatocyte islands with prolonged phenotypic maintenance. Limited availability.
Micro-patterned co-cultured (MPCC) system: Preserves hepatocyte functions and viability over weeks following plating. Wide variability between donors in terms of HBV permissiveness. The system is based on micro-patterning of human hepatocytes in small islands of 200-400 cells each and co-culturing the cells with mouse fibroblasts. May serve as a platform for drug toxicity and drug interaction studies. Infection efficiency is low (30%), with no substantial spreading of infection. Fresh as well as cryopreserved hepatocytes can be micro-patterned. Inhibition of the innate immune response is required to achieve infection.
iPS cell-derived hepatocyte-like cells: Do not represent the full phenotypic spectrum of primary adult human hepatocytes (similar to fetal hepatocytes). The shift point for the iPS cells to become HBV permissive is at around days 18-20 of differentiation, which is the time of phenotypic switch from hepatoblast-like to fetal hepatocyte-like cells. Isogenic background. Needs a high degree of expertise; complicated protocols involved. May serve as a platform to dissect host factors essential for HBV infection and replication. Inhibition of the innate immune response is required to achieve infection.
Delivery vector systems (a recombinant HBV baculovirus system): Enables the delivery of a functional HBV genome into hepatoma cells, resulting in productive HBV replication, formation of infectious viruses and establishment of a detectable intracellular cccDNA pool. Bypasses the natural entry stage of HBV. The vector is produced in insect cells. Part of the host response to HBV infection might be masked by a non-specific response to the viral vector.
Conflicts of Interest: The authors declare no conflict of interest.
2016-03-14T22:51:50.573Z
2016-01-01T00:00:00.000
{ "year": 2016, "sha1": "f448ed10f1a6344c31f1dd904378ed7357639e62", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/1999-4915/8/1/21/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "f448ed10f1a6344c31f1dd904378ed7357639e62", "s2fieldsofstudy": [ "Biology", "Medicine" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
31648099
pes2o/s2orc
v3-fos-license
Myocilin Allele-Specific Glaucoma Phenotype Database Glaucoma, a complex heterogeneous disease, is the leading cause of optic nerve–related blindness worldwide. Since 1997, when mutations in the myocilin (MYOC) gene were identified as causing juvenile onset as well as a proportion of primary open-angle glaucoma (POAG), more than 180 variants have been documented. Approximately one in 30 unselected patients with POAG has a disease-causing myocilin mutation and it has been shown that firm genotype–phenotype correlations exist. We have compiled an online catalog of myocilin variants and their associated phenotypes. This locus-specific resource, to which future submissions can be made, is available online (www.myocilin.com; last accessed 28 August 2007). The database, constructed using MySQL, contains three related sheets that contain data pertaining to the information source, variant identified, and relevant study data, respectively. The website contains a list of all identified variants and summary statistics as well as background genomic information, such as the annotated sequence and cross-protein/species homology. Phenotypic data such as the mean ± standard deviation (SD) age at POAG diagnosis, mean ± SD maximum recorded intraocular pressure, proportion of patients requiring surgical intervention, and age-related penetrance can be viewed by selecting a particular mutation. Approximately 40% of the identified sequence variants have been characterized as disease causing, with the majority (~85%) of these being missense mutations. Preliminary data generated from this online resource highlight the strong genotype–phenotype correlations associated with specific myocilin mutations. The large-scale assimilation of relevant data allows for accurate comprehensive genetic counseling and the translation of genomic information into the clinic. Hum Mutat 29(2), 207–211, 2008. INTRODUCTION Worldwide, glaucoma is the leading irreversible cause of blindness and by the year 2020 it is estimated that approximately 80 million people will be affected [Quigley and Broman, 2006]. Primary open angle glaucoma (POAG; MIM# 137760) is a neurodegenerative disorder characterized by progressive excavation (cupping) of the optic disc with corresponding loss of peripheral vision, and is frequently associated with elevated intraocular pressure (IOP) [Hewitt et al., 2006b]. Although the prevalence of POAG increases with age, a subset of patients are diagnosed with a juvenile onset form (JOAG). In 1997, Stone and colleagues [Kubota et al., 1997; Polansky et al., 1997; Stone et al., 1997] identified mutations in the myocilin (MYOC) gene (MIM# 601652; formerly: trabecular meshwork-induced glucocorticoid response gene [TIGR]) in families affected by autosomal dominant JOAG and POAG. MYOC maps to the GLC1A locus at 1q24-q25 [Fingert et al., 2002]. The MYOC gene has three exons, encoding a 504-amino acid polypeptide, which has an N-terminal leucine zipper domain and a C-terminal olfactomedin-like domain. The majority of the identified disease-causing variants are clustered in the evolutionarily conserved olfactomedin domain of exon 3 [Fingert et al., 2002]. MYOC mutations account for most cases of autosomal dominant JOAG and approximately one in 30 unselected cases of POAG [Fingert et al., 1999]. MYOC-related glaucoma is predominantly associated with an elevated IOP and strong genotype-phenotype correlations exist within the spectrum of MYOC mutations [Alward et al., 1998].
Interestingly, many MYOC mutations appear to have arisen from a common founder [Baird et al., 2003; Faucher et al., 2002; Hewitt et al., 2007b]. MYOC is expressed ubiquitously in the eye and, despite some descriptions of nonsense MYOC mutations, haploinsufficiency of the MYOC protein has been excluded as the primary disease mechanism [Fingert et al., 2002]. Interestingly, POAG is not induced through genetically increasing or decreasing wild-type MYOC expression [Gould et al., 2004], and people homozygous for disease-causing variants do not necessarily manifest disease [Hewitt et al., 2006a; Morissette et al., 1998]. A gain-of-function disease model was suggested through the observation that mutant forms of the MYOC protein are misfolded and aggregate, akin to Russell body formation, in the endoplasmic reticulum of trabecular meshwork cells [Jacobson et al., 2001; O'Brien et al., 2000; Tamm, 2002; Yam et al., 2007; Zhou and Vollrath, 1999]. Trabecular meshwork cells are essential for the homeostatic regulation of aqueous humor outflow from the eye, and dysfunction generally manifests as elevated IOP. Shepard et al. [2007] have recently demonstrated that there is a mutation-dependent, gain-of-function association between human MYOC and the peroxisomal targeting signal type 1 receptor (PTS1R) caused by mutation-induced misfolding and exposure of a normally cryptic C-terminal binding site. It has been hypothesized that specific myocilin mutations may lead to different amounts of MYOC misfolding, with corresponding varying degrees of recognition by the ubiquitin-proteasome degradation pathway. A greater opportunity for mutant MYOC to interact with PTS1R may allow for poorer clearance from the trabecular meshwork endoplasmic reticulum and greater trabecular cell dysfunction, culminating in a higher IOP phenotype [Shepard et al., 2007]. Genetic screening offers an effective means for identifying people predisposed to disease development. Thus, a detailed understanding of the phenotypic variation associated with specific alleles is required. Locus-specific phenotypic databases offer a universal means for transferring clinically useful data, which can affect the clinical management of the individual's glaucoma and has implications for the screening of family members. This is particularly important in the case of a disease as common as POAG, in which primary care is provided by ophthalmologists who in turn will increasingly need ready access to updated information regarding the mutation spectrum and associated phenotypes. Herein we introduce a comprehensive online database (www.myocilin.com) of MYOC allele-specific phenotype information. This database has many potential benefits; for example, it should support ophthalmologists in solidifying presymptomatic diagnoses by aiding in the interpretation of genetic test results. The genotype-phenotype data will specifically enhance the accuracy of prognosis counseling, and this platform shall also provide a uniform yet dynamic format to facilitate comparisons and interpretations of future work. DATABASE RELATIONSHIPS AND WEBSITE STRUCTURE To ensure flexibility and ease of management, the database was constructed using the MySQL database package (www.mysql.com). This database contains three significant sheets, with the majority of information being stored in the tblstudy sheet. This sheet has one-to-many and many-to-one direct relationships to the tblvariant and tbcitation sheets, respectively (Fig. 1).
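To make the three-sheet layout described above concrete, the sketch below models it as relational tables. This is a minimal illustration only: it uses Python's built-in sqlite3 module as a stand-in for MySQL, the column names are assumptions drawn from the prose rather than the database's actual schema, and the reading that each tblstudy row links one variant to one citation is our own interpretation of the stated relationships.

```python
# Illustrative sketch of the tblstudy / tblvariant / tbcitation layout,
# using sqlite3 in place of MySQL. All names are assumptions for illustration.
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE tbcitation (
    citation_id  INTEGER PRIMARY KEY,
    pubmed_id    TEXT,
    first_author TEXT,
    year         INTEGER
);
CREATE TABLE tblvariant (
    variant_id   INTEGER PRIMARY KEY,
    dna_name     TEXT,   -- HGVS-style DNA description, e.g. c.1102C>T
    protein_name TEXT,   -- e.g. p.Gln368Stop
    gene_region  TEXT,   -- promoter, exon-1, intron-1, ...
    pathogenic   TEXT    -- disease-causing / non-pathogenic / uncertain
);
CREATE TABLE tblstudy (
    id               INTEGER PRIMARY KEY,  -- unique key for each submission
    variant_id       INTEGER REFERENCES tblvariant(variant_id),
    citation_id      INTEGER REFERENCES tbcitation(citation_id),
    design           TEXT,                 -- case-control / family-based / mixed
    n_cases          INTEGER,
    n_controls       INTEGER,
    mean_age_dx      REAL,                 -- mean age (years) at diagnosis
    sd_age_dx        REAL,
    mean_max_iop     REAL,                 -- mean maximum recorded IOP (mmHg)
    sd_max_iop       REAL,
    n_trabeculectomy INTEGER,
    penetrance_25    REAL,
    penetrance_50    REAL,
    penetrance_75    REAL
);
""")
con.close()
```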
The total number of control and glaucomatous case subjects recruited, as well as the number of case and control subjects identified as carrying a particular MYOC variant (as linked to the tblvariant sheet), is recorded on the tblstudy sheet. For identified variant carriers, phenotypic fields in the tblstudy sheet include: subject ethnicity, mean and SD age (years) at diagnosis, mean and SD maximum recorded IOP (mmHg), number of patients undergoing trabeculectomy, and penetrance at age 25, 50, and 75 years, as well as the age of the youngest diagnosed carrier, and the age of the oldest unaffected carrier. Given that there are no major structural differences between the optic disc in glaucomatous subjects who have MYOC mutations compared with individuals with non-MYOC-related POAG, such morphological data was not recorded in the database [Hewitt et al., 2007a]. The primary key for the tblstudy sheet (Id) serves to uniquely catalogue each submission and has no direct relationships within the database. The disease-causing status of each sequence variant is coded as a field in the tblvariant sheet and as a drop-down menu that allows the selection of the correct variant type. Two fields are used to document the name of each identified variant at both the DNA and protein levels. This information is used to determine the genomic location (nucleotide index), and its corresponding site in MYOC (e.g., promoter, exon-1, intron-1, etc). Data relating to the cross-protein and cross-species homology for the particular amino acid is determined by a "lookup" array according to the codon index or number. Additionally, within the tblvariant sheet, BLOSUM-62 matrix scores and particular amino acid properties are determined by similar arrays, according to the particular substitution [Henikoff and Henikoff, 1992]. The tbcitation sheet contains fields allowing for the identification of the submitted source. A hypertext markup language (HTML) script, which uses minimal embedded script so as to eliminate browser-to-browser variation, was written for the website. Using the heading bar, it is easy to navigate from the homepage to specific information such as identified variants, background genomic data, and summary statistics. The specific region of interest can be further investigated by using the ideogram for the MYOC gene. For example, when the user selects the third exon of the ideogram, they are automatically directed to the variants identified in this corresponding region. The total number of variants and citations recorded in the database is displayed at the top of the Variants web page. On this page, particular phenotypic and genotypic information can be viewed by selecting a specific variant (listed by genomic location). Additionally, one can navigate to the allele-specific information from the Statistical Summary page. NOMENCLATURE Variant designation has been based on the guidelines established by the Human Genome Nomenclature Working Group [Antonarakis, 1998; den Dunnen and Antonarakis, 2000; den Dunnen and Paalman, 2003]. The first nucleotide (A) of the initiator methionine codon is denoted nucleotide +1, with the nucleotide 5′ to this being numbered −1. Sequence variant descriptions were verified using the Mutalyzer v1.0.1 program (www.LOVD.nl/mutalyzer). DATA INTEGRITY Each study is classified by design as being either a case-control, family-based, or mixed case-control/family-based investigation.
The latter subtype designates studies that were initiated on a case-control basis and then extended to examine, in a cascade-screening manner, all of the mutation-carrying proband's available relatives. Identified variants are classified as being: missense, nonsense, synonymous; or by location: splicing, regulatory, or noncoding. Noncoding region variants are classified as substitutions, insertions, or deletions, complex rearrangements, or repeat expansions. Deletions, insertions, and indels are subclassified as small (<21 bp) or gross (>20 bp). There is often a publication bias toward the reporting of positive associations; thus, the pathogenicity of variants was assigned by an independent review of the full literature. In assigning pathogenic status to a given variant, the following issues are taken into consideration: the predicted disruption of protein translation (e.g., frameshift mutations and premature stop codons), the frequency of the sequence variant in the control (unaffected) populations, the location of the variant in the MYOC gene (i.e., cross-species conservation of coding sequence), evidence for partial segregation with the phenotype within a family, and, when available, results of solubility studies [Gobeil et al., 2006; Zhou and Vollrath, 1999]. Nonetheless, as further work is conducted the pathogenic status ascribed to rare variants may change. INITIAL DATA SOURCE Phenotype and mutation data have been compiled initially from the published literature. An online search of the literature was systematically conducted using PubMed covering all years from 1997 to May 2007. Search terms included: myocilin, MYOC, trabecular meshwork-induced glucocorticoid response, TIGR, glaucoma, and genetics. These identical search terms were used to establish a customized National Center for Biotechnology Information (NCBI) web service to facilitate the identification of future data. Articles cited in the reference lists of other manuscripts were also searched. All articles, independent of the language of publication, were included and data were extracted from English-translated abstracts. The database contains web-links to the citation source published by the National Library of Medicine (www.pubmed.com). SUMMARY OF BACKGROUND GENOMIC DATA The Genomic Information page displays the genomic sequence of MYOC and its cross-protein and species homology of the coding regions.
FIGURE 1. Structure of the myocilin database. The majority of information stored in this locus-specific dataset is contained in the tblstudy sheet. This sheet has one-to-many and many-to-one relationships to the tblvariant and tbcitation sheets, respectively (dashed lines). Phenotypic and genotypic information can be viewed by selecting a specific variant on the Variants web page or from the Statistical Summary page. [Color figure can be viewed in the online issue, which is available at www.interscience.wiley.com.]
The annotated sequence of human (Reference Sequence: NM_00261) and other animal species was obtained from the Ensembl database (www.ensembl.org) and the nucleotide sequence renumbered to conform to Nomenclature Working Group guidelines. Protein sequences were obtained from the protein knowledgebase Swiss-Prot (http://au.expasy.org/sprot), and the Basic Local Alignment Search Tool of the U.S. NCBI website (www.ncbi.nlm.nih.gov/BLAST). Protein and genomic alignments were performed using CLUSTALW, with a BLOSUM-62 protein weighted matrix, a gap open penalty score of 10, and a gap extension penalty score of 0.05 [Chenna et al., 2003].
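As an illustration of the scoring scheme quoted above, the sketch below runs a protein alignment with a BLOSUM-62 matrix, a gap open penalty of 10 and a gap extension penalty of 0.05. It uses Biopython's pairwise aligner as a simple stand-in for the CLUSTALW multiple alignment actually used by the authors, and the two peptide fragments are invented examples, not real MYOC sequence.

```python
# Hedged sketch: pairwise protein alignment scored with BLOSUM-62 and the
# gap penalties quoted in the text. Sequences are illustrative placeholders.
from Bio import Align
from Bio.Align import substitution_matrices

aligner = Align.PairwiseAligner()
aligner.substitution_matrix = substitution_matrices.load("BLOSUM62")
aligner.open_gap_score = -10.0    # gap open penalty of 10
aligner.extend_gap_score = -0.05  # gap extension penalty of 0.05

human_fragment = "MGLWNLLLVTALPA"  # hypothetical fragment, not the MYOC sequence
mouse_fragment = "MGLWKLLLITALPA"  # hypothetical fragment

alignment = aligner.align(human_fragment, mouse_fragment)[0]
print(alignment)
print("score:", alignment.score)
```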
DATABASE SUMMARY STATISTICS A large volume of clinically relevant data has been compiled through this locus-specific database. A summary of the relevant information generated for each allele is displayed above the list of contributing resources listed on each variant-specific page. Over 180 variants have been identified within the exons and surrounding noncoding regions of the MYOC gene. Approximately 40% of the identified variants have been characterized as disease-causing, with the majority (85%) of these being missense mutations (Fig. 2). This latter information can be viewed on the upper section of the Statistical Summary page. The lower section of the Statistical Summary page contains the frequency and corresponding genomic location of disease-causing variants identified in case-control designed studies. The relative disease-related prevalence of each variant is displayed in a long format and, to the left of the page, a floating ideogram of the MYOC gene is provided to facilitate navigation. To overcome issues relating to allele-specific penetrance or expressivity, and recruitment bias, phenotypic information is only extracted from investigations that had a family-based or mixed case-control/family-based design. For all phenotypic data, the weighted values were calculated according to the number of mutation-carrying subjects phenotyped, thereby providing the most clinically appropriate representation. An example of the data that can be extracted from the database is displayed in Table 1, highlighting the strong genotype-phenotype correlations. Age-related penetrance was determined for three specific age groups (25, 50, and 75 years). This value was calculated as the proportion of mutation-carrying subjects who were diagnosed below the specified age, divided by the total number of people diagnosed below the specified age plus all mutation carriers older than this who were not diagnosed with the disease by that respective age. In reviewing these data, it is acknowledged that this provides a relatively simplistic estimate of penetrance and does not take into account the fact that POAG is a disease spectrum and that separate studies often utilize differing diagnostic criteria to define affected status. To provide a further clinically useful parameter, information relating to the age of the youngest person at diagnosis and the age of the oldest clinically undiagnosed person was also included. Figure 3 demonstrates the wide variation in categorized age-dependent penetrance calculated from the weighted means to which more than one published family-based study contributed. FUTURE DATA SUBMISSION AND CONTROL Genotypic and phenotypic information can be submitted for upload into this database by using the Submit Variant page. Upon entry to this data submission page, the user is asked for his/her name, institution, and contact details. Then, after entering data such as the study design and the number of control and glaucomatous case subjects, the user is directed to a form that contains data pertaining to the specific variant identified and its associated phenotypic data. Information for each specific allele is entered separately and additional allelic variants can be incorporated by selecting "Add another variant" once the initial data have been submitted. Submissions will be verified for nomenclature and then emailed back to the submitter to allow for final approval before being manually uploaded using the control panel of the myocilin.com domain host.
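The age-related penetrance estimate defined in the Database Summary Statistics section above can be written out as a short calculation. The sketch below is illustrative only: the carrier records are invented, and the representation of each carrier as a (current age, age at diagnosis) pair is our own simplification of the data actually held in the database.

```python
# Minimal sketch of the age-related penetrance estimate described above.
# Each carrier is (current_or_censoring_age, age_at_diagnosis_or_None).
def penetrance_at(carriers, age_cutoff):
    diagnosed_below = sum(
        1 for _, dx_age in carriers
        if dx_age is not None and dx_age < age_cutoff
    )
    undiagnosed_older = sum(
        1 for current_age, dx_age in carriers
        if current_age >= age_cutoff and (dx_age is None or dx_age >= age_cutoff)
    )
    denom = diagnosed_below + undiagnosed_older
    return diagnosed_below / denom if denom else float("nan")

# Toy data: (current age, age at diagnosis or None if unaffected)
carriers = [(62, 38), (55, 51), (47, None), (80, None), (29, 24), (73, 60)]
for cutoff in (25, 50, 75):
    print(cutoff, round(penetrance_at(carriers, cutoff), 2))
```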
To maintain patient confidentiality, submitted data will be reviewed to ensure no identifying information (such as subject name or date of birth) has been entered. The source of unpublished submissions can be identified by the listing of contributing individuals or research groups and their respective institutional address. CONCLUSION The ability to directly provide patients, researchers, and clinicians with relevant information about particular disease genotype-phenotype correlations advances the possibility of individualized medicine utilizing genotype analysis becoming a reality. Researchers must be encouraged to collect data from a large number of patients and ensure that its assimilation is in a publicly accessible, user-friendly format. With the increasing availability of genotype-phenotype databases, individual anecdotal evidence for allele-specific disease natural history will be surpassed by much larger datasets providing increased validity to interpretation of genetic tests and enabling clinicians to provide the best possible interpretation of results. We believe integration of this allele-specific phenotypic data relating to MYOC will provide a useful resource for clinicians and researchers alike.
2018-04-03T01:57:07.997Z
2008-02-01T00:00:00.000
{ "year": 2008, "sha1": "fa8dac1f59c9a85de4599313a2d920a431635c54", "oa_license": null, "oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1002/humu.20634", "oa_status": "GOLD", "pdf_src": "Wiley", "pdf_hash": "8bd60bcaf9368560dc612d58301f2925a45b2d44", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine", "Biology" ] }
264955849
pes2o/s2orc
v3-fos-license
A Novel Operations-Based Application of Natural Language Processing to Enhance Aircraft System Troubleshooting Troubleshooting an aircraft system is difficult. With flights often logging hundreds, or even thousands, of codes, the task of isolating the root cause of an issue is a complex undertaking. By leveraging Natural Language Processing techniques such as Word2Vec, artificial intelligence can be used to extract patterns from the context of these faults. Treating the fault codes issued by the on-board system in an aircraft as the "words" which make up a body of text, a model can be trained to understand the patterns of this language in a similar approach to how natural language is processed by computers to discretize the order and structure of human language. By assessing the cosine similarity of vectorized fault sequences used to train the model, faults occurring in similar sequences can be extracted, resulting in improved troubleshooting. The result of this effort is a tool to aid maintainers in isolating faults by quantifying the relations between the different codes and analyzing the patterns in which they occur. The benefits of such a tool include significant reduction in time and cost in aircraft maintenance by avoiding unnecessary exploratory maintenance. BACKGROUND A major cost driver in the life of an aircraft is the cost of maintenance. Parts, labor, hangar space, and many other factors contribute to a significant expense in conducting various repairs, inspections, and refurbishments (Heisey). Although many of these costs are unavoidable, there is significant interest in the aerospace community in driving down the cost of maintenance by using more advanced analytics to avoid unnecessary maintenance. Traditionally, aircraft maintenance can be separated into two categories: scheduled and unscheduled. Scheduled maintenance includes replacement of life-limited components, inspections, and many other tasks determined to be necessary to ensure an acceptable factor of safety for flight. These requirements are typically defined in a maintainer's manual and include a list of tasks and part replacements that are tracked against usage metrics. Typical metrics that initiate these repairs are flight hours, number of landings, or a calendar date. Although there are efforts underway to optimize the frequency of such tasks (such as Condition Based Maintenance, CBM+), they are rigid and do not leave significant room for improvement. For the scope of this paper, these tasks will not be the focus of the maintenance improvement effort. The second traditional category of aircraft maintenance is unscheduled events. This refers to reactive maintenance to address a failure that happens unexpectedly. In operation, when a failure occurs, the fault system aboard an aircraft reports a fault code. While these automated reports can have corrective action suggestions, the situation is typically reviewed by a maintainer once the aircraft is grounded to ensure proper action. If action is needed, a maintenance task can be initiated and performed to address this failure. Upon completed repair, the aircraft can be returned for flight usage.
Problem Overview The focus of this paper will be the troubleshooting process of assessing unscheduled failures. One of the main challenges in isolating a fault on an aircraft system is the sheer number of fault codes reported. At various points throughout startup, taxi, and flight, both fault and status codes are recorded. These codes are the language of the aircraft, and they communicate valuable information about the status of the system. The function of these codes can differ greatly. Some codes indicate nominal status, while others offer insight into major mechanical problems. This is similar to the dashboard of a car, which is a combination of status (engine RPMs, engine temperature, gas level) and issues that require the driver's immediate attention (check engine light, seat belt unbuckled). With thousands of codes in the library of possible aircraft fault codes, noise becomes a significant problem when trying to isolate a fault. It quickly becomes difficult to read and understand the reported codes and extract the required action. For many aircraft systems in the defense industry, it is common for a single flight to incur thousands of these codes. It is also common for a singular root failure to cause the issuance of multiple fault codes, which introduces the problem of sympathetic faults. Sympathetic faults are fault codes that occur downstream of a failure in a parent system. An illustration of sympathetic faults is a domestic power outage. When experiencing a power outage, all lights in a home would go out. Although the first thought may be that the bulbs or the lamps themselves may have broken, this is merely a symptom of the actual fault. Replacing the lamp or its bulb would not solve the issue and would be an inefficient use of time and resources. A helpful tool in this situation might be a model that has studied past issues and recognizes the connection between power outages and the lights going out. This simple example shows the importance of learning the patterns of past failures to inform future action and reduce unnecessary maintenance. Although in this example the pattern would be very easy for a human to recognize and would not require a model, more complex failure modes, such as those found in aircraft systems, are often much less obvious. To compound the importance of this problem, it is common in deployed environments for aircraft maintainers to be young and inexperienced. This lack of expertise makes troubleshooting these systems more difficult. If multiple faults were indicated, each with a different prescribed corrective action, an undesired response would be for the maintainers to progress through the list, performing maintenance tasks until the issue is resolved. Because of the significant cost and time associated with exploratory maintenance, any information that can be offered to the maintainer to aid in fault isolation is extremely valuable. To address this, an effort is being made to look at the historical patterns of faults and establish relationships between them. If certain fault codes commonly occur in groups, this often indicates a common root cause. Thus, by quantifying these relationships with sufficient historical data, a distinction can be made between root cause faults and sympathetic faults.
Literature Review It is difficult to fully evaluate the existing solutions to this problem, as many approaches to these types of problems (including the one described in this paper) are proprietary information, kept as industry trade secrets. However, there are some approaches documented through publication that warrant mentioning. Ezhilarasu et al. describe an approach that considers the interactions between sub-systems and their effect on the overall health of an aerospace system (Ezhilarasu, 2019). By using AI to understand the connections between these subsystems, an IVHM (Integrated Vehicle Health Management) system can be established to monitor the health of various components. This groundwork paves the way toward CBM (Condition Based Maintenance), which eliminates periodic maintenance entirely, only performing maintenance tasks when needed. This approach uses rulesets and an inference model to determine overall system health. While this approach may be effective, one major difficulty with applying such a model is the domain knowledge required to set it up. An intimate understanding of the systems and the nature in which they interact makes this approach very laborious to stand up. Kala et al. document a method in which natural language processing is used in the aerospace domain to organize and understand maintenance log reports (Kala, Analyzing Aircraft Maintenance Findings with Natural Language, 2022). Since many fault reports are written manually by maintainers, it is complex to synthesize these natural language datasets. Using a technique such as natural language processing can help to quantify the meaning of these write-ups and use this information to inform future decisions on corrective action. Summary of NLP Methods In exploring an appropriate AI method to apply to this problem, the nature of the fault codes must first be analyzed. These fault codes, issued automatically by the on-board system, are the language of the system. In many ways they parallel human language, as each code has meaning in itself, but it does not communicate valuable information until the context of its occurrence is seen. A string of these fault codes issued by the system can be thought of as a sentence, made up of many words in a specific order that communicate the state of the system. Because of these similarities, Natural Language Processing (NLP) techniques were investigated to see if they could provide value.
One solution explored was Word2Vec, a pip-installable package that leverages the numpy Python library (word2vec Tutorial, 2022). This technique was first published in 2013 (word2vec, 2023) and used a neural network model to explore the word associations in a large body of text. Word2Vec is not a single algorithm, but instead a set of model architectures that vectorize the individual words in a body of text by considering their surrounding context and inferred meaning. These types of models are notably used for text prediction. A famous example of this can be seen in smartphone technology, where AI will predict the next word in a sentence based on the previous words typed and personal vocabulary history. These predictions work through a neural network with one or more hidden layers, shown below in Figure 1. In this figure, a variable input layer can intake C context words. Using one or many hidden layers, weights are applied to the connections between these matrices in order to make predictions on the target output layer. As this is established technology, further details on neural networks will not be explored in this paper. For more details on the operation of neural networks, please read further in the reference section of this paper. DATA PRE-PROCESSING To explore this theory, a dataset of historical maintenance data was used. In order to clean and sort the data into a format where it can be used to train an NLP model, the data was first loaded into a Domino workspace. Domino is a data science platform used to aid in the heavy computing of model training (Domino Data Labs, 2023). The data was first arranged into chronological order. It is important for the word context that the faults are time-ordered, as the position of the fault in the list of faults greatly affects the context and the prediction of similarity. The data was next segmented by flight, with each unique flight being a separate file to train the model. The dataset was then reduced to an ordered list of faults, leaving only the fields of fault codes. This reduction drops the timestamps of the faults as well as all other information surrounding them, but retains the chronological order of their occurrence. With the input data prepared, a model can then be trained. A sample of the data format can be seen below in Table 1. Once the data is sorted and cleaned into a format conducive to NLP, Word2Vec can be used to train a model on this corpus. The idea behind this approach is to treat a fault code dataset as natural language and perform NLP using existing algorithms. In Table 2 below, the translations between these two domains can be seen. In this context, a fault code will be treated as a word, a flight full of codes will be treated as a sentence, and the entire dataset of all flights across all aircraft will be treated as the corpus. By using an approach like this, a problem with numeric data can be approached using NLP methods. No filtering or windowing of the data was used. It is acknowledged that this could lead to some class imbalance, as the faults likely do not occur in proportional frequencies. Word2Vec is a strong modeling approach for these types of datasets, as the method of training the embeddings handles unbalanced data well (word2vec Tutorial, 2022), but this is an area that will receive additional attention in the future development of this project.
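A minimal sketch of this idea is shown below: each flight's time-ordered fault codes form one "sentence" and the set of flights forms the corpus on which a Word2Vec model is trained. The fault codes and hyperparameter values are illustrative assumptions, not the codes or settings used in the actual project, and the snippet assumes the gensim 4.x implementation of Word2Vec.

```python
# Illustrative sketch: training Word2Vec on fault-code "sentences" (gensim 4.x).
# The codes below are placeholders with a 3-digit subsystem prefix.
from gensim.models import Word2Vec

flights = [  # toy corpus: one list of time-ordered fault codes per flight
    ["411-001", "411-017", "233-905", "233-906"],
    ["411-001", "233-905", "233-906", "118-040"],
    ["118-040", "411-017", "233-905"],
]

model = Word2Vec(
    sentences=flights,
    vector_size=100,   # dimensionality of the fault-code embeddings (assumed)
    window=5,          # how many surrounding codes define "context" (assumed)
    min_count=1,       # keep even rarely seen codes
    sg=1,              # skip-gram architecture (assumed)
    workers=4,
)
model.save("fault_word2vec.model")
```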
METHODOLOGY With the data prepared, the model can be trained. Domino was used to perform the model training. The result of this effort is a Word2Vec model that contains the vectorized word embeddings of the training dataset. This model can then be used to extract fault code relationships and similarity to aid in troubleshooting. A diagram of this workflow can be seen below in Figure 2. Training Data Metrics A large dataset of historical maintenance logs was used to train the model. Due to data privacy, the specifics of the data cannot be disclosed. However, below in Table 3 are metrics on the dataset used to train this model. Model Cosine Similarity With the word embeddings trained in the Word2Vec model, the similarities between these embeddings become useful. This information is particularly useful for troubleshooting, as it gives an indication as to which faults occur in similar situations and may have common root causes. The way this information is extracted from the model is through the similarity metric built into Word2Vec. Similarity computes the cosine similarity between two words used to train the model, as seen in Equation 1 (Introduction to Word Embedding and Word2Vec, 2023). sim(A, B) = cos(θ) = (A · B) / (‖A‖ ‖B‖)   (1) From this equation, a high similarity score will occur when two words have high cosine similarity, indicating a similar trajectory of their vectors. These values, referred to from here on as similarity "strengths", indicate the level of correlation between two faults. The intention of this feature is to identify synonyms in natural language. However, in our use case, this similarity can be used to relate fault codes to each other. For each fault code in the model, the top 100 strengths were reported, with the only information extracted being the target fault, the associated faults, and the strength of their context similarity. Note that these relationships are symmetric, so the similarity of two faults is equal in both directions. One important aspect of quantifying these cosine similarity scores is that these vectors are related to each other through context. A high strength in this respect refers to two codes that have similar surrounding context. When other codes and indicators are present both before and after the fault of interest, it creates a specific context. Our goal in highlighting these similarity strengths is to identify faults that may have common causes or create similar downstream effects. The hypothesis is that these patterns of fault relation will give insight into root cause failures and help reduce noise in troubleshooting. RESULTS Since the objective of this work is to provide a tool for maintainers to aid in troubleshooting, the results of this experiment are not numeric metrics but instead specific examples of instances where this tool provided insight that could have led to cost-saving action. Due to their proprietary nature, those specific examples will not be shown in this paper. Instead, a general overview of the tool will be shown with acknowledgement of specific applications of the data it provides.
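Under the same assumptions as the training sketch above (gensim 4.x, placeholder fault codes), the similarity strengths described in the Methodology section can be pulled from a trained model as follows; Equation 1 is also recomputed explicitly from the raw vectors as a check.

```python
# Illustrative sketch: extracting the top similarity "strengths" for one fault.
from gensim.models import Word2Vec
import numpy as np

model = Word2Vec.load("fault_word2vec.model")
target = "411-001"  # placeholder code, not a real fault identifier

# Top related faults by cosine similarity, as built into gensim.
for related_code, strength in model.wv.most_similar(target, topn=100):
    print(related_code, round(strength, 3))

# The same cosine similarity (Equation 1) computed from the raw vectors.
a, b = model.wv[target], model.wv["233-905"]
print(float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))))
```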
The main method of visualizing these similarity strengths was through the use of a Circos plot. A Circos plot is a visualization tool developed by Krzywinski to display data in a circular layout (Krzywinski, 2009). Although originally created with the field of biology and genomes in mind, the data format has many applications. One advantage of a visualization method like this over a similar word-embedding visual, such as a t-SNE 2D projection, is clarity. Using a Circos plot, only the top several faults can be displayed, showing clear connections between these faults through the connecting bands. This approach was chosen to quickly communicate the strongest relationships in the dataset and avoid a noisy plot that may be difficult to interpret. Note that all data shown in the following figures has been renamed and sanitized. Figure 3: The Circos Plot Figure 3 shows a Circos plot with a single fault code selected and highlighted in green (dash_bio.Circos Examples and Reference, 2023). The bands emanating from this fault connect to related faults, with these faults being arranged in descending order clockwise from the origin based on their cosine similarity strengths. This means that going around the circle clockwise will display the strongest associated fault code to the target first, followed by the second strongest, etc. The width of the band connecting two faults indicates the strength of connection. The color of the boxes maps to the first 3 digits of their fault codes, which indicate the subsystem from which they originate. This color scheme gives a clear indication of strong relationships across subsystems, a phenomenon that is particularly hard to observe without the aid of pattern recognition tools such as NLP. The maintainer may further be interested in not only the strength of similarity between the selected fault and its top relatives, but also among those relatives themselves. To address this, a feature was added to also display the strength bands between the various fault codes in the plot, as seen below in Figure 4. Some of the weaker bands in Figure 4 are hidden to avoid visual clutter. This was done by adjusting a filter to only display strengths between two fault codes that exceed a defined threshold. This is a vital feature to avoid cluttering the plot with excessive bands, which would obscure the information being communicated. For additional information, the user can hover over a specific band, which displays the source and target, as well as its corresponding strength. CONCLUSION This project successfully demonstrated that a dataset such as fault codes on an aircraft system can be analyzed using NLP methods. By training a Word2Vec model using only an ordered list of fault codes, the context of their issuance can be observed and quantified. Many examples were found where this tool highlighted a connection between two fault codes that may have proved useful for a maintainer. In these situations, incorrect action was taken that may have been avoided with proper knowledge of the relationships between these faults. Due to ever-increasing system complexity, this problem is only becoming more relevant, and an automated AI tool to address this could be extremely valuable. Any information offered by a tool that can help a maintainer isolate the cause of a failure will lead to significant cost reduction. While autonomous fault detection is not currently fully actualized, this advancement is a vital stepping stone towards that capability.
CONTINUED WORK This project is ongoing at Lockheed Martin. While the current implementation serves only as a tool for a user to aid in the troubleshooting and decision-making process, future uses of this model may include fully automating a pattern-matching system to connect current situations to past corrective actions. If the model could recognize the behavior it is seeing in real time and connect this behavior to a past issue which led to an active resolution, the model could recommend the same action with a given confidence. Although reducing cost is a main objective, there are impressive benefits to technology like this beyond the fiscal. The safety of an aircraft could also improve, as the chances of unexpected events would be reduced due to the proper addressing of maintenance issues. As this work continues in industry, the benefits of such technology offer exciting opportunities for the future of health management in the Aerospace community.
Figure 1: A Multi-Input Neural Network.
Figure 2: Concept of Approach. Further information on how the word embeddings can be used to improve troubleshooting can be seen in the sections below.
Figure 4: The Circos Plot with Internal Bands.
Table 1: Example Data for Maintenance Log. Note that the additional fields of the flight number, the aircraft number, and the timestamp were not used in the model training but are included in the table for clarity of the data structure.
Table 2: The Bridge Between NLP and Aircraft Faults.
Table 3: Metrics of Training Data.
STOCK MARKET RETURNS AND EXCHANGE RATE MOVEMENTS IN A MULTIPLE CURRENCY ECONOMY : THE CASE OF ZIMBABWE This study seeks to provide new evidence on the stock market and exchange rate relationship in Zimbabwe, a country that does not have its own sovereign currency. The bivariate vector autoregressive approach is used to establish the relationship between the stock market and exchange rates. The results show that no relationship exists between the stock market and the proxy exchange rate. The findings contradict the expectation that exchange rate movements would influence domestic stock market prices. This finding is especially interesting given the fact that Zimbabwe uses a basket of currencies for transacting purposes, albeit with the United States dollar as a major currency for reporting and stock market pricing purposes. The findings provide new evidence of a disconnect between the stock market and exchange rate movements. This has implications for international portfolio diversification and the use of foreign currency as an asset class in an economy using a multiple currency system. INTRODUCTION The impact or reverse thereof of stock markets on exchange rates has received considerable attention in the literature over the past couple of decades.Numerous approaches have been used to establish such a relationship and results remain obscure, as they are based either on a specific country or bloc of economies.The stock market-exchange rate nexus discourse has continued to receive attention since the work of Dornbusch and Fischer (1980) on assets pricing and exchange rates.Existing studies either focus on individual countries' stock markets and exchange rates or empirically test several stock markets within trading blocs and how they relate to exchange rate movements.The typical study on the stock market and exchange rate relationship has been conducted in countries that manage their own foreign exchange rate systems.This study takes a different angle in that the relationship is tested based on a proxy exchange rate due to the multiple currency regime system adopted by Zimbabwe in 2009.In this study, a proxy exchange rate is used to represent the basket of currencies used as the medium of exchange in Zimbabwe. Zimbabwe abandoned its currency in favour of the multiple currency system on 30 January 2009 (Mutengezanwa, Mauchi, Njanike, Matanga, & Gopo, 2012) after experiencing close to a decade of economic and financial turmoil (Nakunyada & Chikoko, 2013).After adopting a multiple currency system Zimbabwe has been using a basket of currencies that include the United States (US) dollar, South African rand, Botswana pula, euro and the pound sterling.However, the two main currencies used are the US dollar and the South African rand (Mutengezanwa et al., 2012;Pindiriri, 2012).The US dollar has been used as the currency for financial and fiscal reporting purposes (Pindiriri, 2012).Since the adoption of the multiple currency system coupled with reduced company productivity, Zimbabwe has relied on South Africa for imports (Nakunyada & Chikoko, 2013;Pindiriri, 2012).Consequently, the US dollar/rand exchange rate is the main currency exposure facing Zimbabwean investors, firms and investors seeking international portfolio diversification.Zimbabwean firms and investors face currency risk, and hedging such risk is costly given that the behaviour of the exchange rate is not within the control of the country's monetary authorities (Brixiová & Ncube, 2014). 
Established in 1894 and once ranked second to South Africa in terms of market breadth and depth, the Zimbabwe Stock Exchange (ZSE) (Sibanda & Holden, 2013) has had its successes and failures owing mainly to the performance of the political economy.The ZSE has two main indices, the industrial index and the mining index.The exchange has experienced a drop in the number of firms listed from 76 in 2010 to 58 in 2015.This decline in the number of listed companies is mainly due to suspensions and delisting arising mainly from viability issues (Rusvingo, 2014).However, investing on the Zimbabwean stock market could be taken as a currency hedge in a similar way as investing in US dollar-denominated stocks is.A currency hedge is similar to currency risk hedging where the ultimate goal is to minimise portfolio volatility (de Roon, Eiling, Gerard, & Hillion, 2012).Stock prices on the stock exchange are quoted in US cents and company reporting is also in US dollars (Zimbabwe Stock Exchange, 2013).Since stock prices are quoted in US dollar terms, investing in such stocks should theoretically yield the same returns as would be achieved in investing in US dollar quoted stocks, implying the stock market can act as an effective haven for US dollar rate movements.This study therefore seeks to establish the relationship between the ZSE industrial index and the US dollar/rand exchange rate for a period spanning February 2009 to May 2015.The ZSE industrial index was rebased in February 2009 following the adoption of the multiple currency system.It is important to note that the ZSE halted trading at the peak of the hyperinflation period from August 2008 to January 2009 as the then Zimbabwean dollar depreciated tremendously in line with the then astronomically high inflation levels. LITERATURE REVIEW Several studies have looked at the relationship between stock prices and exchange rates, albeit without reaching consensus regarding results.This has been particularly so in emerging market economies, where the influence of stock prices could be subjected to international macroeconomic factors (Gay Jr, 2011;Ehrmann, Fratzscher, & Rigobon, 2011;Muhammad, Rasheed, & Husain, 2002).The dynamic relationship between stock (assets) prices between two countries could also be a function of changes in the countries' exchange rates (Ehrmann et al., 2011;Dornbusch & Fischer, 1980).Thus, if stock prices in two different countries move together then the stock price movements of these two countries could be influenced by these countries' exchange rate movements (Kisaka & Mwasaru, 2012).The relationship between stock markets and exchange rates has been demonstrated using different methods, and results tend to vary accordingly.Cross-correlations between the stock market and exchange rate have been observed in China, for example, suggesting that the relationship varies with time and is dependent on reforms of the exchange regime (Cao, Xu, & Cao, 2012). 
A negative relationship between the stock market and exchange rates implies that increases in stock market prices lead to an appreciation in the real exchange rate (Moore & Wang, 2014).However, in a country without its own currency this is intuitively different, as the exchange rate is determined outside the country's economic fundamentals.The relationship can be unidirectional from the stock market (exchange rate) to the exchange rate (stock market) or bidirectional (Liang, Chen, & Yang, 2015).The relationship is, however, country specific and evidence from emerging markets shows that changes in the exchange rates causes changes in the stock prices (Liang et al., 2015;Gay Jr, 2011;Moore & Wang, 2014).Interestingly, results differ even in advanced economies, with the US and United Kingdom (UK) providing evidence that suggests unidirectional causality from the stock market to the exchange rate showing the influence of the US and UK stock markets on exchange rate movements (Caporale, Hunter & Ali, 2014).However, Caporale et al. (2014) also showed that Switzerland and the euro area demonstrated bidirectional causality, suggesting that stock market and exchange rate movements influence each other (Caporale et al., 2014).When using a long-run co-integration model, a long-run causal relationship from stock market to exchange rate has been observed in the European Union (EU), while only a short-run relationship existed in the US during the financial crisis of 2008-2012 (Tsagkanos & Siriopoulos, 2013).The difference is attributed to the nature of economic and political connectivity in the EU compared to the US (Tsagkanos & Siriopoulos, 2013).This evidence clearly shows that the relationship between the stock market and exchange rates remains contemporary and relevant in asset diversification in that stock markets could be used to hedge against currency risk or simply both the stock market and foreign currency as standalone assets in a portfolio. Transmission of shocks from one asset to another asset, a phenomenon known as contagion, tends to influence asset prices (Caccioli, Shrestha, Moore, & Farmer, 2014).Consistent with contagion, stock market prices tend to exhibit co-movement traits with exchange rates, especially during periods of crises as compared to stable periods (Lin, 2012).This is in line with the stock-oriented models of exchange rate (Frenkel, 1987).However, the uncovered equity parity, a condition which asserts countries with stock markets that are expected to perform strongly should experience a currency depreciation, was tested in 43 countries and evidence showed no relationship between stock prices and exchange rates, suggesting that the two variables may not influence each other (Cenedese, Payne, Sarno, & Valente, 2014).Although causality may exist (unidirectional or bidirectional), such relationships vary across time scales and frequency (Tiwari, Bhanja, Dar, & Islam, 2015).Similarly, a study on the BRICS (Brazil, Russia, India, China, and South Africa) bloc confirms that stock markets tend to influence exchange rates in both turbulent and stable periods (Chkili & Nguyen, 2014).Of the BRICS bloc, Chkili and Nguyen (2014) found that it is only in South Africa that the stock market returns do not impact on exchange rates, while exchange rates do not impact on the stock market returns in the entire bloc. 
In international portfolio management, the total dollar return on an investment is calculated as a function of the foreign currency return multiplied by the currency gain or loss (Shapiro, 2010). The appreciation of the domestic currency further influences capital flows, and in particular portfolio investments by both private and public investors (Combes, Kinda, & Plane, 2012). Portfolio flows are, however, more volatile than other capital flows such as remittances and foreign direct investment (Combes, Kinda, & Plane, 2012; Jongwanich & Kohpaiboon, 2013). It is therefore important to ascertain the relationship between the stock market and exchange rates in a country without its own currency. This relationship is essential to the understanding of asset allocation and risk hedging, and hence portfolio diversification. A country without its own currency does not have monetary policy flexibility; hence interventions in the foreign exchange market by the central bank are limited (Brixiová & Ncube, 2014). The next section describes the sources of data and the methodology used in this study.

RESEARCH DESIGN

The study uses the ZSE industrial index and the US dollar/rand exchange rate spanning February 2009 to May 2015. Monthly data on each series are obtained on the last trading day of the month to avoid survival bias and any adjustments to the data. Monthly stock market returns are assumed to capture economic and business conditions in a country (Brooks, 2008). Observations over 74 months are obtained. The ZSE industrial index data are obtained from the ZSE, while the US dollar/rand exchange rate data are obtained from the South African Reserve Bank (SARB) website. The US dollar/rand exchange rate data are obtained as rand/US dollar exchange rates, and these rates are converted into a direct quotation taking the US dollar as the home currency for Zimbabwe. The exchange rate is quoted as midpoints obtained by the SARB from banks in South Africa (South African Reserve Bank, 2015). Nominal exchange rates are used instead of real exchange rates; the fact that Zimbabwe does not influence the currency through its own monetary and fiscal interventions means that using real exchange rates is not possible. The US dollar/South African rand exchange rate is used as a surrogate exchange rate to represent the basket of currencies being used in Zimbabwe (multiple currencies). Zimbabwe adopted the US dollar as the reporting and official transacting currency in 2009 (Nakunyada & Chikoko, 2013). This exchange rate is chosen because the US dollar is the official reporting currency and South Africa is Zimbabwe's major trading partner (in the form of imports into Zimbabwe) (Pindiriri, 2012).

Stock market returns are determined using the ZSE industrial index returns (ZSE). Stock market returns represent the average reward for investors who choose equity investments as an asset class. Exchange rate gains/losses are calculated as returns on exchange rates (EX). EX is measured as the dollar price of foreign currency, where a negative value means the dollar has appreciated and a positive value means the dollar has depreciated against the South African rand. For both series, the returns are calculated as follows:

R_t = (P_t - P_{t-1}) / P_{t-1}

where R_t is the nominal return of the ZSE (EX) at month t; P_t is the value of the ZSE (EX) at month t; and P_{t-1} is the value of the ZSE (EX) at the previous month.
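As a minimal illustration of this return calculation (assuming simple, non-log monthly returns, which matches the definition above), the two series could be prepared as follows; the file name and column names are placeholders rather than the study's actual data.

```python
import pandas as pd

# Month-end levels of the ZSE industrial index and the US dollar/rand direct quote.
prices = pd.read_csv("zse_usdzar_monthly.csv", parse_dates=["date"], index_col="date")

# R_t = (P_t - P_{t-1}) / P_{t-1} for both series.
returns = prices[["ZSE", "EX"]].pct_change().dropna()

print(returns.describe())   # mean, standard deviation, min and max of each series
print(returns.corr())       # correlation between index returns and exchange rate returns
```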
This study adopts short-run dynamics based on the bivariate vector autoregressive (VAR) approach. VAR treats all variables as endogenous; hence there is no need to specify which variables are endogenous or exogenous (Brooks, 2008). In a bivariate VAR only two variables are used, and their current values depend on different combinations of the previous values of the two variables and error terms (Brooks, 2008). VAR further allows for the use of Granger causality tests, which are used here to test the relationship between stock prices and exchange rates; the direction of causality is thus the main aim of the Granger causality tests. The relationship can be either unidirectional or bidirectional, although it is also possible that no relationship exists at all.

Stationarity is assessed using the Augmented Dickey-Fuller (ADF) and Phillips-Perron approaches, which test the null hypothesis that the series contain a unit root (Agiakloglou & Newbold, 1992; Dickey & Fuller, 1981). A stationary series is defined 'as one with a constant mean, constant variance and constant autocovariances for each lag' (Brooks, 2008:318). The use of non-stationary data in time series analysis can lead to spurious regressions. The ADF test is preferred over the Phillips-Perron test for annual, quarterly and monthly time series (DeJong, Nankervis, Savin, & Whiteman, 1992; Krämer, 1998). However, both tests are conducted to verify the rejection, or failure of rejection, of the null hypothesis that the series contain a unit root (Tsagkanos & Siriopoulos, 2013). The following equations demonstrate how the ADF test is conducted (Abdalla & Murinde, 1997; Dickey & Fuller, 1981; Enders, 2010):

ZSE_t = α1 + β1 T + ρ1 ZSE_{t-1} + Σ φ_i ΔZSE_{t-i} + ε_{1t}
EX_t = α2 + β2 T + ρ2 EX_{t-1} + Σ φ_i ΔEX_{t-i} + ε_{2t}

where Δ is the first-difference operator, hence ΔZSE_t = ZSE_t - ZSE_{t-1} and ΔEX_t = EX_t - EX_{t-1}; α1, α2, β1, β2, ρ1, ρ2 and φ_i are coefficients; T is a time trend; and ε_{1t} and ε_{2t} are white-noise errors. The null hypothesis is that ZSE and EX have unit roots, i.e. H0: ρ1 = ρ2 = 1. If the null hypothesis is rejected, the series are said to be integrated of order zero, i.e. I(0), and a VAR at levels is conducted to test the short-run relationship between the two series (Tsagkanos & Siriopoulos, 2013; Enders, 2010; Brooks, 2008). Failure to reject the null hypothesis means the series are first differenced (Enders, 2010), since failure to reject suggests that a series is non-stationary and the use of such data could lead to spurious regression output. If the null hypothesis of a unit root is rejected after first differencing, the series are integrated of order one, i.e. I(1), and a VAR at first differences is conducted (Brooks, 2008). In this case the series are said to be stationary at first differences and can be used in the regression analysis. One condition for the use of VAR is that the series be stationary either at levels or at first differences (Brooks, 2008).

The lag selection criteria used are the adjusted R2, the Schwarz Bayesian information criterion and Akaike's information criterion (AIC), which produced conflicting outcomes. The maximum lag length used is 2 lags. The bivariate VAR in the two variables, ZSE and EX, is specified as follows:

ZSE_t = β10 + β11 ZSE_{t-1} + β12 ZSE_{t-2} + α11 EX_{t-1} + α12 EX_{t-2} + u_{1t}
EX_t = β20 + β21 EX_{t-1} + β22 EX_{t-2} + α21 ZSE_{t-1} + α22 ZSE_{t-2} + u_{2t}

where u_{1t} and u_{2t} are white-noise disturbance terms with E(u_{it}) = 0 (i = 1, 2) and E(u_{1t}u_{2t}) = 0 (Brooks, 2008). Given that Zimbabwe uses other countries' currencies and that its economy is negligible compared with those of the owners of the currencies, this study expects a unidirectional causality from the US dollar/rand exchange rate to the ZSE.
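A minimal sketch of this testing sequence (unit-root tests, a bivariate VAR at levels, and pairwise Granger causality), assuming the monthly returns DataFrame from the previous sketch and using statsmodels; it is an illustration of the approach, not the authors' code. The Phillips-Perron test is omitted here, as it is not part of statsmodels (implementations exist, for example, in the arch package).

```python
from statsmodels.tsa.stattools import adfuller, grangercausalitytests
from statsmodels.tsa.api import VAR

# 1. Unit-root tests: H0 = the series has a unit root (non-stationary).
for col in ["ZSE", "EX"]:
    stat, pvalue, *_ = adfuller(returns[col], regression="ct")  # constant + trend
    print(f"ADF {col}: statistic={stat:.3f}, p-value={pvalue:.3f}")

# 2. Bivariate VAR at levels (both series are I(0)), lag length chosen by AIC, max 2 lags.
var_res = VAR(returns[["ZSE", "EX"]]).fit(maxlags=2, ic="aic")
print(var_res.summary())

# 3. Pairwise Granger causality in both directions
#    (the second column is tested as the Granger cause of the first).
grangercausalitytests(returns[["ZSE", "EX"]], maxlag=2)  # does EX Granger-cause ZSE?
grangercausalitytests(returns[["EX", "ZSE"]], maxlag=2)  # does ZSE Granger-cause EX?
```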
DISCUSSION

The ZSE industrial index and the US dollar/rand exchange rate have a correlation coefficient of 0.53982, suggesting that the two series move together, although the relationship is only moderate. The correlation coefficient is statistically significant at all levels of significance, suggesting that there is a linear relationship between the index and the exchange rate.

Source: Author's computations. Data from the SARB and ZSE websites.

The descriptive statistics above show the behaviour of the series during the period under review. A close analysis of the descriptive statistics shows that the ZSE Industrial Index series has some outliers, which could influence the mean and hence the large range value. After plotting the series on a graph, it is evident that the series values from February 2009 to April 2015 were inflated and characterised by instability. This was mainly due to the rebasing of the index upon the adoption of the multiple currency system in February 2009. As the economy moved from the use of the Zimbabwean dollar to the multiple currency system, the market found it difficult to properly adjust Zimbabwean dollar values to US dollar values, hence the overshooting of the series (Brogaard, Hendershott, & Riordan, 2014). Researchers should therefore take caution when using the series from February 2009 to April 2015.

The average return on the ZSE Industrial Index was 1.7% per month, with a standard deviation of 9.5% over the six-year period. On the other hand, the mean return on the US dollar/rand exchange rate was negative 0.2%, with a standard deviation of 3.0%. The Augmented Dickey-Fuller and Phillips-Perron tests for unit roots are applied to both the ZSE Index and the US dollar/rand exchange rate series. Stationarity and unit root test results are presented in TABLE 2. Both approaches test the null hypothesis that the series has a unit root. The null hypothesis that the series has a unit root is rejected in favour of the alternative hypothesis. This suggests that the series are integrated of order zero, I(0), thus eliminating the possibility of a co-integrating relationship between the two series. The use of a nominal exchange rate is expected to lead to short-run dynamics, as suggested in the literature (Chinn, 2006), hence a non-cointegrating relationship.

The VAR model is applied to the series to establish the existence of short-run relationships. The condition for the use of VAR is that both series be stationary at levels or at first differences (Johansen, 1988). The series are stationary at levels and thus meet the requirements for the use of the VAR methodology. TABLE 3 shows the output from the VAR model depicting the relationship between exchange rates and the Zimbabwe stock market.
The VAR results show that, despite the two series being integrated of order I(0), there is no Granger causality running in either direction between the US dollar/rand exchange rate and the ZSE Industrial Index. However, the US dollar/rand exchange rate is influenced by its own past returns, while the ZSE Industrial Index is influenced by its own past returns in the second month. In line with existing evidence (Moore & Wang, 2014), a negative relationship between the exchange rate and the stock market is observed, although it is statistically insignificant at all conventional levels of significance. In spite of the adoption of a stable currency as the reporting and transacting currency, the ZSE is not realising the benefits accruing from such stability. The use of the US dollar for transacting purposes makes Zimbabwean goods and stocks more expensive relative to other countries as the dollar appreciates against other currencies. For instance, investors in South Africa would have to spend more South African rands to buy one unit of foreign currency (US dollars), making dollar-denominated assets expensive in relative terms. However, international investors who already hold US dollar-denominated assets in Zimbabwe could sell (translate) them and realise (accrue) higher foreign currency exchange gains.

The pairwise Granger causality tests are reported in TABLE 4. The null hypothesis that the US dollar/rand exchange rate does not Granger-cause the ZSE is not rejected at all conventional levels of significance. Consequently, the expectation that the ZSE is influenced by changes in the exchange rate is not upheld. Although some studies have demonstrated a unidirectional causality from the exchange rate to the stock market (Caporale et al., 2014), others show a unidirectional causality from the stock market to the exchange rate (Liang et al., 2015; Chkili & Nguyen, 2014), and a few show bidirectional causality (Cenedese et al., 2014; Tsagkanos & Siriopoulos, 2013; Caporale et al., 2014). This study shows that no relationship exists between stock market returns and exchange rates in the multiple currency set-up in Zimbabwe. The results of this study fail to confirm previous empirical evidence on the relationship between stock markets and exchange rates, mainly because a proxy exchange rate is used and the stock exchange in question has no domestic sovereign currency.
CONCLUSION

This study provides new evidence on the exchange rate-stock market return relationship, with its primary focus on a country without a sovereign currency. In an economy with a sovereign currency, the relationship between stock market returns and exchange rates typically displays unidirectional causality, either from the stock market to the exchange rate or vice versa; the literature also shows that some economies experience bidirectional causality between stock market returns and exchange rates. The relationship between the two series can be determined either through short-run dynamic models, such as VAR, or through long-run co-integration models, such as error correction models. Using the VAR technique, this study finds no relationship between the stock market index and exchange rates in a multiple currency system. A proxy exchange rate based on the official reporting currency and the major currency of imports is used, and the analysis reveals that the exchange rate series and the stock market index have no short-run or long-run relationship, despite the two series being integrated of order zero, I(0). This has implications for economic integration and international portfolio diversification. The implication for economic integration is that the Zimbabwean economy could be influenced by South African economic activity and the latter's exchange rate regime; the implication for portfolio diversification is that foreign currency and stocks could be used as independent or complementary assets in a portfolio, since the two do not move in tandem. Consequently, the US dollar (used as a proxy currency)/South African rand exchange rate and the stock market can each be used as an independent asset class, thus providing portfolio diversification benefits to investors. Portfolio diversification allows investors to minimise portfolio risk and enhance returns in spite of economic cycles. However, the stock market cannot be used as a currency hedge in a multiple currency economy, as exchange rate and stock market movements are not related.

TABLE 1: Descriptive statistics for the two series
TABLE 3: Short-run dynamics between the US dollar/rand and the ZSE Industrial Index
TABLE 4: Pairwise Granger causality tests (the results in TABLE 4 further demonstrate the non-existence of a relationship between the US dollar/rand exchange rate and the ZSE Industrial Index)
CTPS cytoophidia formation affects cell cycle progression and promotes TSN-induced apoptosis of MKN45 cells Cytidine triphosphate synthase (CTPS) forms filamentous structures termed cytoophidia in numerous types of cell. Toosendanin (TSN) is a tetracyclic triterpenoid and induces CTPS to form cytoophidia in MKN45 cells. However, the effects of CTPS cytoophidia on the proliferation and apoptosis of human gastric cancer cells remain poorly understood. In the present study, CTPS-overexpression and R294D-CTPS mutant vectors were generated to assess the effect of CTPS cytoophidia on the proliferation and apoptosis of gastric cancer MKN45 cells. Formation of CTPS cytoophidia significantly inhibited MKN45 cell proliferation (evaluated using EdU incorporation assay), significantly blocked the cell cycle in G1 phase (assessed using flow cytometry) and significantly decreased mRNA and protein expression levels of cyclin D1 (assessed by reverse transcription-quantitative PCR and western blotting, respectively). Furthermore, the number of apoptotic bodies and apoptosis rate were markedly elevated and mitochondrial membrane potential was markedly decreased. Moreover, mRNA and protein expression levels of Bax increased and Bcl-2 decreased markedly in MKN45 cells following transfection with the CTPS-overexpression vector. The proliferation rate increased, percentage of G1/G0-phase cells decreased and apoptosis was attenuated in cells transfected with the R294D-CTPS mutant vector and this mutation did not lead to formation of cytoophidia. The results of the present study suggested that formation of CTPS cytoophidia inhibited proliferation and promoted apoptosis in MKN45 cells. These results may provide insights into the role of CTPS cytoophidia in cancer cell proliferation and apoptosis. Introduction Cytidine triphosphate synthase (CTPS) is a key enzyme responsible for de novo synthesis of CTP, which is an essential nucleotide and precursor for RNA and DNA synthesis (1); therefore, CTPS activity affects cell cycle progression. It has been reported that CTPS forms filamentous structures termed cytoophidia (Greek for 'cellular snakes') in Drosophila (2,3), bacteria (4), yeast (3), zebrafish (5), human and rat cells (6,7), which suggests that the cytoophidium is an evolutionarily conserved subcellular structure that may serve an essential role in regulating metabolism (8). Cytoophidia are mesoscale, intracellular, filamentous structures that contain metabolic enzymes; they are not membrane-bound cell organelles. They comprise a type of intracellular compartment and are involved in cell metabolism (2). Certain studies have reported that cytoophidia may serve as metabolic stabilizers and a buffer system in response to environmental changes (5,9,10). Cytoophidia respond to nutrient stress by elongating following nutrient deprivation in Drosophila (11) and budding yeast (12). In Schizosaccharomyces pombe, cytoophidia formation decreases following cold or heat shock (13). Certain studies have reported that cytoophidium sequester the active binding sites of enzymes, thereby inhibiting CTPS activity in Escherichia coli and Drosophila tissue (14,15). However, Strochlic et al (16) reported that Drosophila CTPS within cytoophidia is catalytically active. These aforementioned studies suggest that CTPS activity following cytoophidia formation differs with cell type. The changes in CTPS activity are reported to be associated with cancer progression (17,18). 
Significantly higher activity of CTPS has been reported in acute lymphocytic leukemia cells compared with lymphocytes of healthy controls (19). CTPS also promotes malignant progression of triple-negative breast cancer (20). Cytoophidia formed by CTPS have been reported in human hepatocellular carcinoma cells but not in adjacent non-cancerous hepatocytes (21). To the best of our knowledge, the potential association between CTPS cytoophidia and cancer cell proliferation is has not been previously elucidated. Toosendanin (TSN) is a triterpenoid derivative extracted from the bark of Melia toosendan Sieb et Zucc and exerts anticancer effects on numerous types of human cancer cell, CTPS cytoophidia formation affects cell cycle progression and promotes TSN-induced apoptosis of MKN45 cells such as colorectal cancer cells and glioma cells (22)(23)(24)(25). Our previous studies demonstrated that TSN induces apoptosis of human gastric cancer MKN45 cells (26) and induces formation of CTPS cytoophidia. To the best of our knowledge, however, the association between formation of CTPS cytoophidia and apoptosis in MKN45 cells remains unknown. The present study evaluated whether the CTPS formed cytoophidia affected TSN-induced MKN45 cell proliferation or apoptosis. The results of the present study may facilitate further understanding of the role of CTPS cytoophidia in cancer cell apoptosis. Materials and methods Cell culture. -CAT AAG CTT AAG TTT AAA CGC TAG CCA GC-3' and reverse 5'-TAC CCA TAC GAT GTT CCA GAT TAC GCT TGA GGA TCC ACT AGT CCA GTG TGG-3'. The full-length coding sequences of human CTPS were amplified using PCR with primers as follows: forward 5'-GCG TTT AAA CTT AAG CTT ATG AAG TAC ATT CTG GTT ACT GGT GGT-3' and reverse 5'-TGG AAC ATC GTA TGG GTA GTC ATG ATT TAT TGA TGG AAA CTT CAG-3 Cell cycle analysis. Following transfection with plasmids or treatment with TSN as aforementioned, cells were washed twice with PBS and detached from the plate surface by digestion using trypsin. Cells were centrifuged at 300 x g at 4˚C for 10 min, the pellet was resuspended in PBS, centrifuged again at 300 x g at 4˚C for 10 min and resuspended in ice-cold 70% ethanol and stored at 4˚C for 18 h. Samples were washed once in PBS and resuspended in DNA staining solution (propidium iodide, 5 µg/ml; RNase A, 0.5 mg/ml; PBS) and incubated at 37˚C in the dark for 30 min. All samples were assessed using a Cytomics FC500 Flow Cytometer (Beckman Coulter, Inc.) and analyzed using CXP Software version 2.3 (Beckman Coulter, Inc.). Early apoptosis assay. Following transfection with plasmids or treatment with TSN as aforementioned, the cells were detached from the plate surface, digested by trypsin and centrifuged at 300 x g at 4˚C for 10 min, washed twice with cold PBS, then 500 µl Annexin V-FITC binding buffer (No. C1062M, Beyotime Institute of Biotechnology) was added to each sample. The cells were incubated with 5 µl FITC-annexin V and 5 µl PI for 15 min at 25˚C in the dark. After washing, aliquots of 2x10 4 cells/sample were examined using a Cytomics FC500 Flow Cytometer and analyzed with CXP Software ver.2.3 (Beckman Coulter). EdU proliferation assay. Cells were seeded in 96-well plates at 1x10 4 cells/well and placed in a humidified incubator at 37˚C with 5% CO 2 for 12 h following treatment with TSN or transfection with plasmids for 48 h as aforementioned. Cell proliferation was assessed using the EdU Cell Proliferation Assay kit (Guangzhou RiboBio Co., Ltd.) as described by Wang et al (28). 
The percentage of EdU-positive cells was calculated from five random fields using ImageJ (National Institutes of Health). JC-1 staining for mitochondrial membrane potential. To determine the mitochondrial membrane potential, cells were seeded in 6-well plates at a density of 1x10 4 cells/well and transfection with plasmids or treatment with 80 nM TSN for 48 h. JC-1 staining was performed as described by Sabarwal et al (29). Statistical analysis. All data are presented as the mean ± SEM, and data were obtained from three replications. Representative bands of western blotting were selected from independent experiments. All statistical tests were performed using GraphPad Prism 5.0 software (GraphPad Software, Inc.). One-way ANOVA was used to compare independent groups with Dunnett's multiple comparisons test for comparisons against a single control and Tukey's multiple comparisons test when ≥3 groups were analyzed. TSN induces CTPS cytoophidia formation in MKN45 cells. Cell survival rate markedly decreased as the TSN concentration and treatment duration increased (Fig. 1A). CTPS cytoophidia were observed in MKN45 cells treated with different concentrations (0, 60, 80 and 120 nM) of TSN for 72 h (Fig. 1B). Compared with the control (0 nM TSN), cytoophidia were detected in 36.4% of MKN45 cells treated with 60 nM TSN and 46.65% of MKN45 cells treated with 80 nM TSN; this showed that CTPS cytoophidia formation was significantly increased compared with the control. However, the percentage decreased to 16.01% in MKN45 cells treated with 120 nM TSN (Fig. 1C). These results indicated that TSN decreased cell viability while induced CTPS cytoophidia formation in MKN45 cells. CTPS cytoophidia formation inhibits proliferation of MKN45 cells. To determine the effect of CTPS cytoophidia on the proliferation rate of gastric cancer MKN45 cells, OE-CTPS and R294D-CTPS mutant (OE-CTPS R294D ) vector were generated ( Fig. 2A). The formation of CTPS cytoophidia and CTPS protein expression levels were assessed. CTPS assembled into cytoophidia in OE-CTPS cells; but did not assemble into cytoophidia in OE-CTPS R294D cells (Fig. 2B). CTPS protein expression levels in OE-CTPS R294D cells were significantly lower compared with those in OE-CTPS cells (Fig. 2C). MKN45 cell viability decreased after being transfected with OE-CTPS compared with control; however, cell viability increased following transfection with OE-CTPS R294D (Fig. 2D). These results indicated that the formation of CTPS cytoophidia decreased cell viability in MKN45 cells. The proliferation rate of gastric cancer MKN45 cells was also assessed. Compared with the control, the percentage of EdU-positive cells significantly decreased in OE-CTPS cells ( Fig. 3A and B). Furthermore, the percentage of G 1 /G 0 -phase cells significantly increased and the percentage of S-phase cells markedly decreased in OE-CTPS cells compared with the control (Fig. 3C and D). Following treatment with 80 nmol/l TSN, compared with group of control+TSN, the EdU-positive rate decreased, the percentage of G 1 /G 0 -phase cells increased and S-phase cells decreased in OE-CTPS +TSN group. The EdU-positive rate increased, percentage of G 1 /G 0 -phase cells decreased and the percentage of S-phase cells increased in OE-CTPS R294D cells compared with the control. The same changes were observed in group of OE-CTPS R294D +TSN compared with control+TSN group. 
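To make the group comparisons described in the statistical analysis paragraph above concrete, the following minimal sketch (not the authors' analysis code) runs a one-way ANOVA, Dunnett's test against a single control, and Tukey's HSD for all pairwise comparisons; the group labels and values are placeholders rather than data from this study, and Dunnett's test requires SciPy 1.11 or later.

```python
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

control  = np.array([1.00, 0.95, 1.05])   # e.g. relative expression, three replicates
oe_ctps  = np.array([0.55, 0.60, 0.50])
oe_r294d = np.array([1.30, 1.25, 1.40])

# Overall one-way ANOVA across the three groups.
print(stats.f_oneway(control, oe_ctps, oe_r294d))

# Dunnett's multiple comparisons: each treatment group versus the single control.
print(stats.dunnett(oe_ctps, oe_r294d, control=control))

# Tukey's HSD for all pairwise comparisons when three or more groups are analyzed.
values = np.concatenate([control, oe_ctps, oe_r294d])
groups = ["control"] * 3 + ["OE-CTPS"] * 3 + ["OE-CTPS-R294D"] * 3
print(pairwise_tukeyhsd(values, groups))
```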
Furthermore, mRNA and protein expression levels of CCND1 markedly decreased in OE-CTPS cells compared with the control but significantly increased in OE-CTPS R294D cells compared with both the control and OE-CTPS ( Fig. 4A and B). The same changes of CCND1 mRNA and protein expression levels were observed following treatment with 80 nmol/l TSN in OE-CTPS cells and OE-CTPS R294D cells compared with control. (Fig. 4C and D). These results indicated that CTPS cytoophidia formation could inhibit MKN45 cells proliferation by affecting cell cycle progression. CTPS cytoophidia formation promotes apoptosis of MKN45 cells. To assess the effect of CTPS cytoophidia on apoptosis of gastric cancer MKN45 cells, cells were transfected with OE-CTPS or OE-CTPS R294 vectors. Subsequently, morphological changes and the presence of early apoptotic cells were evaluated. Compared with the control, chromosomes were markedly more aggregated and marginalized in OE-CTPS cells; however, no notable morphological changes in OE-CTPS R294 cells were observed compared with the control (Fig. 5A). FITC-annexin-V/PI staining demonstrated that apoptosis rate was higher in OE-CTPS cells compared with the control and significantly lower in OE-CTPS R294 cells compared with both control and OE-CTPS ( Fig. 5B and C). Following treatment of transfected cells with 80 nmol/l TSN, apoptotic bodies were prominent in OE-CTPS+TSN cells compared with in groups of control+TSN; however, in OE-CTPS R294 +TSN cells no apoptotic bodies was observed, chromosome aggregation or marginalization occurred only in some cells. Apoptosis rate was more pronouncedly increased in group of OE-CTPS +TSN cells compared with both control+TSN and OE-CTPSR294+TSN groups. Mitochondrial membrane potential in OE-CTPS cells was markedly lower compared with control cells; however, the potential in OE-CTPS R294 cells was significantly higher compared with OE-CTPS cells without TSN treatment and markedly higher compared with OE-CTPS cells with TSN treatment (Fig. 6A and B). Furthermore, mRNA and protein expression levels of Bax and Bcl-2 were assessed by RT-qPCR and western blotting. The mRNA and protein expression levels of Bax increased significantly, whereas Bcl-2 mRNA expression levels markedly decreased and protein expression levels significantly decreased, in OE-CTPS cells compared with control cells. Furthermore, mRNA and protein expression levels of Bax significantly decreased, whereas mRNA and protein expression levels of Bcl-2 significantly increased in OE-CTPS R294 cells compared with OE-CTPS cells in the presence or absence of TSN treatment (Fig. 6C-H). Discussion TSN exhibits anticancer effects on numerous types of human cancer cell (32), such as suppresses hepatocellular carcinoma proliferative and metastasis (33), induces the apoptosis of human Ewing's sarcoma (34). In the present study, TSN significantly inhibited proliferation of MKN45 cells in a time-and dose-dependent manner. CTPS formed cytoophidia following TSN-induced inhibition of MKN45 cell proliferation; moreover, the number of CTPS cytoophidia increased with TSN dosage. However, high concentrations of TSN led to cell death and affected the formation of CTPS cytoophidia; fewer CTPS cytoophidia were observed when cells were treated with 120 nM TSN. These data suggested that cytoophidia formation may affect proliferation rate and apoptosis of cancer cells. Cytoophidia are a type of intracellular compartment conserved across prokaryotes and eukaryotes and are involved in cell metabolism (35). 
The first reported component of the cytoophidia was CTPS (2)(3)(4). CTPS is a cytosol-associated glutamine amidotransferase enzyme that catalyzes de novo biosynthesis of CTP, a key nucleotide. Polymerization of CTPS into filamentous structures (cytoophidia) regulates its enzymatic activity (8,35). The formation of cytoophidia is reported to inhibit CTPS activity in E. coli and Drosophila (14,15). In the present study, R294D-CTPS mutants were generated and used to evaluate the effect of CTPS cytoophidia on TSN-induced proliferation and apoptosis. Although the R294D-CTPS mutant did not form cytoophidia, CTPS activity was not affected (10,36). As expected, significantly fewer EdU-positive cells were observed in the OE-CTPS group compared with the control in the present study. However, a significantly higher percentage of EdU-positive cells was observed in OE-CTPS R294D compared with OE-CTPS cells. The decrease in EdU-positive cells demonstrated that formation of CTPS cytoophidia affected the proliferation rate of MKN45 cells. Proliferating cells have been reported to demonstrate higher RNA and DNA synthesis rates during G 1 -and S-phase (37). Therefore, proliferating cells synthesize increased amounts of ribonucleotides and deoxyribonucleotides. As CTPS is key for de novo synthesis of CTP, a precursor for RNA and DNA synthesis, it can be hypothesized that CTPS activity increases in G 1 -phase of the cell cycle to support increased synthesis of nucleic acids. The present study demonstrated that the percentage of G 1 /G 0 -phase cells significantly increased and the percentage of S-phase cells markedly decreased in OE-CTPS cells compared with the control. Moreover, the mRNA and protein expression levels of CCND1 markedly decreased in OE-CTPS cells compared with the control. OE-CTPS R294D cells which demonstrated significantly decreased percentage of G1/G0-phase cells and increased the percentage of S-phase cells, meanwhile increased CCND1 mRNA and protein expression levels compared with both OE-CTPS and control cells. The aforementioned effects of OE-CTPS and R294D-CTPS mutation were greater following treatment with 80 nmol/l TSN, and subG1 peak was observed simultaneously, but the proportion of subG1 values were not calculated as the software cannot analyze it. The aforementioned results suggested that formation of CTPS cytoophidia affected RNA synthesis during the cell cycle, thereby inhibiting MKN45 cell proliferation induced by TSN. TSN has been reported to suppress proliferation and induce apoptosis in numerous types of human cancer cell, such as hepatocellular carcinoma (26,33). The present study assessed the effect of CTPS cytoophidia on TSN-induced apoptosis in MKN45 cells. Following formation of TSN-induced CTPS cytoophidia, the number of apoptotic bodies and apoptotic rate increased markedly in OE-CTPS cells compared with the control. A decrease in mitochondrial membrane potential occurs during early cell apoptosis; the present study demonstrated that mitochondrial membrane potential markedly decreased following formation of TSN-induced CTPS cytoophidia in OE-CTPS cells. However, the mitochondrial membrane potential increased significantly when the formation of CTPS cytoophidia was prevented in OE-CTPS R294D cells. 
Furthermore, mRNA and protein expression levels of Bcl-2 markedly decreased whereas those of proapoptotic Bax markedly increased in OE-CTPS compared with control; however, when formation of CTPS cytoophidia was prevented in OE-CTPS R294D cells, increased Bcl-2 mRNA and protein expression levels and decreased Bax mRNA and protein expression levels were observed compared with both OE-CTPS and control cells. In conclusion, the results of the present study suggested that CTPS promoted cell proliferation and inhibited apoptosis in MKN-45 cells. However, when CTPS formed cytoophidia after MKN45 cells were treated with TSN, CTPS activity was inhibited, which arrested the cell cycle in G 1 phase, inhibiting cell proliferation and promoting apoptosis. However, the mechanism by which TSN induces CTPS to form cytoophidia is still unclear and requires further study.
Histochemical and Immunohistochemical Study of α-SMA, Collagen, and PCNA in Epithelial Ovarian Neoplasm Background: Alpha-smooth muscle actin (α-SMA) is an isoform of actin, positive in myofibroblasts and is an epithelial to mesenchymal transition (EMT) marker. EMT is a process by which tumor cells develop to be more hostile and able to metastasize. Progression of tumor cells is always followed by cell composition and extracellular matrix component alteration. Increased α-SMA expression and collagen alteration may predict the progressivity of ovarian neoplasms. Objective: The aim of this research was to analyse the characteristic of α-SMA and collagen in tumor cells and stroma of ovarian neoplasms. In this study, PCNA (proliferating cell nuclear antigen) expression was also investigated. Methods: Thirty samples were collected including serous, mucinous, endometrioid, and clear cell subtypes. The expression of α-SMA and PCNA were calculated in cells and stroma of ovarian tumors. Collagen was detected using Sirius Red staining and presented as area fraction. Results: The overexpressions of α-SMA in tumor cells were only detected in serous and clear cell ovarian carcinoma. The histoscore of α-SMA was higher in malignant than in benign or borderline ovarian epithelial neoplasms (105.3±129.9 vs. 17.3±17.1, P=0.011; mean±SD). Oppositely, stromal α-SMA and collagen area fractions were higher in benign than in malignant tumors (27.2±6.6 vs 20.5±8.4, P=0.028; 31.0±5.6 vs. 23.7±6.4, P=0.04). The percentages of epithelial and stromal PCNA expressions were not significantly different between benign and malignant tumors. Conclusion: Tumor cells of serous and clear cell ovarian carcinoma exhibit mesenchymal characteristic as shown by α-SMA positive expression. This expression might indicate that these subtypes were more aggressive. This research showed that collagen and α-SMA area fractions in stroma were higher in benign than in malignant neoplasms. Introduction compared to non-invasive (Lee et al., 2006). Alpha-SMA, together with vimentin, E-cadherin, and fibronectin are the markers for the epithelial to mesenchymal transition (EMT) process. The EMT is considered as one of the steps involved in normal cells to become cancerous (Kalluri and Weinberg, 2009). PCNA proteins have been recognized as an essential contributor of DNA replication in cell division. The expression of this substance was established in normal cells and several malignant neoplasm cells. Their prognostic and predictive values have been assessed to conclude their role in the diagnosis of cancer, yet the results were various (Han et al., 2015;Li et al., 2015;Jurikova et al., 2016). Collagen is the most abundant protein found in the extracellular matrix of the tissue. Collagen plays an important role in maintaining tissue structural integrity. It also determines whether a tissue can function properly or not (Rich and Whittaker, 2005). Changes due to the extracellular matrix remodeling and degradation of collagen are considered to play a role in the development of tumor cells. Collagen alters the microenvironment around tumor cells to release biochemical signals which will be responded to by tumor cells and stromal cells (Fang et al., 2014). The combination of picrosirius red staining, circularly polarized light, and hue analysis provides a powerful tool for the structural analysis of collagen fibers (Rich and Whittaker, 2005). Studies of α-SMA, collagen, and PCNA in epithelial ovarian neoplasms are still limited. 
Thus, this study aimed to observe the expression of α-SMA, collagen, and PCNA in epithelial ovarian neoplasms by histochemical and immunohistochemical methods. Tissue samples from patients Institutional Review Board approval was given from Medical Faculty Universitas Gadjah Mada before conducting this study. All samples of ovarian epithelial neoplasm tissues used in this research consisted of 12 benign or borderline ovarian epithelial neoplasm tissues (40%) and 18 ovarian carcinoma tissues (60%). Histopathological subtype of benign or borderline samples included 5 serous (41.7%) and 7 mucinous (58.3%) subtypes. Additionally, the histopathological subtypes of malignant samples involved 8 serous (26.7%), 3 mucinous (10%), 4 endometrioid (13.3%) and 3 clear cell (10%) subtypes. The malignancy of all samples was determined by two pathologists of Medical Faculty Universitas Gadjah Mada, Indonesia. The age of the patients ranged from 15 to 71 y (mean age 48.33 y), which consisted of 5 patients whose ages were less than 45 y and 25 patients who attained the age of 45 y or more. The median age of the patients with malignant neoplasms tended to be slightly older than those with benign neoplasms (50 vs. 48.5 y). Immunohistochemistry Tissue sampling in this research was processed to be formalin-fixed paraffin-embedded tissue blocks. The paraffin-embedded tissue was placed in 10% buffer formalin that was cut into 4 μm sections. Next the fourmm paraffin sections were deparaffinized and stained with Hematoxyllin-Eosin to examine the histopathology of the neoplasms. For immunohistochemistry analysis, paraffin-embedded tissues were used with the antibody α-SMA (eBioscience, Tokyo, Japan) and PCNA (Santa Cruz, Tokyo, Japan). The 4-μm paraffin sections were placed on poly-L-Lysine coated slides. After being deparaffinized, endogenous peroxidase was reduced by incubating with 3% hydrogen peroxidase in phosphate buffer saline (PBS) for 5 minutes. The secondary antibodies used were EnVision + System HRP anti rabbit (K4002, Dako, Tokyo, Japan) for α-SMA and PCNA. Diaminobenzidine was used as chromogen. Finally for counterstaining we used hematoxylin. Evaluation of immunostaining Positively stained carcinoma cells were counted on 10 representative fields with x40 magnification (Olympus CX22 microscope) for assessing the stain of α-SMA and PCNA. The result of immunohistochemistry staining was counted using the method described by Khatun et al.. A mean percentage of positive cancer cells was calculated and the staining intensity was classified as 0-3 (0, no staining; 1, slight staining; 2, medium staining; and 3, strong staining). PCNA was expressed in the nuclei of the cells and the score was counted as stained nuclei among the total number of tumor nuclei in 10 representative high power field (x40 magnification). PCNA score was shown as a percentage that ranged from 0% to 100%. Histochemical staining with Sirius Red For Sirius red staining, 1% picric acid solution and picosirius red solution were used. Paraffin blocks were deparaffinized with xylene then hydrated with absolute alcohol (5 times) and running water. After that, the sections were soaked in picosirius red solution for 1 hour and hydrated again using absolute alcohol and xylene. Then the preparations were incubated at room temperature for 24 hours. To evaluate Sirius red-stained sections, an Olympus CX22 microscope was used. The preparation's pictures were taken manually in at least 10 representative fields with x40 magnification. 
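The Sirius red area fraction in this study was measured in ImageJ; purely as an illustration, a comparable area-fraction measurement could be sketched in Python as follows, where the file names, colour thresholds and background cut-off are assumptions rather than the authors' actual settings.

```python
# Rough sketch of a Sirius red area-fraction estimate from RGB field images.
import numpy as np
from skimage.io import imread

def sirius_red_area_fraction(path, red_min=120, rg_margin=30):
    img = imread(path).astype(float)                    # H x W x 3 RGB field image
    r, g, b = img[..., 0], img[..., 1], img[..., 2]
    # Pixels that are distinctly red relative to green and blue count as collagen.
    positive = (r > red_min) & (r - g > rg_margin) & (r - b > rg_margin)
    tissue = img.mean(axis=-1) < 240                    # exclude near-white background
    return positive.sum() / max(tissue.sum(), 1)        # fraction of tissue area stained

# Average over at least 10 representative fields, as described above.
fractions = [sirius_red_area_fraction(f"field_{i}.tif") for i in range(1, 11)]
print(np.mean(fractions))
```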
Sirius red staining was positive in the stroma and was quantified as an area fraction using ImageJ software.

Statistical analysis

Categorical variables were analyzed using the χ2 test or Fisher's exact test, while continuous variables were evaluated by the Independent-Samples T Test or Mann-Whitney test. Values of P < 0.05 were considered statistically significant.

Results

Immunohistochemical characteristics of α-SMA and PCNA in ovarian epithelial neoplasm

Figure 1 shows the result of immunohistochemical staining of α-SMA in both malignant and benign ovarian neoplasms. Alpha-SMA was not expressed in the tumor cells of mucinous and serous cystadenoma. The histoscore of α-SMA was higher in the cytoplasm of malignant than in benign or borderline ovarian epithelial neoplasms (Fig. 4). Some mucinous cystadenocarcinomas had markedly higher expression of PCNA than mucinous cystadenoma, as shown in Figure 2. Figure 3 presents the result of Sirius red staining in serous cystadenoma and serous cystadenocarcinoma. The mean area fraction positively stained with Sirius red was higher in benign/borderline neoplasms than in malignant neoplasms (31.0±5.6 vs. 23.7±6.4, P=0.04) (Figure 4). The mean collagen fraction areas of endometrioid and clear cell cystadenocarcinoma were the lowest among all of the epithelial ovarian neoplasm subtypes used in this study, as shown in Figure 5.

Discussion

The process by which a normal cell becomes malignant can be divided into four steps: (1) initiation, (2) progression, (3) epithelial to mesenchymal transition (EMT), and (4) metastasis. In EMT, the gene expression pattern changes and the cells obtain a mesenchymal phenotype (Jinka et al., 2012; Ding et al., 2014). They tend to invade the surrounding tissue and infiltrate blood vessels (Jinka et al., 2012). Alpha-SMA is expressed by the tumor cells of carcinomas. Tumor cells that express α-SMA are predicted to be cells with an invasive nature that tend to metastasize and carry a poorer prognosis (Lee et al., 2006; Choi et al., 2013; Parikh et al., 2014). Alpha-SMA expression was also positive in the stroma surrounding tumor cell nests of serous ovarian carcinoma that had metastasized to the peritoneum (Lee et al., 2006). In the present study, we found that α-SMA was expressed in the tumor cells of serous and clear cell ovarian carcinoma. This finding might indicate that these subtypes behave differently from the other subtypes; they could be more invasive and more likely to metastasize. Our results showed higher expression of α-SMA in malignant ovarian neoplasm cells compared with benign tumors. Conversely, in the stroma, benign tumors had higher expression of α-SMA than malignant tumors. Previous research established that alpha-SMA is not expressed in normal ovarian surface epithelium (Kobayashi et al., 1993). Regardless of location, whether in tumor cells or in stroma, α-SMA expression was higher in benign ovarian tumors compared with malignant tumors. The main source of high α-SMA expression in benign ovarian tumors was blood vessels and myofibroblasts in the stroma. One previous study showed that the expression of α-SMA was positive in the blood vessels and stroma surrounding the tumor cells, but not in the cells of epithelial tumors. The explanation for this result was the difference in blood vessel maturity: blood vessels in benign tumors were more mature than those in malignant tumors, which result from angiogenesis.
The myofibroblasts in the stroma surrounding the tumor cells were also more abundant in benign ovarian tumors, contributing to the higher α-SMA expression. Alpha-SMA expression was considered a predictive factor for prognosis in ovarian tumors (Kobayashi et al., 1993). The present study revealed results similar to those explained in the previous study.

The extracellular matrix surrounding tumor cells undergoes changes along with tumor progression. Extensive change of the normal extracellular matrix into the tumor matrix consists of degradation of matrix components and/or new synthesis of matrix components that are not found in normal tissue (Ricciardelli and Rodgers, 2006). The production of extracellular matrix components is increased in the stroma surrounding the tumor cells. Tumor stroma contains abundant immune cells, endothelial cells, and fibroblasts. Due to the effects of mass suppression by tumor cells, fibroblasts in the stroma undergo differentiation and obtain a phenotype resembling myofibroblasts. Fibroblasts with this myofibroblast phenotype produce reactive stroma, which has characteristics different from the stroma of normal cells. Tumor stroma contains ED-A fibronectin, tenascin-C, and type I collagen (Shieh, 2011). Increased production of extracellular matrix components is associated with poor prognosis in ovarian carcinoma (Labiche et al., 2010). There is an increase in collagen type III intensity and a decrease in type I collagen in benign ovarian tumors. The production of collagen in benign ovarian tumors is the result of fibroblast activity. Although the synthesis of collagen increases in malignant ovarian tumors, total collagen decreases compared with benign tumors (Ricciardelli and Rodgers, 2006). This change occurs because in malignant tumors the extracellular matrix components in the stroma are degraded by matrix metalloproteinase enzymes (Kamat et al., 2006). Changes in the structure of collagen that induce the interaction between tumor cells and stroma mark the initiation of the EMT process (Motrescu et al., 2008). Degradation and redeposition of collagen in the stroma regulate the microenvironment around the tumor. Collagen is a physical barrier against invasion of tumor cells, but it is also known to induce infiltration, angiogenesis, invasion, and migration of tumor cells (Fang et al., 2014).

In this study, PCNA expression showed no statistically significant difference between benign and malignant tumors. Thus we still could not conclusively determine what role PCNA has in the cell proliferation and aggressivity of epithelial ovarian carcinoma. However, previous studies showed an association between poor prognosis of cancer and positive PCNA expression (Berny et al., 2004; Barboza et al., 2005; Han et al., 2015; Li et al., 2015).

Figure 5. Fraction area of stromal α-SMA and Sirius red for each pathological subtype of ovarian epithelial neoplasm. In Sirius red staining, serous and mucinous cystadenoma were positively higher than the malignant subtypes. The fraction area of stromal α-SMA in the malignant subtypes was also lower than in the benign subtypes.

In summary, our findings suggest that α-SMA might affect the biological tumor behavior of epithelial ovarian neoplasms. Furthermore, serous and clear cell carcinoma might have higher aggressivity compared with the other subtypes because they express α-SMA, which is one of the epithelial to mesenchymal transition markers.
Future studies should also focus on α-SMA as a prognostic marker and a target for therapy in ovarian cancer.
2017-08-15T03:43:03.691Z
2017-03-01T00:00:00.000
{ "year": 2017, "sha1": "af5b88a8c47ae9ff65e10b21ca6708b68ae62ab3", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "PubMedCentral", "pdf_hash": "af5b88a8c47ae9ff65e10b21ca6708b68ae62ab3", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Chemistry", "Medicine" ] }
221380437
pes2o/s2orc
v3-fos-license
Prevalence of Phosphatidylinositol-3-Kinase (PI3K) Pathway Alterations and Co-alteration of Other Molecular Markers in Breast Cancer Background: PI3K/AKT signaling pathway is activated in breast cancer and associated with cell survival. We explored the prevalence of PI3K pathway alterations and co-expression with other markers in breast cancer subtypes. Methods: Samples of non-matched primary and metastatic breast cancer submitted to a CLIA-certified genomics laboratory were molecularly profiled to identify pathogenic or presumed pathogenic mutations in the PIK3CA-AKT1-PTEN pathway using next generation sequencing. Cases with loss of PTEN by IHC were also included. The frequency of co-alterations was examined, including DNA damage response pathways and markers of response to immuno-oncology agents. Results: Of 4,895 tumors profiled, 3,558 (72.7%) had at least one alteration in the PIK3CA-AKT1-PTEN pathway: 1,472 (30.1%) harbored a PIK3CA mutation, 174 (3.6%) an AKT1 mutation, 2,682 (54.8%) had PTEN alterations (PTEN mutation in 7.0% and/or PTEN loss by IHC in 51.4% of cases), 81 (1.7%) harbored a PIK3R1 mutation, and 4 (0.08%) a PIK3R2 mutation. Most of the cohort consisted of metastatic sites (n = 2974, 60.8%), with PIK3CA mutation frequency increased in metastatic (32.1%) compared to primary sites (26.9%), p < 0.001. Other PIK3CA mutations were identified in 388 (7.9%) specimens, classified as “off-label,” as they were not included in the FDA-approved companion test for PIK3CA mutations. Notable co-alterations included increased PD-L1 expression and high tumor mutational burden in PIK3CA-AKT1-PTEN mutated cohorts. Novel concurrent mutations were identified including CDH1 mutations. Conclusions: Findings from this cohort support further exploration of the clinical benefit of PI3K inhibitors for “off-label” PIK3CA mutations and combination strategies with potential clinical benefit for patients with breast cancer. About 40% of HR-positive breast cancers harbor PIK3CA mutations. Alpelisib (PIQRAY, Novartis Pharmaceuticals Corporation), a PI3K inhibitor, received FDA approval in combination with fulvestrant for patients with hormone receptor (HR)-positive, human epidermal growth factor receptor 2 (HER2)-negative PIK3CA-mutated advanced breast cancer. Approval was based on SOLAR-1, a phase 3 randomized trial that showed a benefit of 5.3 months in progression-free survival with the addition of alpelisib in the cohort of patients with PIK3CA-mutated breast cancer (12). PIK3CA mutations that were considered for trial enrollment in SOLAR-1 included C420R, E542K, E545A, E545D (1635G > T only), E545G, E545K, Q546E, Q546R, H1047L, H1047R, and H1047Y. The FDA also approved the therascreen R PIK3CA RGQ PCR Kit, (QIAGEN Manchester, Ltd.), a companion test able to select patients who have these specific mutations. For the purpose of the current report, mutations detectable by the companion test were considered alpelisib "on-label" (12). In this study, we report the prevalence of PI3K pathway alterations and co-expression with other markers of clinical interest in different breast cancer subtypes, based on somatic molecular profiling. This approach can lead to the identification of novel drug combinations with potential synergy that could be further evaluated in the clinical trial setting. 
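For bookkeeping purposes, the on-/off-label assignment described above reduces to a lookup against the SOLAR-1 mutation list; a minimal sketch follows. The example variants fed to it are hypothetical, and the nucleotide-level caveat for E545D (on-label only for 1635G > T) cannot be resolved from a protein annotation alone.

# Minimal sketch: classifying PIK3CA protein changes as alpelisib "on-label"
# (detectable by the FDA-approved companion test, per SOLAR-1) vs. "off-label".
# The on-label list is taken from the text above; the input variants are hypothetical.
ON_LABEL = {
    "C420R", "E542K", "E545A", "E545D", "E545G", "E545K",
    "Q546E", "Q546R", "H1047L", "H1047R", "H1047Y",
}
# Caveat: E545D counts as on-label only for the 1635G>T nucleotide change,
# which cannot be checked from the protein annotation alone.

def classify_pik3ca(protein_change: str) -> str:
    """Return 'on-label' or 'off-label' for a PIK3CA protein change."""
    return "on-label" if protein_change.upper() in ON_LABEL else "off-label"

if __name__ == "__main__":
    for variant in ["H1047R", "N345K", "G1049R", "E545K"]:  # hypothetical input list
        print(variant, classify_pik3ca(variant))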
Study Design A retrospective review of molecular profiles was performed for 4,845 female and 50 male breast cancer cases submitted to Caris Life Sciences, a Clinical Laboratory Improvement Amendments (CLIA)/College of American Pathologists (CAP)/ISO15189/New York State Department of Health (NYSDOH)-certified clinical laboratory (Phoenix, AZ), between January 2015 and June 2019. Specimens were obtained from more than 500 centers, primarily within the United States, and patient demographics were deidentified (29)(30)(31). Next-Generation Sequencing (NGS) NGS was performed on genomic DNA isolated from FFPE tumor samples using the NextSeq platform (Illumina, Inc., San Diego, CA). Matched normal tissue was not sequenced. A custom-designed SureSelect XT assay was used to enrich 592 whole-gene targets (Agilent Technologies, Santa Clara, CA). All variants were detected with > 99% confidence based on allele frequency and amplicon coverage, with an average sequencing depth of coverage of >500 and an analytic sensitivity of 5%. Prior to molecular testing, tumor enrichment was achieved by harvesting targeted tissue using manual microdissection techniques. Genetic variants identified were interpreted by board-certified molecular geneticists and categorized as "pathogenic, " "presumed pathogenic, " "variant of unknown significance (VUS), " "presumed benign, " or "benign, " according to the American College of Medical Genetics and Genomics (ACMG) standards. When assessing mutation frequencies of individual genes, "pathogenic, " and "presumed pathogenic" were counted as mutations while "benign, " "presumed benign" variants, and "VUS" were excluded. Microsatellite Instability (MSI)/Mismatch Repair (MMR) Status Up to three different testing methods were used to determine MSI/MMR status of tumors profiled, including Fragment Analysis (FA), IHC, and NGS. FA was tested with Microsatellite Instability Analysis (Promega, Madison, WI), which included fluorescently labeled primers for co-amplification of seven markers including five mononucleotide repeat markers (BAT-25, BAT26, NR-21, NR24, and MONO-27) and two pentanucleotide repeat markers (Penta C and D). The mononucleotide markers were used for MSI determination, while the pentanucleotide markers were used to detect either sample mix-ups or contamination. A tumor sample was considered MSI if two or more mononucleotide repeats were abnormal; if one mononucleotide repeat was abnormal or repeats were identical between the tumor and adjacent normal tissue, then the tumor sample was considered microsatellite stable (MSS). MMR protein expression was tested by IHC (using the following antibody clones: MLH1, M1 antibody; MSH2, G2191129 antibody; MSH6, 44 antibody, and PMS2, EPR3947 antibody [Ventana Medical Systems, Inc., Tucson, AZ, USA]). The complete absence of protein expression of any of the four MMR proteins tested (0 intensity in 100% of cells) was considered MMR deficient (dMMR). NGS method for measuring MSI (MSI-NGS) used over 7,000 target microsatellite loci and compared to the reference genome hg19 from the University of California, Santa Cruz (UCSC) Genome Browser database. The number of microsatellite loci that were altered by somatic insertion or deletion was counted for each sample. Only insertions or deletions that increased or decreased the number of repeats were considered. Genomic variants in the microsatellite loci were detected using the same depth and frequency criteria as used for mutation detection. 
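A minimal sketch of the MSI-NGS call just described is given below; the per-locus flags are hypothetical, and the threshold is passed in as a parameter (the validated cutoff used in this report is stated just below).

# Minimal sketch of an MSI call from NGS microsatellite loci, assuming each locus
# has already been flagged as altered (somatic insertion/deletion changing the
# repeat count) or not. The locus data and default threshold are illustrative.
from typing import Dict

def call_msi(altered_by_locus: Dict[str, bool], threshold: int = 46) -> str:
    """Classify a sample as 'MSI-High' if the number of altered loci meets the threshold."""
    n_altered = sum(altered_by_locus.values())
    return "MSI-High" if n_altered >= threshold else "MSS"

if __name__ == "__main__":
    # Hypothetical sample: 7,000 loci of which 60 are altered
    sample = {f"locus_{i}": (i < 60) for i in range(7000)}
    print(call_msi(sample))  # -> MSI-High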
MSI-NGS results were compared with results from over 2,000 matching clinical cases analyzed with traditional PCRbased methods. The threshold to determine MSI by NGS was determined to be 46 or more loci with insertions or deletions to generate a sensitivity of >95% and specificity of >99%. The three platforms generated highly concordant results as previously reported, and in the rare cases of discordant results, the MSI or MMR status of the tumor was determined in the order of FA, IHC, and NGS (33). Tumor Mutational Burden (TMB) TMB was measured (592 genes and 1.4 megabases [MB] sequenced per tumor) by counting all non-synonymous missense mutations found per tumor that had not been previously described as germline alterations. The threshold to define TMBhigh was ≥10 mutations/MB (34). Statistical Analysis The proportion of pathogenic or presumed pathogenic coalterations (mutation and/or expression) identified from all tumor specimens tested for each specific mutation were calculated and compared between mutated (MT) and wild type (WT) breast tumors, defined based on the presence of PIK3CA-AKT-PTEN alterations, and among the breast cancer subtypes. Sequencing tests with indeterminate results due to low depth of coverage were excluded from the total number for percentage calculation. The total frequency of PIK3CA-AKT-PTEN-MT cases in the complete cohort and per subtype was calculated by dividing the number of tumors with at least one alteration in PIK3CA, AKT1, or PTEN by the total number of tumors tested. Statistical analysis was performed using Chi-square tests. P < 0.05 were considered statistically significant. The log2 odds ratio was calculated for biomarker pairs to assess the tendency of mutual exclusivity (value ≤ 0) or co-occurrence (value > 0), with p-values derived from a one-sided Fisher's exact test and qvalues derived from a Benjamini-Hochberg correction procedure to decrease the false discovery rate. PIK3CA-AKT1-PTEN Alteration The co-existence of PIK3CA-AKT1-PTEN alterations with other alterations in pathways of clinical relevance was explored, including genes involved in homologous recombination (HR) and DNA damage sensors, chromatin remodeling, RAS-RAF-MEK-ERK pathway, and potential predictors of benefit to immunotherapy. The frequency of selected co-mutations with PIK3CA-AKT1-PTEN alterations is illustrated in Table 3. There was overall low co-alteration frequency for the HR deficiency (HRD)-related genes across all subtypes. Table 3). In the RAS signaling pathway, there was an increased HRAS, KRAS, and NRAS co-mutation frequency in the MT cohort across all subtypes, with no HRAS or NRAS mutations identified in the HER2-positive subtypes, and no KRAS mutations identified in the HR-negative HER2-positive subtype. BRAF co-mutation frequency was increased in PIK3CA-AKT1-PTEN-MT cohort across all subtypes (p ns); however, BRAF co-mutation frequency was very low for both MT and WT cohorts. Other statistically significant increased co-alterations between PIK3CA-AKT1-PTEN -WT and PIK3CA-AKT1-PTEN-MT cohorts were seen with TP53 (53.1 vs. 60.9%), CDH1 (6.1 vs. 10.3%), NF1 (2.1 vs. 6.2%), and RB1 (2.6 vs. 5.5%). The frequency of CDH1 mutations in PIK3CA-AKT1-PTEN-MT was higher in lobular than in non-lobular carcinoma (73.5 vs. 6.8%), although frequency of CDH1 mutations remained positively associated with PIK3CA-AKT-PTEN-MT after lobular cases were excluded from the analysis. 
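As a sketch of the pairwise co-occurrence statistic described under Statistical Analysis above (log2 odds ratio, one-sided Fisher's exact test, and Benjamini-Hochberg q-values), the following minimal example uses hypothetical 2×2 counts and an illustrative 0.5 continuity correction; it is not the production analysis code.

# Minimal sketch: log2 odds ratio from a 2x2 contingency table, a one-sided
# Fisher's exact p-value for co-occurrence, and Benjamini-Hochberg q-values
# across biomarker pairs. Counts and the 0.5 continuity correction are illustrative.
import numpy as np
from scipy.stats import fisher_exact
from statsmodels.stats.multitest import multipletests

def co_occurrence(both, a_only, b_only, neither):
    """Return (log2 odds ratio, one-sided Fisher p-value) for one biomarker pair."""
    # 0.5 is added to each cell for the odds ratio only, to avoid division by zero
    log2_or = np.log2(((both + 0.5) * (neither + 0.5)) / ((a_only + 0.5) * (b_only + 0.5)))
    _, p = fisher_exact([[both, a_only], [b_only, neither]], alternative="greater")
    return log2_or, p

# Hypothetical pair counts: (both altered, A only, B only, neither)
pairs = {"PIK3CA~CDH1": (120, 900, 80, 2900), "PIK3CA~TP53": (700, 320, 1500, 1480)}
stats = {name: co_occurrence(*counts) for name, counts in pairs.items()}
q_values = multipletests([p for _, p in stats.values()], method="fdr_bh")[1]
for (name, (log2_or, p)), q in zip(stats.items(), q_values):
    print(f"{name}: log2 OR = {log2_or:.2f}, p = {p:.3g}, q = {q:.3g}")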
We evaluated the co-occurrence of possible driver events and events associated with mutual exclusivity using an Oncoprint plot (Figure 2). Genomic features were selected based on a mutual exclusivity analysis that identified TMB, CDH1, NF1, PD-L1, CHEK2, and BRCA1/2 as having significant tendencies of co-occurrence. Alpelisib On/Off-Label PIK3CA Mutations in Breast Cancer In this cohort, PIK3CA pathogenic/presumed pathogenic mutations (n = 1,616) were classified as on-label (if included in the SOLAR-1 trial) or off-label. There were 1,204 (74.5%) PIK3CA on-label mutations, and 412 (25.5%) PIK3CA off-label mutations. Of all PIK3CA mutations identified, 57/1,616 (3.5%) were off-label mutations (56 pathogenic/presumed pathogenic, 1 VUS) at amino acid positions that correspond to those of on-label mutations. Some of these off-label mutations have been described as activating in preclinical studies, including N345K, Q546K, and G1049R. In our study, N345K (n = 74), Q546K (n = 15), and G1049R (n = 19) comprised 108/412 (∼26%) of the off-label pathogenic/presumed pathogenic mutations (35,36). Other novel off-label pathogenic/presumed pathogenic mutations have not been defined as activating vs. deleterious. The prevalence of alpelisib on/off-label PIK3CA mutations was similar across breast cancer subtypes, with the lowest frequency seen in TNBC. More than 60% of alpelisib on/off-label mutations occurred in the HR-positive HER2-negative subtype (Figure 3). The alpelisib on-label (n = 1,040) and off-label (n = 264) cohorts included cases with exclusively on- or off-label mutations, respectively. Cases with PIK3CA VUS mutations (n = 114) and cases with both alpelisib on- and off-label mutations in the same tumor (n = 115) were excluded from the analysis. Few co-alterations were significantly different between the alpelisib on-label and off-label PIK3CA-mutant cohorts, as illustrated in Table S4. In all breast cancer subtypes, there was an increased co-mutation frequency in the alpelisib off-label cohort compared to the on-label cohort in CHEK2 and ERBB2, and an increased co-mutation frequency in the alpelisib on- and off-label cohorts compared to PIK3CA-WT in CHEK2, HRAS, TP53, CDH1, and NF1. 
As previously reported, PIK3CA was commonly mutated in HR-positive subtypes (37.6%), in a higher percentage of cases than in SOLAR-1 (29%) which can be explained by a broader number of PIK3CA alterations included in our analysis (12,41). PIK3CA was also the most frequent alteration in HER2positive breast cancer. PTEN alterations mostly occurred in HER2-negative subtypes and were present in more than half of tumors tested (54.8%), by mutation and/or PTEN loss by IHC. AKT1 mutations were rare, with none identified in HER2positive tumors. This type of information may be relevant for clinical trial design. Recent phase II and III trials using AKT inhibitors have not used a specific biomarker selected population for trial participation, however, trials conducted with alpelisib enrolled a biomarker selected population (38,42). The use of immunotherapy in combination with chemotherapy has been established as the new standard of care in advanced PD-L1 positive TNBC with improved outcomes seen in IMpassion 130 trial (43). In our cohort, the most notable co-alteration identified was a significant increase in PD-L1 expression in tumor cells and high TMB in PIK3CA-AKT1-PTEN mutated cohorts, especially in HR-positive subtypes. This finding could form a basis for further development of drug combinations that affect the PIK3CA-AKT1-PTEN pathway in combination with agents that target the immune system. Such studies are underway, and include for example, a Phase Ib trial evaluating the safety and efficacy for ipatasertib, an AKT inhibitor, combined with atezolizumab and paclitaxel or nab-paclitaxel in patients with advanced TNBC, which showed an objective response rate of 73% for the combination in 26 patients at a median follow up of 6.1 months, regardless of PD-L1 or PIK3CA-AKT1-PTEN status (28). A phase III trial is currently underway for patients with advanced TNBC, evaluating the use of paclitaxel with ipatasertib vs. placebo, and atezolizumab vs. placebo for non-PD-L1 positive patients, and paclitaxel with atezolizumab and ipatasertib vs. placebo for PD-L1 positive patients (44). Of interest, most cases of CDH1 mutations also demonstrated concurrent mutations in the PIK3CA-AKT1-PTEN pathway and/or high TMB, regardless of histology. Of the 443 total CDH1-MT cases included in the cohort, none had a ROS1 mutation or fusion, indicating that CDH1 and ROS1 result in synthetic lethality, as has been previously described in breast cancer (45). In vivo, inhibition of ROS1 has been shown to produce significant antitumor effects in different models of E-cadherin-defective breast cancer. Therefore, ROS1 inhibitors may be of benefit for patients with CDH1 mutated breast cancers, in combination with PI3K or AKT inhibitors, with or without immunotherapy, and warrant further investigation in early clinical trials. In this report, the most common hotspot mutations in PIK3CA were in the kinase domain and helical domain, considered on-label mutations. However, off-label activating PIK3CA mutations were also seen, and we identified pathogenic and presumed pathogenic mutations not previously defined, with more than half (293/412, 71.1%) of cases occurring in HR-positive, HER2-negative subtype. At this point, not all novel off-label mutations have data regarding their functional consequences (i.e., activating vs. deleterious), and only a few off-label mutations have been previously reported as deleterious in preclinical studies (35,36). 
Therefore, it remains difficult to interpret the functional consequences of new genetic mutations, and the efficacy of PI3K inhibitors in tumors with alpelisib off-label mutations remains unknown. Notable co-alterations seen in off-label mutations include CHEK2 mutations and MSI-High/TMB-High, which may lead to novel targets and drug combinations, and wider use of NGS molecular profiling, given that the current FDA-approved companion test includes only on-label mutations tested in SOLAR-1 (12). The major limitations of this study are the lack of matched clinical data and outcomes, and therefore the clinical implications of our findings remain to be determined. In addition, the use of molecular profiling in this dataset was determined by clinicians and may have been influenced by patient and tumor characteristics. Therefore, the actual incidence of alterations in the PIK3CA-AKT1-PTEN pathway here described may not fully represent the general population. Lastly, given the lack of known activating potential of most "off-label" PIK3CA mutations, the clinical implications of our findings remain to be determined. In conclusion, we showed that the prevalence of alterations in the PIK3CA-AKT1-PTEN pathway is elevated across all tumor subtypes and that a considerable number of tumors harbor offlabel mutations. This is a very large and comprehensive dataset that characterized the PIK3CA-AKT1-PTEN pathway beyond PIK3CA mutations and included both primary and metastatic breast cancer cases. Although our dataset lacks outcome data, we believe that our results are hypothesis-generating and it is worth exploring the clinical implications of the "off-label" PIK3CA mutations. Finally, our data identified potential targets of interest for combination strategies and support the continuous investigation of the use of agents targeting the PIK3CA-AKT1-PTEN pathway in combination with immunotherapy. DATA AVAILABILITY STATEMENT The datasets presented in this article are not readily available because the raw data is protected proprietary information. Requests to access the datasets should be directed to aelliott@carisls.com of Caris Life Sciences. ETHICS STATEMENT Ethical review and approval was not required for the study on human participants in accordance with the local legislation and institutional requirements. Written informed consent from the patients was not required to participate in this study in accordance with the national legislation and the institutional requirements. AUTHOR CONTRIBUTIONS KK, AT, AE, JX, ZG, AH, CI, PP, LS, MS, WK, SS, and FL contributed to the design, implementation of the research, to the analysis of the results, and to the writing of the manuscript. All authors contributed to the article and approved the submitted version.
2020-09-01T13:03:21.047Z
2020-08-31T00:00:00.000
{ "year": 2020, "sha1": "979a6123d6412f88174efd1888d5b96364198071", "oa_license": "CCBY", "oa_url": "https://www.frontiersin.org/articles/10.3389/fonc.2020.01475/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "979a6123d6412f88174efd1888d5b96364198071", "s2fieldsofstudy": [ "Biology", "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
1659292
pes2o/s2orc
v3-fos-license
Internal tides and energy fluxes over Great Meteor Seamount Internal-tide energy fluxes are determined halfway over the southern slope of Great Meteor Seamount (Canary Basin), using data from combined CTD/LADCP yoyoing, covering the whole water column. The strongest signal is semi-diurnal and is concentrated in the upper few hundred meters of the water column. An indeterminacy in energy flux profiles is discussed; it is argued that a commonly applied condition used to determine these profiles is in fact invalid over sloping bottoms. However, the vertically integrated flux can be established unambiguously; the observed results are compared with the outcome of a numerical internal-tide generation model. For the semi-diurnal internal tide, the vertically integrated flux found in the model corresponds well to the observed one. The observed diurnal signal appears to be largely of non-tidal origin. Introduction Recent estimates, based on satellite altimetry and modelling, indicate that barotropic tides lose about one third of their energy in the deep ocean (Egbert and Ray, 2003); this loss occurs predominantly over rough topography. From these findings, supplemented by in-situ observations, one can infer that the principal process responsible for this loss is internal-tide generation, a process in which energy is transferred from barotropic to baroclinic tides. Observations at the Hawaiian Ridge support this idea; internal-tide energy fluxes of the order of 10 kW m −1 were found at various locations (Rainville and Pinkel, 2006; Nash et al., 2006), and the total loss of barotropic tidal energy, for all the tidal constituents together, in the near-Hawaiian area is estimated at nearly 25 GW (Zaron and Egbert, 2006). Of this amount, an estimated 15% is lost to turbulence in the vicinity of the ridge, presumably by cascading of internal-tide energy to smaller scales (Klymak et al., 2006). The basic definition of internal-tide energy flux is E f = ⟨u ′ p ′ ⟩, where brackets denote the time-average over a tidal period; u ′ and p ′ are the baroclinic velocity component (in the direction of the energy flux) and baroclinic pressure, respectively. Since baroclinic pressure cannot be measured directly, one has to resort to indirect methods, using for example isopycnal excursions. From this, baroclinic pressure can be derived, save for a constant of integration. Attempts have been made to determine this constant. Kunze et al. (2002) proposed a "baroclinicity condition for pressure" to the effect that its vertical integral is assumed to be zero; this indeed fixes the constant. Although they added a cautionary remark ("this condition may not hold in regions of direct forcing"), they did not restrict its application to regions away from topography, nor did later authors (Nash et al., 2005, 2006). So, it has been indiscriminately applied over large canyons and ridges, even though its validity has not been demonstrated. We show here that the condition is in fact invalid over topography because it is incompatible with the other baroclinicity condition, that for horizontal velocity (see Sect. 4.1). We argue that it is fundamentally impossible to find the constant from single-profile measurements, implying an unresolvable indeterminacy in the energy flux profiles. However, the constant is immaterial to the vertically integrated energy flux, so this quantity can be determined unambiguously. 
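In outline (a sketch using auxiliary notation not used elsewhere in this paper: p̃ ′ for any particular antiderivative of b ′ , and C for the undetermined constant), writing p ′ = p̃ ′ + C gives ∫_{−h}^{0} u ′ p ′ dz = ∫_{−h}^{0} u ′ p̃ ′ dz + C ∫_{−h}^{0} u ′ dz = ∫_{−h}^{0} u ′ p̃ ′ dz, since the vertical integral of u ′ vanishes by construction (the baroclinicity condition for velocity, Sect. 4.1); time-averaging then yields the same vertically integrated flux ⟨u ′ p ′ ⟩ for any choice of C.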
The main purpose of this paper is to present observations over Great Meteor Seamount and to derive the vertically integrated internal-tide energy fluxes. Great Meteor Seamount lies in the western part of the Canary Basin, halfway between the Canary Islands and the Mid-Atlantic Ridge. It is a guyot, named after the research vessel "Meteor" with which it was discovered in 1938 (Dietrich, 1970). In recent years, the currents, tidal or otherwise, and stratification around Great Meteor Seamount have been studied; van Haren (2005a) found a time-variability of the bottom boundary layer over this seamount. In the course of minutes, a steep front or bore may pass, whose overturning diminishes the local stratification profoundly; during the remainder of the tidal period the stratification is gradually reconstituted. An overview of the hydrography around Great Meteor Seamount was given by Mohn and Beckmann (2002), based on observational and modelling work. Besides a near southwestward flow, being part of the wind-driven subtropical gyre, they found semi-diurnal and diurnal barotropic and baroclinic tides (we discuss some of their specifics below). As Great Meteor Seamount covers, approximately, the latitudinal range 29.5-30.5 • N, diurnal components K 1 and O 1 are locally near-inertial. The measurements presented here were made by simultaneous CTD and LADCP (Lowered Acoustic Doppler Current Profiler) yoyoing over the slope of Great Meteor Seamount, during 24 1/2 h. The data are presented in Sect. 2. A harmonic analysis is applied to extract the semi-diurnal and diurnal components (Sect. 3). From this we derive the vertically integrated energy fluxes of the semi-diurnal and diurnal internal tides (Sect. 4.2). A comparison with a numerical internal-tide generation model is made in Sect. 5. Measurements The area of investigation is Great Meteor Seamount, centered around 30 • N, 28.5 • W. Combined CTD/LADCP yoyoing was carried out approximately halfway up its southeastern slope, at the spot marked in Fig. 1, where the water depth is 1980 m. The measurements started at 08:45 UTC on 7 June 2006, and continued until 09:15 UTC the next day (van Haren, 2006); in the figures shown below, we refer to the start as t=0. In this timespan of 24.5 h, 20 casts were made. The instrumental package was lowered and hoisted between 5 m from the surface and the bottom at a speed of about 1 m s −1 . The package consisted of a Sea-Bird 911plus CTD sampling at 24 Hz. For the present purposes, the CTD data were vertically subsampled at intervals of 0.5 dbar. On the same frame, two 300 kHz RDI ADCPs were mounted, one upward looking, the other downward; together they form the LADCP. The ADCPs sampled currents at depth intervals between 8-20 m from their head at an accuracy of about 0.05 m s −1 . 
Temperature and salinity In the analysis of the temperature and salinity data, up-and down casts of the CTD were treated separately, making the total number of vertical profiles twice that of the number of casts.The data were interpolated to a regular time-grid with steps of half an hour, and vertically interpolated to a grid with z=0.5 m.The time-averaged signal is shown in Figs.2a, b.A conspicuous feature is the local salinity maximum at about 1100 m depth (accompanied by a less noticeable increase in temperature), which is due to the outflow of Mediterranean water. Ocean Sci., 3, 441-449, 2007 www.ocean-sci.net/3/441/2007/The buoyancy frequency N can be determined using its basic definition Here ρ is the in-situ density and c s the speed of sound; these quantities were calculated as functions of pressure, temperature and salinity using the equation of state for the Gibbs potential (Feistel and Hagen, 1995).The derivative dρ/dp was approximated by discretization with steps p of 0.5 dbar. The time-averaged profile of N is shown in Fig. 2c.In a few instances, N 2 is slightly negative; they are here rendered by N=0. Having obtained the in-situ density ρ from the equation of state, we can calculate its time-averaged value ρ , and hence buoyancy b defined by where ρ * is the mean of ρ over the vertical.So, b represents the departure of density from its time-average, scaled by a factor −g/ρ * .The field b, as a function of vertical and time, is shown in Fig. 3a.The predominantly semi-diurnal character of the signal is obvious, especially in the upper part of the water column.Vertical isopycnal displacements ζ can be derived from b via ζ =−b/ N 2 , see Fig. 3b.Peak amplitudes as large as 75 m are reached at some points (for clearer representation, the amplitude-range is however restricted to 50 m in Fig. 3b).The stripiness of the signal through the vertical is due to small-scale variations in N , cf.Fig. 2c.In the deeper parts of the water column, a weak quarter-diurnal signal is visible. Currents In the LADCP measurements the up-and downcasts were combined in the postprocessing to correct for systematic errors; hence the records provide 20 vertical profiles from the casts.The original set contains data every 20 m in the vertical, which we interpolated to a grid of z=0.5 m for consistency with the CTD data and later handling.The horizontal velocity was decomposed into a cross-slope component u, taken along the dashed diagonal in Fig. 1 (positive in the northeastern direction), and, perpendicularly to it, an along-slope component v (positive in the northwestern direction).Figure 4 shows the full signals u and v; the predominantly semi-diurnal character is clearly visible.A shift to offslope currents is visible in the upper 400 m in Fig. 4a (blue dominates), indicative of a southwestern background current, which fits in with the overall pattern of the eastern branch of the subtropical gyre (Mohn and Beckmann, 2002).Also, one finds in Fig. 4b that northwestern currents slightly dominate around 300 m (red dominates); these features, indicative of time-mean currents, are further illustrated in Fig. 5. 
Harmonic analysis of observed records The time-span of the data presented in the previous section (24.5 h) is obviously too short to resolve distinct semi-diurnal constituents such as the lunar component M 2 and the solar S 2 , let alone various diurnal constituents such as K 1 , O 1 , and the inertial period. In the following analysis, we therefore lump nearby constituents together, and distinguish only the categories "diurnal", "semi-diurnal", "quarter-diurnal", and a "time-mean". Let the original field q_or (standing for current components, buoyancy etc.) be approximated by the superposition q_or ≈ ⟨q_or⟩ + Σ_n a_n sin(σ_n t − φ_n), (2) where σ_n are the diurnal (n=1), semi-diurnal (n=2), and quarter-diurnal (σ_3 = 2×σ_2, M 4 ) frequencies, all in rad s −1 . The amplitudes a_n and phases φ_n are given by a_n = 2 [⟨q_or sin σ_n t⟩² + ⟨q_or cos σ_n t⟩²]^{1/2} ; tan φ_n = −⟨q_or cos σ_n t⟩ / ⟨q_or sin σ_n t⟩ , where ⟨·⟩ stands, as before, for time-averaging over the whole record. In this procedure, we treat different constituents as if they were orthogonal, mimicking a Fourier decomposition. The validity of this procedure can be checked a posteriori by comparing the original signal q_or with the sum (2); we carried out such checks and found that the two were always very similar (an example is shown in Fig. 6). We present the results of this decomposition for the cross-slope and along-slope currents. The time-mean flow is shown in Fig. 5; it confirms the presence of a flow that is predominantly directed off the seamount in the upper layer, as noted above already. We split the time-dependent constituents (i.e., diurnal, semi-diurnal and quarter-diurnal) of the velocity fields into two parts: a depth-averaged, or barotropic part, and the remainder, or baroclinic part. The barotropic cross-slope flow is shown in Fig. 6. Amplitudes are: 0.02 (semi-diurnal), 0.0075 (diurnal), and 0.0024 (quarter-diurnal), all in m s −1 . The semi-diurnal constituent is 2.7 times stronger than the diurnal one. This factor falls within the range of values observed by Mohn and Beckmann (2002), who found the following typical values for the tidal/inertial constituents (all in m s −1 ): M 2 , 0.14; S 2 , 0.04; K 1 /f, 0.03; O 1 , 0.02. The diurnal components together thus are 2 to 3.6 times smaller than the semi-diurnal ones, depending on the moment within the spring-neap cycle. Our measurements were made approximately half-way between first-quarter and full moon, so that the ratio is in agreement with that of Mohn and Beckmann (2002). The magnitudes of the currents as such are much larger in Mohn and Beckmann (2002), due to the fact that their measurements were made over the top of the seamount, where water depth is smaller (by about a factor of five). They also found that the diurnal components are strongly reduced off the seamount; in the neighbouring open ocean, they form a much smaller fraction (order one-tenth) of the total tidal signal. The results for the baroclinic cross-slope component, u ′ , are shown in Fig. 
7a, d.The semi-diurnal constituent (red line) has its largest amplitudes in the upper 500 m of the water column, and is generally stronger than the diurnal constituent, except near 300 m depth, where the latter peaks (blue).The semi-diurnal phase shows a clear upward increase between 300-600 m depth, indicating upward phase propagation and hence downward energy propagation.The phases are here represented in "unwrapped" angles; as a consequence, they cover intervals larger than the strictly necessary length of 2π.(This is done for clarity of presentation; otherwise the diurnal constituents, in particular, would give rise to highly erratic plots, due to the jumps from 0 to 2π, and vice versa, which of course have no physical significance in themselves.) The remaining panels of Fig. 7 show amplitudes and phases of the along-slope baroclinic current velocity v ′ , and of buoyancy b.(The latter represents the total, i.e. barotropic Ocean Sci., 3,[441][442][443][444][445][446][447][448][449]2007 www.ocean-sci.net/3/441/2007/plus baroclinic signal; we determine its baroclinic part in Sect.4.2.)Overall, the phase of the semi-diurnal constituent of v ′ lags that of u ′ by values of around π/2 (typically between 1.3 and 1.8 in the upper 600 m), consistent with the idea of along-slope uniformity (which we assume in Sect.5), which implies v ′ t =−f u ′ and hence gives rise to a phase shift of π/2.The diurnal across and along-slope components both show a distinct peak at around 300 m depth, with nearly identical amplitudes, consistent with circular polarization, as may be expected at this near-inertial frequency.The numerical experiments, discussed in Sect.5, suggest that the peak is not of tidal origin. The harmonic constituents, taken together, give a reasonably faithful description of the original signal.The superposition of the semi-diurnal, diurnal, and (the overall weak) quarter-diurnal constituents deviates on average (in time and vertically) from the original signal by 0.012 m s −1 for the cross-slope baroclinic component (rms-value: 0.043 m s −1 ), by 0.013 m s −1 for the along-slope baroclinic component (rms-value: 0.038 m s −1 ), and by 6.4×10 −5 m s −2 for buoyancy (rms-value: 2.0×10 −4 m s −2 ). Energy fluxes The basic definition of internal-tide energy flux reads where the baroclinic velocity u ′ is calculated from observed profiles by subtracting the depth-averaged part (which is presumed to represent the barotropic signal).The principal difficulty lies in finding the baroclinic pressure, p ′ ; we discuss this problem first. Indeterminacy in energy-flux profiles We start with the linear hydrostatic momentum equations where p is pressure (now divided by a constant reference value of density, ρ * ), and b buoyancy, defined in Eq. ( 1).These quantities represent the barotropic plus baroclinic fields; in Eq. ( 6), the static fields have been left out.We note that because p is here defined as pressure divided by ρ * , the definition of energy-flux (Eq. 3) changes into E f =ρ * u ′ p ′ . To calculate the internal-tide energy flux, we need to distill first their baroclinic parts (denoted by primes).For the horizontal velocity components, we do so by subtracting the depth-average values: Here the surface is placed at z=0, and the bottom at z=−h(x, y); we do not assume uniform depth.By construction, the vertical integrals of u ′ and v ′ are zero, a property we may refer to as the "baroclinicity condition for velocity". 
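A minimal numerical sketch of these two operations, the sinusoidal fit behind Eq. (2) and the barotropic/baroclinic split by depth-averaging, is given below; the grids, the test signal, and the use of trapezoidal integration are illustrative assumptions rather than the processing actually applied to the data.

# Minimal sketch (hypothetical data): harmonic amplitude/phase per constituent,
# and the split of a velocity field u(z, t) into a depth-averaged (barotropic)
# part and the remainder (baroclinic part). Trapezoidal depth-averaging is an
# assumption of this sketch.
import numpy as np

def harmonic_fit(q, t, sigma):
    """Amplitude and phase of one constituent, a_n = 2[<q sin>^2 + <q cos>^2]^(1/2)."""
    qs = np.mean(q * np.sin(sigma * t), axis=-1)   # <q sin(sigma t)>
    qc = np.mean(q * np.cos(sigma * t), axis=-1)   # <q cos(sigma t)>
    amplitude = 2.0 * np.sqrt(qs**2 + qc**2)
    phase = np.arctan2(-qc, qs)                    # tan(phi) = -<q cos>/<q sin>
    return amplitude, phase

def split_barotropic_baroclinic(u, z):
    """Depth-average u(z, t) to get the barotropic part; the remainder is baroclinic."""
    h = z.max() - z.min()
    u_bt = np.trapz(u, z, axis=0) / h              # barotropic (depth-mean) time series
    u_bc = u - u_bt[np.newaxis, :]                 # baroclinic anomaly, depth integral ~0
    return u_bt, u_bc

if __name__ == "__main__":
    sigma_m2 = 2 * np.pi / (12.42 * 3600.0)        # semi-diurnal (M2) frequency, rad/s
    t = np.arange(0, 24.5 * 3600.0, 1800.0)        # 30-min steps over 24.5 h (hypothetical)
    z = np.linspace(-1980.0, 0.0, 100)             # hypothetical depth grid
    u = 0.02 * np.sin(sigma_m2 * t) \
        + 0.05 * np.cos(np.pi * z[:, None] / 1980.0) * np.sin(sigma_m2 * t - 1.0)
    u_bt, u_bc = split_barotropic_baroclinic(u, z)
    amp, phi = harmonic_fit(u_bt, t, sigma_m2)
    print(f"barotropic semi-diurnal amplitude ~ {amp:.3f} m/s, phase {phi:.2f} rad")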
The other baroclinic quantity we need is pressure p ′ , which is related to b ′ via the hydrostatic balance, p ′ z = b ′ . For the moment we shall suppose we have been able to determine b ′ (we return to this point in Sect. 4.2), and focus henceforth on deriving p ′ from it. The hydrostatic balance implies p ′ (z) = p ′ (z 0 ) + ∫_{z 0 }^{z} b ′ dz ′ , (8) where the first term on the right is a "constant" of integration; the value of z 0 is arbitrary, but natural choices are z 0 =0 (surface) or z 0 =−h(x, y) (bottom). Garcia Lafuente et al. (1999) took the former, but neglected, without any justification, the constant of integration. This amounts to assuming that baroclinic pressure vanishes at the surface, an assumption rightly criticized by Kunze et al. (2002). (Fig. 8. Energy-flux profiles for the semi-diurnal internal tide, based on different ways of evaluating baroclinic pressure: the solid line is based on the assumption that the vertical integral of baroclinic pressure is zero ("Kunze condition", ∫ dz p ′ = 0); the dotted line assumes baroclinic pressure to be zero at the surface ("Garcia Lafuente condition", p ′ surface = 0); the dash-dotted line assumes it to be zero at the bottom (p ′ bottom = 0).) (We note that baroclinic surface pressure does not even vanish under the rigid-lid approximation - assuming it does is an elementary misconception that occasionally surfaces in the literature.) The central problem - to determine the constant of integration - thus remains. To solve this, Kunze et al. (2002) proposed a "baroclinicity condition for pressure", meaning that the vertically integrated baroclinic pressure must be zero; this would indeed fix the constant. However, this condition is incompatible with the other baroclinicity condition, that for velocity - except in the absence of topography (i.e. if the bottom is purely horizontal). This point seems to have passed unnoticed in the literature, but it is easy to prove. To begin with, it is clear from Eqs. (4) and (5), applied to the baroclinic fields, that the baroclinicity condition for velocity implies ∫_{−h}^{0} dz p ′ x = ∫_{−h}^{0} dz p ′ y = 0. (9) Thus, the vertically integrated horizontal derivatives of baroclinic pressure vanish. Moreover, we have the mathematical identity ∂/∂x ∫_{−h}^{0} dz p ′ = p ′ | z=−h ∂h/∂x + ∫_{−h}^{0} dz p ′ x (10) (and an analogous expression in terms of the y derivative). The second term on the right is zero because of Eq. (9). The first term on the right, however, contains the baroclinic pressure at the bottom, which in general is not zero. It thus follows that, in the presence of topography, the vertically integrated baroclinic pressure cannot be assumed to be zero. In fact, even if the baroclinic bottom pressure were assumed to be zero, it may still be inconsistent to require the vertically integrated pressure to be zero, because this requirement may yield a profile in which the value at the bottom is nonzero, contradicting the original assumption. (The profile in Fig. 8, solid line, is a case in point.) The failure of the "baroclinicity condition for pressure", which was meant to fix the constant of integration in Eq. (8), means that we are left with an indeterminacy in the energy-flux profiles. Note that energy-flux profiles in the y direction too suffer from an indeterminacy even if ∂h/∂y=0. The presence of a slope in x (∂h/∂x ≠ 0) is sufficient to invalidate the "baroclinicity condition for pressure"; and the resulting failure to fix the constant of integration automatically has a bearing on the y direction as well; after all, the same (undetermined) constant of integration is at stake in v ′ p ′ . 
In the absence of any topography, on the other hand, we can write the baroclinic vertical velocity as a sum of modes W n (z) exp i(k n x+l n y−σ t) (summing over mode number n), in which case the baroclinic pressure and horizontal velocities are all proportional to its vertical derivative dW n /dz; it then follows immediately that the vertical integrals of these quantities must be zero (since W vanishes at the surface and bottom). The underlying cause why the presence of a slope spoils the "baroclinicity condition for pressure" proposed by Kunze et al. (2002), lies in the non-separable nature of the problem.In the absence of topography, separation of horizontal and vertical coordinates applies, and one can deal with the vertical structure independently of the horizontal position.In the presence of topography, the two become intertwined.Indeed, it is clear from Eq. ( 8) that one could find the "constant" of integration, which is due to vertical integration, from information of the horizontal dependence of velocity.(Specifically, taking z 0 = 0, one could find the constant by horizontally integrating Eqs. ( 4) and ( 5), with respect to x and y, respectively.)But from measurements at a single station, such information is simply not available. As the problem seems to be fundamentally unsolvable, this leaves us no other choice than a pragmatic approach.As a matter of fact, in its source region, i.e. over the slope, the internal tide is usually concentrated in a beam.Suppose, for example, that the beam is located in the upper layer of the water column, and that baroclinic currents are very weak in the lower layer; then it makes sense to assume that all baroclinic fields, including pressure, are weak there.One may then simply assume the baroclinic pressure at the bottom to be zero. To see how the choice of the level of zero pressure affects the energy-flux profiles, we consider three cases, all for the semi-diurnal internal tide (Fig. 8).(At this stage we ignore the barotropic contribution in b, and simply assume the observed b to be entirely baroclinic, i.e., b ′ =b; we return to this point below.)The solid line is based on the assumption of Ocean Sci., 3,[441][442][443][444][445][446][447][448][449]2007 www.ocean-sci.net/3/441/2007/zero-integrated pressure as proposed by Kunze et al. (2002). Assuming baroclinic pressure to be zero at the bottom gives a somewhat different curve (dash-dotted line).Both show a clear negative flux in the upper 500 m, i.e. directed away from the seamount, as one would expect because internal tides are generated near the top of the seamount, and, according to Fig. 7a (red line), the semi-diurnal cross-slope signal is particularly strong in the upper 500 m.It is for this reason that the dotted line in Fig. 8 should be rejected as unphysical; it is based on the assumption of zero surface pressure.We emphasize that the constant of integration affects only the energy-flux profiles, not their vertically integrated values, since the first term on the right-hand side of Eq. ( 8) plays no role in the vertically integrated u ′ p ′ , by virtue of the baroclinicity condition for velocity.So, for each of the three profiles in Fig. 8, the integrated value is the same, namely −2.4 kW m −1 . Results The buoyancy field shown in Figs. 
3 and 7c, f contains a baroclinic as well as a barotropic tidal signal; the latter (which we denote by B) represents merely the movement of the isopycnals that is kinematically induced by the barotropic tidal flow over the slope.To calculate the baroclinic energy flux properly, this barotropic part should be removed.It can however not be directly deduced from the data, and some additional assumptions are needed.We assume that the barotropic cross-slope transport is spatially uniform; hence, for each tidal constituent, the cross-slope barotropic velocity can be written as U =Q sin(σ t− )/ h(x), where Q is the amplitude of the barotropic cross-slope flux.By continuity, the vertical barotropic component then becomes The barotropic part of buoyancy is then given by B t =−N 2 W .At the measurement site, dh/dx≈0.14.The remaining parameters (Q, ) follow from the harmonic analysis.This allows to remove the barotropic part B from b.The correction thus made, however, is small; for example, for the semi-diurnal component the difference between the amplitudes of b and b ′ =b−B is, on average, only 4×10 −5 m s −2 (cf.Fig. 7c, red line). Next we integrate b ′ vertically to obtain baroclinic pressure, following Eq.( 8), and then, by the procedure described in the previous section, the vertically integrated energy flux.The results are: −2.3 kW m −1 (semi-diurnal) and +0.12 kW m −1 (diurnal); negative (positive) means a net flux away from (toward) the seamount.The magnitude of the semi-diurnal flux is slightly smaller than the value given at the end of the previous section; this is because we have here properly calculated b ′ =b−B, whereas the earlier value was simply based on the assumption that B is negligible.To shed more light on the energy fluxes of the semi-diurnal and di-urnal components, we now consider results from numerical experiments. Numerical modelling We compare the energy fluxes obtained from the yoyo measurements with those from a linear hydrostatic internal-tide model that was previously used to estimate energy fluxes in the Bay of Biscay (Gerkema et al., 2004); the model assumes uniformity in the along-slope direction.The required input consists of three things: a vertical profile of buoyancy frequency N, for which we use Fig. 2c; a topographic profile, for which we use the track shown in Fig. 1; and the crossslope barotropic tidal transports (Q).The latter can be derived from the barotropic current amplitudes mentioned in Sect. 3 (see also Fig. 6), by multiplying with the local water depth (1980 m); this gives Q=39.6 (semi-diurnal) and 14.9 (diurnal), both in m 2 s −1 .The resulting pattern for the semidiurnal tide, in terms of the amplitude of baroclinic u ′ , is shown in Fig. 9.The lower panel shows the corresponding amplitude profile of u ′ at the location of the yoyo-station; this profile is compared with the observed one (dotted line).In both, the largest amplitudes occur in the upper 200 m, but the observed signal has a much smaller amplitude and is much wider, in other words, it is more smeared out than the beam in the numerical model.These effects of amplitude reduction and widening partly compensate each other in a depthintegrated sense.This becomes apparent if one calculates the vertically integrated energy flux, which is −2.6 kW m −1 , being only 13% larger in magnitude than the observed value (which was −2.3 kW m −1 ). For the diurnal component, the signal is much weaker (Fig. 
10), since the cross-slope barotropic component, which determines the forcing, is about 2.6 times weaker.The energy flux is here predominantly negative: the model yields a vertically integrated energy flux of −0.034 kW m −1 , consistent with the idea of internal-tide propagation away from the seamount.Recall that the observed value was positive, and moreover much larger: +0.12 kW m −1 .Part of the explanation may lie in the fact that in the observed results, nearinertial internal waves dominate the "diurnal" signal that are not due to barotropic tidal forcing and hence not reproduced by the model. Barotropic to baroclinic conversion is only one of the potential mechanisms for the generation of diurnal signals at this location.Another mechanism is subharmonic resonance (e.g., Hibiya et al., 2002;MacKinnon and Winters, 2005;Gerkema et al., 2006): semi-diurnal internal tides may by parametric subharmonic instability excite internal tides of half that frequency at latitudes where the latter can exist as a free wave (i.e.equatorward of 29.9 • S/N for S 2 , and 28.8 • S/N for M 2 ).For S 2 this process may occur at the southern flank, but for M 2 only at some southward distance from Great Meteor Seamount.(We note that in defining the www.ocean-sci.net/3/441/2007/Ocean Sci., 3, 441-449, 2007 "critical" latitude, we use the "traditional" definition according to which it is the latitude where the tidal frequency equals the local Coriolis parameter f ; in weakly stratified regions, such as the abyssal ocean, this definition requires modification, as pointed out by Gerkema and Shrira (2005).)These "S 1 " and "M 1 " diurnal frequencies moreover lie close to the local inertial frequency f (which at this latitude shows an enhanced spectral peak, see van Haren, 2005b), at which nearinertial waves occur due to atmospheric forcing, a third possible source of the "diurnal" energy found in the measurements. To return to the semi-diurnal tidal energy flux, the measurements made here at a single location do not allow us to infer with any certainty how much Great Meteor Seamount as a whole contributes to the barotropic/baroclinic energy conversion.Still, to get an idea of the order of magnitude, we extrapolate the value found here to the entire seamount, multiplying 2.3 kW m −1 by the circumference of a circle, the radius of which is (roughly) estimated to be 20 km.This gives a total conversion of 0.3 GW, which is about sixty times less than at the Hawaiian Ridge (Klymak et al., 2006). Conclusions In estimating energy fluxes over Great Meteor Seamount, we have focussed on vertically integrated values rather than vertical profiles, because, as argued in Sect.4.1, the latter are fundamentally ambiguous over topographic features -a point not previously noted in the literature.Over a sloping bottom the "baroclinicity condition for pressure", as proposed by Kunze et al. (2002), fails to be valid.This failure is frustrating, since the primary interest of internal-tide energy fluxes lies in regions of strong topography!Fortunately, the vertically integrated values can be determined unambiguously. 
We found that the observed semi-diurnal internal-tide energy flux is very similar to the one found from a numerical model; also the location of large amplitudes is correctly modelled, but the model represents the internal tide as a more intense, peaked beam than is found in the observations. The differences between model and observations are much larger for the diurnal signal, which at this latitude coincides with the near-inertial signal. The observations yield a northward energy flux, i.e. towards the seamount, which is not only directionally opposed to the model result, but also much larger in amplitude. This is plausibly due to the fact that the mechanisms behind near-inertial waves (primarily the wind) are not included in the model. Still another mechanism may be responsible for the enhanced diurnal/inertial signal, namely parametric instability of the S 2 tide, creating a subharmonic (which is not included in the model, either). The semi-diurnal internal-tide energy flux, according to model and observations, is smaller than found for example in the Bay of Biscay, but only by a factor of four. The reason that the flux is not much smaller is that the plateau of Great Meteor Seamount, although obviously deeper than the shelf in the Bay of Biscay, still lies shallow enough for the slope to cross the permanent pycnocline, which was earlier shown to be a major factor in internal-tide generation (Gerkema et al., 2004). Figure captions: Fig. 1. Great Meteor Seamount, with the location of the CTD/LADCP yoyo-station at the center of the asterisk (29.61 • N, 28.45 • W), and the track used in the numerical calculations indicated by the dashed diagonal. Depth is in km. This map was constructed from the database by Smith and Sandwell (1997). The top of the seamount is formed by a large plateau, where depths lie typically between 300 and 500 m. Fig. 2. Time-averaged profiles of temperature, salinity, and buoyancy frequency, derived from the full set of CTD yoyo-casts. Fig. 4. Results from the LADCP yoyo-casts: the total cross-slope and along-slope velocity components, in m s −1 . Fig. 5. Vertical profiles of the cross- and along-slope time-mean currents. Fig. 6. The harmonic constituents, and their superposition, of the cross-slope barotropic flow. An indication of the accuracy of the fit is given by |sum−observed| / |observed| = 0.19, i.e. the fit deviates on average by 19%. Fig. 7. The semi-diurnal and diurnal constituents of the cross- and along-slope baroclinic velocity (u ′ and v ′ , respectively), and of buoyancy b. Left panels show the amplitudes; right panels, the phases. In each panel, the semi-diurnal (red line) and diurnal (blue) constituents are shown. Fig. 9. The numerically modelled amplitude of the baroclinic semi-diurnal cross-slope current, |u ′ | (in m s −1 ). Below, the corresponding modelled profile (solid line) at the yoyo position (marked by an asterisk above); in the same panel, the observed profile is shown (dotted line), reproduced from Fig. 7a. Fig. 10. The numerically modelled amplitude of the baroclinic diurnal cross-slope current, |u ′ | (in m s −1 ). Below, the corresponding modelled profile (solid line) at the yoyo position (marked by an asterisk above); in the same panel, the observed profile (dotted line), reproduced from Fig. 7a.
2014-10-01T00:00:00.000Z
2007-01-01T00:00:00.000
{ "year": 2007, "sha1": "f06d510156ba032e38eb37394bcc2f4156043cc7", "oa_license": "CCBYNCSA", "oa_url": "https://os.copernicus.org/articles/3/441/2007/os-3-441-2007.pdf", "oa_status": "GOLD", "pdf_src": "ScienceParseMerged", "pdf_hash": "6e2ccc9cc9ad0ff7fbeaf14c496af6602d63eaec", "s2fieldsofstudy": [ "Environmental Science", "Geology" ], "extfieldsofstudy": [] }
237561664
pes2o/s2orc
v3-fos-license
Differential response of plant water consumption to rainwater uptake for dominant tree species in the semiarid Loess Plateau Whether uptake of rainwater can increase plant water consumption in response to rainfall pulses requires investigation to evaluate plant adaptability, especially in water-limited regions where rainwater is the only replenishable soil water source. In this study, the water sources from rainwater and three soil layers, the predawn (Ψpd) and midday (Ψm) leaf water potentials and their gradient (Ψpd−Ψm), and the water consumption in response to rainfall pulses were analyzed for two dominant tree species, Hippophae rhamnoides and Populus davidiana, in pure and mixed plantations during the growing period (June–September). In pure plantations, the relative response of daily normalized sap flow (SFR) was significantly affected by the rainwater uptake proportion (RUP) and Ψpd−Ψm for H. rhamnoides, and was only significantly influenced by Ψpd−Ψm for P. davidiana (P < 0.05). Meanwhile, a large Ψpd−Ψm was consistent with high SFR for H. rhamnoides, and a small Ψpd−Ψm was consistent with low SFR for P. davidiana, in response to rainfall pulses. Therefore, H. rhamnoides and P. davidiana exhibited sensitive and insensitive responses to rainfall pulses, respectively. Furthermore, mixed afforestation significantly enhanced RUP and SFR, and reduced the water source proportion from the deep soil layer. Uptake of contrasting water sources between coexisting species usually indicates water source separation and can minimize water source competition (Munoz-Villers et al., 2020; Silvertown et al., 2015); however, overlapping water sources among plant species may lead to competition in arid and semiarid regions (Tang et al., 2019; Yang et al., 2020). Rainfall pulses have been observed to relieve or eliminate water competition and thus maintain or increase plant water consumption in some water-limited regions (Du et al., 2011; Tfwala et al., 2019). Meanwhile, plant species with strong rainwater uptake ability generally exhibit greater competitiveness than coexisting species with weak rainwater uptake ability (Stahl et al., 2013; West et al., 2012). However, Liu et al. (2019) attribute the stable coexistence of species in mixed plantations in semiarid regions to their contrasting rainwater uptake abilities, as rainfall events are variable and less rainwater is taken up by one of the coexisting plant species. In addition, coexisting species may also cope with or minimize water resource competition through adjustment of plant leaf water potential or root distribution (Chen et al., 2015; Silvertown et al., 2015). It is still unclear whether these adjustments could influence the rainwater uptake and water consumption of coexisting species in water-limited regions. The "Grain for Green project" has increased vegetation coverage by 25% in the Loess Plateau through afforestation activities since the 1990s, to deal with vegetation degradation and water and soil loss (Tang et al., 2019; Wu et al., 2021). Hippophae rhamnoides and Populus davidiana are typical dominant tree species, with high survival rates and drought tolerance, and occupy nearly 30% of the plantation area in this region (Liu et al., 2017; Tang et al., 2019). In addition to H. rhamnoides and P. 
davidiana pure plantations, mixed plantations of these two species were also widely promoted due to 85 the higher soil and water conservation capacity than pure plantations in the original afforestation stage (Tang et al., 2019;Wang et al., 2020). Rainwater has obvious seasonal variability and is the only replenished soil water source in this region because of the soil is approximately 100 m deep (Li et al., 2016;Zhang et al., 2017). The imbalance between rainwater input and plant water demand may weaken the sustainability of plantations with further plant growth (Jia et al., 2020;Wu et al., 2021). Previous 90 investigations in the region quantified the water sources from different soil depths Wu et al., 2021) and characterized the water consumption during drought stress periods for plantation species in pure plantations. To understand the adaptation of plantation species in this study, the water consumption, water sources from rainwater and different soil layers, and plant leaf water potential for H. rhamnoides and P. davidiana in pure and mixed plantations were analyzed. The 95 specific objectives were as follow: (1) to investigate the influence of rainwater uptake and leaf water potential on water consumption after rainfall events, and (2) to assess the mixed afforestation effect on these influences. Study site The study was conducted in the Ansai Ecological Station in the semiarid Loess Plateau (36.55°N, 109.16°E), Northern China. The study area has a semiarid continental climate. The annual average (mean ± SD) rainfall amount and air temperature are 493.1 ± 127.9 mm and 10.7 ± 0.5 °C (1985-2017), respectively. The soil is characterized as a silt loam soil according to United States soil taxonomy ( Sap flow observation Three standard individuals, with approximately mean height and trunk diameter, for specific species were chosen in each of the nine plots (Table S1). In each plot in the mixed plantation, three individuals of H. rhamnoides were chosen firstly, then a neighboring P. davidiana individual was selected at approximately 2 m distance from each chosen H. rhamnoides individual. The sap flow was monitored 140 by a pair of Granier-type thermal dissipation probes 10 mm in length and 2 mm in diameter in 36 selected individuals. During the plant growing season and ranging from 1 June (DOY 152) to 30 September (DOY 273) in 2018, the 30 s original and 30 min average sap flow values were monitored using a CR3000 data logger (Campbell Scientific Inc.). Waterproof silicone and aluminum foil were used to avoid the impact of the external environment on and physical damage to TDPs (Du et al., 2011). 145 The standard sap flow density (F d , ml m −2 s −1 ) was calculated as follows (Granier, 1987): where Δt and Δt max are the temperature difference of heated and unheated probes at 30 min intervals and the maximum Δt in each day, respectively. Steppe et al. (2010) suggested that F d should have a species specific calibration to validate Eq. (2). 150 Meanwhile, the possibility of underestimating the F d value with the Granier-type thermal dissipation method (Du et al., 2011) should be considered when the whole tree water consumption is calculated. From April to October 2018, at the end of each rainfall event, 19 rainwater samples were collected immediately using a rain gauge cylinder placed in the middle of the plantation plots, and stored at 4 °C . 
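The body of Eq. (2) is not reproduced in the extracted text above. As a point of reference, the standard Granier (1987) thermal-dissipation calibration takes the form F_d = 119·K^1.231 with K = (ΔT_max − ΔT)/ΔT; whether the authors applied exactly this generic form or a species-specific variant is an assumption here, consistent with their own caveat citing Steppe et al. (2010). A minimal sketch of that calculation:

```python
def granier_sap_flux_density(dt, dt_max):
    """Granier (1987) thermal-dissipation calibration.

    dt     : temperature difference between heated and reference probes (deg C)
    dt_max : maximum daily temperature difference, taken at (near-)zero flow (deg C)

    Returns the sap flux density F_d in ml m^-2 s^-1 (the unit quoted in the text).
    The generic 119 * K**1.231 coefficients are assumed; a species-specific
    calibration, as recommended by Steppe et al. (2010), would replace them.
    """
    k = (dt_max - dt) / dt          # dimensionless flow index K
    return 119.0 * k ** 1.231
```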
To avoid the influence of sample collection on sap flow observation, one standard individual for the specific species nearby each sap flow monitored individual was selected for plant stem and soil water collection. In the mixed plantation, the distance was approximately 2 m between the selected H. an interpulse period longer than 7 days to eliminate the potential influence of the previous rainfall event. At each of successive three days after every selected rainfall event, one suberized stem after removing A vacuum line (LI-2100, LICA Inc., China) was used to extract water from soil samples and plant stems. The water isotopic values of rainwater, soil samples, and plant stems were determined using a DLT-100 water isotope analyzer (LGR Inc., USA), with accuracy of ± 0.1 (δ 18 O) and ± 0.3 ‰ (δD). The potential influence of organic matter on water isotopic values produced during water extraction 180 from stems was eliminated using the method of Yang et al. (2015). proportion of rainwater in plant stem as follows (Cheng et al., 2006): Equations (4) and (5) In addition, on the first day after rainfall, the relative water uptake proportions from different soil depths were calculated using the MixSIR program ( Moore and Semmens, 2008). The model input 205 parameters were the average δ 18 O and δD values in plant stem water, soil water at seven depths in each plot, and rainfall water. The SD for δ 18 O and δD at each soil depth was used to accommodate the uncertainties of these values, and no fractionation was considered during water source uptake by plant roots. In addition, the calculated water uptake proportions from seven soil depths were combined into three soil layers (shallow, middle, and deep) to facilitate water source comparisons, for soil depths of 0-210 30, 30-100, and 100-200 cm, respectively. In this study, on the first day after rainfall, the water uptake proportions from rainwater and soil layers were calculated separately. The sum of RUP and relative water uptake proportions from three soil layers were larger than 100%. Thus, no significant difference was determined between RUP and water sources from different soil layers in the following analysis. Leaf water potential measurement On the same day as plant stem and soil sample collections, the Ψ pd and Ψ m were measured by a PMS1515D analyzer (PMS Instrument, Corvallis Inc., OR, USA) at 4:30-5:30 (predawn) and 11:20-12:40 (midday), respectively. One leaf was selected for each sap flow monitored individual, and the 220 average value for each species in each plot was used for further analysis. The diurnal variation in leaf water potential (Ψ pd -Ψ m ) was used to illustrate the leaf water potential gradient. Plant fine root investigation In August 2018, six soil cores were dug around each selected standard individual for plant stem and soil Statistical analysis In the present study, the first day after rainfall was the maximum normalized F d within 3 days for H. rhamnoides and P. davidiana in both plantation types, except after 24 and 35.2 mm for P. davidiana in 235 pure plantation. The maximum normalized F d for P. davidiana in pure plantation was observed on the second day after these two rainfall events. However, for P. davidiana in pure plantation, there was no significant difference (P > 0.05) in diurnal sap flow between the first and second day after each of these two rainfall events based on independent-sample t-test ( Fig S1). Therefore, the normalized F d on the first day after each selected rainfall amount was used in Eq. 
(7) to calculate the relative response of 240 daily normalized F d (SF R , %) to rainfall pulses: where X after and X before are the normalized F d on the first day after and on the day before the rainfall event, respectively. Meanwhile, none of Ψ pd , Ψ m nor Ψ pd -Ψ m showed significant differences between the first and second 245 day after each rainfall events (P > 0.05) for these two species in both plantation types (Table S2). The Ψ pd , Ψ m , and Ψ pd -Ψ m on the first day after each rainfall event were used in the following analysis to illustrate the influence of leaf water potential on SF R in response to rainfall pulses. A repeated ANOVA (ANOVAR) was used to analyze the differences in water consumption, water sources, and plant physiological parameters between these species in pure and mixed plantations, 250 respectively. This analysis was conducted with SF R , RUP, relative water uptake proportions from three soil depths, and Ψ pd -Ψ m as response variables, and "species" and "rainfall" as between-subject and within-subject factors. The same analysis was used to detect mixed afforestation effect on response variables for each plant species, with "plantation type" and "rainfall" as the between-subject factor. Furthermore, significant differences in fine root proportion for each soil layer (shallow, middle, and 255 deep) for each species between pure and mixed plantations were detected through independent-sample t-test. All of these analyses were calculated with SPSS 18 (IBM Inc., New York, US), after data normal distribution and homogeneity of variance analysis were tested. Variation in environmental parameters and plant fine root vertical distribution The rainfall amount during the study period (265.7 mm, DOY 152-273) was 11.8% lower than the average value during 2008-2017. Rainfall varied seasonally with 36 consecutive days had no rainfall event and 5 days had successive rainfall events (DOY 237-241) (Fig 1). The ET 0 (554.7 mm) was approximately twice the rainfall amount during the study period, with the higher and 265 lower values during the low and high rainfall event periods, respectively (Fig 1). The SW increased and https://doi.org/10.5194/hess-2021-351 Preprint. Discussion started: 30 August 2021 c Author(s) 2021. CC BY 4.0 License. subsequently decreased by different degrees following rainfall events, with shallow soil layer (0-30 cm) exhibited higher variation than the corresponding value below 30 cm in the three plantations (Fig 1). The coefficients of variation (CVs) in the shallow soil layer were 19.22%, 18.56%, and 16.61% in H. rhamnoides and P. davidiana pure plantations and the mixed plantation, respectively. The SW for 290 Daily normalized F d for H. rhamnoides and P. davidiana fluctuated with rainfall events in pure and mixed plantations (Fig 2). The variation of normalized F d for H. rhamnoides and P. davidiana in mixed plantation was higher than the specific species in pure plantations, with corresponding CVs of 30.99% and 34.88% in the mixed plantation, and 24. 64% and 27.44% in pure plantations (Fig 2). The relative response of water consumption to rainfall pulses was significantly influenced by both rainfall amount 295 and plant species (P < 0.001) (Fig 2, Table 1). Following large rainfall amounts (≥15.4 mm), the diurnal variation of sap flow was significantly higher than the value before rainfall (P < 0.05) for H. rhamnoides in pure plantation and for P. davidiana in both plantation types (Figs S3 and S4). 
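Equation (7) itself is lost in the extraction above; given the variable definitions that accompany it, the conventional pulse-response form is assumed here. This is a sketch, not necessarily the authors' exact expression:

```python
def relative_sap_flow_response(x_after, x_before):
    """Relative response of daily normalized sap flow to a rainfall pulse (SF_R, %).

    x_after  : normalized F_d on the first day after the rainfall event
    x_before : normalized F_d on the day before the rainfall event

    Assumes the standard (after - before) / before * 100 form implied by Eq. (7).
    """
    return (x_after - x_before) / x_before * 100.0
```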
The lowest rainfall amount (7.9 mm) that significantly increased the diurnal variation of sap flow was observed for H. rhamnoides in the mixed plantation ( Fig S3). Furthermore, in response to rainfall pulses, the SF R for H. 300 rhamnoides in pure (range 6.69 ± 1.22% to 106.34 ± 4.7%) and mixed (range 2.23 ± 0.54% to 190.89 ± 15.49%) plantations was significantly higher (P < 0.001) than corresponding values for P. davidiana: ranges 4.24 ± 0.52% to 60.28 ± 5.72% and 3.14 ± 0.53% to 83.04 ± 14.23% (Table 1). Mixed afforestation significantly enhanced SF R for both species (P < 0.001) ( Variations in plant water sources The soil water δ 18 O and δD for pure H. rhamnoides, pure P. davidiana, and mixed plantations showed large vertical variation following small rainfall events (≤ 7.9 mm), and exhibited relatively small vertical variations following large rainfall events (≥ 15.4 mm) (Fig S5). Generally, the isotopic values of soil water depleted from shallow to deep soil layers, and water isotopic values in shallow and middle soil layer were close to rainfall water in the three plantations following large rainfall events. Although no significant difference in RUP was observed between H. rhamnoides (14.2 ± 7.81%) and P. davidiana (12.43 ± 7.33%) in pure plantations (Fig 3, Table 2), the RUP was significantly higher for 325 H. rhamnoides (19.17 ± 8.6%) than P. davidiana (14.59 ± 5.86%) in the mixed plantation (P < 0.05) ( Table 2). In addition, H. rhamnoides mainly uptake water from the middle soil layer in pure and mixed plantations based on the MixSIR result, with corresponding average values of 36.27 ± 2.43% and 44.14 ± 3.06% (Fig 4). The water source for P. davidiana in pure and mixed plantations was mainly from the deep and middle soil layers, respectively, with corresponding average values of 41.4 ± 15. 18% and 330 40.17 ± 5.9%. In pure plantation, the water source from shallow and middle soil layers for H. rhamnoides was significantly higher than P. davidiana; however, the water source from the deep soil layer was significantly lower for the former species (P < 0.05) ( Table 3). No significant differences in water sources from each soil layer were observed between these species in the mixed plantation (Table 3). In addition, mixed afforestation significantly enhanced RUP and decreased the deep soil water 335 uptake proportion for H. rhamnoides and P. davidiana (P < 0.05) (Tables 2 and 3 Variations in plant leaf water potential In response to rainfall pulses, H. rhamnoides exhibited higher CV for Ψ pd , Ψ m , and Ψ pd −Ψ m than corresponding value for P. davidiana in both plantation types, except that H. rhamnoides exhibited lower CVs for Ψ pd than P. davidiana (12. 99% and 18.33%, respectively) in the mixed plantation (Fig 5). Compared with P. davidiana, H. rhamnoides exhibited significantly positive Ψ pd in pure plantation, negative Ψ m in the mixed plantation, and larger Ψ pd −Ψ m in both plantation types (P < 0.05) (Table 4). Meanwhile, mixed afforestation significantly reduced the Ψ m and increased the Ψ pd for H. rhamnoides 365 and P. davidiana (P < 0.05), respectively, and significantly increased Ψ pd −Ψ m for both species (Table 4). Table 4. Repeated ANOVA (ANOVAR) parameters for predawn (Ψ pd ), midday leaf water potential (Ψ m ), and leaf water potential gradient (Ψ pd −Ψ m ) for H. rhamnoides and P. davidiana (n = 30 indicate the mixed afforestation effect on leaf water potential for these species. 
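The mixing equations (4) and (5) of Cheng et al. (2006), used in the Methods to derive the rainwater uptake proportions quoted above, are likewise missing from the extracted text. A generic two-end-member formulation is sketched below to illustrate the idea; the exact end members and averaging used by the authors are assumptions here.

```python
def rainwater_uptake_proportion(delta_stem, delta_pre_event, delta_rain):
    """Two-end-member mixing estimate of the rainwater fraction in stem water (%).

    delta_stem      : isotope value (d18O or dD, per mil) of stem water after the event
    delta_pre_event : isotope value of the pre-event end member (e.g. antecedent soil
                      or stem water) -- assumed, since Eqs. (4)-(5) are not shown
    delta_rain      : isotope value of the collected rainwater

    The d18O- and dD-based estimates could then be averaged to give a single RUP.
    """
    return (delta_stem - delta_pre_event) / (delta_rain - delta_pre_event) * 100.0
```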
Influence of water sources and Ψ pd −Ψ m on plant water consumption The SF R significantly increased with increasing RUP and decreasing Ψ pd −Ψ m for H. rhamnoides (P < 0.01) in both plantation types (Fig 6). Meanwhile, SF R significantly increased with decreasing Ψ pd −Ψ m 380 for P. davidiana in both plantation types (P < 0.05). However, a significant relationship between SF R and RUP was observed for P. davidiana in the mixed (P < 0.05) but not in pure plantations (Fig 6). Furthermore, SF R significantly increased with decreasing water uptake proportion from the deep soil layer for H. rhamnoides in both plantation types and P. davidiana in mixed plantation (P < 0.05) (Table S3). No significant relationship was observed between SF R and water uptake proportion from shallow or 385 middle soil layers for both species in both plantation types. Rainwater uptake enhances water consumption for H. rhamnoides but not P. davidiana in pure plantations Rainwater is the only replenished soil water source in the studied region, because plants cannot uptake 395 ground water of approximately 100 m depth below the surface (Wu et al., 2021). Small rainfall events generally only wet the soil surface and may evaporate before plant root uptake ( Zhao and Liu, 2010). However, large rainfall events are most likely recharge soil moisture and enhance the metabolic activity of plant fine roots (Hudson et al., 2018), thus enhancing plant water uptake. Similar to Salix psammophila and Caragana korshinskii in the studied region (Zhao et al., 2021), both H. rhamnoides 400 and P. davidiana exhibited plasticity in water sources in pure plantations (Fig 4), with H. rhamnoides exhibiting the greater plasticity. In pure plantations, the obviously lower SWC at all soil depths (Fig 1) and large water uptake proportion from the deep soil layer (Fig 4) after 3.4 mm of rainfall for these two species, suggested that this rainfall amount did not relieve the drought caused by 36 days (DOY 157-https://doi.org/10.5194/hess-2021-351 Preprint. Discussion started: 30 August 2021 c Author(s) 2021. CC BY 4.0 License. 192) of no rainfall. The RUP for H. rhamnoides but not P. davidiana significantly increased following 405 an increase in rainfall amount (P < 0.05) (Fig S6), indicating that water uptake was more sensitive to rainfall for H. rhamnoides. This may be mainly due to the greater proportions of fine root surface area distributed in the shallow soil layer for H. rhamnoides (40.85 ± 3.14%) compared to P. davidiana (21.94 ± 2.3%) (Fig S2). Rainwater uptake does not permit water consumption increase after rainfall pulses especially in 410 semiarid and arid environments (Dai et al., 2020;Grossiord et al., 2017;West et al., 2007), and the influence of water potential gradient (Ψ pd −Ψ m ) on plant water consumption should also be considered (Hudson et al., 2018;Kumagai and Porporato, 2012). For example, although Juniperus osteosperma, a deep rooted plant species, could uptake rainwater after large events in the west of the United States, the sap flux did not increase with increasing rainfall amount (West et al., 2007). The synchronization 415 between rainwater uptake and water consumption for J. osteosperma was mainly attributed to the uptake of rainwater by plant being unable to reverse the cavitation in its roots and stems (Grossiord et al., 2017;West et al., 2007). Our previous investigations in the studied region indicated that P. davidiana is relatively more vulnerable to cavitation than H. 
rhamnoides, with water potential at 50% loss of conductivity of −1.15 MPa (Zhang et al., 2013) and−1.49 MPa (Dang et al., 2017), respectively, 420 based on stem vulnerability curves. Being less vulnerable to stem cavitation allowed H. rhamnoides to experience a significantly lower Ψ m and larger Ψ pd −Ψ m compared with P. davidiana in response to soil water conditions after rainfall pulses. The large Ψ pd −Ψ m for H. rhamnoides was consistent with the high SF R and CVs of normalized sap flow, indicating that this species exhibited a rainfall sensitive mechanism. The relative constant Ψ pd −Ψ m for P. davidiana was consistent with the relatively small SF R 425 and CVs of normalized sap flow, indicating that this species exhibited a rainfall insensitive mechanism. Furthermore, after rainfall events, the SF R for H. rhamnoides but not for P. davidiana significantly increased following rainfall amount increases (P < 0.05) (Fig S6), also indicating that water consumption was more sensitive to rainfall for H. rhamnoides. The SF R was significantly influenced by RUP and Ψ pd and 7). However, the SF R was only significantly influenced by Ψ pd −Ψ m for P. davidiana (Fig 7), suggesting that its water use was mainly constrained by plant physiological characteristics. The ET 0 represents the atmospheric evaporative demand, and has been observed to influence plant water consumption in water limited (Li et al., 2021) and non-water limited regions (Iida et al., 2016). 435 However, in the present study, neither ET 0 after rainfall nor relative response of ET 0 significantly influenced SF R for either species in pure plantations ( Table S4). The influence of plant physiological characteristics (i.e. Ψ pd −Ψ m ) on SF R for both species, may partially contribute to the lack of atmosphere evaporative demand effect on plant water consumption in the studied region, although these species exhibited different rainfall pulse sensitivity. the bottom half of the schematic, with "increase", "decrease" or "enlarge" indicating a significant difference (P < 0.05) for a species between pure and mixed plantations. Mixed afforestation significantly enhanced RUP and plant water consumption, decreased Ψ m , and enlarged Ψ pd −Ψ m for H. 450 rhamnoides, and also significantly enhanced the RUP and water consumption, increased Ψ pd , and enlarged Ψ pd −Ψ m for P. davidiana. Rainwater uptake enhances water consumption for coexisting species in mixed plantation Spatial water resource partitioning is considered one of the essential plant strategies to maintain 455 coexistence in mixed plantations, especially in semiarid and arid regions (Munoz-Villers et al., 2020;Silvertown et al., 2015;Yang et al., 2020). However, water source competition has widely been observed among coexisting plant species according to the literature surveys by Silvertown et al. (2015) and Tang et al. (2018), regardless of annual average rainfall amount. In the present study, the non-significant differences in xylem δ 18 O and δD (P > 0.05) and plant water sources for the three soil 460 layers (Table 3, Fig 4) indicated water competition between these species in the mixed plantation, although the RUP was significantly higher for H. rhamnoides (Table 2). Generally, two types of adaptation can be adopted by plants to cope with resource competition: increased competition ability or minimized competition interactions (West et al., 2007). Consistent with the first adaptation type, mixed afforestation enhanced the RUP for H. rhamnoides and P. 
davidiana 465 (Figs 3 and 7, Table 2). Although mixed afforestation did not significantly alter the Ψ pd and Ψ m for H. rhamnoides and P. davidiana, respectively, significantly negative Ψ m and positive Ψ pd were observed for corresponding species (P < 0.01) ( Table 4). Mixed afforestation significant increased Ψ pd for P. davidiana, possibly due to the advantage of access to soil moisture recharged by rainwater through an increased root surface area in the shallow soil layer for this species in the mixed plantation ( Fig S2). 470 Thus, plant physiological (Ψ m ) and root morphological adjustments were adopted by H. rhamnoides and P. davidiana in the mixed plantation, respectively, to significantly enlarge Ψ pd −Ψ m and increase RUP (Fig 7). Similar to the result in pure plantations, no significant relationship between SF R and ET 0 after rainfall and relative response of ET 0 was observed for these species in the mixed plantation (Table S4). This result also confirmed the influence of physiological or morphological factors on water 475 consumption for these species in the mixed plantation in response to rainfall pulses. Furthermore, consistent with the second adaptation type, mixed afforestation significantly decreased the water uptake proportion from the deep soil layer for these species (Table 3). The increasing rainfall amount significantly decreased water source proportion from deep soil layer (P<0.05) for H. rhamnoides and P. davidiana in the mixed plantation (Table S3), with the corresponding values 480 decreasing from 43.13 ± 13.74% and 47.07 ± 5.39% (both after 3.4 mm), respectively, to 21.54 ± 8.9% (after 35.2 mm) and 28.66 ± 12.26% (after 24 mm) (Fig 4). Thus, both increased rainwater uptake and decreased water source competition from the deep soil layer were adopted by these species in the mixed plantation to minimize water sources competition under water limited conditions. Implications for plantation species and type selection based on rainwater uptake and consumption Rainwater uptake by plant and water consumption response to rainfall pulses may influence plant physiological process and the water cycle (Meier et al., 2018;Zhao et al., 2021). In pure plantations, H. rhamnoides rather than P. davidiana showed rainwater uptake advantage due to the large Ψ pd −Ψ m for 490 the former species, although both species exhibited plasticity in water sources. The excessive water uptake from the deep soil may desiccate deep soil (Wu et al., 2021), weakening plant resilience to drought stress and thus plant community sustainability in this Loess Plateau region (Song et al., 2018;Zhao et al., 2021). Whether rainwater uptake can reduce plant water uptake from deep soil layers is essential for plantation adaptation (West et al., 2012;Wu et al., 2021). In the present study, the 495 proportion of water sources from deep soil layers was significantly decreased with increased rainfall amount for these species in both pure and mixed plantations (P < 0.05), except for P. davidiana in pure plantation. Physiological (e.g., Ψ m ) and morphological (fine root distribution) adjustments were observed for H. rhamnoides and P. davidiana in the mixed plantation, respectively, to enlarge Ψ pd −Ψ m and enhance the rainwater uptake and water consumption (Tables 1 and 2; Fig 7). Mixed afforestation 500 also significantly decreased the deep soil water uptake proportion for both species (Table 3). (Table S5). 
Thus, rainfall-pulse-sensitive species in pure plantations, and plant species in mixed plantations that can adopt physiological or morphological adjustments to enhance rainwater uptake and reduce excessive water uptake from deep soil layers, should be given greater consideration for use in the studied region.

Conclusions

The influence of water sources and Ψpd−Ψm on water consumption in response to rainfall pulses was determined for H. rhamnoides and P. davidiana in the semiarid Loess Plateau region. In pure plantations, the SFR was significantly influenced by RUP and Ψpd−Ψm for H. rhamnoides, but was only significantly influenced by Ψpd−Ψm for P. davidiana. Meanwhile, the larger Ψpd−Ψm was consistent with the high SFR for H. rhamnoides, and the smaller Ψpd−Ψm was consistent with the low SFR for P. davidiana, in response to rainfall pulses. Thus, H. rhamnoides and P. davidiana exhibited sensitive and insensitive responses to rainfall pulses, respectively. Furthermore, mixed afforestation enhanced the rainwater uptake and water consumption of both species. A significantly lower Ψm and an increased fine root surface area were adopted by H. rhamnoides and P. davidiana in the mixed plantation, respectively, to enlarge Ψpd−Ψm, enhance rainwater uptake, and decrease water source competition from the deep soil layer. The SFR was significantly influenced by RUP and Ψpd−Ψm for both species in the mixed plantation, and rainwater uptake enhanced plant water consumption in the mixed plantation regardless of species sensitivity to rainfall pulses.

Data availability
The data that support the findings of this study are available from the corresponding author upon request.

Author contribution
YKT designed the study, performed the statistical analyses and wrote the original manuscript draft.

Declaration of Competing Interest
The authors declare that they have no conflict of interest.
Giant inguinoscrotal hernia: An emergency presentation with life-threatening sepsis

A 74-year-old male was brought by ambulance to the emergency department with a 12-hour history of an acutely painful scrotum with associated rapidly progressing discoloration of the overlying scrotal skin. This was on a background of a giant inguinoscrotal hernia present for approximately 12 years, for which he had previously declined operative intervention. Other than well-controlled hypertension, his past medical history was unremarkable. Upon arrival in the emergency department, he was noted to be markedly hypotensive and displaying clinical signs consistent with septic shock. His scrotum was grossly enlarged with black discoloration (Figure 1). Several fluid-filled bullae were noted on the scrotal skin (Figure 2). On physical examination his abdomen was soft and non-tender. Initial resuscitation was provided in accordance with ATLS and sepsis guidelines. Anesthetic input was sought and the patient was transferred to the intensive care unit for continued resuscitation in the form of central venous pressure monitoring and inotropic support prior to emergency operative intervention. The patient's condition was deemed too unstable for transfer to the radiology department for preoperative imaging.

In theatre, initial incision of the scrotum and examination of the scrotal contents revealed necrotic loops of small and large bowel. Mesenteric thickening was noted at the neck of the hernia sac. In light of this, a midline laparotomy incision was made and the necrotic hernial contents were reduced into the abdominal cavity to determine viable resection margins. Approximately 180 cm of small bowel and 20 cm of cecum and ascending colon were deemed non-viable and were resected, leaving approximately 120 cm of viable jejunum. An end jejunostomy was fashioned in the left iliac fossa in order to avoid a rapidly evolving cellulitis extending from the right groin into the right iliac fossa. The fascial layer of the laparotomy incision was then closed using a standard continuous absorbable suture technique. In addition to the involved necrotic bowel, all involved scrotal tissue was aggressively debrided; this included resection of a necrotic right testicle and the majority of the scrotal skin. This wound was left open and dressed with betadine-soaked swabs. During the procedure, the patient displayed significant electrocardiographic changes which included several runs of non-sustained ventricular tachycardia.

Figure 1: A clinical photograph taken preoperatively showing the giant inguinoscrotal hernia with black discolored overlying scrotal skin. The white appearance is secondary to application of topical cream by the patient prior to presentation to the emergency department.

Postoperatively, he was transferred back to the intensive care unit intubated. Over the initial postoperative days, his inotropic requirement declined and inotropes were discontinued on the fourth postoperative day. Extubation occurred on the fifth day and sips of fluid were introduced on the sixth day. He was transferred back to the ward on postoperative day 11. His scrotal wound was healing well by secondary intention. Delayed reversal of the end jejunostomy and definitive management of the hernial defect is planned at a later date, if necessary.

DISCUSSION

Giant inguinoscrotal hernias are defined as those that extend below the midpoint of the medial thigh in the standing position [1].
These hernias are rare in the developed world and are generally seen in clinical practice after years or even decades of self-neglect [2]. Complications of these hernias in addition to the complications common to all hernias i.e., incarceration, obstruction and strangulation also include visceroptosis, reduction in mobility, intertrigo leading to ulceration of the scrotal skin, voiding difficulties, stretching of the spermatic cord which can lead to testicular atrophy and necrosis [3]. The combination of these complications has been shown to lead to psychological sequelae and social isolation [4]. Surgical repair of giant inguinoscrotal hernia, both in the elective and emergent setting poses a number of technical challenges. A multidisciplinary approach involving anesthetists, surgeons and plastic surgeons is required to provide the best possible outcome for these patients. These patients frequently have significant co-morbid conditions which have been sub-optimally managed due to a setting of personal neglect which can impact on both initial surgical decision making and postoperative morbidity and mortality [5]. Electively, consideration has to be given to how best to reduce hernial contents back into the abdominal cavity that have lost their ''right of domain'' in order to minimize the patient's risk of developing abdominal compartment syndrome. Abdominal compartment syndrome is characterized by respiratory and cardiac compromise due to splinting of the diaphragm and reducing venous return by compression of the inferior vena cava due to increased abdominal pressures. In the past, two general principles have been advocated. These include increasing the abdominal space by means of (a) progressive pneumoperitoneum, (b) abdominal wall separation or (c) combined mesh and flap techniques [6]. Tensor fascia lata musculocutaneous flaps and scrotal skin flaps are the most well described [2]. Alternatively, debulking of the hernia contents via limited resection prior to reduction into abdominal cavity has also been reported with some success [4]. In the emergent setting, principles of management include preoperative resuscitation, radical debridement of necrotic tissue, minimizing length of surgery and leaving wounds open with a view to performing staged procedures at a later date [6]. Postoperative issues following repair of giant inguinoscrotal hernias in addition to requiring intensive monitoring in the intensive care unit setting include high risk of recurrence and wound dehiscence due to increased intra-abdominal pressure. Redundant scrotal skin poses another problem. Some documented cases report good outcomes with primary resection of redundant scrotal skin however others advocate that that it acts as a safety net to allow contents back into the scrotum if respiratory compromise occurs [5]. In addition, the dartos muscle contracts and thus reducing the appearance of the redundant skin. If necessary, a planned cosmetic procedure can be performed at a later date [7]. CONCLUSION The surgical management of giant inguinoscrotal hernias, both electively and in the emergent setting pose a unique set of challenges to the surgeon. A multidisciplinary approach combined with careful preoperative optimization and intensive postoperative monitoring is required to achieve the best possible outcome for the patient.
Complementarity between collider, direct detection, and indirect detection experiments We examine the capabilities of planned direct detection, indirect detection, and collider experiments in exploring the 19-parameter p(henomenological)MSSM, focusing on the complementarity between the different search techniques. In particular, we consider dark matter searches at the 7, 8 (and eventually 14) TeV LHC, \Fermi, CTA, IceCube/DeepCore, and LZ. We see that the search sensitivities depend strongly on the WIMP mass and annihilation mechanism, with the result that different search techniques explore orthogonal territory. We also show that advances in each technique are necessary to fully explore the space of Supersymmetric WIMPs. Introduction Determining the identity of dark matter (DM) is one of the most pressing issues before us today. One promising class of dark matter candidates is Weakly Interacting Massive Particles (WIMPs), which predict the observed relic abundance through the simple mechanism of thermal freeze-out. WIMPs naturally appear in many extensions of the Standard Model (SM) that resolve the gauge hierarchy, with the most notable example being supersymmetry (SUSY). Several important classes of experimental techniques have been proposed to detect non-gravitational signatures of WIMP DM. These techniques include direct detection of WIMPs scattering off of nuclei, indirect detection of WIMPs by observing excesses of high-energy SM particles resulting from WIMP annihilation, and direct production of WIMPs in high energy colliders. In this paper, we seek to understand how these different techniques complement each other within the framework of the phenomenological Minimal Supersymmetric Standard Model (pMSSM). We find that the three techniques place orthogonal constraints on the parameter space and that advances in all three techniques are necessary to cover the supersymmetric WIMP sector. This paper presents results from the study described in [1]. In particular, detailed descriptions of the pMSSM and the constraints we apply can be found in that document and the references contained therein. It is well-known that R-parity conserving supersymmetry predicts a stable dark matter candidate in the form of the lightest SUSY particle (LSP). Cosmological observations require the LSP to have no electric or color charge. Models with the lightest neutralino, χ 0 1 , as the LSP satisfy these requirements and will be the focus of this study. The DM phenomenology of these models is determined not only by the composition of the LSP (whether it is mostly comprised of the superpartners of the U(1) or SU(2) gauge bosons or the neutral Higgses), but also in general on the other SUSY particles, which can alter the annihilation and scattering rates and are important for the model's discovery potential at the LHC. Unfortunately, the simplest SUSY scenario, the MSSM, has (∼100) parameters, making it far too large to explore in full generality. However, many of these parameters are restricted by the non-observation of large flavor violating effects. This allows us to simplify the parameter space by imposing the following experimentally-motivated assumptions: (i) no new phase appearing in the soft-breaking parameters, i.e., CP conservation, (ii) Minimal Flavor Violation at the electroweak scale such that the CKM matrix drives flavor mixing, (iii) degenerate first and second generation soft sfermion masses, and (iv) negligible Yukawa couplings and associated A-terms for the first two generations. 
These assumptions reduce the original space down to the 19-parameter pMSSM. We emphasize that no assumption about high-scale physics, such as the mechanism of SUSY breaking or unification of sparticle masses, has been applied to produce the pMSSM, and that it is therefore an "unprejudiced" approach to understanding TeV-scale supersymmetry. Despite these simplifications, 19 parameters is too large for a systematic grid approach. We therefore perform a random sample of the pMSSM, testing 3 million points against experimental and theoretical constraints. The result is 223256 parameter space points (which we will call "models") satisfying all pre-LHC experimental constraints. Note that only about 20% of the models predict the correct Higgs mass within the calculational uncertainty. However, we have found the LHC and DM constraints to be essentially dependent of m h for the range of Higgs masses in our model set. We assume that the LSP has its thermal relic abundance (calculated using micrOMEGAs 2.4 [2], and discard models for which the predicted abundance is larger than the upper limit from WMAP 7, Ωh 2 < 0.1234, with the one-sided limit allowing for the possibility that other particles (such as the QCD axion) comprise the remainder of the DM. The remaining constraints are described in [1]. As a result of our scan ranges for the electroweak gauginos (chosen for compatibility with LEP data and to enable phenomenological studies at the 14 TeV LHC), the LSPs in our model sample are typically very close to being in a pure electroweak eigenstate as the off-diagonal elements of the chargino and neutralino mass matrices are at most ∼ M W . Figure 1 presents some properties of the nearly pure eigenstate LSPs (defined here as a single electroweak eigenstate comprising over 90% of the mass eigenstate). The left panel displays the distribution of the LSP mass for nearly pure bino, wino, and Higgsino LSPs, while the right panel shows the corresponding distribution for the predicted LSP thermal relic density. Note that the LSP masses lie below ∼ 2 TeV in all models; this is due to our choice of scan ranges as the entire SUSY spectrum must be lighter than ∼ 4 TeV and heavier than the LSP (by definition), and this becomes increasingly improbable with increasing LSP mass. In addition, the relic density upper limit becomes increasingly difficult to satisfy at larger LSP masses. Similarly, due to LEP and relic density constraints, none of our models have LSP masses below ∼40 GeV. The fraction of models where the LSP is nearly a pure bino eigenstate is found to be rather low since such models lead to too high a value for the relic density unless they co-annihilate with another sparticle, happen to be close to a (Z, h, A) funnel region, or have a suitable Higgsino admixture. Note that only in the rightmost bin of the right panel is the relic density approximately saturating the WMAP/Planck thermal relic value. Figure 1: Distribution of the LSP masses (left) and predicted relic density (right) for the neutralino LSPs that are almost pure weak eigenstates in our model sample. Figure 2 shows the thermal relic density as a function of the LSP mass, with model points color coded by their electroweak eigenstate content. We define "pure" LSPs as having a single eigenstate fraction ≥ 90%. Points shown as bino-wino, bino-Higgsino, or wino-Higgsino mixtures have less than 2% Higgsino, wino, or bino fraction, respectively. "Mixed" points have no more than 90% and no less than 2% of each component. 
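Before turning to the structure visible in the figure, the model bookkeeping just described can be summarized in a short sketch: models are retained only if the predicted thermal relic density respects the one-sided WMAP 7 upper limit, and each surviving LSP is labelled by its electroweak composition using the thresholds quoted above. The code below is a minimal illustration of that logic, not code from the analysis itself.

```python
OMEGA_H2_MAX = 0.1234   # one-sided WMAP 7 upper limit quoted in the text

def keep_model(omega_h2):
    """Retain a pMSSM model only if the LSP relic density does not exceed the
    observed abundance; underabundant models are kept (e.g. axions may make
    up the remainder of the dark matter)."""
    return omega_h2 <= OMEGA_H2_MAX

def classify_lsp(bino, wino, higgsino):
    """Composition categories used for Figure 2 (fractions assumed to sum to ~1)."""
    for name, frac in (("bino", bino), ("wino", wino), ("higgsino", higgsino)):
        if frac >= 0.90:                 # "pure": a single eigenstate >= 90%
            return name
    if higgsino < 0.02:
        return "bino-wino"               # remaining eigenstate below 2%
    if wino < 0.02:
        return "bino-higgsino"
    if bino < 0.02:
        return "wino-higgsino"
    return "mixed"                       # every component between 2% and 90%
```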
This plot clearly shows the different regions corresponding to different annihilation mechanisms: (i) The set of models with low LSP masses (forming 'columns' on the left-hand side of the figure) correspond to bino-Higgsino admixtures which annihilate resonantly through the Z, h funnels; note that these can be displayed as "pure" binos if the Higgsino fraction is below 10%. (ii) The bino-Higgsino LSPs saturating the relic density in the upper-left region of the figure are of the so-called 'well-tempered' variety. (iii) the pure bino models in the upper middle region of the Figure are bino co-annihilators (mostly with sleptons) or annihilate resonantly through the A funnel. (iv) The green (blue) bands are pure Higgsino (wino) models that saturate the relic density bound (using perturbative calculations which do not include the Sommerfeld enhancement effect 1 ) near ∼ 1(1.7) TeV and have very low relic densities for lighter LSP masses. Wino-Higgsino hybrids are seen to lie between these two cases as expected. (v) A smattering of models with additional (or possibly multiple) annihilation channels are loosely distributed in the lower right-hand corner of the Figure. As we will see, many of the searches for DM are particularly sensitive to one or more of these LSP categories. LHC Searches We begin with a short overview of the constraints from 7 and 8 TeV LHC data. In order to get a comprehensive picture of the LHC's impact, we simulate 37 SUSY searches at the 7 and 8 TeV LHC, representing every relevant ATLAS SUSY search publicly available as of the beginning 1 The Sommerfeld enhancement can significantly deplete the relic density of wino LSPs heavier than ∼ 1 TeV, while Higgsino and light wino LSPs are relatively unaffected [3]. Bino LSPs do not exhibit the effect because they can't exchange gauge bosons. Including the enhancement would increase the low-velocity annihilation cross section for heavy winos, lowering their predicted relic density but increasing their present-day annihilation cross section. Since the average velocity today is lower than during freeze-out, we would naively expect that including the enhancement would strengthen the limits on heavy wino LSPs. We will see that CTA is already able to exclude models with heavy winos in our perturbative calculation; we therefore expect that including the enhancement would minimally affect our conclusions. of March 2013, the more recent 20 fb −1 2-6 jets + MET analysis, the search for MSSM Higgses through di-tau production, and several CMS analyses. We find that the combined LHC searches exclude 45.5% of the pMSSM models. Currently, the sensitivity of the LHC comes mainly from its ability to produce colored sparticles with large rates. This is demonstrated by the left panel of Figure 3, which shows the fraction of models excluded in the gluino-lightest squark mass plane. We see that the LHC excludes a large majority of models with light squarks or gluinos below ∼ 1 TeV, but only a small fraction of models for which both the squarks and gluinos are heavier than ∼ 1.8 TeV. The lesson here is that models with light sleptons, neutralinos, and charginos do not yet face strong constraints from LHC searches. The other key factor affecting the LHC's sensitivity is the LSP mass, as shown by the right panel of Figure 3. We see that the fraction of models excluded drops precipitously as the LSP mass approaches the mass of the lightest colored sparticle, and that the fraction of models excluded with LSPs heavier than ∼ 700 GeV is very small. 
This is the well-known effect of spectrum compression -models with heavy LSPs produce soft decay products which are swamped by the large hadron collider backgrounds. In addition to these results from the 7 and 8 TeV data, we also include projections for the results of 14 TeV null searches. We find that a combination of the 14 TeV jets+MET analysis and zero-and one-lepton stop analyses with 300 (3000) fb −1 of data is expected to exclude 90.8% (97.2%) of models which have the correct Higgs mass and survive the 7/8 TeV searches. Direct Detection WIMP dark matter is generally expected to have significant spin-independent (SI) or spindependent (SD) interactions with target nuclei, which can be detected by nuclear recoil experi-ments. We therefore compare the scattering cross sections predicted by micrOMEGAs with the expected limits from the LZ experiment. For this comparison, we rescale the scattering cross section by the LSP abundance (since the LSP is not necessarily all of DM), and weaken the expected limits by a factor of four to account for uncertainties in the scattering cross section from nuclear form factors. LSPs with sizable bino and Higgsino contents can have large couplings to the CPeven Higgs (Z) bosons, leading to large SI (SD) cross sections, respectively. LSPs that are mostly bino or wino can also scatter through squark exchange, which contributes to both SI and SD scattering. However, this scattering rate can be very small if the squark masses are large (as will be increasingly required by null LHC results). Models with very pure LSPs can therefore have very low direct detection cross sections. Figure 4 displays the SI cross section for our pMSSM models, color coded according to the LSP composition (left) and the fraction of models that would be excluded by null results from SI + SD scattering at the LZ experiment (right). From the left panel, we see that the well-tempered neutralinos are entirely within reach of LZ, while pure LSPs (particularly pure winos) can have an undetectably small cross section. In the right panel, we see that SD scattering is generally sensitive only to models that are expected to be excluded by SI scattering. The exception is the Z/h funnel region, where SD scattering measurements are expected to exclude all of the models missed by SI scattering. Only a single model with a LSP lighter than ∼ 90 GeV is projected to survive both SI and SD searches; in this case the LSP is a highly pure bino that annihilates through a light stau. Indirect Detection Another important probe of our pMSSM models comes from indirect searches for the anni-hilation products of DM. We first consider the future impact of searches for gamma ray excesses in dwarf spheroidal galaxies by Fermi and in the galactic center by CTA. We calculate the annihilation spectrum for each model using DarkSUSY 5.0.5 [4], then compare it to the projected sensitivities. For Fermi, we assume a 10-fold improvement in the sensitivity of the current dwarf analysis, resulting from increased integration time and from additions to the dwarf galaxy sample from future surveys. For CTA, our projected sensitivity includes the US contribution and assumes a 500 hour exposure to the Galactic Center. Further details of how the constraints are calculated, including our treatment of e.g. halo profiles, can be found in [1]. Figure 5 shows the expected impact of these future measurements on the pMSSM models. 
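The rescaling conventions described above, and the analogous two-power scaling used in the indirect-detection comparison below, can be made concrete with a short sketch. The normalization to the observed abundance and the numerical value 0.12 are assumptions on my part; the factor-of-four relaxation of the LZ limit is as stated in the text.

```python
OMEGA_DM_H2 = 0.12          # observed dark matter abundance (assumed normalization)
FORM_FACTOR_MARGIN = 4.0    # LZ limits weakened by a factor of four, per the text

def scaled_si_cross_section(sigma_si, omega_h2):
    """Spin-independent cross section scaled by one power of the relic abundance."""
    return sigma_si * (omega_h2 / OMEGA_DM_H2)

def scaled_annihilation_cross_section(sigma_v, omega_h2):
    """Annihilation cross section scaled by two powers of the relic abundance,
    as used for the Fermi/CTA comparison."""
    return sigma_v * (omega_h2 / OMEGA_DM_H2) ** 2

def excluded_by_lz_si(sigma_si, omega_h2, lz_limit):
    """A model counts as excluded if its scaled cross section exceeds the
    projected LZ limit after relaxing the limit by the form-factor margin."""
    return scaled_si_cross_section(sigma_si, omega_h2) > FORM_FACTOR_MARGIN * lz_limit
```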
In the left panel, we show the LSP mass -annihilation cross section plane, with the cross section now scaled by two powers of the LSP relic density. Comparing the region excluded by CTA with the region that would be excluded if the LSP annihilated only to bb or W + W − final states, we see that the single-channel limits provide a fairly accurate description of the pMSSM exclusion. Although not shown, the same is true for the Fermi pMSSM exclusion. The right panel of the figure shows the same plane, but now color coded by LSP type. We see that the well-tempered neutralinos predict a strong signature in indirect detection experiments as well as direct detection experiments, but that the Z and h funnel bino-Higgsino mixtures are now well out of reach, as are the heavier co-annihilating and A funnel binos. On the other hand, heavy winos and Higgsinos (which had low scattering rates) are well within the CTA sensitivity due to their large charginomediated annihilation rates. Note that light winos and Higgsinos also have large annihilation cross sections, but they suffer from suppressed relic abundances as shown in Figure 2, making them more challenging to observe. Figure 5: The LSP mass vs scaled annihilation cross section plane, color coded by the fraction of models that could be excluded by CTA (left) and by the LSP composition (right). Red lines represent the projected sensitivities for Fermi (peak sensitivity at low masses) and CTA (peak sensitivity at high masses) to DM annihilating exclusively into bb (dashed) and W + W − (solid) final states. Neutrino telescopes such as IceCube provide another potential discovery channel for WIMP dark matter. In this case, the searches rely on neutralino dark matter being captured by the sun, sink-ing to the solar core, and annihilating, producing high energy neutrinos either directly or through cascade decays. If the product of capture and annihilation cross sections is large enough, solar capture leads to an equilibrium density of DM particles in the solar core, whose annihilation rate is proportional to the DM-nucleon elastic scattering cross section. The left panel of Figure 6 shows the impact of the projected IceCube sensitivity in the LSP mass vs SD cross section plane. We see that IceCube is only sensitive to models with a large SD cross section (within the reach of LZ), since this is necessary to achieve a large enough capture rate. However, the exclusion is incomplete even for models with large SD cross sections, since the IceCube sensitivity also depends on the annihilation channel, and on the annihilation cross section in models where it is small enough to keep the model out of equilibrium. The right panel shows that the region with the best IceCube sensitivity corresponds mainly to well-tempered neutralinos, as we might expect from the fact that the IceCube signal relies on sizable scattering and annihilation rates. Figure 6: The LSP mass vs SD scattering cross section plane, color coded by the fraction of models that could be excluded by IceCube (left) and by the LSP composition (right). The red line represents the projected LZ sensitivity. Complementarity: Putting It All Together Now that we have examined the expected sensitivities of flagship experiments in each search category, we can look at the results that we might expect from combining the different experiments. The left panel of Figure 7 shows the analog of Figure 2 after null results from all of the experiments considered here. 
We see that many regions of the original parameter space have been removed entirely. In particular, bino-like LSPs in the Z/h funnel region have been removed by LZ, while CTA has removed all winos and Higgsinos with a relic abundance approaching the observed DM abundance. All DM experiments considered had a strong sensitivity to the well-tempered neutralinos, which are likewise completely excluded. Interestingly, the only remaining models which saturate the DM abundance are highly-pure binos which coannihilate with other sparticles or annihilate through the A funnel. Both scenarios can potentially be probed by future LHC data, although the sfermion coannihilation region will remain challenging (due to spectrum compression), as will searches for the pseudoscalar A in models with low tan β . The right panel of Figure 7 shows the regions where the different experiments are most sensitive in the LSP mass-SI cross section plane. Here we see the broader patterns of the experimental sensitivities, and their underlying complementarity. In particular, we see that the LHC is sensitive mainly to models with light LSPs, while indirect detection (specifically CTA) is very sensitive to heavy LSPs. Direct detection, by contrast, is essentially independent of the LSP mass. The three experiment classes therefore cover overlapping but orthogonal regions of the parameter space, showing a high degree of complementarity. Finally, we can ask how the 14 TeV LHC will affect our results. Although we are only able to simulate three of the many searches that will be performed at 14 TeV, our results indicate the qualitative change that may be expected. In particular, we see from Figure 8 that the LHC sensitivity is now expected to have a cutoff for LSP masses of about 1.3 TeV, as opposed to the 700 GeV cutoff seen in the 7/8 TeV results. (Note that Figure 8 shows only models with the correct Higgs mass as a result of the large computational effort required to simulate the high luminosity run). Despite the increased sensitivity to heavy LSPs, however, models without sizable colored production will remain viable regardless of LSP mass. This is demonstrated by the incomplete exclusion at low LSP masses in Figure 8. Overall, the increase in LHC energy will not change the basic complementarity between the LHC sensitivity, peaking at low LSP masses, and the CTA sensitivity, which extends to very high LSP masses, well beyond the range considered here. Conclusion After examining the effects of the different search techniques within our pMSSM framework, Figure 8: The fraction of models with the correct Higgs mass which are excluded by the combination of the 14 TeV jets + MET and the 0 + 1 stop searches with 300 fb −1 , shown in the LSP mass-scaled SI cross section plane. The red line shows the expected limit on the Xenon SI cross section from LZ. we conclude that each search category will provide essential sensitivity to important DM scenarios, and that the next generation of experiments will represent substantial progress in our exploration of this space. In particular, critical contributions will be made by the LHC, LZ, and CTA. Although IceCube and Fermi are generally not sensitive to models missed by the other experiments, they would be instrumental in providing an independent confirmation of a signal and in beginning the process of characterizing the DM. We look forward to this new chapter in WIMP searches, and hopefully to the discovery of WIMP DM!
Vigorous physical activity predicts higher heart rate variability among younger adults Background Baseline heart rate variability (HRV) is linked to prospective cardiovascular health. We tested intensity and duration of weekly physical activity as predictors of heart rate variability in young adults. Main body of the abstract Time and frequency domain indices of HRV were calculated based on 5-min resting electrocardiograms collected from 82 undergraduate students. Hours per week of both moderate and vigorous activity were estimated using the International Physical Activity Questionnaire. In regression analyses, hours of vigorous physical activity, but not moderate activity, significantly predicted greater time domain and frequency domain indices of heart rate variability. Adjusted for weekly frequency, greater daily duration of vigorous activity failed to predict HRV indices. Conclusions Future studies should test direct measurements of vigorous activity patterns as predictors of autonomic function in young adulthood. Background Heart rate variability (HRV) reflects central regulation of autonomic activity and is linked to current health status and longer-term health outcomes. Baseline measurements of heart rate variability in adulthood, for example, predict subsequent development of hypertension [1]. Baseline HRV, in turn, is impacted by amount and intensity of exercise. In a study of middle-aged civil servants [2], both moderate and vigorous physical activity predicted greater HRV with effects moderated by gender and overweight status. Among young adults, studies examining effects of exercise interventions on HRV have yielded inconsistent results [3]. A study that tested effects of habitual activity patterns in different age groups reported non-significant effects in young adults [4]. In contrast, a more recent study involving direct measurement of weekly physical activity identified a significant beneficial impact of achieving recommended levels of vigorous physical activity [5]. The present study tests whether self-reported amounts of weekly activity predict HRV among a group of healthy younger adults. The main question addressed is whether total minutes per week of moderate and vigorous physical activity predict higher HRV. Additionally, we test average daily duration of physical activity as a predictor of HRV. Methods Participants were recruited from a Biology course at Southern Oregon University during Winter term, 2016. Participants received course credit as incentive for participation and informed consent was obtained for all participants. The study protocol was approved by the Human Subjects Review Board. A total of 115 students agreed to participate and provided electrocardiographic (ECG) recordings. Exclusion criteria included medical conditions associated with altered autonomic function (e.g., arrhythmia and valve defects) and use of psychotropic medication known to impact autonomic function. Participants with previously diagnosed anxiety or depression were also excluded given potentially lasting effects on autonomic function [6]. Electrocardiographic recordings were collected using a BIOPAC MP-36 system (BIOPAC Systems Inc., Galeta, Ca). Three disposable, pre-gelled electrodes were attached with one just inferior to each clavicle and one inferior to the xiphoid process. Participants were instructed to abstain from alcohol or coffee for 8 h prior to recording and to consume only water for 2 h prior to recording. 
Following 5 min of quiet rest, 5-min recordings were collected with a sample rate of 1000 samples/s. Recordings were made between 0800 and 1030 hours in a temperature-controlled room. Participants were seated and were instructed to breathe normally with eyes closed. Each event series was first detrended using a high-pass digital filter with a 1-Hz (hertz) cutoff. Additional processing involved a firstderivative transform [7] to distinguish R waves against pronounced T waves. Root mean square of successive differences (RMSSD), low-frequency power (LFP; 0.04-0.15 Hz), and high-frequency power (HFP; 0.15-0.40 Hz) were calculated using BIOPAC Student Lab Pro software (BIOPAC Systems, Inc.) and values were natural log transformed to improve normality [8]. Height, weight, and blood pressure were collected following the ECG recording. Participants also completed a series of on-line questionnaires to assess demographic, health, and psychological data. The Perceived Stress scale [9] was administered to estimate stress exposure during the past month and physical activity was assessed using the International Physical Activity Questionnaire (IPAQ) [10]. IPAQ data were used to estimate minutes per week of both vigorous and moderate physical activity. Physical activity measures were tested as predictors of time and frequency domain indices of heart rate variability. Adjusted models were also tested that included covariates significantly associated with HRV measures. All statistics were calculated using SPSS (version 22). Results Eighty-two participants included in the study had complete data for all measures. The average age of sample was 23.1 years and 62% were female (Table 1). Thirty-five participants were overweight (BMI ≥25) and three reported smoking cigarettes. Sixty-two participants met American Heart Association (AHA) recommendations (>75 min) for weekly vigorous physical activity and 41 participants met AHA recommendations (>150 min) for weekly moderate physical activity [11]. Age was significantly negatively correlated with lnRMSSD (r = −0.23; p = .04) and with lnHFP (r = −0.25; p = 0.02) and diastolic blood pressure was negatively correlated with RMSSD (r = −0.24; p = 0.03). t tests revealed lower lnLFP in females (t = 2.8; p = 0.008). Perceived stress scores were not significantly correlated with HRV indices or with physical activity measures (p > 0.38 for all tests). In unadjusted regression models, minutes per week engaged in vigorous physical activity significantly predicted greater lnRMSSD, lnLFP, and lnHFP ( Table 2). After adjustment for age, gender, and diastolic blood pressure, this relationship remained significant for lnRMSSD and was marginally significant (p = 0.08) for lnHFP. In all analyses, moderate physical activity failed to significantly predict HRV measures. Collinearity among predictor variables was assessed using the variance inflation factor (VIF). The VIF was 1.07 for predictors (vigorous activity and moderate activity) in unadjusted regression models. In adjusted models, the VIF values were 1.20 (vigorous activity) and 1.17 (moderate activity). To test whether effects of vigorous physical activity differed between males and females, models were retested that included a gender-by-vigorous activity interaction term. The interaction was non-significant for all models (p > 0.60). An additional goal of the study was to test associations between daily duration of vigorous physical activity and heart rate variability. 
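As a rough illustration of the index definitions used in this study, the sketch below computes lnRMSSD and natural-log band powers from a series of RR intervals in Python. It is not the BIOPAC Student Lab Pro pipeline actually used: it assumes R peaks have already been detected and converted to RR intervals in milliseconds, and it estimates the 0.04–0.15 Hz and 0.15–0.40 Hz powers with Welch's method on a 4 Hz resampled tachogram, which is one common (but not the only) approach. All function and variable names are illustrative.

```python
import numpy as np
from scipy import signal

def hrv_indices(rr_ms):
    """Return (lnRMSSD, lnLFP, lnHFP) from RR intervals given in milliseconds."""
    rr = np.asarray(rr_ms, dtype=float)

    # Time domain: root mean square of successive differences
    rmssd = np.sqrt(np.mean(np.diff(rr) ** 2))

    # Frequency domain: resample the RR tachogram evenly at 4 Hz,
    # then estimate the power spectral density with Welch's method.
    t = np.cumsum(rr) / 1000.0             # beat times in seconds
    t_even = np.arange(t[0], t[-1], 0.25)  # 4 Hz grid
    rr_even = np.interp(t_even, t, rr)
    f, pxx = signal.welch(rr_even - rr_even.mean(), fs=4.0,
                          nperseg=min(256, len(rr_even)))

    def band_power(lo, hi):
        mask = (f >= lo) & (f < hi)
        return np.trapz(pxx[mask], f[mask])

    lfp = band_power(0.04, 0.15)           # low-frequency power
    hfp = band_power(0.15, 0.40)           # high-frequency power

    # Natural log transform, as applied in the study to improve normality
    return np.log(rmssd), np.log(lfp), np.log(hfp)
```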
Bivariate correlations revealed that daily minutes of activity were correlated with lnRMSSD (r = 0.35; p = 0.001) and lnHFP (r = 0.30; p = 0.006). When daily duration of vigorous physical activity was coded as higher (≥68 min) or lower (<68 min) based on the sample median, t tests indicated significantly greater lnRMSSD (t = −2.5; p = 0.01) and lnHFP (t = −2.1; p = 0.04) for subjects in the higher duration group. Since daily duration of vigorous activity was significantly correlated with weekly frequency (r = 0.66; p < 0.001), analysis of covariance models were used to test differences in HRV indices across daily duration groups, adjusted for days per week of vigorous activity (Fig. 1). In these tests, higher daily minutes of vigorous activity failed to predict lnRMSSD (F = 1.4; p = 0.23), lnLFP (F = 1.1; p = 0.29), and lnHFP (F = 0.8; p = 0.36). Discussion In this study, minutes per week engaged in vigorous physical activity predicted greater time domain and frequency domain measures of HRV. For RMSSD, these effects persisted after adjustment for covariates. Greater daily duration of vigorous physical activity, however, failed to predict HRV when adjusted for weekly frequency. It is unlikely that observed associations were mediated by psychological stress exposure since recent stress was not correlated with HRV measures or with physical activity measures. Among athletes, age and the intensity or duration of an exercise program are critical influences on HRV [12]. Previous studies have documented positive effects of exercise on RR interval and HF and LF power in aerobically trained younger adults [13] but a non-significant effect of moderate intensity exercise [14]. Physical activity assessed through accelerometry has revealed a positive effect on HRV for young adult subjects meeting recommended levels for vigorous activity compared to those meeting recommended levels of moderate activity [5]. In the present study, most subjects were physically active, with 76% meeting recommendations for weekly vigorous activity. Within this physically active group of young adults, greater weekly vigorous activity predicted greater HRV indices across the observed range of activity. A question for future research is whether a saturation point is reached [3] beyond which additional exercise no longer increases HRV. High-frequency and low-frequency components of HRV are thought to reflect distinct neural regulatory mechanisms [15]. While both high-frequency power and time domain indices correlate strongly with pharmacologically measured vagal tone [16], low-frequency power may reflect central modulation of baroreceptor reflexes by both sympathetic and parasympathetic activities [17].
Fig. 1 Relationships between daily duration of physical activity and heart rate variability. Bars indicate marginal means (±1 standard error) for a lnRMSSD, b lnLFP, and c lnHFP, adjusted for weekly frequency.
Studies in rodents have suggested mechanisms by which exercise may impact high-frequency and low-frequency components of HRV. These include altered GABA-ergic signaling in the nucleus ambiguus [18] and the nucleus of the solitary tract [19]. This study had several limitations that should be noted. Respiration was not controlled for during ECG recording. While respiratory rhythm can impact time and frequency domain indices, especially in certain experimental paradigms [20], recent research suggests that the impact on respiratory sinus arrhythmia is minimal for short-term, resting recordings [21]. 
In addition, physical activity was assessed by questionnaire and not through direct measurement. Questionnaire-based measures may yield biased estimates of activity, especially for certain populations [22]. Finally, while recent stress exposure was assessed, other psychological factors such as depression that may impact heart rate variability [23] were not. Conclusions Results of the present study are in agreement with previous findings that link vigorous physical activity to higher measures of HRV. Lower levels of physical activity are associated with greater cardiovascular risk, with effects mediated in part by autonomic dysfunction [24]. Reduced vagal function in particular is common to multiple risk factors that are predictive of cardiovascular disease [25]. While the present study identified significant effects for self-reported levels of overall activity, future research utilizing more detailed and direct measurements of daily activity could reveal stronger associations with autonomic function in young adulthood.
2017-06-27T20:01:34.694Z
2017-06-14T00:00:00.000
{ "year": 2017, "sha1": "09a7761c1d021cfe35e9f795b55fbf3be2bed0b5", "oa_license": "CCBY", "oa_url": "https://doi.org/10.1186/s40101-017-0140-z", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "ff634f8c08b8bb0b586d9f8250d7b610b3bb8bfc", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Psychology", "Medicine" ] }
10591963
pes2o/s2orc
v3-fos-license
QCD Sum Rule Analysis of the Decays $B \to K \ell^+ \ell^-$ and $B \to K^* \ell^+ \ell^-$ We use QCD sum rules to calculate the hadronic matrix elements governing the rare decays $B \to K \ell^+ \ell^-$ and $B \to K^* \ell^+ \ell^-$ induced by the flavour changing neutral current $b \to s$ transition. We also study relations among semileptonic and rare $B \to K^{(*)}$ decay form factors. The analysis of the invariant mass distribution of the lepton pair in $B \to K^{(*)} \ell^+ \ell^-$ and of the angular asymmetry in $B \to K^* \ell^+ \ell^-$ provides us with interesting tests of the Standard Model and its extensions. I. INTRODUCTION Rare B-meson decays induced by the flavour changing neutral current b → s transition represent important channels for testing the Standard Model (SM) and for searching the effects of possible new interactions [1].As a matter of fact, these processes, that in SM do not occur in the Born approximation, are particularly sensitive to perturbative QCD corrections and to possible higher mass scales and interactions predicted in supersymmetric theories, two Higgs doublet and topcolor models, left-right models, etc.Such interactions determine the operators and their Wilson coefficients appearing in the low energy ∆B = 1 effective Hamiltonian H W that governs the b → s transition. From the experimental point of view, the radiative b → sγ decay has been observed and measured by CLEO II Collaboration both in the inclusive B → X s γ and exclusive B → K * γ modes; the experimental results B(b → sγ) = (2.32 ± 0.57 ± 0.35) 10 −4 [2] (1. have prompted a number of analyses aimed at restricting the parameter space of various extensions of the Standard Model [4].Similar analyses have also been proposed for the transition b → sℓ + ℓ − , that has not been observed, yet [5]; in this case, the invariant dilepton mass distribution and the asymmetry of the dilepton angular distribution, together with the total decay rate, can be used to study the features of the interaction inducing the decay.However, for the exclusive modes such as B → Kℓ + ℓ − and B → K * ℓ + ℓ − one has to face the problem of computing the matrix element of H W between the states B and K, K * , a problem related to the nonperturbative sector of QCD.For these matrix elements, either specific hadronization models [6,7] or information from two point function QCD sum rules [8] and from the heavy meson chiral theory [9], embedded in the vector meson dominance framework, have been used so far.The resulting theoretical predictions are characterized by a considerable model dependence; it should be noticed that, differently from the case of B → K * γ, where the hadronic matrix element must be computed only at one kinematical point, in correspondence to the on-shell photon, for B → Kℓ + ℓ − and B → K * ℓ + ℓ − the matrix elements must be known in a wide range of the invariant mass squared of the lepton pair: ; therefore, the vector meson dominance assumption has not negligible consequences on the theoretical outcome. 
An approach based on general features of QCD that allows us to compute the hadronic matrix elements in a range of M 2 ℓ + ℓ − is provided by three-point function QCD sum rules [10].This method, first employed to compute the pion form factor [11], has been widely applied to heavy meson semileptonic decays: for example, in the case of B → D, D * semileptonic transitions, it has been used to compute the Isgur-Wise universal function ξ(y) and the heavy quark mass corrections [12].Moreover, the decays B → D * * ℓν, where D * * are positive parity (cq) meson states, have been analyzed both for finite heavy quark masses [13] and in the limit m Q → ∞, with the calculation of the universal functions τ1 2 (y) and τ3 2 (y) analogous to the Isgur-Wise function [14].For the heavy-to-light meson transitions, such as D(B) → π(ρ)ℓν, the various matrix elements have also been computed [15,16]; in the case of B → K * γ, this approach, employed in [17][18][19], has provided us with the prediction R = B(B → K * γ)/B(b → sγ) = 0.17±0.05[17], that agrees with the central value obtained from the experimental data in eqs.(1.1)- (1.2). In this paper we want to apply the three-point function QCD sum rule method to compute the hadronic quantities appearing in the calculation of B → K ( * ) ℓ + ℓ − .We shall observe that the various form factors parametrizing the relevant matrix elements have common features with other heavy-to-light meson transitions, a behaviour whose origin is worth investigating in detail [20].We shall also compare the computed hadronic quantities to the findings of lattice QCD, even though these last results are obtained after extrapolations in the heavy quark mass and in the momentum transfer.Finally, we shall apply our results to predict the invariant mass distribution of the lepton pair in the decays B → K ( * ) ℓ + ℓ − and the forward-backward asymmetry for The work is organized as follows: in Sec.II we write down the (SM) effective Hamiltonian for the transition b → sℓ + ℓ − , and resume the available information on the Wilson coefficients.In Sec.III we compute by three-point function QCD sum rules the relevant hadronic quantities for B → Kℓ + ℓ − ; the same calculation is carried out for B → K * ℓ + ℓ − in Sec.IV.In Sect.V we study the relations derived by Isgur and Wise [21] and Burdman and Donoghue [22] between rare and semileptonic form factors.Such relations can be worked out in the infinite heavy quark mass limit m b → ∞, in the region of maximum momentum transfer t; a relevant problem is whether they are satisfied also in the low t region, as it has been argued by several authors.We investigate this hypothesis and comment on the role of the heavy mass corrections.In Sec.VI and VII we study the transitions B → Kℓ + ℓ − and B → K * ℓ + ℓ − , respectively.Finally, in Sec.VIII we draw our conclusions.Details concerning the calculations are reported in the Appendix. II. EFFECTIVE HAMILTONIAN The effective ∆B = −1, ∆S = 1 Hamiltonian governing in the Standard Model the rare transition b → sℓ + ℓ − can be written in terms of a set of local operators [23]: where G F is the Fermi constant and V ij are elements of the Cabibbo-Kobayashi-Maskawa mixing matrix; we neglect terms proportional to V ub V * us since the ratio ts is of the order 10 −2 .The operators O i , written in terms of quark and gluon fields, read as follows: The Wilson coefficients C i (µ) have been partially computed at the next-to-leading order in QCD by several groups [24][25][26].As discussed in ref. 
[25], in the analysis of B → X s ℓ + ℓ − at the next-to-leading logarithmic corrections must be consistently included only in the coefficient C 9 , since at the leading approximation O 9 is the only operator responsible of the transition b → s ℓ + ℓ − .The contribution of the other operators (excluding O 8 that, however, is not involved in the processes we are studying) appears only at the next-to-leading order, and therefore their Wilson coefficients must be evaluated at the leading approximation.Following [25] we use in our phenomenological analysis of the decays B → K ( * ) ℓ + ℓ − (within the Standard Model) the numerical values of the Wilson coefficients collected in Table I.We choose the scale µ = 5 GeV ≃ m b , Λ M S = 225 MeV and the top quark mass m t = 174 GeV from the CDF measurement [27].The coefficient C 9 , which is evaluated at the next-toleading order approximation, displays a dependence on the regularization scheme, as it can be observed in Table I comparing the result obtained using the 't Hooft-Veltman (HV) and the Naive Dimensional Regularization (NDR) scheme.Such dependence must disappear in the decay amplitude if all corrections are taken into account.We shall include in our analysis the uncertainty on C 9 as a part of the theoretical error.In Table I This rich structure justifies the interest for B → K ( * ) ℓ + ℓ − , where operators of different origin act coherently in determining rates, spectra and asymmetries.For example, it could be interesting to search for the effects of possible interactions that produce a coefficient C 7 with opposite sign [5,7].In this work we shall not analyze such new effects, limiting ourselves to studying the above processes within the theoretical framework provided us by the Standard Model.However, it is worth stressing that our results for the hadronic matrix elements of the operators appearing in (2.1) represent a complete set of quantities also for the analysis of the decays B → K ( * ) ℓ + ℓ − in a context different from the Standard Model. III. FORM FACTORS OF THE DECAY The matrix elements of the operators O 1 , O 2 and O 7 , O 9 and O 10 in eq.(2.2) between the external states B and K can be parametrized in terms of form factors as follows: The heavy-to-light meson form factors F 1 and F 0 appear in the calculation of two-body nonleptonic B → KX decays, if the factorization approximation is adopted; neglecting SU(3) F breaking effects, they govern the semileptonic decay B → πℓν.F 1 and F 0 have already been studied by three-point QCD sum rules [16,28].In the following we describe in detail the calculation of F T ; for the sake of completeness, we also report the results for F 1 (q 2 ) and F 0 (q 2 ) using a unique set of parameters and adopting a coherent numerical procedure, in order to have at our disposal a consistent set of form factors. To compute F T within the QCD sum rule approach we consider the three-point correlator [11] of the flavour changing quark current J µν = siσ µν b and of two currents J K α (y) and J B 5 (x) with the K and B quantum numbers, respectively: J K α (y) = q(y)γ α γ 5 s(y) and J B 5 (x) = b(x)iγ 5 q(x).The correlator Π αµν can be expanded in a set of independent Lorentz structures: where Π and Π (n) are functions of p 2 , p ′2 and q 2 , and a (n) αµν are other tensors set up using the vectors p and p ′ and the metric tensor g µν . 
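For orientation, the B → K matrix elements referred to in eqs. (3.1) and (3.2) are conventionally parametrized as below. This is only a sketch in one widely used convention; normalizations and signs, particularly for the tensor form factor, differ between papers and may not coincide with the definitions adopted in this work.

```latex
% One common parametrization of the B -> K matrix elements, with q = p - p':
\langle K(p')|\,\bar{s}\gamma_\mu b\,|B(p)\rangle
  = F_1(q^2)\Big[(p+p')_\mu-\frac{M_B^2-M_K^2}{q^2}\,q_\mu\Big]
  + F_0(q^2)\,\frac{M_B^2-M_K^2}{q^2}\,q_\mu ,
\qquad
\langle K(p')|\,\bar{s}\,\sigma_{\mu\nu}q^\nu b\,|B(p)\rangle
  = \frac{i\,F_T(q^2)}{M_B+M_K}\Big[q^2(p+p')_\mu-(M_B^2-M_K^2)\,q_\mu\Big].
```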
Let us consider Π.To incorporate the quark-hadron duality, on which the QCD sum rule approach is based, we write down for Π(p 2 , p ′2 , q 2 ) a dispersive representation: in the variables p 2 and p ′2 corresponding to the B and K channel, respectively.In the region of low values of s, s ′ the physical spectral density ρ(s, s ′ , q 2 ) contains a double δ-function term corresponding to the transition B → K, and therefore the function Π can be written as where the residue R is given in terms of the form factor F T (q 2 ) and of the leptonic constants f K and f B , defined by the matrix elements The integration domain D in (3.6), where higher resonances with the same B and K quantum numbers contribute to the spectral density ρ, starts from two effective thresholds s 0 and s ′ 0 .Also the perturbative contribution to Π, computed for p 2 → −∞ and p ′2 → −∞, can be written as eq.(3.5).Moreover, considering the first power corrections of the Operator Product Expansion of the correlator (3.3) we get the following representation: ρ QCD (s, s ′ , q 2 ) is the perturbative spectral function; the two other terms in (3.7), expressed as a combination of vacuum expectation values of quark and gluon gauge-invariant operators of dimension 3 and 5, respectively: < qq > and < qσGq >=< g s qσ µν G a µν λ a 2 q >, parametrize the lowest order power corrections.The expressions for ρ QCD and d 5 can be found in Appendix A, eqs.(A2)-(A4); in this particular case d 3 vanishes.We now invoke the quark-hadron duality, i.e. we assume that the physical and the perturbative spectral densities are dual to each other, giving the same result when integrated over an appropriate interval.Assuming duality in the region D of the hadronic continuum we derive the sum rule for F T : where D ′ is the region corresponding to the low-lying B and K states: The effective thresholds s 0 and s ′ 0 can be fixed from the QCD sum rule analysis of two-point functions in the b and s channels.We get s 0 from the calculation of f B , and s ′ 0 from the expected mass of the first radial excitation of the kaon. An improvement of the expression in (3.9) can be obtained by applying to the left and right hand sides the SVZ-Borel transform, defined by both in the variables −p 2 and −p ′2 ; M 2 is a new (Borel) parameter.This operation has the advantage that the convergence of the power series is improved by factorials; moreover, for low values of M 2 and M ′2 the possible contribution of higher states in eq.(3.9) is exponentially suppressed.The resulting Borel transformed sum rule for F T reads From eq.(3.11) the form factor F T (q 2 ) can be derived, once the value of the Borel parameters M 2 and M ′2 is fixed.This can be done observing that, since M 2 and M ′2 are unphysical quantities, F T must be independent on them (stability region of the sum rule); moreover, the values of M 2 and M ′2 should allow a hierarchical structure in the series of the power correction, and a suppression of the contribution of the continuum in the hadronic side of the sum rule. In our numerical analysis we use the values for the quark condensates (at a renormalization scale µ ≃ 1 GeV ) [11]: with m 2 0 = 0.8 GeV 2 .Notice that the numerical results do not change sensitively if the condensates are evaluated at higher scales using the leading-log approximation for their anomalous dimension. As for the quark masses and leptonic constants, we use: Putting these parameters in eq.(3.11) we obtain the form factor F T depicted in fig. 
1, where the different curves correspond to different choices of the thresholds s 0 and s ′ 0 .In the sum rule, the perturbative term is a factor of 4−5 times larger than the D = 5 contribution, and the integral of the spectral function over the region D ′ gives more than 60% of the result of the integration over the whole region of the dispersion relation (3.5).The duality window, where the results become independent of the Borel parameters M 2 and M ′2 , starts at M 2 ≃ 7 GeV 2 and M ′2 ≃ 1.7 GeV 2 ; varying M 2 in the range 7 − 9 GeV 2 and M ′2 in the range 1.7 − 2.5 GeV 2 the results change within the bounds provided by the different curves depicted in fig. 1. The same analysis can be applied to the form factors F 1 and F 0 using the flavourchanging vector current J µ = sγ µ b in the correlator (3.3) and studying the projection q µ Π αµ to derive F 0 .We report in Appendix A the relevant quantities appearing in the sum rules for F 1 and F 0 ; the difference with respect to [15], as far as F 1 is concerned, is that we keep all terms proportional to powers of the strange quark mass m s .In the calculation of both the form factors, the contribution of the perturbative term and of the D = 3 term have comparable size, whereas the D = 5 term is one order of magnitude smaller; the contribution of the resonance in the hadronic side of the rule is nearly equal to the contribution of the continuum.We obtain the form factors F 1 (q 2 ) and F 0 (q 2 ) depicted in fig. 1.Also in this case the Borel parameters can be varied in the range M 2 = 7−9 GeV 2 and M ′2 = 1.7−2.5 GeV 2 ; the results change within the region corresponding to the different curves depicted in fig. 1 for each form factor. We observe a different q 2 dependence for the various form factors.In the range of q 2 we are considering (0 ≤ q 2 ≤ 13 − 15 GeV 2 ) F 1 follows a simple pole formula: with F 1 (0) = 0.25 ± 0.03 and M P 1 ≃ 5 GeV .A fit to the formula (3.13) for F 0 gives the result M P 0 ≃ 7 GeV .The same formula, applied to F T would give F T ≃ −0.14 and M P ≃ 4.5 GeV .Therefore, only the dependence of the form factor F 1 (q 2 ) does not contradict the polar behaviour dominated by B * s , which is the nearest singularity in the t− channel, as we would expect by invoking the vector meson dominance (VMD) ansatz.The form factor F 0 increases softly with q 2 and, as already observed in [28], the fitted mass of the pole is larger than the expected mass of the physical singularity, in this case the J P = 0 + bs state.As for F T , the VMD ansatz would predict a polar dependence, with the pole represented by B * s ; on the other hand, we observe that F T can be related to F 1 and F 0 by an identity obtained by the equation of motion: eq.(3.14) is in agreement with the computed form factor F T displayed in fig. 1, and therefore we can use the double pole model: with F T (0) = −0.14 ± 0.03 and M P 1 and M P 0 given by the fitted values of the mass of the poles of F 1 and F 0 , respectively. 
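The fitted q² dependence described above can be summarized compactly. A minimal sketch of the single-pole form used for F_1 (eq. (3.13)) and of the double-pole model adopted for F_T (eq. (3.15)), with the central values quoted in the text (F_1(0) = 0.25, F_T(0) = −0.14, M_P1 ≃ 5 GeV from the F_1 fit and M_P0 ≃ 7 GeV from the F_0 fit), reads:

```latex
F_1(q^2)=\frac{F_1(0)}{1-q^2/M_{P_1}^2},
\qquad
F_T(q^2)=\frac{F_T(0)}{\big(1-q^2/M_{P_1}^2\big)\big(1-q^2/M_{P_0}^2\big)} .
```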
It is interesting to observe that information on the possible form of the q 2 dependence of the form factors can be derived by studying the limit m b → ∞.In this limit, at the zero recoil point where the kaon is at rest in the B meson rest frame, it is straightforward to show that the parametric dependence of the form factors on the heavy meson mass M B is given by: F 1 (q 2 max ) ∼ √ M B and F 0 (q 2 max ) ∼ 1/ √ M B [21].Both these scaling laws are compatible with the constraint F 1 (0) = F 0 (0) and with a multipolar functional dependence if n 1 = n 0 + 1.Thus, in the limit m b → ∞, to a polar form factor F 1 (q 2 ) corresponds a nearly constant form factor F 0 (q 2 ).The outcome of QCD sum rules is in agreement with this observation [29]; the observed increasing of F 0 would be due to subleading terms contributing at finite m b . Let us now compare our results with the outcome of different QCD based approaches.In the channel B → π the form factor F 1 has been computed by light-cone sum rules [30], with numerical results in agreement, at finite b-quark mass, with the outcome of three point function sum rules. As for lattice QCD, both F 1 and F 0 have been computed at large q 2 [31], and data show that F 0 has a flat dependence on the momentum transfer, whereas F 1 increases with q 2 .The full set of form factors F 1 , F 0 and F T by these other methods is still missing; the complete comparison of our results with such different approaches could help in understanding the drawbacks and the advantages of the various methods; this would shed light on the issue of decays such as B → πℓν that are of interest as far as the measurement of V ub is concerned. The form factors parametrizing the hadronic matrix elements of the transition B → K * ℓ + ℓ − can also be computed by QCD sum rules by considering a three-point correlator with the interpolating current for K * represented by the vector current J K * α (y) = q(y)γ α s(y).Let us define the B → K * matrix elements: and A 3 can be written as a linear combination of A 1 and A 2 : The form factors T 1 (q 2 ) and T 2 (q 2 ) can be derived by the correlator with Jµ = sσ µν 1 + γ 5 2 q ν b.Expanding Παµ in Lorentz independent structures Παµ = iǫ αµρβ p ρ p ′β Π1 + g αµ Π2 + other structures in p, p ′ ( we get T 1 and T 2 from Π1 and Π2 , respectively.The sum rules have the same structure of eqs. (3.9), (3.11), with the perturbative spectral functions ρ(s, s ′ , q 2 ) and the power corrections d 3 and d 5 reported in Appendix B. The only difference with respect to the kaon case is the value of the K * leptonic constant, defined by the matrix element < 0|qγ µ s|K * (p, ǫ) >= f In fig. 2 we depict the form factors T 1 (q 2 ) and T 2 (q 2 ) obtained choosing the threshold s ′ 0 in the range 1.6 − 1.8 GeV 2 and the other parameters as in the previous section.In the sum rule for both the form factors the perturbative term does not dominate over the non-perturbative ones: at q 2 = 0 it represents 30% of the quark condensate contribution, and is nearly equal to the D = 5 term.However, it rapidly increases with the momentum transfer, and at q 2 = 15 GeV 2 it is equal to the contribution of the D = 3 term, whereas the D = 5 contribution is an order of magnitude smaller. 
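The statement above that the heavy-quark scaling laws are compatible with a multipolar dependence only if n_1 = n_0 + 1 can be made explicit. As a sketch, assume a common normalization F_1(0) = F_0(0) ≡ F(0), multipole shapes, and pole masses of order M_B, so that 1 − q²_max/M_P² = O(1/M_B) at zero recoil:

```latex
F_i(q^2)=\frac{F(0)}{\big(1-q^2/M_{P_i}^2\big)^{n_i}}
\;\Longrightarrow\;
\frac{F_1(q^2_{\max})}{F_0(q^2_{\max})}\sim M_B^{\,n_1-n_0},
\qquad
\frac{F_1(q^2_{\max})}{F_0(q^2_{\max})}\sim\frac{\sqrt{M_B}}{1/\sqrt{M_B}}=M_B
\;\Longrightarrow\; n_1=n_0+1 .
```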
Concerning the form factor T 3 , we observe that it contributes, together with T 1 and T 2 , to other invariant functions in (4.5) and, in principle, it also could be obtained by a sum rule.However, since it can be related to A 1 , A 2 and A 0 by applying the equation of motion: we prefer to use this expression to determine it, considering that this procedure is successful for F T (q 2 ).The form factors V and A i can be obtained by studying the correlator (4.4) with a vector J V µ = sγ µ b and an axial J A µ = sγ µ γ 5 b flavour changing current, considering the projection q µ J A µ to derive A 0 .We collect in Appendix B the complete expressions appearing in the relevant sum rules for all the form factors, excluding A 0 , whose expressions can be found in [32]; also in this case the difference with respect to [15] is that we include all powers of the strange quark mass. As it happens for T 1 and T 2 , also in the sum rules for V , A 1 and A 2 the perturbative term, at q 2 = 0, is smaller than the D = 3 contribution; the relative weights of the various contributions change with the momentum transfer, and at q 2 = 15 GeV 2 the D = 0 and D = 3 terms have comparable size.As it happens for the B → K form factors, the chosen values of M 2 and M ′2 , M 2 = 8 GeV 2 and M ′2 = 2 GeV 2 , are within the duality window where the results are independent of the Borel parameters.Also in this case, varying M 2 and M ′2 in the ranges M 2 = 7 − 9 GeV 2 and M ′2 = 1.7 − 2.5 GeV 2 , the final results change within the same uncertainty coming from the variation of the continuum threshold. Considering the results displayed in figs.2 and 3, we collect the form factors T i , V and A i in three sets, according to their functional dependence on the momentum transfer.In the first set we include T 1 , V and A 0 , that display a sharp increasing with q 2 .It is possible to fit them with a polar q 2 dependence eq.(3.13) (as observed also in [16,32]) with: T 1 (0) = 0.19 ± 0.03 and M P ≃ 5.3 GeV , V (0) = 0.47 ± 0.03 and M P ≃ 5 GeV , A 0 (0) = 0.30 ± 0.03 and M P ≃ 4.8 GeV (the difference with respect to the value T 1 (0) = 0.17 ± 0.03 in ref. [17] is due to the effect of the strange quark mass, that here has been included). The error on the mass of the pole is correlated to the error on the form factor at q 2 = 0, and it can be estimated of the order of 200−300 MeV .The relevant result is that the masses of the poles are not far from the values expected by the dominance of the nearest singularity in the t− channel: M P = M B * s for T 1 and V , M P = M Bs for A 0 .We stress that the fit is performed in a range of values of q 2 where the QCD calculation can be meaningfully carried out, therefore large momentum transferred [q 2 > 15 GeV 2 ] are not taken into account. The last form factor, A 2 , linearly increases with q 2 : A 2 (0) = 0.40 ± 0.03 and β = 0.034 GeV −2 .A fit to a polar dependence for this form factor would give M P ≥ 7 GeV for the mass of the pole. 
The parameters of all the form factors are collected in Table II.Albeit the form factors have been computed in a well defined range of momentum transfer, once their functional q 2 dependence has been fitted and the parameters determined, we extrapolate them up to q 2 max .This procedure cannot be avoided within the method of QCD sum rules, where large positive values of q 2 are not accessible since there is a region where the distance between the points x, y and 0 in the correlator, which is the initial ingredient of this approach, is large, and therefore the standard OPE cannot be used; this is shown by the occurrence of singularities in the correlator when q 2 is close to q 2 max .As for the computed dependence on the momentum transfer, is worth reminding that deviations from the VMD expectations for the form factors A 1 and A 2 have been already observed in the literature, first in the D → K * ℓν [15] channel and then for B → ρℓν [16].Here we find a kind of common feature, i.e. all form factors deviating from the polar dependence (excluding F T ) seem to depend linearly on the momentum transfer, with small (positive or negative) slopes. It is interesting that also for T 1 (q 2 ) and T 2 (q 2 ) we can use the argument developed in the previous section concerning the limit m b → ∞: since T 1 (q 2 max ) ∼ √ M B and T 2 (q 2 max ) ∼ 1/ M B , the constraint T 1 (0) = T 2 (0) can be fulfilled by a multipolar q 2 dependence if n 1 = n 2 + 1 in eq.(3.16). At zero momentum transfer our results numerically agree with those obtained by the method of light-cone sum rules [33], within the errors and taking into account the different choices of the input parameters.In [33] it has also been observed that T 1 , V and A 1 have different functional dependencies on q 2 ; the difference with respect to our case is that the slopes are larger than those obtained from three-point sum rules; in particular, the form factor A 1 increases with q 2 .The origin of this discrepancy should be investigated. The form factors T 1 and T 2 have been computed by lattice QCD [34,35] near the point of zero recoil and for the mass of the heavy quark smaller than m b , due to the finite size of the available lattices; therefore, the results at q 2 = 0 and for a realistic value of m b are obtained after an extrapolation in the momentum transfer and in the heavy quark mass.Also in this case, in the region of large values of q 2 , the form factor T 1 increases rapidly with the momentum transfer, whereas T 2 is quite flat.As for the analytic q 2 behaviour obtained from lattice calculations, it seems to us that larger lattices are needed to enlarge the range of momentum transfer where the measurements can be performed, in order to clearly disentangle different possible dependencies of T 1 and T 2 (e.g., dipole versus pole or pole versus constant). V. RELATIONS BETWEEN RARE AND SEMILEPTONIC B DECAY FORM FACTORS In the limit m b → ∞ Isgur and Wise [21] and Burdman and Donoghue [22] have derived exact relations between the form factors F T , T i in eqs.(3.2), (4.2) and the form factors F i , V , A i in eqs.(3.1), (4.1).These relations can be easily worked out observing that, in the effective theory where the b-quark mass is taken to the infinity, the equation γ 0 b = b is fulfilled in the rest frame of the B meson. In our parametrization such relations can be written as follows, near the point of zero recoil (q 2 ≃ q 2 max = (M B − M K ( * ) ) 2 ): (5.1) where λ is the triangular function. 
It has been argued by several authors that the relations (5.1)-( 5.4) could also be valid at low values of q 2 [22], although a general proof has not been found in support of this hypothesis. Using the form factors computed by QCD sum rules in the previous Sections, it is possible to check eqs.(5.1)- (5.4).In fig. 4 we plot the ratio R = F/F IW in the case of F T , T 1 , and T 2 , as a function of q 2 , in the range of momentum transfer where the calculation has been carried out.We observe that the relations between the various form factors are verified at different level of accuracy. In the case of F T the ratio R differs from unity at the level of 25 − 30%, including the uncertainty coming from the errors of the various parameters.In particular, at q 2 = 0 we have F T /F IW T = 0.7 ± 0.1.The situation is different for the ratios concerning T 1 and T 2 , that differ from unity at the level of 10 − 20%: at q 2 = 0 we have T 1 /T IW 1 = 0.94 ± 0.05 and T 2 /T IW 2 = 1.12 ± 0.05.These results support the argument put forward in [17] on the validity of the Isgur-Wise relations, in the limit m b → ∞ also at small values of q 2 ; they also can be well compared to the outcome of light-cone sum rules, obtained for T 1 at a finite m b [33].The conclusion is that the b quark is near to the mass shell also when the recoil of the light hadron is large with respect to m b , with 1/m b corrections that do not appear to overwhelm the effect. The relations (5.1)-(5.4)could be used to perform a model independent analysis of the decays B → K ( * ) ℓ + ℓ − employing experimental information (when available) on the form factors of the semileptonic transition B → ρℓν [36].In particular, since (5.1)-(5.4)are valid on general grounds in the large q 2 region, it has been proposed to perform the analysis in the range of large invariant mass of the lepton pair, e.g.M ℓ + ℓ − ≥ 4 GeV . Albeit in principle correct, we feel that, from the experimental point of view, the procedure of extracting the semileptonic B → ρ form factors near zero recoil will be rather difficult, with large uncertainties in the final result.The problem is not avoided by the possible choice of using the form factors of the semileptonic transition D → K * ℓ + ν, and then rescaling them according to the their leading dependence on the heavy mass, i.e. V Bρ (q 2 max ) etc. (neglecting SU(3) F and α s corrections).As a matter of fact, in such procedure the next-to-leading mass corrections could be large and not under control.Finally, as we shall see in the next section, the differential branching ratios of B → K ( * ) ℓ + ℓ − at large q 2 are small, and therefore the experimental errors are expected to be sizeable.For this reason we prefer to propose an analysis of the decay extended to the full range of q 2 , using hadronic quantities determined in a well defined theoretical framework.The dependence on the computational scheme will be reduced once the different form factors have been computed by different QCD calculations, and the whole information collected in a unique set of form factors. VI. 
DECAY B → Kℓ + ℓ − We can now compute the invariant mass squared distribution of the lepton pair in the decay B → Kℓ + ℓ − : The contribution of the operators O 7 , O 9 and O 10 is taken into account in the terms proportional to C 7 , C 9 and C 10 .The operators O 1 and O 2 provide a short distance contribution, with a loop of charm quarks described by the function h(x, s) x = m c /m b , s = q 2 /m 2 b [23,24]: if s < 4x 2 , and if s > 4x 2 ; the imaginary part in (6.3) comes from on-shell charm quarks.O 1 and O 2 also provide a long distance contribution, related to cc bound states (J/ψ, ψ ′ ) converting into the lepton pair ℓ + ℓ − [37,38].This contribution can be described in terms of the J/ψ and ψ ′ leptonic decay constants < 0|cγ µ c|ψ i (ǫ, q) >= ǫ µ f ψ i M ψ i and of the full J/ψ and ψ ′ decay widths Γ ψ i .We derive f ψ i from the experimental branching ratio ψ i → ℓ + ℓ − ; in this way the whole contribution of O 1 and O 2 can be taken into account by modifying the coefficient C 9 into C ef f 9 : If the nonleptonic B → Kψ i transition is computed by factorization, the parameter k is ; the sign between the short distance and the long distance term in (6.4) can be fixed according to the analyses in ref. [38].In ref. [5] the value of k is appropriately chosen in order to reproduce the quantity: This can be done by choosing k ≃ (1.5 ÷ 2) × 3 α 2 .Notice that, since the J/ψ and ψ ′ resonances are narrow, their contribution modifies the dilepton spectrum only in the region close to M 2 ℓ + ℓ − = M 2 J/ψ , M 2 ψ ′ .As input parameters we choose the ratio m c /m b = 0.27 − 0.29 and the value of the CKM matrix element |V ts | ≃ 0.04; a different value for |V ts | only modifies the prediction of the branching ratio, leaving unchanged the shape of the spectrum [40]. We depict in fig. 5 the obtained invariant mass squared distribution of the lepton pair in B → Kℓ + ℓ − .In the same figure we also plot the spectrum obtained considering only the short distance contribution, that gives the branching ratio (using τ B = 1.5 10 −12 sec for the B− meson lifetime) B(B → Kℓ + ℓ − )| sd ≃ 3 × 10 −7 |V ts /0.04| 2 , to be compared to the experimental upper limit (obtained excluding the region near J/ψ and ψ ′ ) B(B − → K − µ + µ − ) < 0.9 × 10 −5 (at 90% CL) [41,42].The uncertainty coming from the two possible values of C 9 in Table I is less then 1% and does not have relevant consequences on the predicted branching ratio and on the invariant mass distribution. From the experimental point of view, the measurement of the spectrum in fig. 5 is a non trivial task; hopefully, it will be possible to obtain experimental results from the future dedicated e + e − colliders.The important point to be stressed is that, in the distribution depicted in fig. 5 the theoretical uncertainty connected to the hadronic matrix element is reduced to a well defined QCD computational scheme (QCD sum rules), so that in the studies of the effects of interactions beyond the Standard Model the hadronic uncertainty plays no more a major role. A great deal of information can be obtained from the channel B → K * ℓ + ℓ − investigating, together with the lepton invariant mass distribution, also the forward-backward (FB) asymmetry in the dilepton angular distribution; this may reveal effects beyond the Standard Model that could not be observed in the analysis of the decay rate. 
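For reference, the forward-backward asymmetry analyzed below is conventionally defined from the double differential rate in q² and in the angle θ_ℓ between the ℓ⁺ and the B directions in the dilepton rest frame. Normalization conventions vary (some authors do not divide by the q²-differential rate), so this should be read as a generic form rather than as a verbatim copy of the expression used in the text:

```latex
A_{FB}(q^2)=
\frac{\displaystyle\int_{0}^{1}\! d\cos\theta_\ell\,
      \frac{d^2\Gamma}{dq^2\,d\cos\theta_\ell}
     -\int_{-1}^{0}\! d\cos\theta_\ell\,
      \frac{d^2\Gamma}{dq^2\,d\cos\theta_\ell}}
     {\displaystyle\int_{-1}^{1}\! d\cos\theta_\ell\,
      \frac{d^2\Gamma}{dq^2\,d\cos\theta_\ell}} .
```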
A FB asymmetry in the dilepton angular distribution is a hint on parity violation.Since the decay B → K * ℓ + ℓ − proceeds through γ, Z and W intermediate bosons, we expect a different behaviour in the various q 2 kinematical regions.In the region of low q 2 , the photon exchange dominates, leading to a substantially vector-like parity-conserving interaction; as a consequence, we expect a small asymmetry.On the other hand, when q 2 is large, the contribution of Z and W exchange diagrams becomes important, and the interaction acquires the V-A parity violating structure, leading to a large asymmetry.As already observed in ref. [43] this pattern strongly depends on the value of the top quark mass, and the penguin diagrams with Z exchange and the W box diagram are expected to overwhelm the photon penguin diagram in correspondence to the measured m t .Moreover, since the FB asymmetry is sensitive not only to the magnitude of the Wilson coefficients, but also to their sign [5], it can be used to probe the values predicted by the Standard Model. Let us define θ ℓ as the angle between the ℓ + direction and the B direction in the rest frame of the lepton pair.Since, in the case of massless leptons, as we assume, the amplitude can be written as sum of non interfering helicity amplitudes, the double differential decay rate reads as follows: where A L corresponds to a longitudinally polarized K * , while A L(R) +(−) represent the contribution from left (right) leptons and from K * with transverse polarization: We obtain and where λ = λ(M 2 B , M 2 K * , q 2 ).The terms A,C,B 1 ,D 1 contain the short distance coefficients, as well as the form factors: The FB asymmetry is defined as thus we have .12) A F B (q 2 ) is depicted in fig.6; it is consistent with the prediction of low asymmetry in the small q 2 region and high asymmetry for large q 2 .The analysis of the individual shapes of the helicity amplitudes (neglecting the long distance contribution) shows that A L + and A R + have comparable size, and therefore there is a cancellation of their contribution in eq.(7.12); moreover, they are small with respect to A L,R − .In the region of large M 2 ℓ + ℓ − , A L − dominates over A R − , whereas the situation is reversed for low dilepton invariant mass squared, and this is the reason of the small positive asymmetry appearing in fig.6 It is interesting to observe that such positive asymmetry depends on C 7 , and that it disappears if C 7 has a reversed sign. The invariant mass squared distribution of the lepton pair is depicted in fig.7, where the short distance contribution is separately displayed.The predicted branching ratio is to be compared to the experimental upper limit: CL) obtained excluding the region of the resonances J/ψ and ψ ′ [41,44], [39].Also in this case the uncertainty on C 9 does not have relevant consequences. The interesting observation is that, for low values of the invariant mass squared, the distribution is still sizeable, an effect that could be revealed at future B-factories such as the Pep-II asymmetric e + e − collider at SLAC. VIII. CONCLUSIONS In this paper we have analyzed some features of the rare decays B → Kℓ + ℓ − and B → K * ℓ + ℓ − within the theoretical framework provided by the Standard Model, using an approach based on three point function QCD sum rules to compute the relevant hadronic matrix elements. 
Albeit QCD sum rules have their own limitations (finite number of terms in the Operator Product Expansion of the correlators, values of the condensates, validity of the local duality assumption), we believe that the obtained results are meaningful from the quantitative point of view. There is a quite good agreement with independent QCD methods (lattice QCD, lightcone sum rules) for few quantities computed by the various approaches.The calculations of the remaining quantities (F 0 , T i , A 0 ) by the other two methods is required in order to complete the overview on the various results. We have used our results to test some relations among the computed form factors which hold in the infinite heavy quark limit, but that are expected to hold also for low values of q 2 and for finite b mass.We have found that the different form factors satisfy with different accuracies these relations, which can be explained by a different role of the 1/m b corrections. As for the decays we have analyzed in the present paper, within the Standard Model they are expected with branching ratios of the order 10 −7 (B → Kℓ + ℓ − ) and 10 −6 (B → K * ℓ + ℓ − ), with peculiar shapes of the invariant mass of the lepton pair and of the FB asymmetry.Any deviation from the above expectations would be interpreted as a signal of deviation from the Standard Model.Interesting experimental data are therefore expected from current and future e + e − colliders in this exciting sector of the heavy flavour physics. In the formulae for the coefficients of the non perturbative contributions, reported in this Appendix and in the following one, we have omitted all terms that vanish after the double Borel transform. Fig. 4 Momentum dependence of the ratio between rare and semileptonic form factors R = F i (q 2 )/F IW i (q 2 ); F IW i are obtained from eqs.(5.1-5.3). Fig. 5 Invariant mass squared distribution of the lepton pair for the decay B → Kℓ + ℓ − : the dashed line refers to the short distance contribution only. Fig. 6 Fig. 6Forward-backward asymmetry in the decay B → K * ℓ + ℓ − ; the dashed line refers to the short distance contribution only. Fig. 7 Fig. 7Invariant mass squared distribution of the lepton pair for the decay B → K * ℓ + ℓ − : the dashed line refers to the short distance contribution only. it can also be observed that the coefficients of O 3 − O 6 are small (O(10 −2 )); therefore, the contribution of such operators can be neglected, and the analysis can be carried out considering only the operators O 1 , O 2 and to O 7 , O 9 and O 10 .The various extensions of the Standard Model, such as models involving supersymmetry, multiHiggs and left-right models, induce two kind of changes in the low energy Hamiltonian (2.1): first, the values of the coefficients C i are modified as an effect of additional virtual particles in the loop diagrams describing the b → s transition, and, second, new operators can appear in the operator basis, such as operators with different chirality of the quark current with respect to O 7 − O 10 , e.g., O ′ 7
2018-04-03T00:16:09.510Z
1995-10-25T00:00:00.000
{ "year": 1995, "sha1": "28f346fc3dfc49c86553035af71c8ecfec1e0cf0", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "28f346fc3dfc49c86553035af71c8ecfec1e0cf0", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics", "Medicine" ] }
214599778
pes2o/s2orc
v3-fos-license
Synthesis of acridone derivatives via heterologous expression of a plant type III polyketide synthase in Escherichia coli Background Acridone alkaloids are heterocyclic compounds that exhibit a broad-range of pharmaceutical and chemotherapeutic activities, including anticancer, antiviral, anti-inflammatory, antimalarial, and antimicrobial effects. Certain plant species such as Citrus microcarpa, Ruta graveolens, and Toddaliopsis bremekampii synthesize acridone alkaloids from anthranilate and malonyl-CoA. Results We synthesized two acridones in Escherichia coli. Acridone synthase (ACS) and anthraniloyl-CoA ligase genes were transformed into E. coli, and the synthesis of acridone was examined. To increase the levels of endogenous anthranilate, we tested several constructs expressing proteins involved in the shikimate pathway and selected the best construct. To boost the supply of malonyl-CoA, genes coding for acetyl-coenzyme A carboxylase (ACC) from Photorhabdus luminescens were overexpressed in E. coli. For the synthesis of 1,3-dihydroxy-10-methylacridone, we utilized an N-methyltransferase gene (NMT) to supply N-methylanthranilate and a new N-methylanthraniloyl-CoA ligase. After selecting the best combination of genes, approximately 17.3 mg/L of 1,3-dihydroxy-9(10H)-acridone (DHA) and 26.0 mg/L of 1,3-dihydroxy-10-methylacridone (NMA) were synthesized. Conclusions Two bioactive acridone derivatives were synthesized by expressing type III plant polyketide synthases and other genes in E. coli, which increased the supplement of substrates. This study showed that is possible to synthesize diverse polyketides in E. coli using plant polyketide synthases. Background Natural compounds are valuable in cosmetics, food, and pharmaceutical industries [1]. Therefore, natural and nature-inspired, chemically synthesized compounds have extensively been developed and exploited for countless industrial purposes. Phytochemicals are typical natural compounds that have additional biological, nutritive, and/or pharmacological value. Among the diverse phytochemicals, secondary metabolites such as alkaloids, phenylpropanoids, and terpenoids have been extensively studied, and some of them have been employed in various fields [2]. Acridones are heterocyclic alkaloids that contain a tricyclic ring with nitrogen at the 10th position and a carbonyl group at the 9th position [3]. Acridone alkaloids are secondary metabolites that are generally found in the plant family, Rutaceae [4]. Various acridone derivatives (glyforine, acronycine, thioacridones, and substituted 9-aminoacridines, etc.) have been reported to exert a wide range of chemotherapeutic effects including anticancer, antimicrobial, antimalarial, antipsoriatic activities [5][6][7][8]. The synthesis of acridone alkaloids in plants (Rutaceae family) was reported several decades after the discovery of acridine as a derivative of coal tar [9]. Microorganic biosynthetic platforms have emerged as the leading platforms for the production of natural and synthetic value-added compounds, such as flavonoids, alkaloids, polyketides, and various chemicals. Due to its well-established genetics and physiology, Escherichia coli has become one of the representative microorganisms in biosynthetic platforms [15]. One of the secondary metabolic pathways of E. coli, the shikimate pathway has received considerable attention as it is a major pathway for the production of aromatic compounds [16]. 
Biosynthetic pathways for aromatic amino acid production (l-tryptophan, l-tyrosine and l-phenylalanine) including the shikimic acid pathway provide the chemical building blocks for the synthesis of various chemicals through specific intermediates, such as chorismate and shikimate [17][18][19][20][21]. We synthesized two acridones (1,3-dihydroxy-9(10H)acridone [DHA] and 1,3-dihydroxy-10-methylacridone [NMA]) using engineered E. coli and two substrates, namely anthranilate, and malonyl-CoA. To optimize the substrate supply for the synthesis of acridone, we prepared several sets of constructs; the first set for the synthesis of anthranilate using genes coding for proteins involved in the shikimate pathway and the second set for the synthesis of malonyl-CoA by overexpressing acetylcoenzyme A carboxylases (ACCs). For the synthesis of NMA (1,3-dihydroxy-10-methylacridone), we additionally introduced the N-methyltransferase gene (NMT) to supply N-methylanthranilate by using endogenous anthranilate. The overall scheme of the biosynthesis of these two compounds is shown in Fig. 1. Through a combination of these genes along with ACS, badA, and pqsA, which are involved in CoA utilization or substrate cyclization, we were able to synthesize 17.3 mg/L DHA and 26.0 mg/L NMA. Screening of constructs to synthesize DHA and NMA DHA and NMA are synthesized from anthranilate or N-methylanthranilate and malonyl-CoA, respectively. Anthranilate and N-methylanthranilate are activated by coenzyme A. We tested two CoA ligases, badA-encoding benzoate coenzyme A ligase-and pqsA encoding anthranilate coenzyme A ligase. Two ACSs, RgACS, and CmACS were tested. E. coli-harboring each of the four constructs pC-RgACS-badA, pC-CmACS-badA, RgACS-pqsA or pC-CmACS-pqsA-was exposed to 100 μM anthranilate or N-methylanthranilate. A new peak was observed in culture filtrates from E. coli strains harboring RgACS-badA or pC-CmACS-badA when they were supplied with anthranilate (Fig. 2d, e). The molecular mass of the synthesized product was 227.06 Da, which corresponded to the predicted mass of DHA. However, E. coli cells harboring RgACS-pqsA or pC-CmACS-pqsA that were supplied with N-methylantrhanilate synthesized a new product whose molecular mass was 240.87 Da, which is the predicted mass of NMA (Fig. 2e, g). Based on the structure-using nuclear magnetic resonance spectroscopy (NMR)-we confirmed that the two compounds were DHA and NMA, respectively, (see "Methods"). These results indicated that badA could potentially convert anthranilate into anthraniloyl-CoA and that pqsA is responsible for the conversion of N-methylanthranilate into N-methylanthraniloyl-CoA. Escherichia coli strains harboring RgACS synthesized 11.80 mg/L DHA (51.96 μM) when 100 μM anthranilate was supplied, and synthesized 17.52 mg/L (72.62 μM) NMA when 100 μM N-methylanthranilate was provided. This yield exceeded that obtained using E. coli harboring CmACS, which synthesized 1.4 mg/L DHA and 6.0 mg/L NMA. In addition, the amount of byproduct such as 2,3-dihydroxyquinoline (DHQ) were found more in E. coli harboring CmACS and the unreacted N-methylanthranilate was observed in E. coli harboring CmACS. This result indicates that RgACS effectively synthesizes DHA and NMA. We observed the synthesis of 2,4-dihydroxyquinline (DHQ) in E. coli strains harboring RgACS-badA or CmACS-badA. DHQ also used anthranoyl-CoA and malonyl-CoA. Two molecules of malonyl-CoA instead of three, are used to synthesize DHQ. The amount of the synthesized DHQ was 2.6 mg/L in E. 
coli harboring CmACS-badA and 3.6 mg/L in E. coli harboring RgACS-badA, while the amount of DHA was 1.3 mg/L in CmACS and 10.5 mg/L in RgACS. The synthesis of N-methylquinoline (NMQ) was observed in the culture filtrate of E. coli harboring CmACS-pqsA. Nevertheless, we could not observe any detectable NMQ in E. coli harboring RgACS-pqsA. Enzymatic reactions with N-methylanthranilate using CmANS revealed that the synthesized products resulted from the incorporation of two (N-methylquinolone) or three molecules (N-methylacridone) of malonyl-CoA with a preference towards N-methylacridone synthesis [14]. However, the enzymatic reaction using RgACS with N-methylanthranilate produced only NMA (but not N-methylquinolone) [22]. These results indicate that RgACS is better than CmACS at synthesizing DHA and NMA. Therefore, we selected constructs containing RgACS for further experiments. Synthesis of NMA N-methylanthranilate is the building block of NMA, but E. coli does not synthesize N-methylanthranilate. Anthranilate NMT was employed to synthesize NMA. In order to increase the substrate for NMT, trpE was overexpressed. The second substrate of NMA synthesis is malonyl-CoA. The effects the four constructs that reportedly affect intracellular malonyl-CoA were individually tested with respect to NMA synthesis. Three of them (PDHm, acs, and ackA-pta) increased the level of acetyl-CoA [23,24] and one of them (acc) synthesized malonyl-CoA from acetyl-CoA [24]. We engineered five the overexpression of gene involved in acetyl-CoA or malonyl-CoA increased the synthesis of NMA and the enhancement of malonyl-CoA synthesis by acc is more effective in the synthesis of NMA than the increase of acetyl-CoA by pta-ackA, PDHm, or acs. We also tried to increase endogenous anthranilate levels by overexpressing aroG and the feedback-inhibitionfree version of aroG (aroG f ). Two more E. coli strains (B-NMA-8 and B-NMA-9) were tested. However, we could not detect the synthesis of NMA. Only the accumulation of anthranilate and N-methylanthranilate was observed. The unreacted anthranilate and N-methylanthranilate in B-NMA-8 were 16.0 and 35.0 mg/L, respectively; only 7.2 mg/L N-methylanthranilate was observed in B-NMA-3, whereas anthranilate was not observed. The rapid synthesis of anthranilate or N-methylanthranilate seemingly inhibited the synthesis of NMA. Notably, higher copy number plasmids containing RgACS and pqsA did not further increase NMA synthesis. Likely, the activities of these two downstream proteins got saturated when converting the synthesized N-methylanthranilate into NMA. Fine-tuning of the whole process is critical to increasing the final yield of the product [25,26]. Using the strain B-NMA3, we monitored the synthesis of NMA and N-methylanthranilate for 27 h. The synthesis of both NMA and N-methylanthranilate showed a similar pattern (Fig. 4) , g). P1 and P3 were DHQ. P2 and P4 were determined to be DHA by NMR. S was unreacted N-methylanthranilate. P5 and P6 were determined to be NMA by NMR. U was unidentified product which seemed to be an intermediate of DHA and its retention time was slightly different from that of NMA Synthesis of DHA Anthranilate and malonyl-CoA are substrates for DHA. Endogenous levels of these two compounds are probably critical determinants of the final yield. To increase DHA synthesis, we used two strategies. The first strategy was to increase endogenous anthranilate. The shikimate pathway synthesizes anthranilate. Genes in this pathway were overexpressed. 
The second strategy was to use plasmids with different copy numbers to express RgACS and badA. We constructed eight different E. coli strains. The levels of synthesized DHA increased from 2.56 mg/L in B-DHA3 to 6.39 mg/L in B-DHA5, and the strain B-DHA6 produced approximately 3.98 mg/L of DHA. Importantly, the levels of unreacted anthranilate continued to increase, from 0.72 mg/L in B-DHA3 to 593.40 mg/L in B-DHA6. It seemed that higher production of anthranilate inhibited the synthesis of DHA and that the conversion of the synthesized anthranilate into DHA was critical for increasing the yield of DHA. In order to augment the conversion of anthranilate, stronger expression of the downstream genes (badA and RgACS) seemed necessary. Therefore, we tested strains B-DHA7 to B-DHA10. The synthesis of DHA increased from 1.12 mg/L in B-DHA7 to 17.3 mg/L in B-DHA10 (Fig. 5). In particular, the strains that were expected to synthesize more anthranilate produced more DHA. In addition, the levels of unreacted anthranilate in these strains were lower than those in the corresponding strains harboring lower-copy-number badA and RgACS. Taken together, the higher copy number of RgACS and badA facilitated the synthesis of DHA. We also tested the four constructs that were supposed to increase intracellular levels of malonyl-CoA (Fig. 6).

Discussion

In the present work, we successfully synthesized two acridone derivatives, 1,3-dihydroxy-9(10H)-acridone and 1,3-dihydroxy-10-methylacridone, using engineered E. coli. Genes coding for proteins in the shikimate pathway, together with trpE encoding anthranilate synthase, were tested and selected for the synthesis of the first substrate, anthranilate. Acetyl-CoA carboxylase from P. luminescens was introduced to increase the available level of the second substrate for ACS, malonyl-CoA. We tested ACS from R. graveolens and C. microcarpa to select the one that performed better with respect to the synthesis of DHA and NMA. The results of in vitro enzymatic efficacy tests showed that ACS from R. graveolens outperformed that from C. microcarpa [11,14,22]. Sometimes, in vitro enzymatic results do not correlate with in vivo results because of the presence of unknown substrates in vivo, which may inhibit or divert the enzymatic activity [27]. Therefore, we tested the in vivo synthesis of acridone using both genes. In this study, the in vivo biosynthesis of acridones by RgACS showed better productivity than that by CmACS. Based on the in vitro properties of ACS and on the in vivo acridone biosynthesis experiment, we could identify a positive correlation between enzyme properties and acridone biosynthesis.

In order to increase the final yield of the two acridones, we tested the genes coding for proteins involved in the shikimate pathway. We observed a dramatic increase in the levels of intermediates, such as anthranilate, rather than an increase in DHA levels during DHA synthesis. Importantly, during the synthesis of DHA, the rate-limiting step was likely the conversion of anthraniloyl-CoA into DHA by PKS, whereas the conversion of anthranilate into N-methylanthranilate and/or the conversion of N-methylanthraniloyl-CoA into NMA were the limiting steps during the synthesis of NMA. Exposure of E. coli harboring the CoA ligase and PKS to increasing concentrations of anthranilate or N-methylanthranilate resulted in no further increase in the synthesis of DHA and NMA above approximately 500 μM anthranilate and 300 μM N-methylanthranilate.
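To relate the unreacted anthranilate titers reported for the B-DHA strains to the approximately 500 μM concentration mentioned above, the mass-based values can be converted to molar units. The sketch below is a minimal illustration; the molecular mass of anthranilic acid (about 137.14 g/mol) and the reading of 500 μM as an approximate inhibitory threshold are assumptions based on the surrounding text, not calculations from the original study.

```python
# Compare residual (unreacted) anthranilate, reported in mg/L, with the
# ~500 uM concentration above which DHA synthesis no longer increased.

ANTHRANILATE_MW = 137.14          # g/mol (anthranilic acid), assumed here
INHIBITION_THRESHOLD_UM = 500.0   # approximate value quoted in the text

residual_anthranilate_mg_per_l = {
    "B-DHA3": 0.72,    # values reported in the text
    "B-DHA6": 593.40,
}

for strain, mg_per_l in residual_anthranilate_mg_per_l.items():
    conc_um = mg_per_l / ANTHRANILATE_MW * 1000.0
    status = "above" if conc_um > INHIBITION_THRESHOLD_UM else "below"
    print(f"{strain}: {mg_per_l} mg/L anthranilate = {conc_um:.0f} uM "
          f"({status} the ~{INHIBITION_THRESHOLD_UM:.0f} uM threshold)")
```

On this reading, the anthranilate accumulating in B-DHA6 (roughly 4.3 mM) lies far above the concentration at which product formation stopped increasing, consistent with the interpretation that conversion of anthranilate by badA and RgACS, rather than anthranilate supply, limits DHA synthesis.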
The endogenous level of anthranilate upon expressing the genes coding for proteins in the shikimate pathway increased to more than 500 μM (Fig. 5b), a concentration at which the synthesis of DHA is likely inhibited. These findings indicated that PKS was probably the rate-limiting step. In the case of NMA synthesis, we found that the conversion of anthranilate into N-methylanthranilate was a limiting step [21]. An in vitro enzymatic study using the purified ACS from R. graveolens also showed that ACS was inhibited by 250 μM N-methylanthraniloyl-CoA [28]. The construct that minimized the accumulation of anthranilate appeared to be the best for the synthesis of NMA and DHA. Fine-tuning of the overall pathway is critical to enhancing the final yield of the product.

Aerobic growth of E. coli produces ATP, ubiquinol-8, CO2, and a considerable amount of acetic acid as a byproduct through the acetate-producing pathways [29]. The production of acetic acid could negatively influence the synthesis of acridone derivatives. The synthesized acetic acid is removed by converting it into acetyl-CoA via acetyl-CoA synthetase (acs) [30]. Acetyl-CoA is then converted into malonyl-CoA by ACC. Overexpression of acs or acc enhanced the production of DHA and NMA because it resulted not only in the supply of the second substrate (malonyl-CoA) but also in the reduction of the byproduct, acetic acid. These results agree with previous reports that overexpression of acs, acc, or PDH enhanced the synthesis of flavonoids and triacetic acid lactone [23,24,31].

Conclusions

We synthesized two acridones (DHA and NMA) in Escherichia coli using two substrates, namely anthranilate and malonyl-CoA. To this end, plant acridone synthase (ACS) and anthraniloyl-CoA ligase genes were transformed into E. coli. To optimize the substrate supply for acridone synthesis, we prepared several sets of constructs: the first set for the synthesis of anthranilate, using genes coding for proteins involved in the shikimate pathway (the major pathway for the production of aromatic compounds), and the second set for the synthesis of malonyl-CoA, by overexpressing acetyl-coenzyme A carboxylases (ACCs). For the synthesis of NMA, we additionally introduced the N-methyltransferase gene (NMT) to supply N-methylanthranilate from endogenous anthranilate. Through a combination of these genes along with ACS, badA, and pqsA, which are involved in CoA utilization or substrate cyclization, we were able to synthesize 17.3 mg/L DHA and 26.0 mg/L NMA.

Methods

NMT from Ruta graveolens had been cloned previously [20]. In order to prepare the pC-aroG-NMT-trpE construct, trpE was amplified using a forward primer containing a BamHI site and a reverse primer containing a XhoI site, and was subcloned into pCDF-duet1 (BglII/BamHI) (pC-trpE). NMT was amplified with a forward primer containing a BamHI site and a reverse primer containing an AflII site, following which it was subcloned into pC-trpE (BamHI/AflII). The resulting construct was digested with BamHI/XhoI and then subcloned into pCDF-duet1 (BglII/XhoI) (pC-NMT-trpE). aroG or aroGf was amplified using primers containing EcoRI (forward primer) and NotI (reverse primer) sites and subcloned into pC-NMT-trpE (EcoRI/NotI). The constructs and strains used in this study are listed in Table 1.

Production and analysis of DHA and NMA in E. coli

Overnight cultures of E. coli transformants were inoculated into fresh LB containing the appropriate antibiotics and grown at 37 °C until OD600 = 1.
Cells were harvested and resuspended in M9 medium containing 2% glucose, 1% yeast extract, antibiotics, and 1 mM isopropyl β-d-1-thiogalactopyranoside (IPTG) in a test tube, except for the time-course experiment, in which the synthesis of NMA and DHA was monitored for 27 h in a flask. The cells were grown at 30 °C with shaking for 24 h. The culture supernatant was extracted with three volumes of ethyl acetate (EA). The upper layer, after centrifugation, was collected and dried. The dried sample was dissolved in 60 μL dimethyl sulfoxide (DMSO). To analyze the formation of DHA and NMA, a Thermo Ultimate 3000 high-performance liquid chromatography (HPLC) system equipped with a photodiode array (PDA) detector and a Varian C18 reversed-phase column (Varian, 4.60 × 250 mm, 3.5 μm particle size) was used [21]. The synthesized DHA was purified using HPLC. The mobile phase consisted of water and acetonitrile (7:3, v/v), and no gradient was applied. The structure of the purified compounds was determined using proton nuclear magnetic resonance (NMR) spectroscopy.

To determine the structure of NMA (1,3-dihydroxy-10-methylacridone), thin-layer chromatography (TLC; silica gel 60 F254, Millipore) was used to purify the putative NMA. Ethyl acetate and hexane (2:1, v/v) were used as the developing solvents. The purified sample was dissolved in acetone-d6. The chemical shifts in the 1H and 13C NMR data were referenced to tetramethylsilane (TMS). In order to verify the structure, COSY, TOCSY, NOESY, 1H-13C HMQC, and 1H-13C HMBC experiments were used. The mixing times for TOCSY and NOESY were 60 ms and 1 s, respectively, and the delay for the evolution of long-range couplings in HMBC was 70 ms. In the 1H spectrum, six peaks were observed in the aromatic region, while a single peak was observed at 3.90 ppm. All peaks in the aromatic region were assigned using COSY and TOCSY. The N-attached methyl group was assigned to the peak at 3.899 ppm, as it showed cross-peaks with H-5 and H-4 in NOESY.
The Molecular Basis of Alcohol Use Disorder (AUD). Genetics, Epigenetics, and Nutrition in AUD: An Amazing Triangle

Alcohol use disorder (AUD) is a very common and complex disease, as alcohol is the most widely used addictive drug in the world. This disorder has an enormous impact on public health and on social and private life, and it generates enormous social costs. Alcohol use stimulates hypothalamic–pituitary–adrenal (HPA) axis responses and is the cause of many physical and social problems (especially liver disease and cancer), accidental injury, and risky sexual behavior. For years, researchers have been trying to identify the genetic basis of alcohol use disorder, the molecular mechanisms responsible for its development, and an effective form of therapy. Genetic and environmental factors are known to contribute to the development of AUD, and the expression of genes is a complicated process that depends on epigenetic modulations. Dietary nutrients, such as vitamins, may serve as one of these modulators, as they have a direct impact on epigenomes. In this review, we connect the gathered knowledge from three emerging fields (genetics, epigenetics, and nutrition) to form an amazing triangle relating to alcohol use disorder.

Introduction

Alcohol use disorder (AUD) is a complex, multifaceted psychiatric condition. Its development and regulation are believed to be products of both genetic and environmental influences on the human brain. These factors may also increase susceptibility to the development of alcohol addiction. Recent research on alcohol exposure suggests that chemical modifications of the genome, known as epigenetic mechanisms, are important molecular factors that may be helpful in the discovery of the pathogenesis of AUD. Alcohol addiction is also genetically complex and includes genetic heterogeneity at the level of neurobiological vulnerability, polygenicity, phenocopies, and gene-environment interaction [1]. As shown in Figure 1, mutual associations among the abovementioned factors seem to play an important role in the development of addictions. In the selection of articles, we were directed by our areas of specialization, our clinical work, and the aim of our future study, which is interdisciplinary, including epigenetic DNA modifications, dietetics, and addiction. The articles were searched for mainly in the National Center for Biotechnology Information database (PubMed.gov, accessed on 10 February 2021).

The expression of a phenotype, both on a cellular and an organismal level, is not only dependent on the hereditary background but may also be modulated by nutritional and environmental factors. This is the interplay of nonmodifiable variables describing the genotype and modifiable variables describing nutritional and environmental factors. Alcohol abuse modifies the structure of chromatin and modulates gene expression through epigenetic changes. In a feedback loop, this neural remodeling conversely reinforces the abuse of alcohol. This is hypothesized to move alcohol abuse through the stages that eventually lead to addiction.

Alcohol Metabolism and Genetic Polymorphisms in Alcohol Addiction

The identification of the genes that predispose individuals to alcohol addiction is very important. The products of these genes are responsible for the human response to alcohol exposure (ethanol-metabolizing enzymes) and to clinical treatment, and they may modulate the interaction with environmental factors. The process of addiction involves a cellular molecular network in which hundreds of genes play very important roles. Products of these genes act as neurobiological factors in processes such as reward, behavioral control, and stress reactions. While they are also an important part of the development of mental diseases, a group of genes strictly involved in alcohol metabolism has also been identified, and their polymorphisms may lead to serious health consequences [3].

Genes and Enzymes Involved in Alcohol Metabolism

Alcohol is metabolized in the human body by various mechanisms.
The oxidative metabolism pathway requires two main enzymes: alcohol dehydrogenase (ADH), which oxidizes ethanol to the highly toxic acetaldehyde, and aldehyde dehydrogenase (ALDH), which converts acetaldehyde into the nontoxic acetate and eventually into acetyl-CoA. Acetyl-CoA is metabolized to water and carbon dioxide for easy elimination [4]. Through this process, depending on the nutritional, hormonal, and energetic conditions, acetyl-CoA may be converted into CO2, ketone bodies, fatty acids, and cholesterol. Apart from these two very important enzymes, ethanol is metabolized by others, such as cytochrome P450 2E1 (CYP2E1) and catalase, which also take part in the production of acetaldehyde from ethanol oxidation. The microsomal ethanol-oxidizing system mostly involves the cytochrome CYP2E1 and two others, CYP1A2 and CYP3A4. Microsomal ethanol oxidation represents the main non-ADH ethanol-metabolizing system in the liver [5]. The alternative ethanol metabolism pathway is nonoxidative. The first reaction of this pathway is catalyzed by the enzyme fatty acid ethyl ester (FAEE) synthase and leads to the formation of FAEEs. Fatty acid ethyl esters (FAEEs) have been implicated as mediators of ethanol-induced organ damage, and it has been shown that FAEE synthase is present selectively in the organs commonly damaged by ethanol abuse [6]. The second reaction involves the formation of a phospholipid known as phosphatidylethanol (PEth), which has become a specific and sensitive alcohol biomarker [7].

The Alcohol Dehydrogenase 1B (ADH1B) and Aldehyde Dehydrogenase 2 (ALDH2) Genes

A convincing example of the role of genes in alcohol use disorder is provided by the two genes mentioned above, the products of which are strongly linked to alcohol metabolism: the alcohol dehydrogenase 1B (ADH1B) and aldehyde dehydrogenase 2 (ALDH2) genes. In humans, on the basis of structural properties and kinetics, the enzyme ADH has been categorized into five classes, but the main isoforms involved in ethanol metabolism are the ADHs from classes I, II, and IV. Two of the three ADH1 enzymes, ADH1B and ADH1C, show genetic polymorphisms (Table 1). Genetic polymorphisms at the ADH1B and ADH1C gene loci are associated with different enzyme activities [8]. Among the many isozymes of ALDH, only the cytosolic ALDH1 and mitochondrial ALDH2 can metabolize acetaldehyde (Table 2). A significant role in acetaldehyde oxidation has been observed only for the ALDH1A1, ALDH1B1, and ALDH2 isoforms; however, mitochondrial ALDH2 plays the central role in human acetaldehyde metabolism [3]. Genetic polymorphisms of ADH and ALDH have been shown to be linked with alcohol consumption habits, as well as with susceptibility to the development of alcohol abuse and alcohol addiction. Two variants of the ALDH1 enzyme, ALDH1A1*2 and ALDH1A1*3, may be associated with alcohol addiction in African-Americans [10]. Special attention has been paid to the genetic variants of both the ADH1B and ADH1C genes. The ADH1B*2, ADH1B*3, and ADH1C*1 variants have a faster rate of ethanol oxidation, so they lead to acetaldehyde accumulation. ADH1B*1 is the predominant allele in all populations. In some Asian populations, ADH1B*2 may be found in 90% of the population. In Caucasian populations, ADH1C*1 and ADH1C*2 appear with equal frequency [12], whereas the ADH1C*1 allele is present in about 50% of Europeans. The ADH1B*3 allele occurs mainly in Africans, African-Americans, and some Native Americans [13].
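The population allele frequencies quoted above translate into expected genotype (and carrier) frequencies under Hardy-Weinberg equilibrium, which is how such figures are usually interpreted. The Python sketch below uses the roughly 90% ADH1B*2 allele frequency cited for some Asian populations purely as an illustrative input; it is not an analysis performed in the cited studies.

```python
# Expected genotype frequencies under Hardy-Weinberg equilibrium (p^2, 2pq, q^2)
# for a biallelic locus, illustrated with the ~90% ADH1B*2 allele frequency
# quoted for some Asian populations.

def hardy_weinberg(p: float) -> dict:
    """Return expected genotype frequencies for allele frequencies p and q = 1 - p."""
    q = 1.0 - p
    return {
        "homozygous *2/*2": p * p,
        "heterozygous *1/*2": 2 * p * q,
        "homozygous *1/*1": q * q,
    }

adh1b2_freq = 0.90  # illustrative allele frequency for ADH1B*2
for genotype, freq in hardy_weinberg(adh1b2_freq).items():
    print(f"{genotype}: {freq:.2%}")
# Carriers of at least one *2 allele: 1 - q^2 = 99% in this example.
```

With p = 0.9, about 81% of individuals are expected to be *2/*2 homozygotes and about 99% to carry at least one *2 allele.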
The ADH1B*2 allele has been shown to decrease the occurrence of alcohol abuse and alcohol addiction in Asians and in the Caucasian and Jewish populations [14]. Interestingly, several studies have reported that ADH1B allele frequencies differ between alcoholics and non-alcoholics, with a higher occurrence of the atypical form, ADH1B*2, in non-alcoholics [15]. Meta-analyses have suggested that the ADH1B*1 allele is associated with a three-fold increase in the risk of alcoholism relative to ADH1B*2 [16], which is therefore considered to have a protective function against alcohol abuse and alcoholism. While this protective effect was believed to result from the unpleasant symptoms connected with acetaldehyde accumulation after alcohol consumption [17], the frequency of facial flushing (a typical result of acetaldehyde accumulation) was shown to be similar in individuals carrying different ADH1B alleles [18]. Additionally, the primary flushing effect is believed to be a result of higher ADH1B activity and lower ALDH2 activity and was also found to be independent of the ADH1B*2 and ALDH2*2 alleles [19]. Some reports have underlined that the most significant functional gene loci are the His47Arg polymorphism in the ADH1B gene, where His47 is the overactive version, and the ALDH2 Glu487Lys polymorphism, in which the Lys487 allele deactivates ALDH2 [1]. In the Chinese and Japanese as well as in Jewish populations, where both His47 and Lys487 are plentiful, some people carry genotypes that protect them from the development of alcohol addiction. Finally, the precise mechanisms by which these gene polymorphisms confer susceptibility or resistance to alcoholism still need to be clarified.

The Microsomal Ethanol Oxidizing System

This system mostly involves CYP2E1, along with the cytochromes CYP1A2 and CYP3A4. CYP2E1 is active only after the consumption of a large amount of alcohol. After chronic ethanol consumption, the activity of the microsomal ethanol-oxidizing system (MEOS) increases, with an associated rise in P-450 cytochromes, especially CYP2E1. When alcohol is metabolized by CYP2E1, highly reactive oxygen species (ROS) are produced. Owing to the many discrepancies among studies concerning the association between the CYP2E1 functional polymorphism and alcohol addiction, this polymorphism is not currently believed to be related to alcohol abuse [3]. Genetic catalase polymorphism also still needs further investigation. As has been shown, a common polymorphism in the promoter region of the catalase gene, CAT c.-262C > T, has an impact on alcohol dependence and its severity [20]. It was found that CAT levels were significantly higher in subjects carrying the CAT -262T allele [20]. Interestingly, some studies have observed that blood catalase activity significantly correlates with alcohol consumption, and human brain catalase activity modulates the urge to consume alcohol [21]. The role of catalase in alcohol use disorder is supported by the results of a study showing that subjects with a family history of alcoholism have a higher catalase activity than a control group [22]. In light of the above, pharmacological manipulation of ethanol metabolism in humans is based on ALDH inhibition. ALDH inhibitors, such as disulfiram (Antabuse) and calcium carbimide (Abstem, Temposil), are used as the basis for treating chronic alcoholics. Due to the limited efficacy of current medications, personalized treatment is required in the clinical management of patients.
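The three-fold increase in risk quoted from the meta-analyses above corresponds to an odds ratio computed from allele counts in cases and controls. The sketch below shows how such an odds ratio and its confidence interval are derived; the counts are invented solely so that the ratio comes out near three and are not taken from reference [16].

```python
import math

# Odds ratio (OR) with a 95% confidence interval from a 2x2 table of allele
# counts in alcohol-dependent cases versus controls. All counts are hypothetical.

cases_adh1b1, cases_adh1b2 = 180, 20        # ADH1B*1 vs ADH1B*2 alleles in cases
controls_adh1b1, controls_adh1b2 = 150, 50  # the same in controls

odds_cases = cases_adh1b1 / cases_adh1b2
odds_controls = controls_adh1b1 / controls_adh1b2
odds_ratio = odds_cases / odds_controls

# Standard error of log(OR) by the Woolf method.
se_log_or = math.sqrt(1 / cases_adh1b1 + 1 / cases_adh1b2
                      + 1 / controls_adh1b1 + 1 / controls_adh1b2)
ci_low = math.exp(math.log(odds_ratio) - 1.96 * se_log_or)
ci_high = math.exp(math.log(odds_ratio) + 1.96 * se_log_or)

print(f"OR for ADH1B*1 vs ADH1B*2: {odds_ratio:.2f} (95% CI {ci_low:.2f}-{ci_high:.2f})")
```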
Another medicine whose mechanism is closely related to the genotype of patients is naltrexone, a µ-opioid receptor antagonist that is also used in the treatment of alcoholics, with moderate efficacy. The gene encoding the µ-opioid receptor 1 (OPRM1) has become the focus of studies describing a functional polymorphism within the coding sequence of this gene (Asn40Asp) [23]. Here, too, the study data are conflicting, although many results seem promising, showing that the functionally significant OPRM1 Asp40 allele predicts the naltrexone treatment response in alcoholic individuals, so that OPRM1 genotyping in alcoholic individuals might be useful for the selection of treatment options [24]. On the other hand, the response to alcohol consumption may also differ due to changes in opioid receptors.

Gene and Environmental Interactions

The development of alcohol addiction is a very complex process, in which the interaction between genes and the environment plays a crucial role. Environmental factors include the availability of alcohol, parental attitudes, childhood maltreatment, and peer pressure. Gene and environment interactions (GxE) occur when the effect of exposure to an environmental factor on a person's health is conditioned by the genotype. Gene-environment effects within psychiatric disorders have been described for many genes, such as monoamine oxidase A (MAOA), the serotonin transporter (HTT), COMT, the corticotrophin-releasing hormone receptor 1 gene, and the dopamine transporter [25][26][27].

The Catechol-O-Methyltransferase Gene (COMT)

One of the genes that are important in the development of AUD is catechol-O-methyltransferase (COMT). This enzyme metabolizes dopamine, norepinephrine, and other catecholamines. Val158Met is a common functional polymorphism located in the coding sequence of the human COMT gene, in which the amino acid at codon 158 may be either valine (Val) or methionine (Met). The Val158 allele is three to four times more active than Met158, and the alleles act co-dominantly [28,29]. There are no clear data on the effects of these two alleles in alcohol addiction: in certain addicted populations (e.g., polysubstance abusers), Val158 was found to be associated with addiction, whereas in others, such as late-onset alcoholics in Finland and Finnish social drinkers, the increased risk seemed to be connected with the Met158 allele [30,31]. Studies have confirmed that changes in the dopaminergic system due to polymorphisms of the DRD2 and DRD4 genes are connected with a higher risk of alcohol consumption [32]. Many studies have also shown that different variants of the DRD2 rs1799732 and DRD4 VNTR receptor gene polymorphisms determine temperament traits, such as impulsivity, which is conducive to the deterioration of self-control (e.g., difficulty in controlling alcohol consumption and maintaining abstinence) [33].

The Monoamine Oxidase A Gene (MAOA)

MAOA is an X-linked gene encoding monoamine oxidase A, a mitochondrial enzyme that metabolizes monoamine neurotransmitters, including norepinephrine, dopamine, and serotonin. A well-known polymorphism, called the MAOA-linked polymorphic region (MAOA-LPR), is a variable-number tandem repeat (VNTR) consisting of different numbers of copies of a 30-base-pair (bp) repeated sequence, with the three- and four-repeat alleles being by far the most common. Alleles with four repeats are transcribed more efficiently than three-repeat copies and are associated with a higher MAOA activity [34].
It has been shown that in women, the effect of childhood sexual abuse on the risk of developing alcohol addiction is connected with the MAOA-LPR genotype. Sexually abused women who were homozygous for the low-activity MAOA-LPR allele had higher rates of problem drinking, especially antisocial alcohol use, compared with those who were homozygous for the high-activity allele [35].

The Serotonin Transporter Gene (HTT)

HTT is responsible for serotonin re-uptake and works as a crucial regulator of serotonin availability in the synaptic cleft. A common polymorphism of the HTT promoter region (5-HTTLPR) affects expression, with the major alleles involving 16 (L) or 14 (S) copies of a 20-30 bp imperfectly repeated sequence [36]. It has also been shown that 5-HTTLPR is actually a functionally tri-allelic locus, owing to a functional A > G substitution within the L allele. The S allele, with its low transcription rate, has been related to anxiety and alcohol addiction, and the effect of this allele on behavior appears to be stronger under stress exposure. 5-HTTLPR has been shown to affect brain function in regions that are critical for emotional regulation and the response to environmental changes, and it may moderate the impact of stressful life events on the risk of depression and suicide [37,38]. Additionally, the orthologous macaque 5-HTTLPR polymorphism (rh5-HTTLPR) was observed to influence alcohol consumption and the stress response, depending on the conditions of upbringing: carriers of the low-expression genotype who had been separated from their mothers at an early age displayed higher stress reactivity and ethanol preference [39].

Oxidative Stress in Alcohol Use Disorder

Reactive oxygen species (ROS) are generated during ethanol metabolism. ROS are highly reactive and capable of damaging various molecules, including proteins, lipids, carbohydrates, and DNA. The status of biochemical parameters and antioxidants was measured in 28 patients with alcohol dependence in a clinical study in Taiwan [40]. Malondialdehyde (MDA) is a marker of lipid oxidation, and an increased level was found in this group of patients. Furthermore, the duration of alcohol dependence was significantly correlated with MDA levels. The superoxide dismutase (SOD) and glutathione peroxidase (GPX) activities were lower in this group, which emphasizes the impairment of antioxidant defense and the occurrence of oxidative stress [40]. While oxidative damage to most molecules may be easily repaired, oxidative DNA damage may lead to serious biological changes. Among the various forms of oxidative DNA damage is a reaction at the C-8 position of the guanine base in DNA, resulting in the generation of 8-oxoguanine (8-oxoGua) and its nucleoside 8-oxo-7,8-dihydro-2'-deoxyguanosine (8-oxodG), also referred to as 8-hydroxy-2'-deoxyguanosine (8-OHdG), the most widely studied and best recognized marker of oxidatively modified DNA [41,42]. Oxidative DNA damage, measured as the level of 8-OHdG, was shown to be higher in alcohol-dependent patients (79 persons) than in a control group (63 healthy persons). In addition, the authors observed that this damage persisted after 1 week of detoxification and that alcohol withdrawal syndrome (AWS) was correlated with the level of oxidative DNA damage [43].
The authors have concluded that there are some limitations of their results, such as the short period of abstinence, the dependence of oxidative DNA damage on many factors, and the observation of an increased level of this damage in other central disorders. However, their study has certainly shown that alcohol-dependent persons are susceptible to an excessive production of free radicals and, consequently, its harmful effects. The study of animal models has shown that a high concentration of ethanol connected with a vitamin-depleted diet increased the level of 8-oxoGua and its repair activity in the liver and esophagus, which may be a risk factor in the development of cancer. However, authors have found that in animals treated with carcinogen, a lower level of ethanol decreased 8-oxoGua and its repair activity in the analyzed organs. On the basis of the experiments conducted, the authors concluded that the effect of ethanol consumption on cancer risk, including the generation of 8-oxoGua, depends on the ethanol concentration and diet [44]. Another study on ethanol-fed pigs and a control group has revealed that the effect of ethanol on oxidative stress intensification is not obvious, as no significant differences in the 8-oxodG and MDA levels between analyzed groups were found [45]. This small experiment conducted on pigs (n = 4 in every group) has shown that alcohol consumption for 39 days may not cause oxidative damage to DNA and lipids. According to the authors, the critical determinants of ethanol toxicity may be the duration of alcohol uptake and alcohol-induced nutritional deficiency. Some authors have indicated that chronic ethanol exposure up-regulates the production of ROS and NO in human neurons, and chronic oxidative stress initiates neuronal injury [46]. In addition, they have observed that both alcohol-metabolizing enzymes (ADH and CYP2E1) are active in human neurons, and their activities are higher after EtOH exposure. Our study has shown that the level of 8-oxoguanine in cerebrospinal fluid and urinary excretion of oxidative DNA damage repair products were higher in mixed Alzheimer disease/vascular dementia (MD) patients than in a control group [47], which supported the observation that oxidative stress is one of the mechanisms leading to neural dysfunction. While oxidative stress and oxidative DNA damage have been implicated in the progression of many neurodegenerative disorders, and numerous studies have reported that a large number of detoxified alcoholics have cognitive or memory disturbances [48], one cannot exclude a possible link to the loss of neural plasticity and other changes observed in the brain of an alcoholic. All this information underlines the role of oxidative stress/oxidative DNA damage in alcohol-dependent disorders, especially in the context of cancer development and alcohol-induced oxidative stress in the central nervous system. Epigenetics Alcohol addiction is a chronic, relapsing brain disorder, which is characterized by a compulsion to seek alcohol, loss of control in limiting alcohol intake, and negative emotional state during withdrawal [49], in which genetic and environmental factors interact and appear to be equally important with respect to its development [50]. Recent studies have shown that every cell under the influence of environmental stressors may express a new phenotype without genetic changes. This also takes place in several nervous system nuclei. 
Thus, in addition to environmental stressors, epigenetic modifications can lead to chronic changes in gene expression and, as a consequence, to vulnerability to addiction [2]. Moreover, it was observed that both stress and addiction can induce similar epigenetic modifications and underlying changes in neurochemical pathways and synaptic plasticity, which suggests a link between alcohol use disorder and stress-related disorders [51]. The term epigenetics refers to the chemical modifications occurring within a genome that may modulate gene expression without changing the DNA sequence [52]. It is common knowledge that epigenetic mechanisms play a crucial role in regulating gene expression, as these mechanisms can transiently or stably manipulate this expression. Their main pathways involve DNA methylation and covalent modifications of histones, which may undergo methylation, acetylation, phosphorylation, or ubiquitination reactions. In the case of DNA modification, the main enzymes are the DNA methyltransferases (DNMTs), which transfer a methyl group from S-adenosyl-methionine (SAM) to the target cytosine. These DNMTs are abundant in fully differentiated adult neurons and are believed to play a crucial role in the regulation of gene expression [53]. Histones may be altered by a plethora of enzymes, such as histone acetyltransferases (HATs), histone deacetylases (HDACs), histone methyltransferases (HMTs), and histone demethylases (HDMs) [54]. Histone and DNA modifications can result in the remodeling of the structure of protein-DNA complexes, thereby regulating the access of the transcriptional machinery to the DNA and, finally, cellular gene expression [55].

DNA Methylation

The best-known epigenetic DNA modification is the methylation of cytosine at the C5 position in CpG dinucleotides, located mostly in the promoter regions of DNA, with the creation of 5-methylcytosine (5-mCyt). More than 28 million CpG sites are distributed across the human genome, and 70-80% of them can be methylated [56]. Methylation is usually associated with the silencing of gene transcription and, in general, is believed to be a stable modification [55]. While for years there was only certainty about passive demethylation, recent data have shown that active demethylation processes exist and could be related to the pathogenesis of many diseases, such as cancer [57][58][59]. Accumulating evidence indicates that DNA methylation is reversible, especially in the brain, which may be crucial in relation to genes associated with addiction. In the active demethylation reaction cascade, one key element is the participation of the ten-eleven translocation enzymes, TET1-3. These enzymes mediate the conversion of 5-methylcytosine (5-mCyt) into 5-hydroxymethylcytosine (5-hmCyt) and perform further oxidation reactions that generate 5-formylcytosine (5-fCyt) and 5-carboxylcytosine (5-caCyt) [60,61]. Then, the activated base excision repair (BER) pathway, using thymine DNA glycosylase (TDG), replaces this modification with cytosine. Thus, 5-mCyt oxidation is a plausible DNA demethylation mechanism [62]. Some studies have confirmed the observation that 5-hmCyt is most abundant in the brain compared with other organs [63] and have emphasized its role in neural function [53]. Our study confirmed this observation, as we found higher levels of 5-mdC and 5-hmC in mouse brain tissue in comparison with the kidney and liver [64].
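In quantitative terms, the methylation level of a CpG site (or of a set of sites) is usually expressed as the fraction of methylated calls among all calls, often reported as a beta value in methylation studies. The sketch below computes site-level and global methylation from toy read counts; it is a generic illustration and not the analytical method used in the authors' laboratory.

```python
# Per-CpG and global methylation levels from (methylated, unmethylated) read
# counts, as produced by bisulfite sequencing. All counts below are invented.

cpg_counts = {
    "chr1:10468":  (42, 8),    # (methylated reads, unmethylated reads)
    "chr1:10471":  (35, 15),
    "chr16:53468": (5, 45),
}

def beta_value(methylated: int, unmethylated: int) -> float:
    """Fraction of methylated calls at a single CpG site."""
    return methylated / (methylated + unmethylated)

total_m = sum(m for m, _ in cpg_counts.values())
total_u = sum(u for _, u in cpg_counts.values())

for site, (m, u) in cpg_counts.items():
    print(f"{site}: beta = {beta_value(m, u):.2f}")
print(f"Global methylation across these sites: {beta_value(total_m, total_u):.1%}")
```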
A person's susceptibility to alcoholism-related brain damage may be associated with his or her age, gender, drinking history, and nutrition, as well as with the vulnerability of specific brain regions [65]. The intermediate products of the DNA demethylation pathway have been analyzed in our laboratory using online automated isotope-dilution two-dimensional ultra-performance liquid chromatography with tandem mass spectrometry (2D-UPLC-MS/MS). In recent years, the results of our analyses have shown the strong impact of epigenetic modifications in diseases and human conditions, as well as in the NF-kappa B signaling pathway in Cu,Zn-SOD-deficient mice [64,66,67]. Studies of patients with alcoholism have shown a significant increase in genomic DNA methylation in whole blood, which was associated with decreased DNMT3a and DNMT3b mRNA levels. This observation suggests a feedback regulation of these enzymes by the increased DNA methylation [68]. In addition, in this study, the authors observed a significant negative correlation between DNMT3b expression and blood alcohol concentration. Increased DNA methylation in the medial prefrontal cortex and decreased expression of proteins involved in synaptic neurotransmitter release were also shown in alcohol-dependent rats. Additionally, the authors observed that the administration of a DNA methyltransferase inhibitor prevented increased drinking behavior post-abstinence, which suggests the therapeutic potential of DNA methyltransferase inhibitors [69]. The DNA methyltransferases DNMT1, DNMT3a, and DNMT3b are widely expressed in the nervous system, and it has been shown in multiple studies that drugs modify the expression of DNMTs [70,71]. Other studies have indicated that MeCP2 (an epigenetic factor that binds methylated cytosine and acts as a transcriptional repressor) mediates behavioral responses to alcohol and the addictive properties of cocaine by changing BDNF expression in specific brain regions [72,73]. Family, twin, and adoption studies have shown that heritable factors play an outstanding role in determining an individual's vulnerability to AUD. Many AUD-associated genetic variants have been identified by genome-wide association studies (GWASs). In genome-wide studies, methylation was found to be an important process in connection with alcohol abuse. Comparing alcoholics with their non-alcoholic siblings, the authors found that several genes had altered methylation signatures, such as ALDH1L2 (an aldehyde dehydrogenase gene), GABRP (a GABA receptor gene), and GAD1 (a glutamate decarboxylase gene), as well as the dopamine beta-hydroxylase gene (DBH), which is linked to alcohol tolerance [74]. In their study, Bruckmann et al. confirmed a genome-wide report of hypomethylation in the ganglioside-induced differentiation-associated protein 1 (GDAP1) gene and the association between the DNA methylation of this gene and disease severity in 49 AUD patients [75]. The authors also observed that the hypomethylation of GDAP1 in patients was reversed during a short-term alcohol treatment program, which may suggest that GDAP1 DNA methylation could serve as a potential biomarker for treatment outcomes. Another study has shown a significant association of hypermethylation in the protein phosphatase 1G gene (PPM1G) with alcohol use disorder, as well as with two established AUD risk factors: adolescent escalation of alcohol intake and impulsivity [76].
The authors carried out a genome-wide analysis of the DNA methylation of 18 monozygotic twin pairs discordant for alcohol use disorder and provided information on the association of the observed changes with the brain mechanisms and behaviors that underlie future problems associated with alcohol abuse. While genome-wide association studies (GWASs) have identified many AUD-related genetic variants, they only explain a small part of this puzzle. The genome-wide polygenic score (GPS) seems to be useful for identifying the risk of harmful and hazardous alcohol use [77], and because alleles do not change during an individual's lifetime, a GPS can be used to indicate an individual's behavioral predispositions from birth. GPSs based on GWASs of alcohol-related behaviors have been shown to efficiently predict alcohol consumption. Using a GPS based on the genome-wide association study and sequencing consortium of alcohol and nicotine use (GSCAN), in a cohort study of 3390 subjects, the authors observed that the utility of the GPS is limited in terms of the prediction of individual levels of alcohol use [77]. They observed an increase in the predictive validity of a GPS for alcohol use from age 16 to 22 years, by 5% for alcohol consumption, 90% for alcohol intake frequency, and 11% for hazardous drinking, with generally small effect sizes, and they concluded that the clinical utility of GPS for alcohol use seems to be limited. DNA methylation studies on AUD are still developing, as the role of DNA methylation in other diseases, such as cancer, is relatively easier to assess. There are also limited tissues to analyze in AUD, mainly blood and saliva from living persons and post-mortem brain tissue. The potential of peripheral blood global DNA methylation as an AUD marker was analyzed by Bonsch et al. and Kim et al. [78,79]. Bonsch et al. examined the relationship among global methylation, plasma homocysteine, and AUD in a case-control sample, which consisted of 90 AUD patients and 89 healthy controls. They found an increase in global methylation in AUD patients, who had higher levels of homocysteine. Kim et al. reported elevated methylation levels of the repetitive element Alu in peripheral blood DNA in an AUD group, when comparing 135 AUD patients and 150 healthy controls. The DNA methylation of gene promoter regions has also been examined. In the work of Hillemacher et al. [80], arginine-vasopressin (AVP) and atrial natriuretic peptide (ANP) promoter DNA methylation were compared. Their study comprised 111 AUD subjects and 57 controls, and they observed a significant increase in AVP promoter DNA methylation but a decrease in ANP promoter DNA methylation in AUD persons. An important element of alcohol dependence studies is the identification of links between epigenetic factors and addiction risk (e.g., the intensity of alcohol cravings and the ability to cope with them, or differences in the tendency to relapse) and the possible relationship with predictors of abstinence or of the control of alcohol consumption. Interestingly, and importantly in terms of addiction therapy, Lesh's subtypes describe the causes of alcohol cravings: in subtype I the cause is alcohol consumption, in subtype II stress, in subtype III depressed mood, and in subtype IV compulsive seeking [81]. The studies of Hillemacher et al. and Nieratschker et al. showed a significant negative correlation between the methylation of the dopamine transporter gene (DAT) and alcohol cravings [82,83].
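For context on the genome-wide polygenic scores (GPS) discussed above, such a score is essentially a weighted sum of effect-allele dosages, with weights taken from GWAS effect sizes. The minimal sketch below illustrates the arithmetic only; the SNP identifiers, weights, and genotype dosages are entirely hypothetical and are not taken from the GSCAN analysis.

```python
# A polygenic score is the sum over SNPs of (effect-allele dosage x GWAS weight).
# All identifiers, weights, and dosages below are hypothetical.

gwas_weights = {          # per-allele effect sizes (e.g., log-odds) from a GWAS
    "rs0000001": 0.12,
    "rs0000002": -0.05,
    "rs0000003": 0.08,
}

individual_dosages = {    # number of effect alleles carried (0, 1, or 2)
    "rs0000001": 2,
    "rs0000002": 1,
    "rs0000003": 0,
}

def polygenic_score(weights: dict, dosages: dict) -> float:
    """Weighted sum of effect-allele dosages over the SNPs present in the weight set."""
    return sum(weights[snp] * dosages.get(snp, 0) for snp in weights)

print(f"Polygenic score: {polygenic_score(gwas_weights, individual_dosages):.3f}")
```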
The Polish study indicated a connection between the subtype I of alcohol dependence of Otto Michael Lesh and the trait of temperament, "novelty seeking". Moreover, the "novelty seeking" trait is considered to be a predictor of alcohol consumption relapse and the worst prognostic indicator of abstinence. Neurobiological concepts of alcohol dependence describe addicted patients, with significant emphasis on the "novelty seeking" trait, as impulsive and disordered, and behaviors and emotions of this kind are conditioned by the dopaminergic system [84]. Moderate hyperhomocysteinemia is common in chronic heavy alcohol drinking. Due to the gradation of alcohol dependence and polyetiology of addiction (multigeneity), the relationship between the epigenetic changes (observed as DNA methylation within homocysteine-induced endoplasmic reticulum (ER) protein promoter (Herp)) and Lesh's alcohol dependence typology was also noticed [85,86]. Additionally, other gene-specific investigations have observed changes in neuronal tract neuromodulators connected with alcohol cravings, such as proopiomelanocortin (POMC) and alpha-synuclein (SNCA) [87]. Studies have also highlighted a cluster of DNA methylation site alterations within the POMC promoter, which were correlated with alcohol cravings, both prior to (pre-exposure) and after alcohol consumption (post-exposure), in alcohol use disorders [88]. Analysis of the SNCA promoter in 84 AUD persons has shown hypermethylation in alcoholics in the acute exposure and post-exposure withdrawal phases, when comparing them with 93 controls [89]. Philibert et al. observed a meaningful association between the degree of alcohol dependence and the methylation status of monoamine oxidase A (MAOA) gene methylation in female patients (96 subjects) but not in men (95 subjects) [90]. Despite the many discrepancies, all these observations are promising and suggest the discovery of biomarkers in the future, indicating a strong need to expand research on the role of epigenetic mechanisms and, especially, DNA methylation in alcohol addiction, which, in the future, may serve as the basis for epigenetic control in these patients. Methylation and Acetylation of Histones Histone proteins are composed of a central globular domain and N-terminal tails, which are a subject of many chemical modifications. From the epigenetic point of view, the most important modifications include acetylation and methylation [54]. Histone acetylation is established by histone acetyltransferase (HAT) activity and leads to the addition of the acetyl group to lysine. Conversely, histone deacetylases (HDACs) remove the acetyl groups from the histone tails. Both of these groups of enzymes have been linked to psychiatric disorders and part of the addiction mechanism [91]. Studies on animal models of addiction have shown that many psychoactive substances may induce changes in histone modification in the central nervous system [91,92], and alcohol is one of them, as acute alcohol intoxication can decrease HDAC levels and histone acetylation in mouse amygdala [93]. One of the first studies in this area, which focused on epigenetic modulation due to the effect of alcohol, was carried out on rat hepatocytes treated in vitro with ethanol in a dose-and time-dependent manner. The authors observed an increase in the acetylation of the lysine K9 of histone H3 (H3K9ac) [94], the transcription activity of which plays an important role in apoptosis. 
Further studies have confirmed the role of the modification of HAT activity in addiction. In the nucleus accumbens (NAc), an increase in global histone H3 and H4 acetylation was observed, and according to the authors, these regions possibly experience transcriptional activation at important and specific genomic locations that are relevant to AUD [95]. It has also been observed that binge-like alcohol exposure in adolescent rats induced HAT activity in the prefrontal cortex and resulted in histone H3 and H4 lysine acetylation (H3Kac and H4Kac) at the promoters of genes that are important in synaptic plasticity and transcriptional mechanisms [96]. Modulation of H3K9ac was also observed in the liver, lung, and spleen tissues of rats after the acute administration of ethanol in vivo [97]. Studies using inhibitors of epigenetic enzymes have also shown a link between addiction and epigenetic modifications; for example, the administration of the DNA methyltransferase inhibitor 5-azacytidine (5-AzaC) decreases ethanol consumption and self-administration and inhibits behavioral responses to ethanol [98]. While all these findings are important, they are not universal and, of course, are highly dependent on many variables, such as the administration protocol used in animal model experiments. The methylation of histones is similarly regulated by two sets of enzymes, histone methyltransferases and demethylases, which are responsible for adding and removing methyl groups on specific amino acid residues. The methylation of histones is much more complicated, as it may occur at different sites (17 lysine and 7 arginine residues) and may occur as mono-, di-, or trimethylation [99]. Histone methylation plays an important role in psychiatric disorders, including AUD, and many studies have been undertaken to explore this issue. For instance, it has been shown that the activating H3K4me2 mark was increased, alongside histone acetylation, at the promoters of transcription factors (TFs) in adolescent rats after binge-like alcohol exposure [96], and acute EtOH exposure in mice significantly increased the levels of H3K4me3 in the cortex [100]. In spite of this, opposite effects have also been observed: acute EtOH exposure in the amygdala of adult rats resulted in a reduction in H3K4me3 at promoters implicated in alcohol dependence [101]. In humans, a similar observation has been made: a study of post-mortem amygdala and frontal cortex tissues from alcoholics showed globally increased H3K4me3 [102].

Epigenetic Changes of BDNF

Approximately 20 years ago, Koob and colleagues proposed an "allostasis model" of alcohol addiction. According to this model, prolonged and excessive alcohol exposure may produce adaptive changes in brain function. These changes may lead to aberrations in the brain's homeostatic system. Allostasis refers to integrative adaptive processes maintaining stability during change, but this stability is not within the normal homeostatic range [103]. Several neurotransmitters and neuromodulators are the most important players in this allostasis process, and many are part of the stress-response system [104]. Among them are corticotrophin-releasing factor (CRF), dynorphin (DYN) and its receptor (the κ opioid receptor), substance P, norepinephrine, and brain-derived neurotrophic factor (BDNF). BDNF is a neurotrophin that regulates neuronal growth, survival, and function during development and in the adult brain.
This factor regulates synaptic transmission and plasticity and induces an increase in the cytosolic calcium content, which can affect the vesicle exocytosis of several neurotransmitters in the synaptic space [105]. It also modulates the cAMP-responsive element-binding (CREB) protein activity, activates the mitogen-activated protein kinase (MAPK) cascade, and influences gene expression, such as the activity-related cytoskeleton-associated (Arc) protein, which regulates synaptic plasticity [93,106]. Dysregulation of BDNF expression, most likely due to epigenetic changes, is observed in alcohol-dependent individuals [107]. Moreover, it has been shown that alcohol exposure can increase the phosphorylation of CREB, CREB-binding protein (CBP) levels, and the expression of BDNF and Arc in the specific parts of the brain of experimental rats [108]. These results underline that in amygdaloid circuitry deficit in CREB, signaling may lead to chromatin remodeling and a reduction in the BDNF and Arc expression, promoting excessive alcohol consumption and a heightened anxiety-like behavior. On the other hand, BDNF knockdown animals exhibited a depressive-like behavior and consumed higher amounts of alcohol [109]. In humans, the valine 66 to methionine (Val66Met) polymorphism within the BDNF sequence is associated with psychiatric disorders [110]. In addition, it was shown that mice carrying the Val68Met polymorphism (a mouse homolog of human Val66Met) are at a higher risk of developing uncontrolled and excessive alcohol consumption patterns [111]. These data support the hypothesis that deficiency in BDNF due to the Met68BDNF polymorphism in mice has an essential role in promoting alcohol consumption and suggest that this polymorphism plays a role in excessive and compulsive alcohol consumption. It is possible that "allostase" is a result of chronic alcohol consumption, and it can be maintained by an inappropriate diet, which is preferred by addicted individuals. Some studies suggest that diet modifications may have a great influence on neurotransmitter balance [112]. MicroRNAs An important epigenetic process regulating gene expression also works through microRNA (miRNA). miRNAs are small, conserved noncoding RNA molecules consisting of 21-24 nucleotides that act at the post-transcriptional level to regulate the expression of their respective target messenger RNA (mRNA) and encoded proteins [113]. In the case of addiction to many substances, such as alcohol, methamphetamine, and nicotine, differences in the miRNA profile between substance users and a healthy control group have been observed [114][115][116]. It was shown that in animal models, the depletion of miRNA in central nervous system structures (due to the suppression of Drosha and Dicer enzymes) can alter neuronal growth and maturation [117,118]. A substantial number of studies have observed that specific microRNAs are modulated in alcohol abuse [119]. Selected members of the microRNA (miRNA) family are affected by alcohol, resulting in an abnormal miRNA profile in the liver and circulation in ALD [120]. In addition, there is increasing evidence that miRNAs responsible for inflammation regulation and cancer-promoting lipid metabolism are affected by excessive alcohol administration in mouse alcoholic liver disease (ALD) models. In addition, some studies have assessed the role of miRNAs in ALD and non-alcoholic fatty liver disease [121]. 
Interestingly, some studies have also indicated that moderate levels of alcohol consumption may have beneficial health effects; it has been shown that moderate voluntary alcohol consumption in Wistar rats induced many changes in the expression of genes related to colonic inflammation and antioxidant enzymes. In the alcohol-treated animal group, a lower level of 8-oxo-deoxyguanosine was found, which strongly suggests a decrease in oxidative stress [122]. There were also lower levels of alanine aminotransferase and lactate dehydrogenase, as well as a decreased level of cyclooxygenase-2 gene expression, which is an inflammatory marker. The findings show that the alcohol-consumption group had an increased expression of glutathione S-transferase M1 and aldehyde dehydrogenase 2, indicating that moderate levels of alcohol consumption may provide beneficial effects in terms of reducing colorectal cancer (CRC) risk [122], which is consistent with the results of some human studies [123].

Nutrition

Environmental factors with epigenetic effects include behavior, nutrition, and chemical and industrial pollutants. For example, bioactive food components may trigger life-protecting epigenetic modifications. Understanding the molecular effects of behaviors, nutrition, and pollutants has become relevant in the development of preventative strategies and personalized health programs [124]. "Environmental epigenetics" refers to how environmental exposure affects epigenetic changes. Nutrition is one of the most important environmental epigenetic factors. Nutritional epigenetics is a relatively recent subfield of epigenetics, so current knowledge on the precise effects of food components on epigenetics and their association with phenotypes is still elusive. As has been observed, bioactive food components, specific nutrients, and dietary patterns may have highly beneficial effects and may also mitigate the negative impact of life behaviors, such as smoking, alcohol abuse, or exposure to certain chemicals.

Nutrition in Early Life

Nutrition in early life induces long-term changes in DNA methylation, which affect one's health later in life. Nutrients may act directly by inhibiting epigenetic enzymes, such as DNMTs, HDACs, and HATs, or may alter the accessibility of substrates important for these enzymes. This may result in the modification of the expression of genes critical for our health and longevity [125]. One brilliant example of the effect of early diet on epigenetics, with effects on the phenotype, is that of honeybees. Epigenetic changes in DNA methylation determined by the larval diet constitute the most important trigger. Larvae destined to become queens are fed exclusively with royal jelly, which contains epigenetically active ingredients that silence a key inhibitory "queen gene". Royal jelly is a concentrated mixture of proteins, essential amino acids, unusual lipids, vitamins, and other less characterized compounds, and it is produced by the head glands of "nurse" workers. In honeybees (Apis mellifera), genetically identical female larvae have been shown to change their developmental path depending on nutritional factors, such as royal jelly, and become queens instead of workers, as required [126,127].

Epigenetic Effect of Diet on Health or Disease

Many studies have reported the epigenetic effect of diet on phenotypes and susceptibility to disease. The additional effect of nutrition can be considered when one focuses on vitamins.
The epigenetic effects of 13 vitamins were recently described in terms of DNA methylation, histone modification, and ncRNA expression [128]. It is worth mentioning that sirtuin 1, a NAD+-dependent HDAC whose substrate specificity includes histone proteins, may be activated by dietary components such as resveratrol. Sirtuin 1 mediates some dietary restriction effects by acting on DNA methylation [129].

Micronutrients in Epigenetic Modifications - The Vitamin B Group

The essential micronutrients folate, vitamin B6, and vitamin B12 are critically involved in homocysteine metabolism. This metabolism is linked to phenotypic changes through DNA methylation because it supplies one-carbon units for the synthesis of SAM (S-adenosyl methionine), which is crucial for DNA methylation [124] and nucleotide synthesis. Folic acid (B9) is an essential B vitamin that plays a pivotal role in brain development; folate supplementation during early pregnancy protects against neural tube defects [130]. DNA and histone methylases are directly influenced by the dietary availability of methyl groups (from choline/betaine, methyl folate, or methionine), which are needed for the methylation of cytosine in DNA or of lysine in histones [131]. Folate and vitamin B12 are required to re-methylate homocysteine into methionine, while B6-dependent enzymes take part in converting homocysteine into cysteine. Other methyl-donor nutrients, such as choline, can also affect DNA methylation status. Choline is a required nutrient, and foods such as eggs and meats contain more choline than plant sources. It has been observed that a low-choline diet in pregnant rodents may result in changes in methylation. These changes are especially important because a minimum level of maternal dietary choline is essential for normal brain development. Moreover, disrupting choline metabolism by deleting the Bhmt gene can also affect DNA methylation [132]. In humans, maternal choline intake modulates the placental epigenome: women with a higher choline intake showed higher promoter methylation of the corticotrophin-releasing hormone (CRH) gene in the placenta [133]. In addition, babies born to women who consumed more choline during pregnancy have been shown to have better visuospatial memory at age 7 [134]. Equally important for methylation is an adequate supply of vitamins, as recently shown by Tanwar et al., who reported that maternal vitamin B12 deficiency in rats alters fetal DNA methylation in metabolically important genes and that this effect may be reversed by B12 repletion of the mothers at conception [135]. This observation is promising for therapy and prevention. A striking example of an epigenome-modifying chemical is bisphenol A (BPA), which is commonplace in the manufacture of numerous plastic products, including containers. It has been observed that the pups of BPA-fed adult mice were more likely to have an unhealthy phenotype than those born to BPA-fed mothers supplemented with methyl-rich nutrients such as folic acid and vitamin B12. As indicated, a diet rich in fruits, vegetables, and other high-quality foods may counteract the negative effects of chemical exposure. Methyl-donating nutrients act as co-substrates for methyl-group transfers, and the pool of available methyl donors is a significant factor in DNA and histone methylation [136]. Alcohol has been shown to modulate vitamin B and folate synthesis in the body [137,138].
Therefore, it seems reasonable that alcohol may indirectly modulate DNA methylation through diet. Apart from this, alcohol metabolites, such as acetaldehyde, have been shown to modulate DNA methylation by inhibiting DNA methyltransferases [43].

Vitamin A

It was also shown that epigenetic memory, such as methylation, may be erased to produce naïve pluripotent stem cells, and vitamin A may reduce DNA 5-methylcytosine levels [139] by increasing the activity of ten-eleven translocation (TET) enzymes. Retinoic acid (RA), the active form of vitamin A, can induce differential gene expression through the DNA methylation of the homeobox transcription factor A1 (HOXA1) gene and the potential oncogene mucin 4 (MUC4) in two breast cancer cell lines [140]. The role of vitamin A is underlined by the function of several ALDH isoforms (ALDH1A1, ALDH1A2, and ALDH1A3) in retinoic acid signaling, where they oxidize retinal to RA [140]. According to Balmer and Blomhoff, there are over 500 genes whose expression is up- or down-regulated by RA; therefore, RA may modulate a variety of biological processes [141]. Vitamin E was also found to act as an epigenetic factor, based on the association between leukocyte methylation status and blood vitamin levels in a Parkinson's disease cohort [142]. It was shown that alcohol-related liver function impairment leads to decreased serum vitamin A and vitamin E levels, although the decrease in vitamin E appears to depend more on nutritional status and irregular eating habits than on the liver impairment itself. According to this study, both were related to brain atrophy and cerebellar shrinkage [143].

Vitamin D

Vitamin D is a steroid hormone that controls more than 1000 genes. Genes responsive to vitamin D may be categorized into a few groups: genes involved in bone metabolism (anabolism and resorption), mineral homeostasis, cell life, and immune-system modulation and metabolism. A deficiency of vitamin D is quite common due to restricted exposure to sunlight and/or a decreased dietary intake. Vitamin D binds to the vitamin D receptor (VDR), which drives the expression of VDR-responsive genes. Furthermore, vitamin D interacts with the epigenome on many levels. Several studies have also shown the effect of vitamin D on the synthesis pathways of dopamine, serotonin, and a number of neurotrophic factors. In a study on the effect of vitamin D3 in children with attention-deficit/hyperactivity disorder (ADHD), the levels of 25D3 and dopamine increased in the supplemented group, while serum BDNF and serotonin levels did not change significantly [144]. Moreover, vitamin D is being reconsidered as a neuroactive steroid. The reported neuroprotective effects of vitamin D include the in vitro biosynthesis of neurotrophic factors, the inhibition of nitric oxide synthase, and an increased level of brain glutathione. Vitamin D is a potent in vitro inducer of NGF mRNA expression in neural brain cells, and BDNF is a protein related to NGF and a central player in synaptic and cognitive plasticity [145]. Pozzi et al. [146] observed that after vitamin D3 supplementation, the serum levels of NGF and BDNF were lower than at the start of the trial, while the level of 25D3 had increased. They also observed a strong positive effect on memory and cognitive function, measured by the Wechsler Memory Scale.
They conclude that the decreased level of NGF and BDNF after vitamin D supplementation may be connected, and supplementation plays a crucial role in the modulation of neurotrophic factors. Critical genes in the vitamin D signaling pathway, such as those coding for 25-hydroxylase (CYP2R1), 1α-hydroxylase (CYP27B1), and 24-hydroxylase (CYP24A1), as well as the vitamin D receptor (VDR), have a large CpG island in their promoter regions, which may be a potential methylation site. Additionally, VDR protein affects coactivators and corepressor proteins, which are in contact with HATs, HDACs, HMTs, and chromatin remodelers. There is also some evidence that certain VDR ligands have DNA-demethylating effects [147]. The epigenetic effects of vitamin D are connected with histone acetylation. For example, it was shown that the VDR/RXR dimer interacts with HATs to induce transcriptional activation [148]. Some authors have also proposed that vitamin D can alter DNA methylation in the promotion of certain genes. Tapp and colleagues showed the negative association between the serum level of 25D3 and CGI methylation in the adenomatous polyposis coli (APC) promoter region [149]. They also suggested that, in healthy people, the age-related CGI-methylation of human rectal mucosa was influenced by, among other things, the vitamin D status. In non-malignant and malignant prostate epithelial cells, after treatment with 1,25-D3, clear changes were observed in the site-specific methylation of the p21 promoter in a cell line-specific manner [150]. The precise mechanism of vitamin D as an epigenetic modifier needs further investigation. Vitamin C The epigenetic role of vitamin C has generally been proven. Vitamin C plays a pivotal role in remodeling epigenomes by enhancing the catalytic activity of Jumonji-C domaincontaining histone demethylases (JHDMs) and the ten-eleven-translocation proteins (TETs). The ability of vitamin C to potentiate the activity of histone and DNA demethylating enzymes also has clinical applications in cancer treatment [139,151]. A variety of dietary factors are potential HDAC and HAT modulators. Among these, sulforaphane, found in broccoli sprouts, or diallyl disulfide in garlic have been shown to act as HDAC inhibitors [125]. All these examples have emphasized the role of diet and fresh air exercise in relation to the development of disorders, such as addiction, and Table 3 summarizes how individuals' diets may influence epigenomes. It remains unclear why some epigenomes are established early, whereas others are modifiable in later life. It is well known that perinatal environmental conditions are very important and exert lasting effects on the brain function and structural development, as well as on the susceptibility to abuse and psychopathology later in mature life [156]. Nutrition is one of the most important elements of the parent-child relationship, which may affect offspring's brain development and function. There is strong evidence suggesting a direct association between early-life stress (ELS) and the incidence of psychiatric disorders and cognitive impairment [157,158]. The quality of early nutrition has major effects on adult cognitive function. Feeding behaviors and metabolism are closely regulated by the neuroendocrine mechanism, which is affected by stressful events, and malnutrition also affects the stress system. The programming of the human brain is a very complex process, in which nutrition and stress play crucial roles. 
There is growing evidence that ELS and EL nutrition affect the hippocampal structure, plasticity, and function. It is common knowledge that the hippocampus is particularly sensitive to the EL environment due to its postnatal development between the last trimester of gestation and 16 years of age, and it is rich in stress-hormone receptors [159]. Proper brain development requires an adequate supply of energy and micro and macro nutrients. Even minor dietary insufficiencies may have a big impact, especially during critical stages of development. For example, children subjected to bad perinatal nutrition exhibit cognitive deficits and an increased risk of psychopathologies in adulthood [160,161]. While many nutrients are essential for neuronal growth and brain development, during the perinatal period, the intake of zinc, selenium, iron, folate, iodine, vitamin A, vitamin B6 and B12, long-chain-polyunsaturated fatty acids, choline, and proteins is of particular importance. There is some evidence that perinatal manipulation of nutritional status may induce an alteration in hippocampal neurogenesis and other structural changes in animals [162]. Moreover, vitamin B6 and B12 deficiencies during gestation and lactation persistently impair hippocampal structure and function, and protein malnutrition may result in a reduced neuronal DNA and RNA content, as well as altered fatty acid profiles, which may ultimately lead to serious changes in neuronal function, the number of synapses, and/or dendritic arborization [163,164]. Nutrition and Cognitive Functions It is well known that maternal care, stimulation, and nutrient availability are the most important factors in early development, and stress hormones and neuropeptides are particular components that should be considered in combination, as they mediate the long-lasting effects of early life experiences. One of the most important responses of the human organism to stress stimuli is the HPA-axis. Food intake and HPA-axis activity are closely linked to neuronal pathways that react to and integrate nutritional and stressful stimuli. The importance of this connection is confirmed in some studies, which show that basal HPA-axis activity and stress responsiveness are altered in genetically obese rats and in rodents fed a high-fat diet and subjected to perinatal food restriction [165,166]. The HPA-axis is sensitive to modulation by metabolic signals, including leptin, insulin, glucose, and ghrelin [167,168]. As was shown, maternal separation reduces plasma glucose and leptin and increases ghrelin levels in offspring [169]. All this suggests that metabolic signals are an important element of the HPA-axis response. Patients with an alcohol use disorder often eat abnormally and irregularly. They experience nutrient deficit symptoms, alcohol withdrawal syndrome (vomiting, diarrhea, and sweating), and disturbed bacterial intestinal flora and nutrient absorption systems, resulting in vitamin, protein, and electrolyte deficiencies [170][171][172], which may lead to health damage and also the initiation of epigenetic processes [173]. Cannabinoids Many studies have analyzed the influencing substances in food that may affect the nervous system and appetite reactions. These substances modulate the nervous system and nutritional behavior. Researchers have emphasized the role of the endogenous cannabinoid neuronal system in the regulation of food intake [174]. 
This system includes receptors in areas of the central nervous system, such as the lateral hypothalamus, the arcuate nucleus (nucleus arcuatus), and the paraventricular nucleus (nucleus paraventricularis), as well as in the reward system, where experiences of hunger are modulated by hormones such as leptin, orexin, and endogenous opioids [175][176][177][178][179]. The cannabinoid receptors CB1 and CB2 are located in the long- and short-term metabolic systems of nutrition regulation connected with the degree of repletion of the digestive system [180]. Cannabinoids intensify the rewarding potential of food intake. Animal studies have shown that administration of the CB1 receptor antagonist rimonabant (SR141716A) decreases the rewarding potential of eating sweets and of alcohol consumption [181]. Interestingly, cannabinoids are not only produced endogenously by human and animal organisms but are also present as components of food (exogenous cannabinoids, e.g., in cacao, chocolate, and milk) [182][183][184]. This leads to the conclusion that food may contain additional biochemical substances that can induce specific physiological reactions and behaviors. It also seems possible that these substances may indirectly increase the risk of alcohol consumption by eliciting the desire to drink or, on the contrary, may act as protective factors supporting abstinence. Studies have also indicated that DNA methylation may occur due to the use of psychoactive substances, e.g., alcohol and exogenous cannabinoids [177]. Researchers have emphasized interactions between ethanol and the cannabinoid system. Moreover, recent studies have shown that epigenetic changes occurring after alcohol consumption together with cannabinoids may act synergistically and lead to DNA methylation or histone modification, which in turn may modulate apoptosis and synaptic plasticity [185]. Some reports reveal the ability of cannabinoids to modify the neuronal and immune systems via histone modifications, such as H3 lysine methylation, or via alteration of DNA methylation [94].

Conclusions

The interplay between epigenetic modification, nutrition, and addictions such as alcohol abuse is an emerging and promising field of research. It seems highly reasonable to look for new connections between them, especially nowadays, when nutrigenomics and nutrigenetics are becoming important elements of disease and addiction therapies. Changes in the human diet may be the easiest first stage of therapy or may fulfil a protective role in specific disorders, such as addiction. In light of all these observations, it is important to address whether the knowledge resulting from studies of the influence of epigenetic factors (e.g., the environment and alcohol) can help to initiate preventive action, leading to a modification of patients' environment, therapy, and diet.
Association of the lupus low disease activity state (LLDAS) with health-related quality of life in a multinational prospective study Background Systemic lupus erythematosus (SLE) is associated with significant impairment of health-related quality of life (HR-QoL). Recently, meeting a definition of a lupus low disease activity state (LLDAS), analogous to low disease activity in rheumatoid arthritis, was preliminarily validated as associated with protection from damage accrual. The LLDAS definition has not been previously evaluated for association with patient-reported outcomes. The objective of this study was to determine whether LLDAS is associated with better HR-QoL, and examine predictors of HR-QoL, in a large multiethnic, multinational cohort of patients with SLE. Methods HR-QoL was measured using the Medical Outcomes Study 36-item short form health survey (SF-36v2) in a prospective study of 1422 patients. Disease status was measured using the SLE disease activity index (SLEDAI-2 K), physician global assessment (PGA) and LLDAS. Results Significant differences in SF-36 domain scores were found between patients stratified by ethnic group, education level and damage score, and with the presence of active musculoskeletal or cutaneous manifestations. In multiple linear regression analysis, Asian ethnicity (p < 0.001), a higher level of education (p < 0.001), younger age (p < 0.001) and shorter disease duration (p < 0.01) remained significantly associated with better physical component scores (PCS). Musculoskeletal disease activity (p < 0.001) was negatively associated with PCS, and cutaneous activity (p = 0.04) was negatively associated with mental component scores (MCS). Patients in LLDAS had better PCS (p < 0.001) and MCS (p < 0.001) scores and significantly better scores in multiple individual SF-36 domain scores. Disease damage was associated with worse PCS (p < 0.001), but not MCS scores. Conclusions Ethnicity, education, disease damage and specific organ involvement impacts HR-QoL in SLE. Attainment of LLDAS is associated with better HR-QoL. Electronic supplementary material The online version of this article (doi:10.1186/s13075-017-1256-6) contains supplementary material, which is available to authorized users. Background Systemic lupus erythematosus (SLE) is a chronic multisystem autoimmune disease resulting in significant morbidity and reduced quality of life. With the improvement in overall survival of patients with SLE compared to historical outcomes [1], a growing number of young adults face the burden of chronic disease, which includes not only the activity of the disease itself, the adverse effects of treatment and the complications such as organ damage [2], but also the impact of disease on physical function, quality of life and employment. Healthrelated quality of life (HR-QoL) is a multi-dimensional construct that evaluates different health perceptions and self-reported functional status, and is often included as a key patient-reported outcome (PRO) in studies of chronic disease. Both generic and disease-specific instruments have been developed to facilitate measurement of PROs, resulting in an increase in the number of studies assessing HR-QoL in SLE [3][4][5][6]. PROs are increasingly recognized as an integral part of assessment in clinical trials and in routine practice [7,8], as they measure domains not captured by physician-assigned disease activity scores. 
Patients with SLE perform poorly on HR-QoL measures when compared to the general population [9], especially those with concomitant fibromyalgia [10] or fatigue [6,11]. The effects of SLE on HR-QoL are comparable to other chronic diseases such as chronic heart failure, coronary artery disease, end-stage airways disease, human immunodeficiency virus and rheumatoid arthritis [12][13][14]. In addition, it has been reported that patients with SLE feel misunderstood by their families, the community and even the specialists treating them [15]. Consequently, patients feel that their quality of life needs are not being met by treating teams [16,17]. As recently highlighted, measures of a treatment outcome status for use in clinical trials, or in treat-to-target strategy studies, have been lacking in SLE [18,19]. Definitions of remission may be too stringent for use in routine practice or clinical trials [20], highlighting the need for a definition of low disease activity [18,19]. Recently, we reported the definition and preliminary validation of a lupus low disease activity state (LLDAS), combining disease activity and treatment domains, attainment of which was shown in a longitudinal cohort study to be protective against damage accrual [21]. For such a measure to have value in clinical practice and clinical trials, it should be associated not only with physician-applied measures of disease activity and damage, but also with PROs. The objectives of this study were to determine whether LLDAS is associated with better HR-QoL, and to determine other predictors of HR-QoL in a large multiethnic multinational cohort of patients with SLE. Study population Ten centers from seven countries took part in this study. Patients over the age of 18 years, who fulfilled the classification criteria for SLE (either the 1997 American College of Rheumatology (ACR) criteria [22] or the 2012 Systemic Lupus International Collaborating Clinics (SLICC) criteria [23]) were eligible. The study centers are members of the Asia Pacific Lupus Collaboration (APLC), involved in a multicenter prospective longitudinal study of SLE outcomes; data reported here represent all patients with complete data acquisition from the enrollment visit. Data collection took place between May 2013 and August 2015, during the routine ambulatory care of each patient, using either a standardized paper or electronic case report form. Measurement of HR-QoL HR-QoL was measured using the Medical Outcomes Study 36-item short form health survey (SF-36v2) [24], a generic instrument validated in a number of SLE observational cohorts and clinical trials, and validated in each of the languages used by patients in this study [3,4,10,13,25,26]. The SF-36 comprises eight domains including physical function (PF), role physical (RP), bodily pain (BP), general health (GH), vitality (VT), social function (SF), role emotional (RE) and mental health (MH), and two summary scores defined as the physical component score (PCS) and mental component score (MCS). The individual domain scores are expressed on a scale of 0 to 100, and the component summary scores are standardized around a USA normal population mean of 50, with higher scores representing better HR-QoL. Other variables Demographic information, disease characteristics and data on clinical variables were collected from each patient at the study visit date. 
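For orientation, the norm-based standardization behind the SF-36 component summary scores described above can be sketched in a few lines. The reference mean of 50 is stated in the text; the standard deviation of 10 and the raw values below are illustrative assumptions only and are not the published SF-36v2 scoring weights.

    # A minimal sketch of norm-based standardization, assuming a reference SD of 10;
    # the published SF-36v2 algorithm uses domain weights that are not shown here.
    def t_score(raw_aggregate, population_mean, population_sd):
        """Map a raw aggregate so the reference population has mean 50 and SD 10."""
        z = (raw_aggregate - population_mean) / population_sd
        return 50.0 + 10.0 * z

    # Illustrative numbers only: a raw aggregate of 60 against an assumed population
    # mean of 75 and SD of 18 yields a summary score of about 41.7 (below average).
    print(round(t_score(60.0, population_mean=75.0, population_sd=18.0), 1))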
Demographic variables included gender, ethnicity (self-reported based on the Australian Standard Classification of Cultural and Ethnic Groups [27]), date of birth, year of SLE diagnosis, smoking status, and highest-attained education level. Disease manifestations were determined from the ACR and SLICC classification criteria [22,23], recorded at study entry on an ever-present basis. Current doses of glucocorticoids and immunosuppressive medications were recorded for each patient. Disease activity was measured using the SLE disease activity index (SLEDAI-2 K) [28], with specific organ system activity derived from components of the SLEDAI-2 K. Additional disease status measures included a physician global assessment (PGA) of disease activity on a scale of 0 to 3 [29], and fulfillment of the criteria for LLDAS [21]. The operational definition of LLDAS is fulfilled when all of the following criteria are met: (1) SLEDAI-2 K ≤4, with no activity in major organ systems (renal, central nervous system (CNS), cardiopulmonary, vasculitis or fever) and no hemolytic anemia or gastrointestinal activity; (2) no new features of lupus disease activity compared to the previous assessment; (3) a Safety of Estrogens in Lupus Erythematosus National Assessment (SELENA)-SLEDAI PGA (scale 0-3) ≤1; (4) a current prednisolone (or equivalent) dose ≤7.5 mg daily and (5) well-tolerated standard maintenance doses of immunosuppressive drugs and approved biologic agents, excluding investigational drugs. Disease flares compared to the previous visit were measured using the SELENA-SLE flare index (SFI) [29]. Irreversible disease damage was measured using the SLICC damage index (SLICC-DI) [30]. Data analysis Pooled cross-sectional data from all centers were analyzed using STATA v13 (StataCorp, College Station, TX, USA). Individual domain and component summary scores are expressed as median and interquartile range, as the data were not normally distributed. To allow for linear regression analysis, domain and summary scores were log-transformed prior to inclusion into models in order to fulfill the assumption of a normal distribution. The exponentiated regression coefficients (coeff ) are reported in results for ease of clinical interpretation. This represents (coeff-1)*100% increase or decrease in PCS or MCS scores for every one-unit change in continuous independent variables or a change in category for categorical independent variables. Variables with a p value ≤0.1 in simple linear regression analysis were checked for multicollinearity prior to inclusion into backward stepwise multiple linear regression models for PCS and MCS scores. LLDAS is a composite measure comprising the SLEDAI, PGA, flare index, prednisolone dose and medication use. In addition to assessing the relationship between LLDAS and HR-QoL (model 1), a separate multiple linear regression model was used to ascertain to what degree individual LLDAS components contributed to this relationship (model 2). A third model of the LLDAS components was also tested, but using organ system activity rather than the total SLEDAI-2 K score (model 3). Model adequacy was evaluated using adjusted R 2 , residual and normality plots. Demographic and disease characteristics A total of 1422 patients were studied. The majority of patients were female (93%), with a mean (±SD) age at diagnosis of 31.2 (±12.2) years and mean (±SD) disease duration of 9.2 (±7.7) years. 
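As a worked example of the effect-size convention described in the Data analysis section above (exponentiated coefficients from models with log-transformed outcomes, read as a (coeff-1)*100% change per one-unit or per-category change), the following sketch uses illustrative numbers only.

    import math

    def percent_change(exponentiated_coeff):
        """Interpret an exponentiated coefficient from a log-transformed outcome
        as a percentage change per one-unit (or per-category) change."""
        return (exponentiated_coeff - 1.0) * 100.0

    beta = math.log(0.89)        # hypothetical coefficient on the log scale
    coeff = math.exp(beta)       # back-transformed coefficient, as reported
    print(f"{percent_change(coeff):+.1f}% change in PCS per unit")   # -11.0%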
Caucasians formed 8% of the sample, with the rest of the patients representing Asian ethnicities native to the region (Table 1). Other demographic characteristics are also shown in Table 1. More than half of patients had a history of malar rash, arthritis, hematologic or immunologic manifestations, and 46% had a history of renal disease (Additional file 1: Table S1). The median score in the SLEDAI-2 K was 4 (IQR 2-6). There were 369 patients (26%) with active renal disease, 273 (19%) with cutaneous activity and 119 (8.4%) with musculoskeletal activity; 593 patients (42%) fulfilled criteria for LLDAS ( Table 1). The median SLICC-DI score was 0 (IQR 0-1), with 498 patients (35%) having some damage (SLICC-DI >0). Determinants of HR-QoL Significant differences in the scores for individual SF-36 domains were seen in relation to ethnicity, education, damage and active disease manifestations. Patients of Asian ethnicity had higher (better) scores in domains including role physical, bodily pain, general health, vitality, and social function ( Fig. 1a; Additional file 1: Table S2). Higher education was also associated with higher domain scores, while the presence of damage, or active musculoskeletal or cutaneous manifestations, were associated with lower (worse) scores across multiple domains (Fig. 1b, c, d; Additional file 1: Table S2). The presence or absence of renal activity did not significantly impact on SF-36 domain scores. Higher disease activity as measured by the SLEDAI-2 K and PGA, and higher prednisolone dose, were each significantly associated with lower (worse) PCS and MCS scores in simple linear regression analysis (Table 3). With regard to organ domains of disease activity as measured using SLEDAI-2 K, patients with active musculoskeletal manifestations had significantly poorer PCS scores (coeff 0.89, p < 0.001), whereas patients with cutaneous manifestations had significantly worse MCS (coeff 0.94, p < 0.001). Neither PCS nor MCS scores were significantly different between patients with or without active renal disease. The presence of damage was associated with significantly worse PCS scores, but no differences in MCS scores were observed. Older age at diagnosis (coeff 0.997, p < 0.001) and longer disease duration (coeff 0.997, p < 0.001) were also associated with poorer PCS but not MCS scores. We also analyzed the effect of country of study site and education level as variables. Australian patients recorded the worst PCS scores (43.5, 36.1-52.3), and Chinese patients the worst MCS scores (44.9, 38.5-55.8). In simple linear regression analysis, Asian patients had significantly better PCS scores than their Caucasian counterparts (coeff 1.22, p < 0.001) regardless of the country of residence. Both PCS and MCS scores were significantly higher in patients with higher levels of education (Table 3). In backward stepwise multiple linear regression, multiple variables remained significantly associated with PCS ( Table 4). The presence of damage remained negatively associated with PCS scores (p < 0.001). In contrast, shorter disease duration, younger age at diagnosis, Asian ethnicity, and higher level of education remained significantly positively associated with PCS. Patients with tertiary education (p < 0.01) had better MCS scores. The model set-up and properties are shown in Table 4. 
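Before turning to the association analyses, it may help to restate the operational LLDAS definition from the Methods as an explicit conjunction of checks. The sketch below only illustrates that logic; the argument names are invented here and do not come from a published implementation.

    def meets_lldas(sledai_2k, major_organ_activity, new_activity_vs_previous,
                    pga, prednisolone_mg_per_day, tolerated_standard_maintenance):
        """Return True when all five LLDAS criteria summarized in the Methods hold."""
        return (sledai_2k <= 4
                and not major_organ_activity         # renal, CNS, cardiopulmonary, vasculitis,
                                                     # fever, hemolytic anemia, gastrointestinal
                and not new_activity_vs_previous     # no new lupus activity vs. prior assessment
                and pga <= 1.0                       # SELENA-SLEDAI PGA on a 0-3 scale
                and prednisolone_mg_per_day <= 7.5   # prednisolone or equivalent, daily dose
                and tolerated_standard_maintenance)  # immunosuppressives / approved biologics

    # Illustrative patient: SLEDAI-2K of 2, no major-organ or new activity, PGA 0.5,
    # prednisolone 5 mg/day, well-tolerated maintenance therapy -> in LLDAS.
    print(meets_lldas(2, False, False, 0.5, 5.0, True))   # True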
Association between LLDAS or disease activity measures and HR-QoL Patients who fulfilled criteria for LLDAS had significantly higher scores in individual SF-36 domains including role physical, bodily pain, general health, vitality, social function, role emotional and mental health (Fig. 2). The only domain not significantly higher (better) in patients who met the criteria for LLDAS was physical function. Patients in LLDAS also had higher PCS and MCS scores (Table 3). After backward stepwise multiple linear regression adjustment for other variables, patients in LLDAS retained higher PCS scores (p < 0.001) and MCS scores (p < 0.001) (model 1, Table 4). These findings support the utility of LLDAS and its association with HR-QoL. Analysis of LLDAS individual components in multiple linear regression (model 2, Table 4) showed that a higher SLEDAI-2 K score (p = 0.05), PGA (p < 0.001) and prednisolone dose (p = 0.01) remained negatively associated with PCS scores, whereas disease flares did not have a significant association. Only the PGA (p = 0.02) remained significantly negatively associated with MCS scores. Assessing individual organ activity instead of total SLEDAI-2 K score (model 3, Table 4) showed after adjustment that musculoskeletal activity (p < 0.001) remained negatively associated with PCS scores, and active cutaneous disease (p = 0.04) remained negatively associated with MCS scores. Discussion The ability to define an achievable treatment goal that is predictive of improved outcomes is essential for the implementation of treat-to-target strategies in SLE, and potentially has utility in the analysis of trials of current and novel therapies [19,31]. Recently, the need to define treatment goals for SLE has received increased attention [20], consequent upon which we reported the definition of a low disease activity treatment outcome state, LLDAS [21]. When disease activity and treatment domains are combined, both of which have been shown to contribute to an adverse long-term outcome in SLE, sustained attainment of LLDAS is associated with protection from accrual of damage over time, as measured using the SLICC-DI, in retrospective analysis of prospectively collected data [21]. Whether LLDAS is associated with measures of HR-QoL has not previously been assessed. An important finding in the present study is the association between LLDAS and better HR-QoL, even after adjustment for other variables that were associated with HR-QoL. The LLDAS definition represents a composite tool with which patients with clinically diverse phenotypes can be stratified in a binary fashion, as either meeting criteria for LLDAS or not. This "reductionistic" approach takes advantage of the fact that the heterogeneity of disease expression in active SLE is, by definition, lessened as the disease activity lessens [18]. By combining different measures of clinical activity, and those of medication burden, the LLDAS is an encompassing measure of the overall clinical state of the patient, and emerging data confirm that the domains of LLDAS contribute independently to the stringency of the measure [32]. This means that LLDAS, rather than simply representing a description of mild disease, represents a composite treatment target state. Non-attainment of LLDAS could therefore reflect flare, refractory disease or insufficient treatment intensity, just as is the case with low disease activity definitions in RA. 
Given that improvement in HR-QoL is recognized as an important outcome measure in clinical trials [3,8], the association between LLDAS and better SF-36 scores further supports its utility as a treatment target. Prospective studies showing that attainment of LLDAS is associated with improvements in HR-QoL over time are required, and are in progress. In order to scrutinize the effects of the LLDAS components on HR-QoL, we utilized separate multiple linear regression models. SLEDAI-2 K, PGA and prednisolone dose (potentially a surrogate for activity) were each significantly and negatively associated with PCS scores, but only the PGA was negatively associated with MCS scores. Interestingly, disease flares as measured by the SFI were not significantly associated with either PCS or MCS scores. Of note, due to the cross-sectional nature of the analyses in this study, the SFI was used as a surrogate for the third criterion of LLDAS, which is that there must be no new features of lupus disease activity compared to the previous assessment [21]. It is possible that with longitudinal analysis, this LLDAS criterion may be significantly associated with HR-QoL. The relationship between disease activity and HR-QoL in SLE remains controversial in the published literature [12,25,[33][34][35], likely due to a combination of varying study designs, an inherently heterogeneous disease, different measures of activity and fluctuating disease states. Our study is the first to analyze HR-QoL in relation to individual organ system activity based on the SLEDAI. We observed a negative association between active musculoskeletal disease and poorer PCS, and between active cutaneous disease and poorer MCS scores. We consider that it makes clinical sense that active joint and muscle disease affects physical function, while cutaneous disease influences mental wellbeing; young women with SLE who comprise the majority of patients are known to suffer from poor body image [36]. An effect of renal activity on HR-QoL has been described by Appenzeller et al., who reported that patients with active renal disease had slightly poorer physical function, albeit with wide confidence intervals [37]. In contrast we found no significant association between active renal disease and any domains of the SF-36. Some organ involvement, such as lupus nephritis, may be inherently clinically silent in terms of HR-QoL, despite reflecting a serious threat to health. Although undertaken in order to evaluate the association between LLDAS and HR-QoL, this is one of the largest studies to date of HR-QoL in patients with SLE, and as such it affords the opportunity to investigate other factors associated with HR-QoL in SLE. Patient characteristics, such as ethnicity, have previously been shown to be associated with various aspects of disease burden in SLE [38,39], with Caucasian patients having lower disease activity but reporting poorer HR-QoL compared to their non-Caucasian counterparts [35,40]. Studies from individual countries within the Asia Pacific region report poorer HR-QoL in patients with SLE compared to national averages [33], and negative associations with poorer socioeconomic status [26]. However, to date, between-country comparisons have been lacking. We have demonstrated important regional and ethnic differences in HR-QoL. In our study, compared to Caucasians, patients of Asian ethnicity reported better PCS, even when adjusted for other variables, but no significant differences in MCS scores. 
Similar findings have been reported in different ethnic groups in Canada and the USA, with white ethnicity associated with poorer physical, but not mental function [4,35]. The SF-36 has been cross-culturally validated to allow global comparisons, but it is unlikely that it is sensitive to all cultural and ethnic nuances. The significant difference in PCS and MCS scores between countries in our cohort, even when adjusted for ethnicity and disease factors, further highlights the importance of cultural differences in perception of the impact of disease and patients' coping strategies, which have been suggested to be just as important as disease states in determining HR-QoL in SLE [41]. The ability to cope better with illness was potentially reflected in the association between higher education and better summary scores, a finding supported by previous studies [4,33]. However, this may also be indicative of patients with higher levels of education being employed in less manually labor-intensive jobs, therefore with potentially a less noticeable impact on physical function. Studies assessing the association between organ damage and HR-QoL have reported discrepant results. We identified significant association between greater damage and PCS scores, but not MCS scores, which is also seen [33]. In contrast, in a predominantly Caucasian population with low damage accrual over 8 years, no disease features were associated with decline in physical functioning except for the presence of fibromyalgia [35]. The lack of measurements to identify fibromyalgia and other comorbidities is one of the limitations of this study, as pain and fatigue have been shown to independently influence HR-QoL in patients with SLE [6,10,11]. Two domains of the SF-36, bodily pain and vitality, are potential surrogate measures for pain and fatigue respectively. Patients in LLDAS had significantly higher (better) scores in both of these domains, with the inference that LLDAS may be associated with a reduction in pain and fatigue. A disease-specific HR-QoL tool could further address the additional issues pertinent to patients with SLE and assess the effect of LLDAS on these; however, the currently available disease-specific instruments have not been validated in all the spoken languages of this multicultural cohort of patients. Additionally, clear evidence of superiority is lacking among the multiple disease-specific HR-QoL tools [5]. The cross-sectional nature of the analyses does not allow the assessment of changes in HR-QoL with fluctuating disease states. However, given that the SF-36 is designed to capture HR-QoL in the preceding 4 weeks, the same time frame as the evaluation of disease activity, it should be relevant to disease activity measures captured at the same time. A longitudinal study is underway, which will enable analysis of the association between LLDAS and transitions in HR-QoL measured by the SF36. Assessment of the effect of LLDAS on other PRO measures, such as patient assessment of disease activity, could form the basis of future validation studies. Conclusions In summary, we have shown for the first time that LLDAS is associated with better HR-QoL. This supports the validity of this definition of treatment outcome state for potential use in clinical practice, treat-to-target studies and clinical trials. This conclusion would be further supported by longitudinal studies, of which at least one is underway. 
In addition, we have described important ethnic, socioeconomic and disease-specific associations with HR-QoL in one of the largest multiethnic SLE cohorts ever studied. Attention to reversible or preventable precipitants of poor HR-QoL should be included in the management of SLE. Additional file Additional file 1: Table S1. Disease manifestations ever present. Authors' contributions VG made substantial contributions to conception and design, acquisition of data, analysis and interpretation of data and drafting and revising the manuscript. RKR made substantial contributions to analysis and interpretation of data and to revising the manuscript critically for important intellectual content. AYBH made substantial contributions to acquisition of data and to revising the manuscript critically for important intellectual content. MH made substantial contributions to analysis and interpretation of data and revising the manuscript critically for important intellectual content. WL made substantial contributions to acquisition of data and to revising the manuscript critically for important intellectual content. YA made substantial contributions to acquisition of data and to revising the manuscript critically for important intellectual content. ZGL made substantial contributions to acquisition of data and to revising the manuscript critically for important intellectual content. SFL made substantial contributions to acquisition of data and to revising the manuscript critically for important intellectual content. SS made substantial contributions to acquisition of data and to revising the manuscript critically for important intellectual content. CSL made substantial contributions to conception and design, acquisition of data and revising the manuscript critically for important intellectual content. MYM made substantial contributions to acquisition of data and to revising the manuscript critically for important intellectual content. AL made substantial contributions to acquisition of data and to revising the manuscript critically for important intellectual content. KF made substantial contributions to acquisition of data and to revising the manuscript critically for important intellectual content. SM made substantial contributions to acquisition of data and to revising the manuscript critically for important intellectual content. STVN made substantial contributions to conception and design, acquisition of data and revising the manuscript critically for important intellectual content. LZ made substantial contributions to acquisition of data and to revising the manuscript critically for important intellectual content. YJW made substantial contributions to acquisition of data and to revising the manuscript critically for important intellectual content. LH made substantial contributions to acquisition of data and to revising the manuscript critically for important intellectual content. MC made substantial contributions to acquisition of data and to revising the manuscript critically for important intellectual content. SON made substantial contributions to acquisition of data and to revising the manuscript critically for important intellectual content. FG made substantial contributions to acquisition of data and to revising the manuscript critically for important intellectual content. MN made substantial contributions to conception and design, acquisition of data, analysis and interpretation of data and drafting and revising the manuscript. 
EFM made substantial contributions to conception and design, acquisition of data, analysis and interpretation of data and drafting and revising the manuscript. All authors have given approval for the final version of the manuscript to be published. All authors have agreed to be accountable for all aspects of the work in ensuring that questions related to the accuracy or integrity of any part of the work are appropriately investigated and resolved.
Unbounded Derivations in Algebras Associated with Monothetic Groups Given an infinite, compact, monothetic group $G$ we study decompositions and structure of unbounded derivations in a crossed product C$^*$-algebra $C(G)\rtimes\Z$ obtained from a translation on $G$ by a generator of a dense cyclic subgroup. We also study derivations in a Toeplitz extension of the crossed product and the question whether unbounded derivations can be lifted from one algebra to the other. Introduction Derivations naturally arise in studying differentiable manifolds, in representation theory of Lie groups and in their noncommutative analogs. They also appear in mathematical aspects of quantum mechanics, in particular in quantum statistical physics. Additionally, derivations are important in analyzing amenability and other structures of operator algebras. Good overviews are in B [1] and also in S [14]. In this paper we study classification and decompositions of unbounded derivations in C *algebras associated to an infinite, compact, monothetic group G, which, by definition, is a Hausdorff topological group with a dense cyclic subgroup. A group translation on G by a generator of a cyclic subgroup is a minimal homeomorphism and one algebra associated with G is the crossed product C * -algebra B := C(G) ⋊ Z determined by the translation. This algebra can be naturally represented in the ℓ 2 -Hilbert space of the full orbit. If we consider the analogous algebra on the forward orbit only, we obtain a Toeplitz extension A of the algebra B. When the group is totally disconnected those algebras are precisely Bunce-Deddens and Bunce-Deddens-Toeplitz algebras considered in KMRSW2 [9]. The main objects of study in this paper are unbounded derivations d : A → A which are defined on a subalgebra A of polynomials in generators of A. Similarly, we study derivations δ : B → B, where B is the image of A under the quotient map A → A/K = B. The first of the main results of this paper is that any derivation in those algebras can be uniquely decomposed into a sum of a certain special derivation and an approximately inner derivation. The special derivations are not approximately inner, and can be explicitly described. It turns out that any derivation d : A → A preserves the ideal of compact operators K and consequently defines a factor derivation [d] : B → B in B. It is an interesting and non-trivial problem to describe properties of the map d → [d]. For any C * -algebra it is easy to see that bounded derivations preserve closed ideals and so they define derivations on quotients. It was proved in P [12] that for bounded derivations and separable C * -algebras the above map is onto, i.e. derivations can be lifted from quotients. In non-separable cases this is not true in general. We prove here that lifting unbounded derivations from B to A is always possible when G is totally disconnected, answering positively a conjecture in KMRSW2 [9]. However we give a Date: March 18, 2022. simple counterexample of a special derivation in the algebra B for G = T 1 that cannot be lifted to a derivation in the algebra A. Instead, we conjecture that for any compact, infinite, monothetic group approximately inner derivations in B can be lifted to approximately inner derivation in A. The paper is organized as follows. In section 2 we review monothetic groups and discuss their properties. 
We also describe a crossed product C * -algebra that is associated to a monothetic group and that algebra Toeplitz extension, as well as discuss a Toeplitz map from one algebra to another. In section 3 we classify all unbounded derivations on polynomial domains in the C * -algebras from section 2. Finally, in section 4 we consider lifting derivations from a crossed product C * -algebra to its Toeplitz extension. We prove that all derivations can be lifted for totally disconnected, compact, infinite, monothetic groups and provide an example that shows that not all derivations can be lifted in general. [11], that if G is a locally compact monothetic group, then G ∼ = Z or G is compact. In this paper we only consider the case of compact G. It follows immediately that G is Abelian and separable. We first describe the structure of such groups following HS [5]. The key tool is the character (dual) group and Pontryagin duality, which translates properties of groups into properties of their duals. Monothetic Groups and Let S 1 be the unit circle: and let G denote the dual group G, the group of continuous homomorphisms from G to S 1 equipped with compact-open topology. It is well known that if G is compact then G is discrete. We typically use additive notation for an abelian group, however we use multiplicative notation for the dual group. Given a monothetic group G, let x 1 be a generator of a dense cyclic subgroup, and we set x n = nx 1 for n ∈ Z, so that x 0 := 0 is the neutral element of G. Then we can identify the dual group G of G with a discrete subgroup of S 1 via the map given by: Conversely, using Pontryagin duality, if H is a discrete subgroup of S 1 , then H is the dual group of a compact monothetic group, namely H, see HS [5]. To better understand the structure of monothetic groups we look at the torsion subgroup of its dual group. Given a monothetic G, the torsion subgroup of G tor of G is given by: There are two extreme cases: we say G is of pure torsion if G = G tor . We also say G is torsion free if G tor = {0}. The following statements describe basic properties of monothetic groups. We provide short or outlined proofs with references. A good, concise book on Pontryagin duality is M [11]. First we look at the case of torsion free G. [11], which only requires G to be compact, Abelian. We have the following remarkable result proved in HS [5]. m_sep_mono Theorem 2.2. Every connected compact separable Abelian topological group is monothetic. The n-dimensional torus, T n = R n /Z n is an example of a compact, connected, separable, Abelian group and thus by Theorem con_com_sep_mono 2.2 is monothetic. Consider an element [11], since an element of the discrete group G is compact (i.e. the smallest closed subgroup containing it is compact) if and only if it has finite order. Before we state the next structural result we need to introduce odometers. Further details on odometers can be found in D [3]. The standard definition of an odometer (that inspired the name) uses a sequence of positive integers b := (b m ) m∈N such that b m ≥ 2 for all m, called a multibase. The odometer is then identified (as a set) with the direct product: but addition is defined with the carry over rule. Equipped with the product topology G(b) becomes a compact, totally disconnected topological group. It is easy to see that the cyclic subgroup generated by x 1 = (1, 0, 0, 0, . . .) is dense and so G(b) is a monothetic group. 
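To make the carry-over rule concrete, the following sketch (an illustration only, with a hypothetical multibase) adds the generator x1 = (1, 0, 0, ...) to an element of a truncated odometer; in the actual group G(b) the product is infinite, so a carry always has a next coordinate to move into.

    def odometer_add_one(x, multibase):
        """Add the generator x1 = (1, 0, 0, ...) to x in the odometer G(b),
        carrying into the next coordinate exactly as in multibase counting."""
        y = list(x)
        for m, b in enumerate(multibase):
            y[m] += 1
            if y[m] < b:      # no carry needed; the remaining coordinates are unchanged
                return tuple(y)
            y[m] = 0          # coordinate m overflows, carry into coordinate m + 1
        return tuple(y)       # a finite truncation simply wraps around to zero

    # With the hypothetical multibase b = (2, 3, 2):
    # (0,0,0) -> (1,0,0) -> (0,1,0) -> (1,1,0) -> (0,2,0) -> ...
    x = (0, 0, 0)
    for _ in range(4):
        x = odometer_add_one(x, (2, 3, 2))
        print(x)

In the scale representation described next, the same step is simply coordinate-wise addition of 1 modulo s_m in each coordinate.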
An alternative representation of the odometer G(b) uses scales, and this is the description that is used in the proof of Theorem lift_theo 4.3. Let s = (s m ) m∈N be a sequence of positive integers such that s m divides s m+1 and s m < s m+1 . There are natural homomorphisms between the consecutive finite cyclic groups Z/s m+1 Z → Z/s m Z, namely congruence modulo s m . Thus the inverse limit: is well defined as the subset of the countable product m Z/s m Z consisting of sequences (y 1 , y 2 , y 3 , . . .) such that y m+1 ≡ y m (mod s m ). Addition in this representation is coordinatewise, modulo s m in each coordinate m. G s becomes a topological group when endowed with the product topology over the discrete topologies in Z/s m Z. Obviously, with our assumptions, this group is infinite because s is unbounded. The relation between the two definitions of an odometer is as follows. Given a multibase b = (b m ) m∈N define a scale s = (s m ) m∈N by s 1 = b 1 , s 2 = b 1 g 2 , s 3 = b 1 b 2 b 3 and so on. Equivalently, we have: Then the map gives an isomorphism of the groups. In the scales representation of odometers the generator x 1 of a cyclic subgroup is given by x 1 = (1, 1, 1, 1, . . .). With the above definitions it is not transparent when two odometers are isomorphic, so we describe yet another way to define odometers that we used in KMRSW2 [9]. A supernatural number N is defined as the formal product: If ǫ p < ∞ then N is said to be a finite supernatural number (a regular natural number), otherwise it is said to be infinite. If is another supernatural number, then their product is given by: A supernatural number N is said to divide M if M = NN ′ for some supernatural number N ′ , or equivalently, if ǫ p (N) ≤ ǫ p (M) for every prime p. Given a supernatural number N let J N be the set of finite divisors of N: Then (J N , ≤) is a directed set where j 1 ≤ j 2 if and only if j 1 |j 2 |N. Consider the collection of cyclic groups {Z/jZ} j∈J N and the family of group homomorphisms Then the inverse limit of this system can be denoted as: In particular, if N is finite the above definition coincides with the usual meaning of the symbol Z/NZ, while if N = p ∞ for a prime p, then the above limit is equal to Z p , the ring of p-adic integers, see for example Robert [13]. Given a scale s = (s m ) m∈N we define the corresponding supernatural number N to be the "limit" of s m : in the sense that each prime exponent ǫ p (N) of N is defined to be the supremum of the prime exponents ǫ p (s m ), m ∈ N. It follows that s m 's are divisors of N and for every j ∈ J N there is a natural number m(j) such that j|s m(j) . Consequently, a sequence (z j ) ∈ lim which gives an isomorphism Z/NZ ∼ = G s . It turns out that odometers are classified by the supernatural number N, see D [3]. As before, generates a dense cyclic subgroup. In general, we have the following simple consequence of the Chinese Reminder Theorem: Since the space Z/NZ is a compact, abelian topological group, it has a unique normalized Haar measure µ. Also, if N is an infinite supernatural number then Z/NZ is a Cantor set W [15]. We are now ready to state the next structural result about compact monothetic groups. Let G be a compact totally disconnected monothetic group. 
In HS [5], between Theorems II ′ and III on pages 256-257, the authors show that G is isomorphic to a direct product of groups G p i where p i runs over all primes and where G p i isomorphic to the zero group, the cyclic group of order p ǫ i i for some ǫ i or the group of p-adic integers, the last case corresponds to ǫ i = ∞. In general, for arbitrary G we have the following structure for compact monothetic groups. Proposition 2.5. Let G be a compact monothetic group. If G 0 ≤ G is the connected component of the neutral element 0, then G 0 is a connected separable compact Abelian group and G/G 0 is a totally disconnected monothetic group. Moreover, there are natural isomorphisms: Proof. This proposition is not formally stated but appears as a note in HS [5], see also Corollary 3 of Theorem 30 of M [11]. The first part follows from the previous propositions. Recall that the annihilator A(G 0 ) of G 0 is given by: By Pontryagin duality, Theorem 27 of M [11], we have: Notice the right-hand side of the equation is Abelian, discrete and of pure torsion. Thus given χ ∈ A(G 0 ), it defines a character class Therefore, we have: hence χ has finite order and thus A(G 0 ) ≤ G tor . Let χ ∈ G tor , then since G 0 is connected and thus χ ∈ A(G 0 ). Therefore we have A(G 0 ) = G tor and hence The second isomorphism relation follows from Pontryagin duality: and the proof is complete. 2.2. Minimal Systems. By a topological dynamical system (X, ϕ), we mean a topological space X and a continuous map ϕ : X → X, see KH [6]. A topological dynamical system (X, ϕ) is called topologically transitive if there exists a point x ∈ X such that its orbit {ϕ n (x)} n∈Z is dense in X. (X, ϕ) is called minimal if every orbit is dense in X. We say and write ϕ is minimal for brevity. Other equivalent characterization of minimal maps is as follows Then, ϕ is minimal if X does not contain any non-empty, proper, closed ϕ-invariant subset. If in addition X is assumed to be Hausdorff and compact, then a minimal map ϕ must be surjective. Moreover, if (X, ϕ) is topologically transitive then there is no ϕ-invariant nonconstant continuous function on X. Suppose that G is a compact monothetic group with x 1 the generator of a dense cyclic subgroup. Then we define the map ϕ : G → G by the formula: It follows that (G, ϕ) is a minimal system. Let us remark that for metrizable spaces a minimal, equicontinuous, dynamical systems coincide with translations by a generator of a dense cyclic subgroup of a compact monothetic groups, see Theorem 2.4.2 in K [10]. We now turn our attention to the algebras that are present in this paper. Let G be a compact infinite monothetic group, C(G) the complex-valued continuous functions on G and µ a normalized Haar measure on G. Recall the notation for the elements of the cyclic subgroup generated by x 1 : x n = ϕ n (0) = nx 1 , for n ∈ Z. The set {x n } n∈Z is the full orbit of 0 under ϕ and {x n } n≥0 is the forward orbit. As mentioned above, since ϕ is a minimal homeomorphism, the forward orbit {x n } n≥0 is dense in G. Consider the algebra of trigonometric polynomials on G: We state below two simple but useful properties of F that we will need later in the paper. First we have the following observation. Proof. If f ∈ F then f has the following decomposition: where χ j are characters on G. Notice that we have which means that G f dµ = 0 if and only if χ j = 1 for all j. 
Let χ be a nontrivial character, then the goal is to find a g ∈ F such that Notice that for a nontrivial character we must have χ(x 1 ) = 1. Otherwise, if χ(x 1 ) = 1, then χ(x n ) = χ(nx 1 ) = 1 which in turn implies that χ = 1 on a dense set, and thus χ ≡ 1, which is a contradiction. Therefore, we can choose which clearly satisfies ( cocyc 2.2). Now that we can find a function g(x) that solves ( cocyc 2.2) for a nontrivial character, we just take finite linear combinations of such functions for the general case of a trigonometric polynomial, thus completing the proof. Next we describe another useful property of the space F . property_G Proposition 2.7. For any nonzero n ∈ Z, there exists a trigonometric polynomial Proof. The key property of the characters is that they separate points of G, see Theorem 14 of M [11]. Therefore, if n = 0, we can pick χ such that: As in the previous proposition, the general case is handled by linearity and the proof is complete. 2.3. C * -algebras. Let G be an infinite, compact, monothetic group. We will describe now two types of C * -algebras that can be naturally associated with such groups. They are defined as concrete C * -algebras of operators in the following Hilbert spaces. The first Hilbert space is the ℓ 2 space of the full orbit: , which is naturally isomorphic with ℓ 2 (Z). Let {E l } l∈Z be the canonical basis in H. The second Hilbert space is the ℓ 2 space of the forward orbit: which is naturally isomorphic with the Hilbert space ℓ 2 (Z ≥0 ). We also let {E + k } k∈Z ≥0 be the canonical basis on H + . The C * -algebras associated to G are defined using the following operators. Let V : H → H be the shift operator on H: We also need the unilateral shift operator on H + : Notice that V is a unitary while U is an isometry. We have: where P 0 is the orthogonal projection onto the one-dimensional subspace spanned by E + 0 . For a continuous function f ∈ C(G) we define two operators M f : H → H and M + f : H + → H + via formulas: They are diagonal multiplication operators on H and H + respectively. Due to the density of the orbit {x k } k∈Z ≥0 , we immediately obtain: The algebras of operators generated by M f 's or by M + f 's are thus isomorphic to C(G) so they carry all the information about the space G, while operators U and V reflect the dynamics ϕ on G. The relation between those operators is: Similarly we have: There is also another, less obvious relation between U and M + f 's, namely: We define the algebra B to be the C * -algebra generated by operators V and M f : We claim that B is isomorphic with the crossed product algebra: Indeed, observe that Z is amenable, the action of Z on G given by ϕ is a free action, and ϕ is a minimal homoemorphism, thus the crossed product is simple and equal to the reduced crossed product, see F [4]. Clearly, the operators V and M f define a representation of C(G) ⋊ ϕ Z, which must be isomorphic to it, by simplicity of the crossed product. The algebra B has a natural dense * -subalgebra B of polynomials in V , V −1 , and the M χ 's, where χ is a character of G. Equivalently, we have: Next we define the other algebra that is of the main interest in this paper, a Toeplitz extension of B. 
We define the algebra A to be the C * -algebra generated by operators U and M + f : To proceed further we need the following label operators on H and H + respectively: The algebra A has a natural dense * -subalgebra A of polynomials in U, U * , M + χ 's, where χ is a character of G, which can be equivalently described as follows, using Proposition 3.1 from KMRSW1 [8] and also Proposition n_cov_set_eq 2.11 below: where the sums above are finite sums and c 00 (Z ≥0 ) is the space of sequences that are eventually zero. Notice that if a ∈ A and x ∈ c 00 (Z ≥0 ) ⊆ H + , then ax ∈ c 00 (Z ≥0 ), an observation that is often used below. Next we establish the key relation between the two algebras A and B. Let P + : H → H + be the following map from H onto H + given by We also need another map s : H + → H given by: T is known as a Toeplitz map. It has the following properties. Proof. For the first statement, if h ∈ H + then we have the following calculation: For the second statement we apply T (bV n ) to the basis elements E + k of H + . We have A similar calculation shows the other equality T (V −n b) = (U * ) n T (b). Finally, for the last statement, we apply T (bM f ) and T (M f b) to the basis elements to get: This completes the proof. The next result describes the main relation between the two algebras A and B. Proof. Notice first that we have: It follows that the operators P k,l := U k P 0 (U * ) l are also in A. Thus, all finite rank operators with respect to the basis {E + k } belong to A as they are finite linear combinations of P k,l . Moreover, since all compact operators in B(H + ) are norm limits of these finite rank operators and A is a C * -algebra, it follows that K ⊆ A. It is clear that K is an ideal in A. Verifying that the map given by equation ( Useful tools in classifying derivations on A and B are 1-parameter groups of automorphisms of A and B respectively that are given by the following equations: where θ ∈ R/2πZ. We have the following formulas: ρ K θ (U) = e iθ U, ρ K θ (a(K)) = a(K), and similarly for ρ L θ . It immediately follows that ρ K θ : A → A and that ρ L θ : B → B. The automorphisms define natural Z-gradings on A and B given by the spectral subspaces: We call the elements of these sets the n-covariant elements of A and B respectively. When n = 0 we call those elements invariant. Let c 0 (Z ≥0 ) be the space of sequences that converge to zero. The n-covariant elements of A and B can described in detail. cov_set_eq Proposition 2.11. We have the following set equalities: , f ∈ C(G)} when n < 0. Similarly, we have: Proof. Consider the invariant elements in A, that is ρ K θ (a) = a. It follows from the definition of ρ K θ that these elements are precisely the diagonal operators in A. Moreover, we have the following unique decomposition, which is analogous to Proposition 2.4 in where a(k) ∈ c 0 (Z ≥0 ) and f ∈ C(G). Next we consider the n-covariant elements for n = 0. Without loss of generality we only consider n > 0. Since we have: for a(k) ∈ c 0 (Z ≥0 ) and f ∈ C(G), one containment follows immediately. On the other hand, if a ∈ A n then a(U * ) n is an invariant element and thus by the above has the form a(U * ) n = a(K) + M + f for some a(k) ∈ c 0 (Z ≥0 ) and f ∈ C(G). The other direction now follows. The same argument also works for B n , completing the proof. Similarly, we consider n-covariant elements from A and B: A n = {a ∈ A : ρ K θ (a) = e inθ a} and B n = {b ∈ B : ρ L θ (b) = e inθ b}. 
As in Proposition n_cov_set_eq 2.11, a ∈ A n if and only if a has the same element decomposition but with a(k) ∈ c 00 (Z ≥0 ) and f ∈ F . Again, there is an analogous result for b ∈ B n . Classification of Derivations As in KMRSW2 [9], one of the main goals in this paper is to classify unbounded derivations in A and B. We begin with recalling the basic concepts. Let M be a Banach algebra and let M be a dense subalgebra of M. A linear map d : M → M is called a derivation if the Leibniz rule holds: for all a, b ∈ M. We say a derivation d : for a ∈ M. We say a derivation d : for a ∈ M. Given n ∈ Z, a derivation d : A → A is said to be a n-covariant derivation if the relation (ρ K θ ) −1 d(ρ K θ (a)) = e −inθ d(a) holds. We have a similar definition for a derivation δ : B → B. Like above, when n = 0 we say the derivation is invariant. Proof. Define a sequence {α N (k)} as follows: Derivations in Then α N (k) ∈ c 00 (Z ≥0 ) and α N (K) converges to α(K) as N → ∞. Next, define a sequence {β N (k)} by is an invariant inner derivation. We have for all a(K) ∈ A 0 . Thus, by the Leibniz rule, the limit exists for all a ∈ A. Thus, this limit is a derivation from A to A. It follows that d α is approximately inner and invariant. , and notice that d f N : A → A is an inner invariant derivation. By direct calculation we have for every a(K) ∈ A 0 . Thus, by the Leibniz rule, the limit exists for all a ∈ A and is a derivation from A to A. It follows that d f is approximately inner and invariant. Proof. Let a(K) ∈ A 0 be a diagonal operator such that a(k) ∈ c 00 (Z ≥0 ). Then, by invariance of d, we have d(a(K)) ∈ A 0 . Notice that since A 0 is precisely the algebra of diagonal operators in A it is therefore a commutative algebra. Let P 2 = P be a projection in A 0 . Applying d to both sides of the equation and using Leibniz's rule we have which implies that (1 − 2P )d(P ) = 0 and hence d(P ) = 0. Since a(K) is a finite sum of projections in A 0 , it follows that d(a(K)) = 0. Let P k be the one-dimensional orthogonal projection onto the span of E k . Then P k ∈ A 0 and thus d(P k ) = 0. We have the following formula: Therefore, applying d to both sides, we obtain: Finally, to verify uniqueness of the decomposition, we only need to check that that d K is not approximately inner. If d K is approximately inner then we can arrange that it can be approximated by inner invariant derivations of the form d j (a) = [β j (K), a] with β j (k) ∈ c(Z ≥0 ). Since {β j (k + 1) − β j (k)} ∈ c 0 (Z ≥0 ) we would also get {(k + 1) − k} ∈ c 0 (Z), which is a contradiction. Full details of an analogous result are given in Theorem 3.10 of Proof. We only discuss the case of n > 0 as the case of n < 0 is completely analogous. By definition of n-covariance there exists an α(K) ∈ A 0 such that We define a "twisted" derivationd : for a(K), b(K) ∈ A 0 . Since A 0 and A 0 are commutative algebras we get Similarly to the proof in Theorem 3.4 in KMRSW2 [9], there must exist a β(K) such that d(a(K)) = β(K) (a(K) − a(K + nI)) . Next we apply d to the commutation relation U * a(K) = a(K+I)U * for a diagonal operator a(K) ∈ A 0 , and obtain: where we define β(−1) := 0. Rearranging these terms gives: for all a(K) ∈ A 0 . It therefore follows that β(K) − β(K − I) = α(K). Thus β(k) is uniquely determined by for any a ∈ A, since both sides of the above equation are derivations, and they agree on the generators of the polynomial algebra A. By the remark preceding the statement of the proposition, if f ∈ F is such that . 
Therefore, it follows that we must have β(K) ∈ A 0 , and the proof is complete. To classify all derivations d : A → A we need to define the Fourier coefficients of d following the ideas of BEJ [2]. Definition: If d is a derivation in A, the n-th Fourier component of d is defined as: A direct calculation shows that if d : A → A is a derivation then d n : A → A is an n-covariant derivation. We have the following key Cesàro mean convergence result for Fourier components of d, which is more generally valid for unbounded derivations in any Banach algebra with the continuous circle action preserving the domain of the derivation: if d is a derivation in A then The terms under the limit sign are all finite linear combinations of n-covariant derivations and so they are inner derivations themselves, meaning that the limit is approximately inner, which ends the proof. We also have the following useful but weaker convergence result for the Fourier components of derivations. We say that n∈Z d n (a) converges densely pointwise on the set c 00 (Z ≥0 ). Proof. By Leibniz rule we only need to verify the above formula on generators of A. Moreover, it is enough to consider only x = E + k , since c 00 (Z ≥0 ) consists of finite linear combinations of such x's. Below we show the details for a = M + f , as the calculations for a = U and a = U * are very similar. We have the following basis decomposition: Using the definition of the n-th Fourier components d n of d and the fact that d n are ncovariant, a direct calculation gives: It follows that completing the proof. Derivations in B. Next we classify derivations in B starting with the invariant derivations. It turns out that there are new types of invariant derivations in B that were not present in A. We describe these in the following lemma. artial_der Lemma 3.7. Let ∂ : F → C(G) be any derivation such that for all f ∈ F , which we call a ϕ invariant derivation in C(G). Then there exists a unique invariant derivation δ ∂ : B → B such that Define the δ ∂ on the generators as above by δ ∂ (V ) = 0 and δ ∂ (M f ) = M ∂f . Using the Leibniz rule we try to extend this definition to all B. To verify that δ ∂ is a well-defined derivation from B → B, we thus need to check that it preserves the relation. Applying δ ∂ to both sides of the relation yields M ∂f •ϕ = M ∂(f •ϕ) , completing the proof. As with algebra A there is a simple example of an invariant derivation which is given by The proof is identical to that of Lemma andδ is approximately inner. Proof. Since δ is invariant, there exists f 0 ∈ C(G) such that Moreover, there exists a linear map ∂ : Applying δ to the relation M f g = M f M g gives Hence ∂ satisfies the Leibniz rule and thus is a derivation. Applying δ to both sides of the i.e. ∂ is ϕ invariant. Now write f 0 = c 0 +f with c 0 ∈ C and Gf dµ = 0. It follows that where δ ∂ : B → B is the derivation defined in Lemma To complete the proof we notice that a non-zero derivation δ ∂ cannot be approximately inner since F is commutative and hence has no non-zero inner and approximately inner derivations. This proves the uniqueness of the decomposition and finishes the proof of the proposition. Because the proof of classifying all n-covariant derivations in B is essentially the same as in the case of A, we only state the result. where δ ∂ is the derivation defined in Lemma partial_der andδ is an approximately inner derivation. 
We also state here a dense pointwise convergence result for Fourier components δ n of a derivation δ : B → B, which is similar to Proposition and we say that n∈Z δ n (b) converges densely pointwise on the set c 00 (Z). Lifting Derivations The first important observation is that any derivation in algebra A preserves compact operators. ve_compact Proposition 4.1. If d : A → A is a derivation, then d : A ∩ K → K. Proof. It is enough to prove that d(P 0 ) is compact, where P 0 is the orthogonal projection onto the one-dimensional subspace spanned by E + 0 , because A ∩ K is comprised of linear combinations of expressions of the form U l P 0 (U * ) j and compactness would follow immediately from the Leibniz property. To see that d(P 0 ) is compact, apply d to both sides of the relation P 0 = P 2 0 to obtain: which completes the proof. As a consequence of Proposition 3.11 that if there is a nonzero ϕ invariant derivation in C(G), ∂ : F → C(G), then there is no d : A → A such that [d] = δ ∂ , because δ ∂ is not approximately inner. A natural example of this is G = T 1 = R/Z with x k = θk (mod Z), k ∈ Z and θ irrational, giving a dense subgroup of T 1 . In this case, F is the actual space of trigonometric polynomials. Any derivation ∂ : F → C(T 1 ) invariant with respect to ϕ(x) = x + θ (mod Z) is of the form: In this case, the algebra B is generated by V and W = M e 2πix satisfying the relation Consequently, B is isomorphic with the irrational rotation algebra. B is the algebra of polynomials in V and W and the derivation δ d/dx : B → B is given on generators by and it cannot be lifted to a derivation in A. The key reason is that there is an additional relation on A given by equation ( 3) which prevents existence of such a lift. We conjecture however, that for any compact infinite monothetic group, any approximately inner derivation δ : B → B can be lifted to a derivation d : For the remainder of the section we let G be totally disconnected, in other words G is an odometer, and thus by Proposition Below we prove one of the main results of this paper that for odometers, any unbounded derivation in B(N) can be lifted to an unbounded derivation in A(N). We will need the following useful result for computing Hilbert-Schmidt norms of operators in ℓ 2 (Z) and ℓ 2 (Z ≥0 ). Since below we work mostly with algebra A, we only state the corresponding version for brevity. where {a n (k)} n∈Z,k∈Z ≥0 ∈ ℓ 2 (Z×Z ≥0 ). Then a is an integral operator with the Hilbert-Schmidt norm given by: Proof. Write f ∈ ℓ 2 (Z ≥0 ) in the canonical basis: Applying the formula for a to f yields: af = n≥0 k≥0 a n (k)f (k)E + k+n + 0≤k+n n<0 a n (k + n)f (k)E + k+n . Therefore, by writing a in the following way the Hilbert-Schmidt norm formula now follows, completing the proof. Finally, we get the following expression for diagonal operators M + χ : The result follows provided we can choose β n,0 (k) so that the right-hand sides of the above equations are compact operators. We compute the Hilbert-Schmidt norm of the above operators to show the compactness. A direct calculation using Proposition where M | N and j ∈ Z. Therefore I and II become The key observation used below is that the coefficients f n on the Fourier decomposition of the derivation δ : B → B satisfly the following condition: for all M | N: Here P −1 is the orthogonal projection in ℓ 2 (Z) onto the one-dimensional subspace spanned by E −1 , while P ≥0 is the orthogonal projection onto the subspace spanned by {E l } l≥0 . 
Equations above imply that we have: To proceed further we choose a scale s = (s m ) m∈N for the supernatural number N, which is a sequence of positive integers such that s m divides s m+1 , s m < s m+1 , and such that N = lim m s m , see (2.1). For every n ∈ Z there is an index m such that s m | n but s m+1 ∤ n. We then write n = s m n ′ , where n ′ is such that s m+1 /s m ∤ n ′ . Using this decomposition we choose N n = C m to be a constant depending on m only, to be determined later. Also, without loss of generality, we can choose M, in the formula for the character χ, to be equal to one of the elements of the scale: M = s q . It is then important to notice that s q ∤ n = s m n ′ if and only if m < q. Consequently, we have the following expressions: the term I is finite for any choice of C m because the sum over s m+1 ∤ n is finite by equation (4.1). Next, for II we have an estimate: By equation (4.1) the interior sum is finite. Finally, we can always choose C m large enough so that II < ∞. This completes the proof.
Practice, self-confidence and understanding of pediatric obstructive sleep apnea survey among pediatricians Background. Pediatricians play an important role in the screening, diagnosis and management of childhood obstructive sleep apnea (OSA). This study used a questionnaire to explore the knowledge, self-confidence and general practices of childhood OSA among Thai pediatricians. Methods. This was a descriptive cross-sectional survey study, using a newly developed questionnaire; including: 21 knowledge items, 4 self-confidence items, questions regarding OSA screening, number of OSA cases per month and OSA management. Results. A total of 307, convenient pediatricians; from different types of hospitals across all regions of Thailand, participated in this study. The median, total knowledge score was 19 (range 14‒21). Two-thirds of the respondents felt confident/extremely confident in their ability to identify and manage children with OSA. The average number of OSA cases reported by pediatricians was 5.9 cases per month. During a general medical check-up, 86.6% of the respondents did not routinely ask about OSA symptoms. Significant odds ratios (ORs) for the use of montelukast, as the first-line drug for OSA in young children, were observed in pediatric allergists and pulmonologists (adjusted OR 2.58, 95% CI 1.11–6.01 and adjusted OR 2.20, 95% CI 1.2–4.02) (P = 0.008), respectively, compared to general pediatricians and other sub-specialties. Conclusions. Pediatricians had a high level of overall OSA knowledge, and good self-confidence in identifying and managing children with OSA. However, a low recognition rate and unawareness of OSA screening were observed. Obstructive sleep apnea (OSA) is the most common form of sleep-disordered breathing (SDB) in children. It is characterized by prolonged and repetitive partial or complete upper airway obstruction during the sleep period that results in hypoxemia and hypercapnia, which affects sleep quality. [1][2][3] Undiagnosed or untreated childhood OSA may lead to a significant negative effect on health-related quality of life and cause serious cardiovascular complications, metabolic abnormalities, neurocognitive and behavioral problems and a failure to thrive. Early recognition and treatment of childhood OSA is crucial to prevent morbidity and to also provide better quality of life for both children and their families. Routine screening for SDB in primary pediatric care settings has been recommended by the American Academy of Pediatrics (AAP) since 2012. 4 However, many previous studies showed unawareness, low recognition rate, and under management of children with OSA among community-based primary care physicians and academic settings. 5,6 Some studies reported improvement of care in childhood OSA was associated with better knowledge, positive attitudes and formal education in childhood OSA. 7,8 Pediatricians are the key people, who play an important role in the screening, diagnosis, and management of childhood OSA; however, currently there is no information regarding knowledge, self-confidence and practices related to childhood OSA in Thailand. The Thai guidelines for childhood obstructive sleep apnea was first published in 2015, to provide a national standard practice guideline for the diagnosis and management of OSA among the pediatric population. 9 In spite of the available guideline, a knowledge gap still persists, and the practices of OSA treatment continue to be heterogeneous. 
Therefore, this study used a questionnaire to explore the knowledge, selfconfidence, and general practices of childhood OSA among Thai pediatricians. The outcome measurement of this study may contribute to a better understanding of the importance of OSA, and the confidence of pediatricians in their ability to screen, diagnose and manage childhood OSA. Study design and population This study was a descriptive cross-sectional survey study, conducted from January to March, 2019; using the questionnaire to assess the knowledge, self-confidence and general practices of childhood OSA among Thai pediatricians. The study was approved by the Human Research Ethics Committee of the Faculty of Medicine, Prince of Songkla University at 7th February 2019 (REC.62-001-1-1). The study participants consisted of convenient pediatricians who were currently working in Thailand. Sample size calculation The sample size was calculated based on an estimate of the finite population proportion equation. 10,11 It was estimated that 80% of Thai pediatricians had good knowledge and self-confidence scores (higher than 80%). It also was determined that a sample size of 257 pediatricians was required to represent the population of Thai pediatricians, with a sampling error of 5% at a 95% confidence level and 10% allowed for non-respondents. Developing the questionnaire A newly developed questionnaire was used to evaluate the knowledge, self-confidence and practices of Thai pediatricians in concerns to OSA. The questionnaire consisted of 3 parts. OSA knowledge part The OSA knowledge part consisted of 21 items, which were presented in a true or false format. Validity testing of the knowledge items used the individual content validity index (I-CVI) method, by four pediatric pulmonologists. If at least 3/4 of the expert members gave the individual items a score of relevant or extremely relevant, then the items were considered for inclusion in the final questionnaire. For internal consistency, a pilot test, conducted by 20 pediatric staff doctors, was used to assess the questionnaire's reliability. Cronbach's alpha by SPSS software was 0.572. Self-confidence part The self-confidence part consisted of 4 items, used to evaluate the confidence of pediatricians in their ability to identify children at risk of OSA, initiate treatment, and follow-up of the children with OSA as well as their confidence to give information. Practices part The practices part consisted of 3 items, used to evaluate the number of OSA cases each month, frequencies of performing history taking for OSA symptoms and therapies for OSA management. Study Procedure The questionnaire was created on a Google form, and submitted to convenient pediatrician participants, using the Line application and E-mail. There was a consent paragraph in the participant's information sheet, and an informed consent process was done in active voluntary action to complete the survey online. The questionnaires were completed anonymously. Data management and analysis Data were collected from the Google forms, and the analysis used R program version 3.5.1. For the knowledge parts, the total scores were presented as percentages, median and interquartile range (IQR). The chi-square test, Kruskal-Wallis test, rank sum test, and the logistic regression model were used to assess the differences between the knowledge score and associated factors. Spearman's rank correlation was used to evaluate the relationship between the total knowledge score and self-confidence score. 
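As a purely illustrative sketch, the nonparametric comparison and the rank correlation described in this section could be computed in Python with SciPy as shown below. The score arrays are hypothetical placeholders (they are not the survey data), and the variable names are introduced only for this example.

import numpy as np
from scipy import stats

# Hypothetical total knowledge scores (out of 21) for three specialty groups;
# these values are placeholders, not the survey data.
pulmonologists = np.array([20, 21, 19, 20, 21, 20])
allergists = np.array([19, 18, 20, 19, 18, 19])
general_pediatricians = np.array([18, 19, 17, 19, 18, 20])

# Kruskal-Wallis test for differences in knowledge score across specialties
h_stat, p_kw = stats.kruskal(pulmonologists, allergists, general_pediatricians)

# Spearman's rank correlation between total knowledge and self-confidence scores
knowledge = np.array([18, 19, 21, 17, 20, 19, 18, 21])
confidence = np.array([3, 3, 4, 2, 4, 3, 3, 4])  # e.g. a 1-4 Likert scale
rho, p_rho = stats.spearmanr(knowledge, confidence)

print(f"Kruskal-Wallis: H = {h_stat:.2f}, p = {p_kw:.3f}")
print(f"Spearman: rho = {rho:.2f}, p = {p_rho:.3f}")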
A P-value < 0.05 was considered statistically significant. For practice items, the number of children with OSA in general practice is presented as mean. The frequency of history taking of OSA and OSA treatment are reported as percentages. Results The convenient respondents totaled 307 pediatricians, ranging from 28 to 60 years of age. Characteristics of the respondents are shown in Table I. Most of the respondents were female (82.7%), and nearly half were general pediatricians (40.4%). More than half of the respondents had less than 10 years of experience, since pediatric board graduation. There were 128 community-based pediatricians and 147 pediatricians who worked in teaching hospitals. Most of the respondents worked in either central Thailand (45.3%) or southern Thailand (27.7%). OSA knowledge part From the 21 knowledge items, the mean, total knowledge score was 18.5; with the median being 19 (range 14-21). All sub-categorical knowledge domain scores are shown in Table I. The responses in each individual knowledge item are listed in Table II. Most of the knowledge items were answered correctly (range 63.8-99.7%). Factors associated with knowledge score We compared the total knowledge score and knowledge score in sub-categorical knowledge domains between specialty training using the Kruskal-Wallis test. Pediatric pulmonologists had a significantly higher, total knowledge score (P = 0.045) as well as identification & evaluation score (P = 0.047) than non-pulmonologist pediatricians. No difference in total knowledge score and sub-categorical knowledge domains were observed between pediatricians who work in community or teaching hospitals, nor between ≤ 10 and > 10 years of pediatrics practice experience (Table III). Factors associated with incorrect answers in focus items The overall percentage of correct responses in the 4 knowledge items (items 6, 8, 14, and 15) was lower than 80%. Linear regression analysis was performed to evaluate the factor determinants of the incorrect answers in these focus items. Practice part Overall, Thai pediatricians in their practices saw an average of 5.9 cases per month (range 1-60) of childhood OSA, while the pediatric pulmonologists saw 15 cases per month of children with OSA. Seventy-one percent of Thai pediatricians "always" screened for OSA in obese children, but only 13.4% of Thai pediatricians "always" asked about OSA symptoms in general medical check-ups. Figure 1 shows the percentage of therapies prescribed by Thai pediatricians for OSA management. Intranasal corticosteroids (INS) were "often/always" prescribed in 44.6% and 30.9%, respectively. Thirty-point seven percent of respondents reported "often" and 14.7% reported "always" as prescribing montelukast. Systemic corticosteroids (for example; prednisolone/dexamethasone) and oxygen therapy were rarely used for management of children with OSA. Discussion This PSU-OSA Survey aimed to explore Practice, Self-confidence, and Understanding of pediatric OSA among Thai pediatricians. This is the first childhood OSA survey in Thailand, since the Thai Guidelines for Childhood Obstructive Sleep Apnea was first published in 2015. Overall, the study found that Thai pediatricians had a high self-confidence score, which indicated that they were confident in their ability to identify, their management, and follow-up of children with OSA. We found that among Thai pediatricians, 91.9% had a total OSA knowledge score ≥ 80%. Compared to a previous study in the United States; Uong et al. 
8 , found that the mean knowledge score in pediatric OSA was 69.6%. Moreover, the results of surveys concerning adult OSA also had similar findings, where the overall knowledge scores ranged from 66% to 76%. [12][13][14] The results of this study had higher knowledge scores, because the population in this study included only pediatricians; whereas the previous studies included primary physicians and pediatricians, additionally the question items were also different. However, 4 of the 21 knowledge items were problematic. There was a discrepancy in the answers of item 14. The study results from the practice part showed that almost half of the responders prescribed montelukast to treat OSA in general practice. From a linear regression analysis we found significantly higher ORs in pediatric allergists (adjusted OR 2.89) and pulmonologists (adjusted OR 2.29) who answered that montelukast was used as the first line OSA therapy in young children, compared to the general pediatricians and other sub-specialties. This was possibly caused by young patients, particularly under the age of 3 years who were referred to the specialist, according to the recommendation of the Thai Guidelines for OSA. Furthermore, montelukast is approved for patients aged 6 months or older, that was younger than the age-approval of intranasal corticosteroids (INS), which is older than 2 years of age. 15 When we focused on other items, the associated factors with incorrect answers were subspecialty, general pediatricians and pediatric allergists, who had significant ORs of incorrect answers. Different levels of training possibly had an effect on knowledge. Unlike a previous study, we didn't find significant differences in the incorrect answers in terms of years of practice or place of work. This may have implied that the national recommendations, which are accessible to all physicians, caused overall knowledge homogeneity. We found that 87.3% of the responders knew that INS help reduced the size of the tonsils and the adenoid gland (item 16). Adenotonsillar hypertrophy is a common etiology of childhood OSA, but up to 24.4% of responders never or rarely used INS for the management of OSA. This finding exemplified a barrier of knowledge. Bridgeman MB. reported several barriers that can impede the use of INS, including concerns about safety and steroid side effects; especially growth suppression, a child's resistance towards intranasal medication, undesirable sensations associated with intranasal administration, and misperceptions regarding the loss of response from frequent use. 16 The true barriers of INS among Thai pediatricians need to be explored. Overall, in general practice of OSA, we found that Thai pediatricians saw 5.9 cases per month of children who were suspected of having OSA. However, in the sub-specialty analysis, general pediatricians reported only 2 cases per month, while pediatric pulmonologists reported an average of 15 cases per month. These findings reflect the fact that most childhood OSA patients in Thailand were seen by OSA specialists. Despite, the high level of OSA knowledge, and good self-confidence in OSA practice observed among general pediatricians, they reported a low number of patients in clinical practice. Interventions to encourage general pediatricians to participate in OSA practice may be needed. Our study found that 86.6% of the responders did not routinely ask about OSA symptoms in general medical check-ups, in spite of the recommendations of AAP and the Thai guidelines for childhood OSA. 
These findings were consistent with a previous study from Erichsen and Rosen that offered evidence of a low OSA recognition rate and unawareness of pediatricians; particularly general pediatricians concerning the screening of OSA. 6,7 Therefore, interventions to increase OSA awareness and encouragement of pediatricians to perform history taking for OSA symptoms are needed to find children who are at risk of OSA. This would provide for early detection and optimize OSA management outcomes. The strength of this study is the information on OSA practice in Thailand, based on an adequate sample size and the demographic data, which included: age, specialty training, years of pediatric experience and type of hospital. Although, our study discovered problems in the general practice of OSA, it may not explain the cause of the problems; particularly in medications used to manage OSA and the barriers from knowledge to practice. More focus is needed on education and intervention, so as to identify and overcome the barriers for the use of INS. This study has limitations. First, this study possibly had self-selection bias. 17 Additionally, the respondents were those who had access to Line application and Google form; so the calculation for non-respondents was added, and the final participant numbers were met. In addition, we could not calculate the number of non-respondents, because we could not access the list of Thai pediatrician emails due to confidentiality concerns. Good knowledge and self-confidence in the management of childhood OSA was observed among Thai pediatricians; whereas, a low recognition rate and unawareness of OSA screening is still problematic. Misunderstandings in some knowledge points were identified; especially concerning medications, including INS and montelukast.
The impact of urban road network morphology on pedestrian wayfinding behavior : Pedestrians do not always choose the shortest available route during the process of wayfinding. Instead, their route choices are influenced by strategies, also known as wayfinding heuristics. These heuristics aim to minimize cognitive effort of the pedestrian and their application usually leads to satisfactory route choices. Our previous study evaluated and analyzed resultant routes from the application of four well-known pedestrian wayfinding heuristics across nine distinct network morphologies via simulation. It was observed that the variation in the cost (difference in route length between a heuristic route and the shortest route, expressed as a percentage of the shortest route length) across the four heuristics increased with an increase in the irregularity of the network. Based on these results, we claimed that, people may opt for more diverse heuristics while walking through relatively regular networks, as route cost across heuristics are more similar in magnitude and thus applying any one of them would not result in a substantial difference in the travelled distance. Likewise, they may prefer specific heuristics in the relatively irregular networks, as some heuristics are significantly costlier than others, thus creating greater variation in cost across heuristics and hence would result in significantly greater travelled distances. In this study, we investigated this claim by comparing simulated routes with observed pedestrian trajectories in Beijing and Melbourne, two cities at opposite ends of This novel finding could help urban planners and future researchers in producing more accurate patterns of aggregate pedestrian movement in outdoor urban spaces. Introduction Human wayfinding in outdoor spaces involves the process of selecting segments of an existing real-world network to find a viable route between an origin and a destination [13]. During wayfinding, pedestrians do not always choose the shortest possible route [8] as they may not be able to discern it, especially when the shortest routes are complex. Hence, they apply certain strategies or wayfinding heuristics that attempt to minimize their cognitive effort [3]. For example, pedestrians may seek routes with fewer turns-routes that are simpler in nature, hence require less cognitive effort or are shorter to communicate and memorize-even if this route is not geometrically the shortest one. This strategy of wayfinding in a street network and reaching the destination with the fewest number of turns is one wayfinding heuristic. Like this 'Fewest turns strategy', there exists multiple well-established wayfinding heuristics that are known to be applied by pedestrians. These wayfinding heuristics are applied by pedestrians irrespective of their level of spatial aptitude or familiarity with a given road network. Empirical studies have revealed that pedestrians switch between wayfinding strategies with a change in the ambient environment. Through his experiments, Golledge [13] inferred that "perceptions of the configuration of the environment itself (particularly different perspectives as one changes direction) may influence route choice." This gives us the impression that may be human beings are able to understand that, given a type of network morphology, certain heuristics are better at optimizing not just cognitive effort, but physical effort (in terms of distance travelled) as well. 
We say that certain heuristics are (on average) less costly than others in certain types of road networks, taking into account the difference in route length between the heuristic route and the shortest possible route. In this regard, our previous work [5] showed through simulation that although some heuristics are consistently cheaper and some are consistently costlier across nine different types of network morphologies, the variation in cost across these wayfinding heuristics is dependent on the regularity of the network structure, as inferred from visual assessment. It was observed that more regular networks had lesser variation in cost across heuristics while more irregular networks experienced more variation. For example, in Melbourne, the observed standard deviation in route cost was 6.96% while the corresponding statistic in Beijing was 9.39% (these numbers although not present in [5] are derived from the same analysis). Regularity of network morphologies was based on the analysis and findings by [33]. The results supported the argument that pedestrians possibly opt for a variety of heuristics in regular networks while opting for specific heuristics (or avoiding them) in irregular ones. While we arrived at this conclusion by thoroughly simulating four wayfinding heuristics in nine network morphologies following a systematic methodology, the simulation approach had to use some assumptions. While analysis of the simulated routes across different network structures helped us formulate this hypothesis, yet we could not claim with confidence that this is representative of actual pedestrian behavior. For ground-were labelled with their corresponding transportation mode, we filtered walking points, employed trip segmentation thresholds to differentiate between individual trips, and performed map matching (matching raw GPS trajectories to appropriate segments of the underlying pedestrian network) to obtain the actual traversed routes. For the same origindestination pairs, we obtained the theoretical heuristic routes using simulation of four wayfinding heuristics. Consequently, we used Network Hausdorff Distance (NHD) to derive (dis)similarity between actual and simulated routes to infer heuristics chosen by pedestrians, either partially or fully. The paper is organized as follows. Section 2 contains a review of the existing literature along with the heuristic algorithms proposed in our previous study while Section 3 talks about the datasets used in this study. Section 4 contains the detailed methodology followed in this study. Section 5 presents some preliminary findings and Section 6 discusses the findings and presents relevant arguments in relation to the same. Wayfinding heuristics Several studies have explored human wayfinding strategies in outdoor spaces. A review of existing wayfinding literature reveals the existence of multiple heuristics that are applied by pedestrians. These heuristics have been theorised based on observations of actual and probable pedestrian behavior in relatively small environments [3, 8-10, 13, 19]. Comparison between wayfinding heuristics has been done on a small scale by [20]. In contrast, in our previous work heuristic routes were simulated in a relatively larger, city-wide scale to investigate the impact of network morphology on pedestrian wayfinding decisions [5]. These simulated routes represented theoretical routes chosen by pedestrians applying a single heuristic consistently during their wayfinding exercise. 
The heuristics chosen were modified least angle strategy, longest-leg first strategy, shortest-leg first strategy and fewest turns strategy. Although there exists a host of other wayfinding heuristics, only the aforementioned ones are geometric in nature and thus dependent on network morphology. In these heuristics, the location of taking a turn or the number of turns taken during wayfinding determine the route choice. Human perceptions and conceptualizations vary, so what accounts to form a turn is vague from a cognitive perspective. But also representations of walkable features in databases vary in their level of abstraction and detail, challenging additionally to define what constitutes a turn. Accordingly, our previous study [5] defined a 'turn' as follows: "If two consecutive road segments in a route have a deflection angle (difference in bearing) of 45°or more, the move from one to the other is considered a turn." This definition was applied to appropriate levels of geometric abstraction. It led to satisfactory outputs according to visualizations of randomly sampled routes. But any research is sensitive to the chosen threshold value. The four chosen heuristics and the implemented algorithms are discussed briefly as follows. Modified Least angle strategy: [19] proposed a real-world wayfinding heuristic called 'least angle strategy' which can be applied in an unknown environment if the destination can be perceived directly by the navigator, at least at the beginning of the navigation process. At each decision point, the pedestrian prefers the road segment which has the least deviation from the direction of the intended destination. However, the original least angle www.josis.org strategy [19] has a significant shortcoming. The algorithm resulted in significantly longer routes more often by taking impractical detours in real street networks, meaning that these routes would not be chosen by a pedestrian during wayfinding. For example, in cases where the algorithm chooses a road segment over others based on least angle, and then the consequent roads led to detours, the results were not representative. In this paper we modified the least angle strategy as shown in Algorithm 1 to avoid similar shortcomings. It preserves the principle philosophy without running into large outliers, making it more competitive. In other words, this modified version resulted in more realistic routes, more often. It makes use of the A-star algorithm where the difference between two bearings, one between the origin and the destination, and the other between any node and the destination, has been selected as the heuristic. This is termed as deflection angle. A large positive number has been multiplied with deflection angle so that route selection by A-star algorithm depends, almost entirely, on selecting nodes that minimize the deflection angle and not the length of the edges of the road network. While this algorithm is not fully robust, it results in appropriate routes similar to what the original least angle heuristic should have resulted in under practical circumstances. Hence, we decided to implement this algorithm for our study and refer to this as the Modified Least Angle strategy in this paper henceforth. 
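To make the strategy concrete, a minimal Python sketch of this A*-based implementation is given below (the formal pseudocode follows in Algorithm 1). It assumes a networkx graph whose edges carry a 'length' attribute and a dictionary pos mapping node ids to (lat, lon) coordinates; the helper names and the coordinate convention are ours, not part of the original specification.

import math
import networkx as nx

def bearing(p, q):
    # Initial compass bearing in degrees from p to q, both given as (lat, lon).
    lat1, lon1, lat2, lon2 = map(math.radians, (*p, *q))
    d_lon = lon2 - lon1
    x = math.sin(d_lon) * math.cos(lat2)
    y = math.cos(lat1) * math.sin(lat2) - math.sin(lat1) * math.cos(lat2) * math.cos(d_lon)
    return (math.degrees(math.atan2(x, y)) + 360) % 360

def modified_least_angle_route(G, pos, origin, destination):
    # A* where the heuristic is the heavily scaled deflection angle between the
    # origin-destination bearing and the node-destination bearing, so that the
    # angular term, not edge length, dominates the route choice.
    target_angle = bearing(pos[origin], pos[destination])

    def deflection(node, _target):
        node_angle = bearing(pos[node], pos[destination])
        return 100000 * abs(target_angle - node_angle)

    return nx.astar_path(G, origin, destination, heuristic=deflection, weight="length")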
Algorithm 1 Modified Least Angle strategy algorithm after [5] Require: An undirected graph G = (N, E), where N is the set of nodes and E is the set of edges in the network with edge_weight ← edge_length origin, destination ∈ N 1: Define heuristic: target_angle ← bearing(origin,destination) node_angle ← bearing(node,destination) def lection_angle ← absolute_value(target_angle -node_angle) return 100000 * def lection_angle (so that edge_length has minimum influence on chosen route) 2: Compute heuristic for all node ∈ N 3: route ← A-Star_shortest_route (origin,destination, heuristic) 4: Return route Longest Leg First strategy: The longest leg first strategy involves basing decisions disproportionately on the straightness of the initial segments of the routes [3]. The pedestrian chooses to prefer longer and straighter initial segments to reach as close as possible to their destination, without taking a 'turn' and thereby reducing the cognitive effort spent during wayfinding. This heuristic is also popularly known as the 'initial segment strategy'. The algorithm has been provided in Algorithm 2. Algorithm 2 Longest Leg First strategy algorithm after [5] Require: An undirected graph G = (N, E), where N is the set of nodes and E is the set of edges in the network with edge_weight ← edge_length origin, destination ∈ N Nomenclature: N T _nodes = nodes which can be traversed from origin without taking a turn 1: Search for all N T _nodes in the graph using Breadth-First Search 2: Derive shortest path from destination to all node ∈ N T _nodes using dijkstra_path(destination,node) 3: route_node ← node which satisfies min(dijkstra_path_length(destination,node)) 4: f inal_segment ← dijkstra_path(route_node,destination) 5: initial_segment ← traversed_path(origin,route_node) 6: route ← append(initial_segment,f inal_segment) 7: Return route Shortest Leg First strategy: Although [13] and [9] have mentioned the shortest leg first strategy as one of the least preferred wayfinding heuristics by pedestrians, there was no formal definition found in the literature. Hence, for this study, we have assumed that this strategy involves taking turns in the initial portion of the route to keep the latter portions as straight as possible. [20] stated that shorter initial legs provide pedestrians with the choice to explore further alternatives quickly at the next decision point, to reduce the cost of potentially required backtracking when compared to long initial segments. Based on our understanding, we have obtained the shortest leg first route for an OD pair by swapping the positions of origin and destination in Algorithm 2. Fewest Turns strategy: [13] observed that the fewest turns strategy is the most popular wayfinding strategy and ranked it just after shortest distance and least time criteria. [46] developed modified wayfinding algorithms based on this heuristic. Pedestrians tend to choose routes involving the fewest number of turns that result in so called simpler routes, since turns involve decision making and increased cognitive effort. Our algorithm involves reaching a set of nodes from the origin that do not require taking a turn, and then selecting from that set, the node closest to the destination, and repeating the entire process at every turn until the destination is reached. A visual illustration of typical heuristic routes for a fixed origin-destination pair on an urban pedestrian network has been shown Figure 1. 
The example routes were simulated on the pedestrian network of New Orleans, a city that was included in our previous study. As the city has a grid-like network, the contrast between the heuristic routes are apparent as the heuristics tend to show their typical route choice outcomes. Algorithm 3 Fewest Turns strategy algorithm after [5] Require: An undirected graph G = (N, E), where N is the set of nodes and E is the set of edges in the network with edge_weight ← edge_length origin, destination ∈ N Nomenclature: N T _nodes = nodes which can be traversed from origin without taking a turn 1: temp_route_node ← origin 2: while temp_route_node = destination do 3: Search for all N T _nodes in the graph using Breadth-First Search 4: Calculate shortest path from all N T _nodes to the destination using Dijkstra's shortest path algorithm 5: route_node ← node ∈ N T _nodes which satisfies min(dijsktra_path_length(destination,node)) 6: temp_route_segment ← traversed_path(temp_route_node,route_node) 7: route ← append(route,temp_route_segment) 8: temp_route_node ← route_node 9: end while 10: Return route Thompson et al. [33] used convolutional neural network (CNN) to study precinct-level images of maps of 1667 cities around the world. The images (1,000 images for each city, making a total 1.667 million images) provided a high-level abstraction of the urban characteristics of interest, primarily road networks and rail transit networks. Through this visual classification technique, this study was able to capture the diversity of urban design and morphology in relation to land transport on a global scale. Nine distinct city types were identified based on the shape and extent of road and rail infrastructure networks. Melbourne, a city that evolved post-motorization, was classified as a 'Motor' city characterized www.josis.org by highly organized, medium to low density, grid-based road networks. On the other hand, Beijing was classified as 'Irregular' based on the more irregular morphology of their road and rail network that has been influenced by historic planning regimes. Hence we selected the two cities, Melbourne and Beijing, for this study as their road network morphology has been established to be contrasting [33]. Map matching Map matching is referred to the process of matching observed GPS points (latitude, longitude, timestamp) to a sequence of existing road segments. Raw GPS traces are often inaccurate with the accuracy varying from a few metres to sometimes 1-2 kilometers. These inaccuracies are due to a range of reasons, including atmospheric influences on GPS signals and the presence of urban canyons and other terrestrial features that are likely to affect GPS signals [34]. Due to the level of noise in the GPS signals simple map matching of the observed points to their nearest street segment may result in inaccurate results. Hence, geometrical and topological constraints of the road network are necessary to build a path with an acceptable level of probability that it was traversed. Multiple solutions of the map matching problem under various ground conditions have been suggested [7,16,24,38]. Newson and Krumm [26] proposed a map matching algorithm based on the principles of hidden-Markov models (HMM). They stated that the HMM was found to be successful in accounting for measurement noise and road network layout. To overcome some limitations of the aforementioned approach, Meert and Verbeke [25] proposed a new map matching approach by implementing HMMs with non-emitting states. 
In this study, we have made use of their algorithm in the form of Python codes publicly shared in GitHub (https://github.com/wannesm/LeuvenMapMatching). Route similarity One important aspect of trajectory data analysis is the similarity measurement of trajectories. Trajectories are composed of "a sequence of time-stamped locations" [17]. Past studies have made use of Euclidian space and calculated trajectory similarity based on Euclidian distance [36,39,45]. But Euclidian distance is not an appropriate measurement tool in road network space where topological constraints exist. Hence, more recent studies have used network distance instead of Euclidian distance for measuring the similarity between a pair of trajectories [11,18,22]. Furthermore, there exists noise in GPS data which results in the points not coinciding with the underlying road network for which map matching was done, as mentioned in Section 2.3. Hence, to compare network-based trajectories which have been mapped to the underlying road network (to form a sequence of nodes traversed), it is essential to use appropriate similarity metrics based on network constraints and not the ones based on Euclidian space. Thus, we employ Hausdorff distance, a commonly used similarity measure used in computational geometry [21] with recent advances using it for inferring trajectory similarity [11]. In our study, we use the definition of network Hausdorff distance (NHD) between two trajectories, a version of the original Hausdorff distance modified for applications on networks, as described in [11]. Calculation of NHD has been based on Equation 1: where t i and t j are two trajectories, n and m are nodes belonging to t i and t j respectively, and dist indicates Dijkstra's shortest-path distance between points n and m. Thus, to compute NHD between t i and t j , one needs to • compute Dijkstra's shortest path with edge length as weights between a node in t i and all the nodes in t j , • choose the minimum value among all the computed shortest route lengths, • repeat the process for all other nodes of t i , and • retrieve minimum values for all other nodes of t i . • The maximum value from the set of obtained minimum values gives the NHD. As has been shown in [11], NHD between t i and t j and t j and t i may not be the same, meaning NHD could result in assymetric distances depending on network configuration. Hence, during computation, NHD has been calculated between the actual (map-matched) route and the simulated heuristic route and not the other way around, for the sake of consistency. Also, the relationship between NHD and lengths of two routes is not trivial, in the sense that they may not be directly proportional. www.josis.org NHD (in meters) is a measure of how similar (or dissimilar) two routes in a road network are. The greater the magnitude of NHD, the more is the dissimilarity. For example, if NHD between the actual route and theoretical route followed by heuristic A is 50 meters and that with heuristic B is 90 meters, it indicates that the similarity between the actual and heuristic A route is more than that with heuristic B route. A positive NHD value shows that there exists some difference between two routes and the similarity is approximate. A zero NHD value indicates that the two routes are one and the same, only in cases where the start and end point of two routes are the same (as is in this study). Thus, from the above example, we infer that the actual route follows heuristic A approximately more closely than heuristic B. 
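For illustration, the directed NHD of Equation 1 can be computed on a networkx pedestrian graph as sketched below; the routes are passed as sequences of node ids, the edge attribute 'length' is assumed to hold segment lengths in metres, and the function name is ours.

import networkx as nx

def network_hausdorff_distance(G, t_i, t_j, weight="length"):
    # Directed NHD(t_i, t_j): for every node n of t_i take the shortest network
    # distance to its nearest node in t_j, then return the maximum of these minima.
    minima = []
    for n in t_i:
        dist = nx.single_source_dijkstra_path_length(G, n, weight=weight)
        minima.append(min(dist[m] for m in t_j if m in dist))
    return max(minima)

# Consistent with the convention above, the first argument is the map-matched
# (actual) route and the second is the simulated heuristic route:
# nhd = network_hausdorff_distance(G_walk, matched_nodes, heuristic_nodes)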
OpenStreetMap data quality The assessment of the data quality of OpenStreetMap (OSM) has caught the attention of researchers over the recent years, given its massive increase in patronage. OpenStreetMap is volunteered geographic information (VGI) wherein volunteers acquire spatial information and upload it for public use. Past OSM data quality analyses against conventional geographic information sources have revealed that the completeness of data varies with land use (urban vs rural), country (developed vs developing) and road type (motorways vs pedestrian ways) [42] as OSM is dependent on the contribution of data from volunteers in a given area. Hence, concerns about the credibility of research using OSM data must be carefully addressed. A study conducted in all the states in the US revealed that the coverage of pedestrian network data in OSM was higher than the US Census TIGER/Line data contrary to motorways [48]. Furthermore, Zielstra and Hochmair [47] in 2012 compared OSM with different proprietary geo-datasets in the US and Germany and concluded that the OSM database was relatively complete and can be used effectively for pedestrian routing. To further strengthen the argument in favor of OSM's pedestrian data completeness, Novack et al. [27] relied entirely on OSM data for proposing a system that generates pleasant pedestrian routes, and Gil [12] proposed a multimodal urban network model using OSM network data including pedestrian ways. Australia is among the top countries in terms of the ever-increasing OSM data completeness [23] where studies have focused on routing based on OSM street network data [30]. In China, OSM data related to Beijing has been reported to be fairly complete [41,42]. Based on these evidences, and the fact that the coverage and quality of OpenStreetMap data is growing day by day, we argue that the use of OpenStreetMap data for this study is justified, although we concede that occasionally OpenStreetMap may suffer from incompleteness and hence cannot be considered to be robust. For our study, we import pedestrian networks of Beijing and Melbourne from OpenStreetMap [6] which have been illustrated in Figure 2. We have used the Python package OSMnx [6] for extracting network information from OpenStreetMap. The overall road network structure between Beijing and Melbourne may not appear to be too dissimilar only when looking at it at a large scale, like Figure 2. On a closer look, Melbourne is a designed modern city, and Beijing an old city, with an elaborate pedestrian network crowded with dead ends. So, while looking at the two cities at a micro-scale, namely at a scale akin to a pedestrian's average walking route, it can be observed that Melbourne retains its regular grid-like pattern (even as we move into the suburbs) while Beijing does not. There are a host of studies which analyze aggregate city networks using complexity measures such as average circuity, entropy, and centrality. These complexity measures (conducted on a large city-wide scale) do not always reveal the true nature of street network orientation. In this study, we are interested in studying pedestrian movements. Pedestrian movements are very different from movement via other transportation modes as (a) pedestrian movements are mostly limited to shorter trip distances and (b) pedestrian movements do not always conform to the major roads, but mostly are concentrated within the arterial and sub-arterial streets. 
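As an illustration of the extraction step, pedestrian ('walk') networks for the two study areas can be downloaded with OSMnx roughly as follows; the centre coordinates and the 800 m extent are arbitrary examples chosen to reflect a pedestrian-relevant scale, not values used in the study.

import osmnx as ox

# Example extraction of walkable street networks around two illustrative points;
# the coordinates and the 800 m radius are placeholders, not the study extents.
melbourne_walk = ox.graph_from_point((-37.8136, 144.9631), dist=800, network_type="walk")
beijing_walk = ox.graph_from_point((39.9042, 116.4074), dist=800, network_type="walk")

print(len(melbourne_walk.nodes), len(melbourne_walk.edges))
print(len(beijing_walk.nodes), len(beijing_walk.edges))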
Hence, we felt that the complexity measures at the city-scale are not entirely appropriate for our study. The original study which we rely on for our choice of study areas [33], analyzed 1,000 map images for each of the cities at smaller scales (400m x 400m, which is a relevant scale for pedestrian movement) and concluded that Melbourne and Beijing street network morphologies are of contrasting nature. We present sample figures (Figure 3 and Figure 4) of typical pedestrian network structure in both the cities at a much smaller scale. Here the contrast between the two cities becomes more apparent. Beijing dataset This GPS trajectory dataset was collected in Microsoft Research Asia's Geolife project by 182 users in a period of over five years (from April 2007 to August 2012) [43,44]. The raw dataset contains 17,621 trajectories with a total distance of 1.2 million kilometers and a total duration of more than 50,000 hours. These trajectories were recorded by different GPS loggers and GPS-phones, and have a variety of sampling rates. 91.5 percent of the trajectories are logged in a dense representation, e.g. every 1 to 5 seconds or every 5 to 10 meters per point. Although this dataset is distributed over 30 cities of China and in some cities located in the USA and Europe, the majority pertains to Beijing, China. A substantial portion of the data was labelled by the users generating the data with the corresponding travel mode. In our study, we have limited our algorithms to the labelled portion of this large dataset (10.4 million GPS points, 9,070 trajectories from 70 users). Melbourne dataset Data for Melbourne was generated from the Victorian Future Mobility Sensing Project which was part of a new Urban Mobility and Intelligent Transportation initiative by the University of Melbourne, in partnership with Department of Economic Development, Jobs, Transport and Resources (DEDJTR), Massachusetts Institute of Technology (MIT), and Singapore-MIT Alliance for Research and Technology (SMART). The project collected personal travel data using a download-able smartphone application developed by SMART. Mode detection techniques were applied on the raw data to infer the transportation mode. The inferred modes were validated from the survey participants by asking them at the end of each day. Survey respondents were typically asked to complete the survey for 14 days, including five continuous days [31]. The raw dataset contains 1.2 million GPS points contributed by 84 users. Trip segmentation In the first step, for each user, raw GPS points having transportation mode label as 'walk' or equivalent were filtered. Consequently, we obtained a series of GPS data points for each user in chronological order. These GPS points needed to be clustered into separate walking trips which would then be further analyzed. Thus, in the second step, trip segmentation criteria were applied to the filtered set of GPS points. A review of existing trip identification literature indicated that trip segmentation thresholds (also known as 'dwell time') are applied under two conditions: GPS signal-available situation and GPS signal-lost situation [14,15]. It can be observed in [14] that the signal-available dwell time thresholds are consistently smaller than the signal-lost dwell time thresholds. This dwell time thresholds tend to vary with characteristics of local activity and ranges between 45 and 900 seconds [32]. 
The Trip Identification and Analysis System (TIAS) concludes 'confident' trip ends for dwell times greater than 300 seconds [2]. For our study, we have selected a threshold of 300 seconds for differentiating between consecutive walking trips. Although the participants in the datasets had labelled their data by stating the duration of travel in certain transportation modes, plotting the GPS points clearly indicated unreasonable spatial gaps between two clusters of points inside the same walking trip. This indicated that using only a time-based threshold was not appropriate for trip segmentation, due to occasional erroneous labelling of transport mode by the survey participants. For example, there could be a chance that a participant took a motorized mode of transport for a very short duration (less than 300 seconds) and, instead of differentiating that non-walking trip, incorporated it under the encompassing walking trip by mistake. This resulted in erroneous map matching, as observed from trials. One such instance from the Beijing dataset is illustrated in Figure 5. But such observations could stem from noisy GPS points as well. To remove such potentially erroneous labelling and, at the same time, avoid trip segmentation due to noisy GPS data points (outliers), we have supplemented the first trip segmentation threshold with an additional threshold. Here, we check whether the time difference between two consecutive data points is greater than 20 seconds. If not, then we do not consider trip segmentation and thus avoid segmenting trips due to outliers. Otherwise, we calculate the velocity between the two points by dividing the great-circle distance by the time gap. If the velocity is unreasonable (greater than 2 meters/second) in terms of human walking speeds, the trip is segmented. The flowchart for this method is illustrated in Figure 6. While the aforementioned trip segmentation does not guarantee robust results, from our observations on the datasets (with sampling rates of less than 20 seconds), these thresholds provide satisfactory outcomes.

Activity locations

Apart from trip segmentation, there is the aspect of identification of activity locations that reside at the end points of trips [15]. Observation of plots of some trajectories revealed that their points were clustered in a small geographic area, indicating the occurrence of an activity rather than a trip. It was necessary to remove such instances to obtain more representative results, since our study is interested in routes and their characteristics and not the origins and destinations where activities take place. One study showed the use of a sophisticated algorithm for inferring activity locations by employing density-based spatial clustering of applications with noise (DBSCAN) and support vector machines (SVM) [15], while another study applied distance and time thresholds to do the same [37]. We have applied distance and time thresholds of 200 metres and 20 minutes to remove such instances, following [37], as in Equation 2: a trip is treated as an activity location if Dist(p_1, p_n) ≤ 200 m and T_d ≥ 20 min, where Dist(p_1, p_n) refers to the Haversine distance between the first point p_1 and the last point p_n of the inferred trip and T_d is the time duration. In addition to the above criteria, after map matching, we have checked whether the map-matched route distance exceeds twice the length of the corresponding shortest route or whether the length of the shortest route is equal to zero, indicating a possible round-trip.
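A minimal sketch of the filters just described is given below: the 300-second dwell gap, the speed check applied only to gaps longer than 20 seconds, the 200 m / 20 min activity-location rule of Equation 2, and the post-map-matching detour and round-trip check. The helper names and data layout are our own illustration of the procedure, not the authors' code.

```python
from math import radians, sin, cos, asin, sqrt

DWELL_GAP_S = 300        # gap that always ends a walking trip
SPEED_GAP_S = 20         # minimum gap before the speed check is applied
MAX_WALK_SPEED = 2.0     # m/s, upper bound for plausible walking speed

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371000 * asin(sqrt(a))

def segment_trips(points):
    """points: chronologically ordered (t_seconds, lat, lon) tuples labelled 'walk'."""
    trips, current = [], [points[0]]
    for (t0, la0, lo0), (t1, la1, lo1) in zip(points, points[1:]):
        gap = t1 - t0
        speed = haversine_m(la0, lo0, la1, lo1) / gap if gap > 0 else 0.0
        if gap > DWELL_GAP_S or (gap > SPEED_GAP_S and speed > MAX_WALK_SPEED):
            trips.append(current)      # close the current trip and start a new one
            current = []
        current.append((t1, la1, lo1))
    trips.append(current)
    return trips

def is_activity(trip):
    """Equation 2: endpoints within 200 m while the trip lasts 20 minutes or more."""
    (t0, la0, lo0), (tn, lan, lon_) = trip[0], trip[-1]
    return haversine_m(la0, lo0, lan, lon_) <= 200 and (tn - t0) >= 20 * 60

def is_round_trip_or_detour(matched_len_m, shortest_len_m):
    """Post-map-matching check of the matched route against the shortest route."""
    return shortest_len_m == 0 or matched_len_m > 2 * shortest_len_m
```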
We have consequently removed such activity-based trips and round-trips, which are not relevant for this study, as including them in our analysis would make our results less representative of ground truth.

Filtering walking trips based on trip duration

In our previous work [5], we simulated heuristic routes between a pair of origin and destination only if the length of the shortest route between them fell inside the range of 400 metres (equivalent to a 5-minute walk) to 2,000 metres (equivalent to a 25-minute walk), based on the reviewed literature [1,28,29,35,40]. In this study, we have only considered walking trips whose duration is at least five minutes and not more than 25 minutes. Trips shorter than five minutes rarely deviate from the shortest route, with actual routes and wayfinding heuristic routes coinciding with it. Trips longer than 25 minutes are rarely non-activity-based trips and have a high chance of having multiple destinations instead of just one.

Removing trips made outside the cities

As mentioned in Section 3.1, the Geolife dataset contains trips made outside Beijing as well. Since the scope of our study is limited to analyzing walking trips made within Beijing and Melbourne, it was necessary to remove trips that were made outside the city.

Map matching

As mentioned in Section 2.3, we have made use of a public GitHub repository based on [25] for map matching. For searching for probable consecutive road segments, we have set the search radius parameter at 300 metres. A greater value is computationally more expensive and sometimes results in more inaccurate outcomes, at least in the case of walking trajectories, where the points are closely spaced compared to their motorized counterparts. A smaller value of the search radius often results in impossible map matching, as was experienced with values of 200 and 250 metres. Map matching resulted in the algorithm returning the sequence of OSM nodes that were traversed. We have considered the first and last point of the obtained sequences as the origin and the destination of each trip, respectively. This was necessary to simulate the shortest route using Dijkstra's shortest-path algorithm and the heuristic routes using the algorithms mentioned in Section 2.1. The preprocessing methodology is illustrated in Figure 7 and the data are described in Table 1. Figure 8, illustrating the temporal distribution of the number of trips, shows two distinct peaks, one in the morning and one in the evening, in both datasets. The number of trips made during the night and early morning is significantly lower than at other times of the day. Also, the evening peak in Melbourne (5 p.m.) occurs earlier than in Beijing (6-7 p.m.), while the morning peak is similar (8-9 a.m.), hinting at a difference in usual working hours between the two cities. Furthermore, given that the temporal distribution revealed by the visualizations is typical for the population, we assume some representativeness of our datasets.

Preliminary findings

The mean route lengths of the actual (map-matched) route, the shortest possible route and the routes simulated based on the four wayfinding heuristics are illustrated in Figure 9. It can be observed that the mean route lengths (both actual and simulated) in Melbourne are consistently lower than those in Beijing, even though we had filtered trips that had a duration between 5 and 25 minutes, as mentioned in Section 4.3.
This was also observed in our previous study, where actual routes had not been analyzed but rather simulations were undertaken. If anything, the contrast between Beijing and Melbourne appears even greater than in our previous study. The mean route costs (the difference in route length between a given route and the corresponding shortest route, expressed as a percentage of the shortest route length) of the actual route and the simulated heuristic routes are illustrated in Figure 10. The variation of cost across heuristics is smaller in Melbourne (standard deviation = 3.33% and coefficient of variation = 61.62%) than in Beijing (standard deviation = 6.20% and coefficient of variation = 90.04%), a pattern that is in line with the conclusion from our previous study (Melbourne: standard deviation = 6.96% and coefficient of variation = 87.05%; Beijing: standard deviation = 9.42% and coefficient of variation = 101.80%). These route length and route cost results show that our previous study (which only used simulations) and our current study (which analyzes actual observations) both follow a similar pattern and are not contradictory. They reveal two important things. One, these preliminary findings on route length and route cost validate the results of our previous study. We do not say that the results are the same, but the pattern is apparently similar (Melbourne is more consistent than Beijing), and they support the argument of contrasting morphologies to a greater extent. It must be noted that the mean cost of Melbourne's actual route in Figure 10 is higher than Beijing's because it is computed against shorter 'shortest available routes' than in Beijing. Two, even though the spatial extents of our study areas are not confined to a 5-kilometer bounding box (as in our previous study), the contrast between the morphologies of Beijing and Melbourne remains consistent (if not increased) even at a larger scale. In our previous study, we had selected the smaller study area so that it preserved the unique morphological characteristics of the pedestrian network of each city without diminishing the morphological differences between cities. Usually, as we move further into the suburbs of a city, the morphology tends to lose its uniqueness (usually by becoming more irregular) and the density of the pedestrian network also drops drastically. As we had to consider larger study areas (for the sake of not depleting our sample sizes), we felt that we might lose the significant contrast in network structure between Beijing and Melbourne. But a closer assessment of Figure 2 shows that Melbourne's suburban pedestrian network maintains its grid-like structure much more consistently than Beijing's. That is, even at a larger scale (larger than the 5-kilometer bounding box), Melbourne is much more regular than Beijing. And this is supported by our preliminary results.

Results and discussion

To investigate the relationship between heuristic choice distribution and network morphology, we have made use of the one-way analysis of variance (ANOVA) test, which tests the null hypothesis that two or more independent groups have the same population mean. As mentioned in Section 2.4, we have opted to use NHD as our route similarity metric. NHD (in meters) is a measure to quantify the dissimilarity between two routes in a road network. The greater the magnitude of NHD, the greater the dissimilarity.
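The hypothesis test itself can be reproduced with a few lines of Python; the sketch below runs a one-way ANOVA over per-trip NHD values grouped by heuristic. The group names and the synthetic numbers are placeholders standing in for the study's four heuristics and the real NHD values.

```python
import numpy as np
from scipy.stats import f_oneway

# Placeholder data: one array of per-trip NHD values (metres) per heuristic.
rng = np.random.default_rng(42)
nhd_by_heuristic = {
    "heuristic_A": rng.normal(120, 15, 300),
    "heuristic_B": rng.normal(122, 15, 300),
    "heuristic_C": rng.normal(140, 15, 300),
    "heuristic_D": rng.normal(119, 15, 300),
}

f_stat, p_value = f_oneway(*nhd_by_heuristic.values())
print(f"F = {f_stat:.2f}, p = {p_value:.4g}")
if p_value < 0.01:
    print("Mean NHD differs across heuristics (skewed heuristic compliance).")
else:
    print("No significant difference in mean NHD (uniform heuristic compliance).")
```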
We compare all the actual (map-matched) routes with their corresponding theoretical (heuristic) routes from both datasets based on NHD. In the context of this study and our stated hypothesis, we found that the variation of NHD across the heuristics is far more apparent in Beijing (standard deviation = 16.58 m and coefficient of variation = 11.77%) than in Melbourne (standard deviation = 7.11 m and coefficient of variation = 6.85%). The one-way ANOVA test was applied to both datasets to check whether the mean NHD obtained from the four heuristics in each city was statistically significantly different. In Melbourne, this difference was not statistically significant at the 95% confidence level. That is, the difference in mean NHD across the four heuristics is probably random in nature. In contrast, this difference was found to be statistically significant at the 99% confidence level in Beijing. This indicates strong evidence against the null hypothesis (that all four mean NHDs were equal in Beijing), which leads to its rejection. The detailed results are as follows. Results from the one-way ANOVA test make for interesting interpretations with respect to our hypothesis. Based on the findings from our previous study, we had argued that pedestrians choose heuristics by morphology, as it was rational to disregard costly heuristics in irregular networks (thus creating a skewed heuristic choice distribution) and choose any heuristic in regular networks as all were equally costly (uniform heuristic choice distribution). Thus, we hypothesized that in Melbourne the choice of heuristics would be uniformly distributed, while in Beijing this distribution would be skewed. In this study, the choice of heuristic, or rather the extent of compliance of the actual route with a heuristic route, was measured using NHD. So the distribution of heuristic choice was inferred by statistically measuring the uniformity of mean NHD values (averaged over all routes in the dataset) across all four heuristics. The ANOVA results suggest that this extent of compliance across heuristics is uniform in Melbourne. On the contrary, in Beijing the extent of compliance varies significantly across heuristics. In other words, actual routes, on average, had uniformly complied with all four heuristics in Melbourne, i.e., no one heuristic is significantly more (or less) dissimilar from the actual routes. But such was not the case in Beijing. This strengthens our argument that some heuristics are more (or less) popular in Beijing while all four heuristics are equally popular in Melbourne, owing to its more grid-like, regular pedestrian network pattern. From Figure 11, it is evident that in Beijing the Modified Least Angle heuristic is significantly less popular, as it has the least average compliance (highest mean NHD value among heuristics) with the actual routes. On the basis of these statistical validations, it can be argued with some confidence that, if mean NHD is considered a proxy for choice of heuristics, and Melbourne and Beijing are representative of their respective network morphologies, pedestrians are unbiased towards wayfinding heuristics in regular networks while being biased in irregular networks.

Conclusion

We investigated whether the network morphology of an urban pedestrian network has an impact on wayfinding heuristic choice distribution.
In our previous work, we had shown via simulation that the variation in the cost of heuristic routes was greater in irregular networks as compared to regular ones. In regular grid-like networks, all heuristics were uniformly costly and not significantly longer than the shortest available route. On the contrary, in irregular networks, some heuristics were consistently resulting in significantly costlier alternatives in comparison to the shortest available routes. Based on this rationale, we hypothesized that pedestrian actions on the ground would be in line with these findings. In other words, we had argued that pedestrians choose heuristics by morphology as it was rational to disregard costly heuristics in irregular networks (thus creating a skewed heuristic choice distribution) and choose any heuristic in regular networks as all were equally costly (uniform heuristic choice distribution). We chose Beijing and Melbourne as the two cities for our study as they were deemed to have contrasting pedestrian network morphologies (as suggested by literature). We also concluded the same via close-up visual observations of the networks, especially inside the urban and suburban blocks, where Melbourne clearly had more regular patterns than Beijing. Our preliminary findings (in terms of route length and route cost) suggested the same. In this paper, we demonstrated the use of raw GPS trajectories from both the cities in conjunction with heuristic route simulation to investigate whether these claims can be augmented with actual observations of pedestrian wayfinding behavior. Network Hausdorff Distance (NHD) was used as a measure of comparing actual routes with heuristic routes and compute the extent of compliance with our four studied heuristics. Using one-way ANOVA test on NHD values across heuristics, we established statistically that the mean NHD values for all four heuristics were not significantly different in Melbourne, but were significantly different in Beijing. This meant that actual routes had uniformly complied with all four heuristics in Melbourne but not in Beijing. In other words, heuristic choice distribution is different between the chosen cities, uniform in Melbourne and skewed in Beijing. This provided sufficient statistical evidence towards proving our hypothesis. Considering Melbourne and Beijing to be representative of regular and irregular network morphologies respectively, we generalized our conclusions and argued in favor of our hypothesis with requisite statistical evidence. As wayfinding heuristics help generate realistic aggregate movement patterns of people in urban spaces, relevant future studies should be able to make informed decision on the choice distribution of these heuristics (with multiple strategies under consideration) across the pedestrian population considered for the study, based on network morphology of the urban space studied. There were certain considerations and assumptions made in this study that need to be highlighted as well. First, we used map matching to infer actual routes from sets of raw timestamped GPS records. Map matching results in the most probable route, given the fact that GPS data often suffers from positioning errors. We have made use of a sophisticated algorithm to overcome these challenges, yet care must be taken while interpreting actual routes. 
Second, we have employed multiple space-time-based criteria to filter out activity-based trips, round trips, trips that fall outside the usual walking trip lengths, and trips where the effect of heuristics will not be pronounced. While one can argue about the appropriateness of the thresholds and their values, our judgments were based on consultation of the existing literature and observations of randomly sampled results from our datasets. Third, the Melbourne dataset used for the analysis was smaller in comparison to Beijing's. Although two datasets with closer sample sizes would have been more desirable, the usual temporal pattern of pedestrian volume in urban spaces was mirrored precisely by both datasets. Also, from our preliminary findings, results from the Melbourne dataset compared intuitively with the Beijing data, even though its sample size was considerably smaller. Hence, we believe that the findings of our study are not undermined in this regard. Another important consideration in reference to the datasets is the existence of super-users (users contributing heavily to the datasets). This is evident from Figure 12, where clearly some users have contributed more than the rest (users #153 and #86 in Beijing and users #73 and #153 in Melbourne). As they are present in both datasets, super-users influencing the results and acting as the differentiating factor between the two cities seems highly unlikely. While user bias can produce misleading results [4], it is important to note the context of the study, which in this case is the heuristic choice popularity distribution, and not the popularity of any specific route or street segment. In the context of this study, there could be cases where super-users, by recording their weekday walking trips using the same route (and thus the same heuristic), influence one heuristic more than others. But these super-users have not only shared their weekday walking trips, but also other recreational trips with varying heuristics, much more than other participants. People do not apply the same heuristics in every situation, and they tend to switch depending upon the environment. From visual assessment of individual heuristic choice distributions, we observed that these super-users were not disproportionately adhering to any one heuristic. Furthermore, it must be kept in mind that we used NHD, a continuous variable, to measure route (dis)similarity. In most cases, there is no absolute compliance with ideal heuristic routes. We cannot claim that one route follows one heuristic absolutely, and not the others (no binary outcome), and that was not the goal of our study. Thus, there are positive NHD values, based on which we supported our hypothesis on the choice distribution of heuristics. By using a continuous variable such as NHD and not a binary outcome, the problem of super-user bias is reduced significantly. While there may be arguments in favor of random undersampling of the data to remove user bias, reducing a small dataset further would not necessarily have yielded more representative results and would have reduced the credibility of the statistical claims. Finally, there are a host of other factors that can influence the wayfinding decisions of pedestrians. Our study was confined to geometric heuristics, ones that are dependent on pedestrian network structure.
But people are not limited solely by these four heuristics, or just geometric heuristics, and urban areas offer much more than just their street orientation (in terms of land-use and infrastructure). Pedestrians may select routes with most landmarks, maximum weather protection, maximum perceived safety, least crowded and least pollution. Also, pedestrians may apply multiple heuristics at multiple stages of a single walking trip, and they are not always strictly adhering to their chosen heuristic. This is also reflected in the positive NHD values for the heuristics, meaning that compliance with ideal heuristic routes is partial in most cases. But these non-geometric heuristics are not relevant for this study, as our intention was to test heuristic choice distribution across network morphologies. For example, when analyzing two urban spaces vastly different in terms of green space proportion, it will reveal contrasting heuristic choice distributions. Then, of course, the heuristics in consideration have to be relevant to land-use and not network morphology. Yet, one could argue about the relevance of the four heuristics used in this study. It must be noted that the context of this study was comparing heuristic choice distribution between two contrasting network morphologies. The intention was not to check extent of compliance for any individual heuristic. Hence, we investigated heuristic choice over all heuristics and across two contrasting morphologies. So, even though other heuristics have been applied by pedestrians, quantifying dissimilarity with actual routes using NHD meant that we had a continuous variable to compare all the four heuristics (instead of fully complied or not complied at all), and judge the extent of compliance. This helped us disregard the effect of other heuristics not included in this study that may have been partially applied. Hence, the findings of this study hold true. Overall, the findings from our previous study made us argue that in regular grid-like networks, where heuristic choice does not matter and almost all strategies lead to a route not substantially different from the shortest available route, heuristic choice distribution would be uniform. In this study, we gather enough statistical evidence to suggest the same.
Novel Unipolar Optical Modulation Techniques for Enhancing Visible Light Communication Systems Performance

Visible Light Communication (VLC) is receiving increased attention in the wireless communications research community. VLC is secure, power efficient, and operates in the visible light range, thus overcoming the bandwidth limitation of RF communications. In this article, the authors enhance the data rate, power efficiency, and spectrum efficiency of VLC systems while reducing system complexity. An innovative unipolar transceiver system is proposed, mathematically analyzed, and compared with other existing techniques, and it is shown to have a very high data rate ratio (43.75%) with a good system bit energy to noise ratio (E_b/N_o) compared to other existing techniques. A development of the traditional asymmetrically and symmetrically clipping optical (ASCO-OFDM) system is also proposed, which involves combining a modified receiver with the traditional ASCO-OFDM transmitter. The proposed receiver reduces the system complexity by O(N log2 N) with better E_b/N_o than the conventional ASCO-OFDM. Detailed analysis, simulation results, and a comparison of the proposed systems with the existing systems are presented, alongside a brief assessment of existing techniques.

I. INTRODUCTION

The demand for wireless communications is growing day by day and, currently, most researchers are looking for a new spectrum for wireless communication systems instead of the radio frequency (RF) spectrum, as it will be fully occupied by 2035 [1]. Therefore, Visible Light Communication (VLC) research has attracted increasing attention over the last ten years [2], as the visible light spectrum ranges from 4 × 10^14 to 8 × 10^14 Hz [3], which is ten times wider than the RF spectrum. In VLC, data is transmitted using non-coherent sources at the transmitter, such as the Light Emitting Diode (LED), due to its high efficiency, low cost, and easy implementation for front-end devices [4]-[6]. Meanwhile, the LEDs continue to perform their conventional job as a lighting source. The intensity modulation with direct detection (IM/DD) technique is applied in VLC applications [7], as data is transmitted by modulating the input current intensity of the LED. Based on this idea, different optical modulation techniques were introduced for optical wireless communication (OWC) [8]. At the receiver, direct detection is accomplished using a photodiode (PD) to generate a current proportional to the received optical power. VLC systems require modulation techniques that restrict the data to be transmitted to a real and unipolar form [9]. Moreover, those techniques must take into consideration the required high data rate. Single-subcarrier techniques such as pulse position modulation (PPM), on-off keying (OOK), and binary phase shift keying (BPSK) satisfy both the real-valued and unipolar criteria but, unfortunately, they imply low data rates because of their low modulation order (M = 2) [10].
On the other hand, higher-order modulation techniques such as M-ary pulse amplitude modulation (M-PAM), M-ary phase shift keying (M-PSK), and M-ary quadrature amplitude modulation (M-QAM) achieve high data rates but cannot be directly used in VLC systems as they output complex data. Consequently, optical orthogonal frequency division multiplexing (OFDM) was proposed to be utilized in VLC systems [9]. OFDM systems achieve high spectral efficiency (SE) and high data rate transmission for multiple users by transmitting spectrally efficient OFDM signals with minimal inter-symbol interference (ISI). However, employing OFDM and those high-order modulation techniques in VLC systems requires some adaptations to achieve high data rates with a real and unipolar data form [11]. One of these adaptations is applying Hermitian symmetry to the transmitted data at the input of the IFFT [12], [13], which results in converting the data into real form at the cost of reducing the data rate to half compared to the rate of traditional OFDM (real and imaginary). The issue of converting the bipolar signal into a unipolar one has been the main motivation for many researchers [5], [8]-[11]. So, different modified OFDM systems were under investigation, taking into consideration SE, power efficiency (PE), bit error rate (BER), and nonlinearity distortion. Previous studies introduced different techniques like DC-biased optical OFDM (DCO-OFDM) [14], [15], whose main merit is achieving the maximum data rate ratio (DRR) of 50% in VLC systems, at the cost of adding a DC bias that leads to power inefficiency and degrades the system BER performance [16]. The asymmetrically clipped optical (ACO)-OFDM [17], [18], FLIP-OFDM [19], [20], and unipolar OFDM (U-OFDM) [11], [21] schemes have a low DRR of 25%, but better BER system performance compared to the DCO-OFDM system. Asymmetrically clipped DC-biased optical (ADO)-OFDM [9] is considered a hybrid technique of ACO-OFDM and DCO-OFDM. ADO-OFDM also has a high DRR of 50%, the same as DCO-OFDM, but with worse BER system performance than DCO-OFDM, ACO-OFDM, FLIP-OFDM, and U-OFDM. The asymmetrically and symmetrically clipping optical (ASCO)-OFDM scheme achieved a compromise between data rate and BER performance. It has a moderate DRR of 37.5% with better BER system performance than other existing schemes [22]. Given the massive development in VLC system applications, the transmitted data rate, BER system performance, and SE are traditional metrics for measuring the improvement of VLC system performance. The tradeoff between those metrics imposes a challenge for researchers to guarantee a high data rate and SE at low system complexity and minimal BER. Two innovative optical modulation schemes are introduced in this paper. The first proposed scheme is the special symmetric and asymmetric clipping optical (SSACO)-OFDM system, which has a higher DRR than ACO-OFDM, FLIP-OFDM, U-OFDM, and ASCO-OFDM while achieving almost the same BER performance, and a lower DRR than DCO-OFDM and ADO-OFDM but with much better BER system performance than these two schemes. The second proposed scheme is the enhanced ASCO (EASCO)-OFDM system.
It improves on the traditional ASCO-OFDM system by reducing the processing latency and the computational complexity by O(N log2 N), with better BER system performance than the conventional ASCO-OFDM scheme by at least 0.2 dB at a BER of 10^-4, where N is the FFT size. The rest of the paper is organized as follows: Section II illustrates five different optical OFDM modulation schemes, namely ACO-OFDM, DCO-OFDM, ADO-OFDM, ASCO-OFDM, and FLIP-OFDM. The proposed systems, SSACO-OFDM and EASCO-OFDM, are explained and mathematically analyzed in Section III. The SE of the proposed and existing schemes is discussed in Section IV. Section V shows the simulation results and analysis of the proposed SSACO and E-ASCO OFDM techniques. Finally, the paper is concluded in Section VI.

II. VLC SYSTEMS BASED ON OFDM TECHNIQUES

In this section, different optical modulation schemes, namely DCO-OFDM, ACO-OFDM, FLIP-OFDM, ADO-OFDM, and ASCO-OFDM, are illustrated and analyzed.

A. DCO-OFDM

In this system, a DC-biasing level is applied to shift up the signal to achieve a unipolar signal form [14]. This biasing level is selected taking into consideration the LED power rating and linear operating range. It is practically impossible to convert all the signal samples into positive ones by adding a high DC-bias value, since the peak-to-average power ratio (PAPR) would be very high and some of the signal samples would lie outside the LED linear operating range. Thus, clipping at the zero level after the addition of the DC-biasing level should take place. The main advantage of this technique is the high data rate, since half of the subcarriers (N/2) carry information symbols, where N is the FFT size [14]. On the other hand, its main disadvantage is the degradation of the system BER, because the addition of a DC-biasing level may force the system to work in a nonlinear region, causing nonlinearity distortion. There is also the distortion resulting from clipping the signal at the transmitter, together with the difficulty of restoring the original data at the receiver. The optimization of the DC bias point was studied in [23]. The DCO-OFDM system block diagram is represented in Fig. 1. In the DCO-OFDM transmitter, the input serial data is first mapped using one of the modulation techniques (e.g., M-QAM, QPSK, or BPSK), then a serial-to-parallel conversion is applied to output complex data symbols X(s) with length (N/2 − 1). The X(s) signal is then arranged as in (1) [14] and Fig. 2 to apply the Hermitian symmetry property; with X(s) = [X_1, X_2, ..., X_(N/2−1)], the arranged vector is

X(k) = [0, X_1, X_2, ..., X_(N/2−1), 0, X*_(N/2−1), ..., X*_2, X*_1],   (1)

where k is the sample index, X(k) is the complex data symbol after applying the Hermitian symmetry property on X(s), and ( )* indicates a conjugate operation. The output from the Hermitian symmetry block is applied to the IFFT block with size N to get real-valued time-domain data symbols x(n), which are then converted into serial form through the (P/S) block. In this technique, a shifted signal x_DC(n) is produced by adding a DC biasing level B_DC [24]. Then a zero clipping is applied to get the unipolar real data x_DC(n), and a Cyclic Prefix (CP) is added before signal transmission. An inverse operation takes place at the receiver, first by removing the CP and the DC level that were added at the transmitter side. Second, the received time-domain samples y(n) are converted into the frequency-domain symbols Y(k). Finally, the inverse of the Hermitian symmetry process is applied at the receiver to get the Y(s) signal, whose subcarriers carry the original symbols.
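As an illustration of the transmitter chain just described, the following numpy sketch maps one block of QAM symbols to a DCO-OFDM symbol: Hermitian-symmetric subcarrier loading, IFFT, DC bias, and zero clipping. The paper's simulations use MATLAB; this Python translation, the FFT size, and the simple proportional bias rule are our own illustrative choices, not the authors' implementation.

```python
import numpy as np

def dco_ofdm_symbol(data_syms, n_fft=64, bias_factor=1.5):
    """Build one DCO-OFDM time-domain symbol from n_fft//2 - 1 complex QAM symbols."""
    assert len(data_syms) == n_fft // 2 - 1
    X = np.zeros(n_fft, dtype=complex)
    X[1:n_fft // 2] = data_syms                    # subcarriers 1 .. N/2-1 carry data
    X[n_fft // 2 + 1:] = np.conj(data_syms[::-1])  # Hermitian symmetry (DC and N/2 stay zero)
    x = np.fft.ifft(X).real                        # real-valued bipolar time samples
    b_dc = bias_factor * np.std(x)                 # illustrative DC bias proportional to signal power
    return np.clip(x + b_dc, 0.0, None)            # clip remaining negative samples at zero

# Example: 4-QAM symbols on 31 data subcarriers of a 64-point IFFT.
qam = (np.random.choice([-1, 1], 31) + 1j * np.random.choice([-1, 1], 31)) / np.sqrt(2)
tx_symbol = dco_ofdm_symbol(qam)
```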
Demodulation is applied through the De-Mapper to extract the originally transmitted data.

B. ACO-OFDM

In this technique, there is no need for adding a DC level. The input data are arranged in a specific manner that differs from DCO, such that only the odd subcarriers carry data and the even ones carry zero values [25], as illustrated in Fig. 3. In this arrangement, the second half of the odd subcarriers carry the conjugates of the symbols carried on the first half, in reversed order, to attain the Hermitian symmetry criterion. So, effectively only (N/4) of the subcarriers are utilized and, accordingly, the data rate is reduced by half compared to the DCO-OFDM technique. However, the system BER performance is improved, as will be explained shortly. The ACO-OFDM system block diagram is shown in Fig. 4. The only difference here from the DCO-OFDM block diagram illustrated in Fig. 1 is the removal of the DC level and the way of arranging the data. This arrangement results in asymmetric output samples from the IFFT block, as represented in (2):

x(n + N/2) = −x(n), 0 ≤ n < N/2.   (2)

The derivation of (2) is found in [26]. The data output from the IFFT is repeated after (N/2) samples with opposite sample signs. So, each clipped sample has its positive counterpart, as shown in Fig. 5, which facilitates the reconstruction of the data at the receiver. A zero-bias-level clipper is then applied to the signal to clip the negative samples so that it is ready for transmission through the channel. The clipping process causes clipping noise to appear at the even subcarrier indices only, as shown in Fig. 6 and proved in [26]. So, the original data carried on the odd subcarriers will not be affected by the clipping distortion, which explains the BER improvement. The reverse operations are applied at the receiver side to extract the original data from the received clipped signal y(n). An FFT block is used to get the frequency-domain signal Y(k). Then, the resulting output is arranged in such a manner as to retain only the odd-subcarrier symbols that carry the original data, and by demodulating these symbols through a De-Mapper block, the original data can be extracted.

C. FLIP-OFDM

In FLIP-OFDM [20], contrary to ACO-OFDM and DCO-OFDM, there is no need for adding any DC-biasing level or using any clipping techniques. It utilizes a different way to convert the data into a unipolar form by separating the signal into two parts, a positive part x+(n) and a negative part x−(n). Then, x−(n) is multiplied by (−1), so it can be transmitted using the VLC technology. Afterwards, the two parts are transmitted sequentially through the channel. A frame of (N) information samples is thus transmitted over two frames, each of length (N). So, the data rate is the same as in ACO-OFDM but reduced to half compared to the DCO-OFDM system. At the receiver, two subframes y+(n) and y−(n) are received through the LED. The received signal y+(n) represents the positive samples of the signal and y−(n) represents the absolute values of the negative samples. So, by subtracting y−(n) from y+(n), the bipolar received signal y(n) can be extracted as in (3):

y(n) = y+(n) − y−(n).   (3)

Then, after passing y(n) through the FFT block, the frequency-domain samples Y(k) are detected.
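The two DC-bias-free alternatives described above can be sketched in the same style; the block below shows the ACO-OFDM odd-subcarrier mapping with zero clipping, and the FLIP-OFDM split into, and recombination from, positive and negative frames, as in (3). Again this is an illustrative numpy translation with assumed sizes, not the authors' code.

```python
import numpy as np

def aco_ofdm_symbol(data_syms, n_fft=64):
    """ACO-OFDM: n_fft//4 QAM symbols on the odd subcarriers, then zero clipping."""
    assert len(data_syms) == n_fft // 4
    X = np.zeros(n_fft, dtype=complex)
    odd = np.arange(1, n_fft // 2, 2)        # odd subcarriers in the first half
    X[odd] = data_syms
    X[n_fft - odd] = np.conj(data_syms)      # Hermitian symmetry
    x = np.fft.ifft(X).real                  # satisfies x[n + N/2] = -x[n], as in (2)
    return np.clip(x, 0.0, None)             # clipping noise falls only on even subcarriers

def flip_split(x):
    """FLIP-OFDM: transmit the positive part and the negated negative part in two frames."""
    return np.clip(x, 0.0, None), np.clip(-x, 0.0, None)

def flip_combine(y_pos, y_neg):
    """Receiver side: recover the bipolar frame as y(n) = y+(n) - y-(n), as in (3)."""
    return y_pos - y_neg
```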
The FLIP-OFDM (also known as Unipolar-OFDM (U-OFDM)) and ACO-OFDM systems have comparable BER system performance since both are not affected by the clipping noise. However, for FLIP-OFDM system, the channel noise added is doubled due to transmitting the data over two frames. The FLIP-OFDM transmitter and receiver block diagrams are shown in Fig. 7. D. ADO-OFDM This system is considered as a hybrid technique that combines ACO-OFDM and DCO-OFDM to double the data rate compared to (ACO-OFDM, U-OFDM, and FLIP OFDM) [27]. It has the same data rate as the DCO-OFDM system. Which is considered the maximum data rate in VLC systems so far. At the transmitter side, the input data are carried on the odd subcarriers in the same manner as the ACO-OFDM shown in Fig. 3. This system increases the utilization ratio by carrying data symbols on the (N /4−1) even subcarriers as well instead of being nulled. These data symbols are transmitted using the DCO-OFDM technique shown in Fig. 1, where, x odd (n) is the data carried on the odd subcarriers and x even (n) is the data carried on the even subcarriers. On the receiver side, first to regenerate the transmitted data carried on the odd subcarriers, Y odd (k) is extracted from the total received data Y ADO (k) directly in the frequency domain. Second, to regenerate the data carried on the even subcarriers, a reference signal Y odd Ref (k) is generated to be subtracted from the received signal Y ADO (k) to remove the clipping noise applied on the even subcarriers as derived in [26], and shown in Fig. 6. then Y even (k) can be extracted. E. ASCO-OFDM The objective behind the ASCO-OFDM system was to promote the data rate to be better than FLIP-OFDM and ACO-OFDM systems. But, unfortunately, ASCO still has a data rate below the data rate of DCO-OFDM. On the other hand, it has better system BER performance than ACO-OFDM, FLIP-OFDM, and DCO-OFDM systems [29]. ASCO-OFDM system enhances the data rate by using (3N/4) subcarriers carrying actual data but, as the samples are separated into two frames, the data rate spectrum efficiency ratio is degraded to be 37.5%. ASCO-OFDM transmitter and receiver block diagram are illustrated in Fig. 10. In Fig. 10. the transmitted data is divided into four parts x odd i , x odd j , x even PC (n), x even NC (n) where, x odd i and x odd j are two data vectors with only odd-indexed data symbols from the original data stream, i.e., both vectors have zeros at all even indices. Those vectors are transmitted on two consecutive frames. Also, the even data vector is divided into two vectors one for the positive even data signal x even NC (n) and the other is for the negative one x even PC (n). From Fig. 10. three frames are generated from two IFFT blocks. The first and second frames [x odd i (n), x odd j (n)] are generated from the first IFFT block where the first part of the input data symbols [x odd i (k) and x odd j (k)] with size (N /4) are carried on the odd subcarriers only, and the even ones are set to zero as in the ACO-OFDM. These two frames are then clipped at the zero-bias level to convert the transmitted bipolar asymmetric samples into positive clipped ones The third frame is generated from the second IFFT block by carrying the last part of the input data symbols x even (k) with size ( N /4 − 1) on the even subcarriers only and setting the odd ones to zero. Then, the third frame is separated into two parts with equally sized (N ) samples, x even PC (n) and x even NC (n). 
Where, x even PC (n) is the absolute value of the negative frame which results from clipping all the positive samples of x even (n), and x even NC (n) is the positive frame which results from clipping all the negative samples of x even (n). The data carried on the even subcarriers will be symmetric bipolar frame x even (n) as illustrated in [29], and shown in Fig. 11. The transmitted two OFDM symbols are x i sum (n) and x j sum (n) as shown in (4), and (5), then a CP is applied to both symbols. At the receiver, the received signal y i,j sum (n) is FFT transformed to Y i,j sum (k). Then, the original data is processed in two steps. Firstly, detecting the odd data symbols from the received signal Y i,j sum (k). Secondly, generating a reference signal Y i,j odd Ref (k) by converting the odd symbols into timedomain samples y i,j odd (n) then clipping it. This reference signal will be next subtracted from the received signal, Y i,j sum (k) to get the data carried on the even subcarriers. Y even (k) then demodulating the symbols to extract the originally transmitted data. The reference signal is generated to remove the clipping distortion that falls into the even subcarriers due to the clipping that occurs at the odd transmitted data samples, this is illustrated in [26]. III. THE PROPOSED SCHEMES This section shows two unprecedented modulation schemes in VLC systems. The first one is SSACO OFDM system that enhances the SE than other existing systems, except for the DCO-OFDM system which has high SE but, worse system BER performance, and power inefficient system compared to the proposed SSACO-OFDM system. The second proposed system is E-ASCO OFDM system that has main advantage in reducing the system complexity with better system BER performance than the traditional ASCO-OFDM system. A. THE SPECIAL SYMMETRIC AND ASYMMETRIC CLIPPING OPTICAL (SSACO-OFDM) SCHEME In this scheme, ((7N /8) − 1) data symbols are transmitted over two different OFDM symbols each has (N ) available subcarriers without any need for DC-biasing, i.e., more power-efficient system. The SSACO scheme also has the merit of high SE equal to 43.75% which is higher than ACO-OFDM, FLIP-OFDM, and U-OFDM by 18.75% and higher than ASCO-OFDM by 6.25%. Moreover, it enhances the transmitted data rate by a factor of (N /8) data symbols compared to the ACO-OFDM, FLIP-OFDM, and U-OFDM systems, and by a factor of (N /16) data symbols compared to the ASCO-OFDM. In comparison to the DCO-OFDM and ADO-OFDM systems, the SSACO-OFDM system has less SE ratio and data rate by only 6.25% and (N /16) data symbols, respectively, with much better (E b /N o ) system performance by at least 6 dB at BER of 10 −4 . 1) THE SSACO-OFDM TRANSMITTER The SSACO-OFDM transmitter block diagram is shown in Fig. 12. The input transmitted data is applied to a mapper then S/P block that outputs parallel data symbols X (k). then X (k) , is divided into 3 vectors X A (k), X B (k), and X C (k) as represented in (6), and (7). Those 3 vectors are applied to the Hermitian symmetry and data arrangement block that outputs the parallel data vectors X odd i,j (k), X even special (k), and X even (k) respectively. The symbols of those vectors are arranged as illustrated in (8)- (10). The data symbols are sorted, such that the output of the IFFT blocks are real bipolar signals. 
where k is the symbol index, whose range for each vector is given in (8)-(10). The vectors in (8-a)-(10) are processed as follows. Firstly, the data symbols of X odd i,j (k) applied to the first IFFT block are arranged according to the ACO-OFDM technique, in which only the odd subcarriers carry data and the positions of the even subcarriers are set to zero, so the resultant output will be an asymmetric bipolar signal x odd i,j (n). Then, the x odd i,j (n) signal is applied to a splitter to output the x i odd (n) and x j odd (n) signals. After that, a zero clipping is applied to get the unipolar asymmetric signals x i odd (n) and x j odd (n). Secondly, the data symbols of X even special (k) applied to the second IFFT block are arranged in a special manner in which a set of the even subcarriers carry data while all other subcarriers are set to zero, as illustrated in (9). This arrangement leads to a symmetric-asymmetric bipolar signal x special (n) at the output of the IFFT, as shown in Fig. 13. Thus, the proposed technique is named the special symmetric and asymmetric clipping optical technique. Moreover, a zero clipping process takes place to convert x special (n) into a unipolar real signal. Thirdly, the data symbols of X even (k) applied to the third IFFT block are carried on a set of even subcarriers in the order illustrated in (10); this arrangement leads the output of the IFFT to be a symmetric bipolar signal x even (n), as discussed in [29] and shown in Fig. 11. Afterwards, negative and positive clipping processes take place to generate the negative clipping signal x even NC (n) and the positive clipping signal x even PC (n), respectively, where x even NC (n) is the exact replica of x even PC (n). x even PC (n) is multiplied by (−1), so it can be transmitted using the VLC technology. After that, the two parts are transmitted sequentially through the channel as in (14) and (15):

x even NC (n) = x even (n) if x even (n) > 0, and 0 otherwise,   (14)
x even PC (n) = −x even (n) if x even (n) < 0, and 0 otherwise.   (15)

The two transmitted OFDM symbols x i SSACO (n) and x j SSACO (n) are represented in (16) and (17).

2) THE SSACO-OFDM RECEIVER

The received data, which are mathematically represented in (18) and (19), are reconstructed through three steps:

y SSACO i (n) = y i odd (n) + y even NC (n) + y special (n) = y i odd (n) + y even i (n),   (18)
y SSACO j (n) = y j odd (n) + y even PC (n) + y special (n) = y j odd (n) + y even j (n),   (19)

where y even i (n) and y even j (n) are parts of the total received time-domain signals y SSACO i (n) and y SSACO j (n), respectively; they are the time-domain signals corresponding to the clipped data carried on the even subcarriers (special even and even symbols), whose indices are as in (9) and (10), respectively.
i- Reconstruct the data carried on the odd subcarriers.
ii- Reconstruct the data carried on the even subcarriers by subtracting the data carried on the odd subcarriers from the total received signal.
iii- Reconstruct the data carried on the special even subcarriers.
The SSACO-OFDM receiver block diagram is shown in Fig. 14. It improves the system BER performance by overcoming all the clipping noise resulting from clipping the transmitted data carried on both the odd and even subcarriers, using the following data receiving algorithm. 1.
The data carried on the odd subcarriers are reconstructed via dividing the received signal y SSACO i,j (n) into two parts each of size (N /2) samples, The first part is y A i,j (n) and the second part is y B i,j (n) as in, where the received time domain signals y odd A i,j (n) and y Even A i,j (n) are the first (N /2) samples of the received signal y SSACO i,j (n) which are corresponding to the data carried on the odd subcarriers (odd symbols) which indices as in equation (8), and the data carried on the even subcarriers (special even, and even symbols) which indices as in equation (9), and (10), respectively. Also, where, the received time domain signals y odd B i,j y SSACO i,j (n) which are corresponding to the data carried on the odd subcarriers (odd symbols) which indices as in equation (8), and even subcarriers (special even, and even symbols) which indices as in equation (9), and (10), respectively. Assuming the signal is transmitted through an AWGN channel, then where, x odd A i,j (n), and x Even A i,j (n) are the first (N /2) samples of the transmitted x odd i,j (n) signal which are corresponding to the data carried on the odd subcarriers (odd symbols) that indices as in equation (8), and even subcarriers (special even, and even symbols) that indices as in equation (9), and (10), respectively. Also, x odd B i,j (n) and x Even B i,j (n) are the last (N /2) samples of the transmitted x odd i,j (n) signal which are corresponding to the data carried on the odd subcarriers (odd symbols) that indices as in equation (8), and even subcarriers (special even, and even symbols) that indices as in equation (9), and (10), respectively. Moreover, n o is the added channel noise, and (x) is the clipped version of (x). Using the symmetry property mentioned in [29], and represented in Fig. 11, then: Accordingly, the data carried on the odd subcarriers can be reconstructed as follows, where y i,j odd (n) is the received data carried on the odd subcarriers represented in time domain. At this stage, the data carried on the even subcarriers cancel each other after subtracting y A i,j (n) from y B i,j (n) due to the symmetry property of the even subcarriers. Then, after passing y i,j odd (n) through the FFT block, the odd data symbols Y i,j odd (k) can be reconstructed. By substituting equations (23) and (24) in (26), assuming perfect channel estimation and very low noise level that can be ignored, the proof goes as follows, Then substituting (26), in (30), Using the asymmetric property mentioned in [26], Similarly, y odd B i,j (n) can be reconstructed in the same manner, Thus, (32), and (33), show that the data carried on the odd subcarriers can be completely reconstructed. 2. The data carried on the even subcarriers are reconstructed from two different signals y even (n) and y sp_ nc (n). Where y even (n) is the received data carried on a set of even subcarriers which indices are defined in (10), and y sp_ nc (n) is the received signal carried on the special even subcarriers which indices are defined in (9), after applying the noise cancellation technique, respectively. So, the even subcarriers data will be extracted in two stages. The first stage is to extract y even (n) from the received signals represented in (18), and (19) taking into consideration y even NC (n), and y even PC (n) are the received signals for x even NC (n) illustrated in (14), and x even PC (n) illustrated in (15), respectively. 
Therefore y even (n) can be obtained through subtracting y even PC (n) from y even NC (n) as shown in (34), y even (n) = y even NC (n) − y even PC (n) This will be accomplished by first splitting the received odd signal y i,j odd (n) into y iodd (n), and y j odd (n) signals then clipping them to generate the reference signals y i odd (n), and y j odd (n), respectively. Then, subtracting y i odd (n) and y j odd (n) from the total received signals y SSACO i (n), and y SSACO j (n), respectively. This step eliminates the clipping distortion resulting from the clipping process that occurred on the odd data samples at the transmitter and removes the data carried on the odd subcarriers. Thus, the output from the first stage can be represented as in (35) and (36), y even i (n) = y even NC (n) + y special (n) (35) y even j (n) = y even PC (n) + y special (n) Finally, y even (n) can be extracted by subtracting (36), from (35). The second stage is to extract y special (n) by subtracting y even NC (n) signal from y even i (n) signal as in, y special (n) = y even i (n) − y even NC (n) Afterwards, y special (n) is applied to a clipping noise cancellation block at the receiver that outputs y sp_ nc (n) signal. The noise cancellation block is used not only to enhance the system performance by minimizing the system clipping distortion noise but also to restore the original transmitted special even data from the clipped samples. Figure (15), shows the noise cancellation technique is based on three stages, the first stage is splitting the received y special (n) into four parts each of size (N /4) and storing them as represented in (38), Since for special even subcarriers the data is symmetricasymmetric as shown in Fig. 13 and discussed in Section III-A, then the inverse of y A (n) is y B (n) and the inverse of y C (n) is y D (n). So, each negative clipped sample has its positive counterpart. For this reason, in the second stage two comparisons between the four signals take place. One comparison is between (y A (n) and y B (n)) and the other comparison is between (y C (n) and y D (n)). Finally, y sp_nc (n) is extracted after some subtraction processes as in the third stage represented in Fig.15. Also, Fig. 16 shows that the transmitted data is mainly the same as the received data after the noise cancellation technique. So, the proposed noise cancellation technique has a great enhancement in improving the system BER performance. Taking the second sample as an example to show the noise cancellation technique ability in restoring the data. The second sample has an original value of −0.175 at the transmitter as shown in Fig. 16-a, then the sample is clipped to be transmitted through VLC system, so the transmitted sample value is forced to zero. Then the received sample value is a little value above zero value as shown in Fig. 16-b, due to channel noise effect but after applying the noise cancellation technique, the received sample value almost becomes −0.175 as shown in Fig. 16-c, as its original value so, the noise cancellation technique can almost restore all the clipped samples and removes the channel noise. Finally, the data is applied to an FFT block then a data arrangement, and de-mapper blocks to extract the originally transmitted data. B. THE ENHANCED-ASCO (E-ASCO) OFDM SYSTEM The EASCO-OFDM is proposed to reduce the receiver complexity and the processing latency with better system BER performance than the conventional ASCO-OFDM system explained in Section II-E. 
This is accomplished by introducing an innovative receiver, as shown in Fig. 17. 1) THE E-ASCO OFDM TRANSMITTER The E-ASCO OFDM system has the same transmitter as the traditional ASCO-OFDM illustrated in Fig. 10. The transmitted odd and even data vectors are represented as in (39), (40), and (41); Fig. 18 also shows an example of the data arrangement in the E-ASCO OFDM transmitter using N = 16 subcarriers. 2) THE E-ASCO OFDM RECEIVER The main difference between the proposed modified receiver and the original one is the methodology used to extract the information from the received signal. The proposed E-ASCO receiver processes the received data in the time domain, extracting the information by applying a subtraction process after splitting the received signals, while the traditional ASCO receiver processes the received signal in the frequency domain. For the proposed E-ASCO OFDM receiver, the received data are therefore extracted in two stages: 1. Extract the data carried on the odd subcarriers by the same criteria presented in Section III-A and mathematically analyzed in (20)-(33). 2. Extract the data carried on the even subcarriers by generating the reference signal ŷ_{i,j}^{odd}(n), which represents the received clipped odd signal, and subtracting it from the total received signal y(n) to eliminate the clipping distortion resulting from clipping the odd data samples at the transmitter, as shown in Fig. 6. A splitter is then used after the FFT block to separate the even negative-clipping data symbols Y_{NC}^{Even}(k) and the even positive-clipping data symbols Y_{PC}^{Even}(k). Afterwards, a subtraction process takes place as in (42) to extract the received data Y^{Even}(k) carried on the even subcarriers. The odd Y_{i,j}^{odd}(k) and even Y^{Even}(k) data symbols are then arranged using a data arrangement block before passing through a de-mapper block to restore the originally transmitted data. 3) THE COMPLEXITY ANALYSIS OF THE E-ASCO OFDM AND ASCO-OFDM SYSTEMS For the E-ASCO OFDM system, there are two N-point IFFT blocks at the transmitter and two N-point FFT blocks at the receiver. The complexity of the subtraction and addition operations at both the transmitter and the receiver can be neglected, as it is very small compared to the complexity of the IFFT and FFT operations [30]. The E-ASCO OFDM computational complexity is therefore approximately 2O(N log2 N) for the transmitter and 2O(N log2 N) for the receiver. Since the ASCO-OFDM transmitter is identical to the E-ASCO OFDM transmitter, its computational complexity is also 2O(N log2 N), whereas the receiver complexity of the ASCO-OFDM system is approximately 3O(N log2 N), because the receiver contains two N-point FFT blocks and one N-point IFFT block, with computational complexities of 2O(N log2 N) and O(N log2 N), respectively. Thus, the E-ASCO OFDM receiver reduces the computational complexity relative to ASCO-OFDM by O(N log2 N). IV. DATA RATE AND SPECTRAL EFFICIENCY In this section, the transmitted data rates R_x of the various techniques (R_DCO, R_ADO, R_ACO, R_FLIP, R_ASCO, R_SSACO, and R_E-ASCO) are calculated and compared. The data rate of the DCO-OFDM system, R_DCO, is (N/2), owing to the Hermitian symmetry property applied to the subcarriers: since this property requires that only half of the available subcarriers be loaded with data, the R_DCO ratio is 50%.
The ACO-OFDM system data rate R_ACO is (N/4), since it also follows Hermitian symmetry and only the odd subcarriers are loaded with data, so the DRR is 25%. For the FLIP-OFDM system, R_FLIP is (N/2) for each transmitted OFDM symbol, but since the (N/2) symbols are transmitted over two OFDM symbols, the actual system data rate is halved to (N/4), with a DRR of 25%. In the ASCO-OFDM system, three OFDM symbols are produced at the transmitter to transmit (3N/4) data symbols. In the first and second symbols the data are carried on the odd subcarriers only, so each of the first two OFDM symbols has only (N/4) of its subcarriers carrying data symbols. In the third OFDM symbol the data are carried only on the even subcarriers, so again only (N/4) of the subcarriers carry data symbols. Hence, the total transmitted data rate for the ASCO-OFDM system is (3N/4); but since the three frames are summed and transmitted through the channel in two frames only, the actual data rate R_ASCO is (3N/8), corresponding to a DRR of 37.5%. For the proposed SSACO-OFDM system, R_SSACO is (7N/16), as two generated OFDM symbols are transmitted through the channel. The odd subcarriers of the two OFDM symbols are loaded with data arranged in the same manner as in the ACO-OFDM system. For the even subcarriers, a specific arrangement is used for loading the data, as discussed in Section III-A, taking into consideration that the data carried on the special even subcarriers are transmitted twice over the channel, as represented in (16) and (17), which reduces the data rate by a factor of (N/16). The proposed E-ASCO OFDM system has the same data rate as ASCO-OFDM, R_ASCO = (3N/8), since the proposed modification concerns only the receiver. The corresponding comparison is given in Fig. 19 and Table 4. SE is defined as the data bit rate divided by the normalized bandwidth. The SE of the DCO, ACO, ADO, FLIP, ASCO, SSACO, and E-ASCO OFDM systems can be calculated as in (43), where M_xx is the modulation order M used with VLC technique xx; moreover, M_odd, M_even, and M_special even are the modulation orders used for the odd subcarriers, the even subcarriers, and the special even subcarriers, respectively, and N = 1024 is the IFFT size used in the analysis. V. SIMULATION RESULTS The proposed techniques are simulated in MATLAB and compared with the published systems in [9] and [17]. The simulations were carried out for different modulation orders to analyze the effect of modulation on the system performance. The simulated channel is an additive white Gaussian noise (AWGN) channel, and Table 2 summarizes the simulation parameters. For the DCO and ADO OFDM systems, the biasing DC level used in the simulation is 13 dB, chosen to reach a BER of 10^−4 at high modulation order. The performance of the E-ASCO, DCO, ACO, FLIP, ADO, and ASCO OFDM systems at an SE of 2.9942 is illustrated in Fig. 19. The E-ASCO OFDM system outperforms the other compared techniques, with E_b/N_o = 17.35 dB at a BER of 10^−4, which is better than the other existing techniques by the values listed in Table 3. The E-ASCO OFDM system also has a data rate higher by 12.5% than ACO-OFDM and FLIP-OFDM, and the same DRR of 37.5% as ASCO-OFDM but with lower system complexity at the receiver side. However, it has a DRR lower by 12.5% than DCO-OFDM and ADO-OFDM.
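As a quick cross-check of the quoted data-rate ratios, the short sketch below tabulates the DRR of each scheme and an approximate spectral efficiency computed as DRR × log2(M). This simple product is an assumption made here for illustration only; it ignores cyclic-prefix overhead and the mixed modulation orders (M_odd, M_even, M_special even) that the paper's eq. (43) may account for.

```python
import numpy as np

# Data-rate ratios quoted in the text (fraction of the N subcarriers carrying data)
drr = {
    "DCO": 1 / 2, "ADO": 1 / 2,
    "ACO": 1 / 4, "FLIP": 1 / 4,
    "ASCO": 3 / 8, "E-ASCO": 3 / 8,
    "SSACO": 7 / 16,
}

M = 16  # illustrative modulation order (16-QAM)
for name, r in drr.items():
    se = r * np.log2(M)           # approximate SE, assuming SE ~= DRR * log2(M)
    print(f"{name:7s}  DRR = {r:5.3f} ({100 * r:4.1f}%)  SE ~ {se:.3f} bit/s/Hz")
```

Running it reproduces the 25% / 37.5% / 43.75% / 50% ordering discussed above and makes explicit why SSACO sits between ASCO and DCO in spectral efficiency.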
For the SSACO-OFDM system, no common SE can be reached with the existing techniques by changing the modulation order, so the comparison is made under the same modulation order of 16-QAM with N = 1024, as shown in Fig. 20. At a BER of 10^−4, the SSACO-OFDM system outperforms the DCO and ADO OFDM systems by 6 dB and 9.5 dB, respectively. It requires an Eb/No higher than the ASCO and E-ASCO OFDM systems by just 1.4 dB and 1.6 dB, respectively, with the benefit of a system data rate 6.25% higher than that of ASCO and E-ASCO OFDM. (Comparison with the systems of [17], [19], [20], and [29] at a spectral efficiency of 2.9942.) Figure 21 shows that the E-ASCO, SSACO, ASCO, ACO, and DCO OFDM techniques exhibit almost the same rate of increase of the required Eb/No with increasing constellation size. At a BER of 10^−4, DCO has the highest required Eb/No values for all constellation sizes because of the added DC level, whereas ACO has the lowest required Eb/No, as it has the lowest SE of the compared systems and therefore requires the least energy. The proposed E-ASCO OFDM lies between the compared techniques, because its SE is higher than that of ACO and lower than that of DCO, so a moderate energy is required. Similarly, SSACO requires more energy than ACO, ASCO, and E-ASCO but less than DCO, because its SE is higher than that of ACO, ASCO, and E-ASCO but lower than that of DCO. Figure 22 shows the SE of the different optical modulation techniques against Eb/No for different constellation sizes. The rate of increase of Eb/No with SE for the ACO-OFDM curve is very high, which makes it a spectrally inefficient technique, whereas the proposed SSACO and E-ASCO OFDM systems have a lower rate of increase, resulting in a minimum Eb/No at high spectral efficiency and high modulation orders (see Table 4). VI. CONCLUSION In this paper, two new optical modulation techniques were presented: the SSACO-OFDM and E-ASCO OFDM systems. The proposed systems were evaluated, analyzed, and compared with existing techniques such as ACO, DCO, ADO, FLIP, and ASCO OFDM in terms of system complexity, SE, and BER performance, through simulation verification and mathematical analysis. The proposed SSACO-OFDM system proves to be a compromise between low system complexity, achieved by introducing a special receiver supported by mathematical analysis; enhanced BER performance, achieved by introducing a noise cancellation technique; and an increased transmitted data rate, which results in a spectrally efficient technique. The SSACO-OFDM system improves the spectral efficiency by 75%, 75%, and 16.666% compared to the ACO, FLIP, and ASCO-OFDM systems, respectively, but its spectral efficiency is 12.5% lower than that of both DCO-OFDM and ADO-OFDM. It also improves the system performance at a BER of 10^−4 by 6 dB and 9.5 dB compared to DCO-OFDM and ADO-OFDM, respectively, while being worse than ASCO-OFDM by just 1.4 dB. The proposed E-ASCO OFDM system has the same spectral efficiency as ASCO-OFDM but with lower complexity and a BER performance better by at least 0.2 dB at a BER of 10^−4. E-ASCO also outperforms the existing DCO, ADO, ACO, and FLIP techniques in BER performance by 6.95 dB, 10.25 dB, 2.1 dB, and 2.2 dB, respectively, at the same spectral efficiency of 2.9942.
2022-06-26T15:00:52.652Z
2022-01-01T00:00:00.000
{ "year": 2022, "sha1": "11e61075285c9100a50fe1bb39d094b77387b14e", "oa_license": "CCBY", "oa_url": "https://ieeexplore.ieee.org/ielx7/6287639/6514899/09805716.pdf", "oa_status": "GOLD", "pdf_src": "IEEE", "pdf_hash": "f56a43ac397832aa9f35491a84a25bc97d2a707c", "s2fieldsofstudy": [ "Engineering", "Physics", "Computer Science" ], "extfieldsofstudy": [ "Computer Science" ] }
247462965
pes2o/s2orc
v3-fos-license
Tuning the Polarity of a Fibrous Poly(vinylidene fluoride-co-hexafluoropropylene)-Based Support for Efficient Water Electrolysis Water electrolysis under alkaline conditions is of interest due to the applicability of non-precious metal-based materials for electrocatalysts. However, the successful design and synthesis of earth-abundant and efficient catalysts for the oxygen evolution reaction (OER) remain a significant challenge. This work presents cost-effective and straightforward ways to improve the OER activity under alkaline conditions by activating the catalyst–support and reactant–support interaction. Micro/nano-sized fibrous poly(vinylidene fluoride-co-hexafluoropropylene) (PVdF-HFP) was synthesized via simple and scalable electrospinning and subsequently coated with Cu by electroless deposition to obtain the electrocatalyst with a large specific surface area, enhanced mass transport, and high catalyst utilization. Scanning electron microscopy, infrared spectroscopy, and X-ray diffraction confirmed the successful synthesis of the series of Cu/PVdF-HFP fibrous catalysts with varied ferroelectric polarizability of the PVdF-HFP support in the order of stretch-anneal > anneal > stretch > without pre-treatment of the catalyst. The best OER activity was confirmed for the Cu/PVdF-HFP catalyst with stretch and annealed treatment among the catalysts tested, suggesting that both the reaction kinetics and energetics of stretch-annealed Cu/PVdF-HFP catalysts were optimal for the OER. The electron delocalization between Cu and PVdF-HFP substrates (electron transfer from Cu to the negatively charged (δ–eff) PVdF-HFP region at the Cu|PVdF-HFP interface) and the enhanced transport of reactive hydroxide species and/or the increase in the local pH by positively charged (δ+eff) PVdF-HFP region concertedly accelerate the OER activity. The overall activity for the prototype water electrolyzer increased 10-fold with stretch-anneal treatment compared to the one without pre-treatment, highlighting the effect of tuning the catalyst–support and reactant–support interaction on improving the efficiency of the water electrolysis. INTRODUCTION Water electrolysis is one of the efficient and sustainable means to produce hydrogen, which is considered as a promising alternative to fossil-fuel-based energy sources, utilizing electricity generated from renewable sources, e.g., wind and solar. 1−4 The overall efficiency and cost of the water electrolyzer are critical in achieving mass production of hydrogen via water electrolysis. Typical water electrolyzers operate under acidic or alkaline conditions at temperatures up to 80°C. 5,6 However, most non-precious metal-based catalysts gradually degrade in an acidic medium, and only precious metal-based catalysts can exhibit substantial stability. 7−9 Therefore, the study under an alkaline condition is essential to develop water electrolyzers with cost-effective, non-precious metal-based catalysts. In water electrolyzers operated under alkaline conditions, 10,11 the hydrogen evolution reaction (HER, 2H 2 O + 2e − ⇄ H 2 + 2OH − ) and the oxygen evolution reaction (OER, 4OH − ⇄ O 2 + 2H 2 O + 4e − ) proceed at the cathode and anode, respectively. Although the HER has minimal energy losses, 12,13 the OER is a more complicated process with multiple-electron transfer, which requires a large overpotential and leads to a substantial energy loss, 14−16 even for the state-of-the-art OER catalyst (e.g., IrO 2 17,18 and RuO 2 17−19 ). 
Furthermore, these electrocatalysts commonly contain precious metals such as Ir and Ru, and their high cost and scarcity impede the large-scale application. Design-efficient and durable OER electrocatalysts based on earth-abundant elements, e.g., 3d transition metals, 20,21 are thus crucial and have been investigated for more than decades. 7,22,23 Among the 3d transition metals, Cu can be a potential candidate for the practical OER electrocatalysts due to its rich redox properties, 24,25 low cost, 26,25 and non-toxicity. 25 Recent studies successfully developed Cu-based OER catalysts with OER activity comparable to well-optimized Ni/Co-based catalysts 27−31 by tuning the energetics of the reaction intermediates via controlling sulfur content in Cu sulfide 32 or alloying with other 3d metals 33−35 or by adjusting the Cu oxidation state under the OER potential with H 2 O 2 36 or annealing 30,35 treatment of the Cu surface. Furthermore, for composite catalysts, such as nanoparticles deposited on a conductive support, successful control of the micro/macrostructure effectively increased the OER activity of Cu-based catalysts 31,37−39 by enhancing both mass transport and catalyst utilization. In addition to active site engineering, designing the interactions between the catalyst atoms and support (catalyst−support interaction) plays a significant role in determining the stability and activity of catalysts. 40−44 The support provides a platform where the catalytic reaction occurs and defines the electronic structure of the catalyst atoms. 45,46 In this regard, to maximize the catalytic activity of Cu, optimizing the catalyst−support interaction is essential for the rational design of highly active Cu-based OER catalysts. Although the impact of the catalyst−support interaction on the OER activity has also been suggested for Cu-based catalysts, 47,48 insights into tuning the catalyst−support interaction to optimize the electronic structure are still insufficient and are further explored. We present a simple pre-treatment of the catalyst, e.g., stretch and anneal treatment, which can effectively activate the catalyst−support and reactant−support interaction and improve the OER activity under alkaline conditions. The best OER activity was confirmed for highly polarized Cu/ PVdF-HFP catalysts with stretch and anneal treatment among the catalysts tested, suggesting that the reaction kinetics and energetics of the OER was optimized by the simple stretch and annealing treatment. We propose that the positively charged (δ + eff ) PVdF-HFP region facilitates the transport of reactive hydroxide species, while the electron transfer from Cu to the negatively charged (δ − eff ) PVdF-HFP region at the Cu|PVdF-HFP interface accelerates the rate-determining step of the OER. A more than 10-fold increase in the overall performance was confirmed for a prototype water electrolyzer consisting of the bi-functional membrane electrode assembly with stretchanneal treatment compared to the one without pre-treatment, further validating the effect of tuning the catalyst−support and reactant−support interaction on increasing the OER performance. Our findings provide a new design strategy for a highly active OER catalyst, whereby the OER activity can be increased by designing the active metal site and tuning the catalyst−support and reactant−support interaction. 2.2. Fabrication of Bi-functional Membrane Electrode Assembly. 
A multilayered fiber (PVdF-HFP/PdCl 2 |PVdF-HFP|PVdF-HFP/PdCl 2 ) was synthesized using the setup mentioned above with multiple electrospinning of different electrospinning solutions. First, PVdF-HFP/PdCl 2 was synthesized using the electrospinning solution with the same composition described in Section 2.1. The PVdF-HFP was then synthesized over the PVdF-HFP/PdCl 2 fiber using the electrospinning solution without the PdCl 2 additive. Finally, the multilayered fiber of PVdF-HFP/PdCl 2 |PVdF-HFP|PVdF-HFP/PdCl 2 was obtained by the electrospinning using the electrospinning solution with the PdCl 2 additive over the PVdF-HFP/PdCl 2 |PVdF-HFP fiber. The obtained multilayered fiber was dried at room temperature for 24 h under a reduced pressure (ca. 400 Pa) followed by Cu electroless deposition in the same manner described in Section 2.1. 2.3. Characterization. The microstructure of all Cu/ PVdF-HFP catalysts was analyzed by a scanning electron microscope (SEM, JSM-7600F, JEOL Ltd. with an accelerating voltage of 20 kV) equipped with an energy-dispersive X-ray spectrometer (EDS, JMS-7600F, JEOL Ltd.). The X-ray diffraction (XRD) patterns of Cu/PVdF-HFP catalysts were obtained by an X-ray diffractometer (Rigaku Ultima IV) with Cu Kα radiation. X-ray photoelectron spectroscopy (XPS) was performed on the K-Alpha spectrometer (Thermo Fischer Scientific). XPS spectra were calibrated by adventitious carbon at 284.8 eV (C 1s spectra). After subtraction of a Shirley-type background, the photoemission lines were fitted using combined Gaussian−Lorentzian functions. Electrochemical cleaning of the electrode was performed in a standard threeelectrode cell and by cycling the potential between −0.9 and 1.8 V vs reversible hydrogen electrode (potential cycling was terminated at 1.8 V after 10 cycles). The infrared spectra of the materials were obtained on a Nicolet iS50 (Thermo Fischer Scientific) equipped with a deuterated triglycine sulfate (DTGS) detector. A single reflection attenuated total reflection (ATR) accessory (Smart iTX, Thermo Fischer Scientific) with a ZnSe prism was used to obtain the spectra. The ATR measurements were performed at an incident angle of 45°with a 4 cm −1 resolution. The spectra were collected in the wavenumber range 4000−500 cm −1 with a cumulative number of 64. All spectra are shown in the absorbance units defined as log(I 0 /I), where I 0 and I represent the background spectra and sample spectra, respectively. The background spectrum I 0 was measured without any sample. 2.4. Electrochemical Measurements. Electrochemical measurements were carried out on an HZ-5000 potentiostat (Hokuto Denko) at room temperature. The cyclic voltammogram (CV) and linear sweep voltammogram (LSV) were obtained in a standard three-electrode cell with a Pt wire counter electrode and Ag/AgCl reference electrode. The overall performance of the water electrolysis was evaluated using a two-electrode configuration, and the chronoamperometric curve was recorded at an applied voltage of 2.5 V. The electrolyte solution was prepared by mixing KOH (Wako Pure Chemical, >85 wt %) and ultrapure water (Nihon Millipore K.K). Before every experiment, argon was bubbled through the electrolyte for at least 15 min to completely deoxygenate the solution. The electrode was cleaned by cycling the potential between −0.05 and 1.50 V versus the reversible hydrogen electrode (RHE) before the measurement. All potentials reported here are referenced to the RHE scale (expressed as V RHE ). 
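For readers who want to reproduce the kind of line-shape fitting mentioned in Section 2.3 above (combined Gaussian-Lorentzian functions after Shirley background subtraction), here is a minimal sketch of a pseudo-Voigt fit on synthetic data. The Shirley background step is omitted, and all numerical values (peak position, width, noise level) are illustrative placeholders, not values from the measurements reported here.

```python
import numpy as np
from scipy.optimize import curve_fit

def pseudo_voigt(x, amp, cen, fwhm, eta):
    """Combined Gaussian-Lorentzian (pseudo-Voigt) line shape; eta is the Lorentzian fraction."""
    sigma = fwhm / (2 * np.sqrt(2 * np.log(2)))
    gauss = np.exp(-((x - cen) ** 2) / (2 * sigma ** 2))
    lorentz = (fwhm / 2) ** 2 / ((x - cen) ** 2 + (fwhm / 2) ** 2)
    return amp * (eta * lorentz + (1 - eta) * gauss)

# Synthetic Cu 2p3/2-like peak on a binding-energy axis (eV), with a little noise
be = np.linspace(928, 942, 400)
rng = np.random.default_rng(2)
data = pseudo_voigt(be, 1.0, 935.0, 2.0, 0.3) + rng.normal(scale=0.01, size=be.size)

popt, _ = curve_fit(pseudo_voigt, be, data, p0=[1.0, 934.5, 1.5, 0.5])
print("fitted centre = %.2f eV, FWHM = %.2f eV" % (popt[1], popt[2]))
```

In practice the same routine would simply be applied to the background-subtracted experimental spectra, with one pseudo-Voigt component per deconvoluted doublet.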
The ECSA for the series of the Cu/PVdF-HFP fiber was determined following a similar procedure previously reported. 51,52 In short, ECSA (A) was calculated from the peak oxidation current (I p ) related to Cu(OH) 2 formation using the following equation: A = 3525.8 × I p . The peak oxidation current was obtained by linear sweep voltammetry at a scan rate of 10 mV/s in an Ar-purged KOH electrolyte (see Figure S1 for the LSV and obtained ECSA). The current density was obtained by normalizing the current to the ECSA (expressed as μA cm −2 ECSA ) unless otherwise noted. RESULTS AND DISCUSSION The successful synthesis of a series of Cu/PVdF-HFP fibrous catalysts was confirmed by scanning electron microscopy (SEM), infrared (IR) spectroscopy, and XRD, suggesting uniform Cu nanoparticle deposition for all the catalysts as well as diverse ferroelectric polarizability of PVdF-HFP substrates by annealing and/or stretching treatment ( Figure 1). The SEM images confirm that all the pristine PVdF-HFP/ PdCl 2 fibers, prepared by the electrospinning method with and without annealing and/or stretching treatment, possess a similar morphology with a fiber diameter of ca. 0.47 μm (Figure 1a−d, inset). After electroless deposition of Cu, the smooth surface of the PVdF-HFP substrate was covered by a particle-like deposit, in line with the increase in the average diameter of ca. 0.66 μm (Figure 1a−d). The energy-dispersive X-ray spectroscopy (EDS) identified the deposit as a Cu particle, which confirms the successful synthesis of the series of Cu/PVdF-HFP fibrous catalysts ( Figure S2). Note that the Cu deposit (a light gray area in the SEM) did not fully cover the fiber surface, and the PVdF-HFP substrate was partially exposed (a dark gray area in the SEM). However, the obtained Cu/PVdF-HFP fibrous catalysts showed good electrical conductivity, suggesting that the Cu particle connected well enough to create the electron-conducting path. The high void volume observed for all the Cu/PVdF-HFP fibers contributes to the efficient mass transfer of reactant and product molecules. 47 Infrared spectra of pristine PVdF-HFP/PdCl 2 fibers showed distinctive features corresponding to β-phase PVdF-HFP at ca. 1275 cm −1 , 53,54 suggesting the increased β-phase PVdF-HFP population in the following order: stretch-anneal > anneal > stretch > pristine (Figure 1e). The ferroelectric polarizability is in line with the amount of β-phase PVdF-HFP due to the following reasons: The β-phase PVdF-HFP has an orthorhombic structure and an all-trans molecule conformation, leading to alignment of the dipoles (−CH 2 CF 2 −) perpendicular to the chain axis ( Figure S3). 55,56 Therefore, the β-phase PVdF-HFP possesses a large spontaneous polarization, which evokes the characteristic ferroelectricity of PVDF and its copolymers. We thus conclude that the ferroelectric polarizability of PVdF-HFP can be tuned by the simple annealing and/or stretching treatment, resulting in the ferroelectric polarizability in the following order: Cu/PVdF-HFP stretch-anneal > Cu/PVdF-HFP anneal > Cu/PVdF-HFP stretch > Cu/PVdF-HFP. XRD patterns of the series of Cu/PVdF-HFP catalysts further supports the varied ferroelectric polarizability and deposition of Cu particles (Figure 1f). The characteristic XRD peak corresponds to the β-phase PVdF-HFP appeared at ca. 
21°(200/110), 52,57 which gradually increased its intensity and shifted to a higher degree after annealing and/or stretching treatment, indicating the formation of a metastable β-phase by those simple treatments (XRD patterns of the pre-treated PVdF-HFP support without Cu deposition are shown in Figure S4). The XRD patterns also showed diffraction peaks corresponding to Cu, confirming the successful deposition of Cu on all Cu/PVdF-HFP catalysts. Note that Cu/PVdF-HFP catalysts are in the form of thin films, and the flexibility of the pristine PVdF-HFP/PdCl 2 fiber is still maintained after Cu deposition. The electrocatalytic activity toward the oxygen evolution reaction (OER) was clearly improved by the annealing and/or stretching treatment, while the hydrogen evolution reaction (HER) activity only showed slight improvement by the pretreatment of the catalysts (Figure 2). The linear sweep voltammogram showed the similar HER current of ca. −500 μA cm −2 at −0.45 V RHE for the catalysts with pre-treatment (Cu/PVdF-HFP stretch , Cu/PVdF-HFP anneal , and Cu/PVdF-HFP stretch-anneal ), which was slightly larger than that of pristine Cu/PVdF-HFP (ca. −300 μA cm −2 at −0.45 V RHE ) (Figure 2a). The onset potential of the HER also showed a similar trend, where the catalysts with pre-treatment required slightly smaller (<0.1 V) overpotential to initiate HER compared to the pristine catalyst. The Tafel slope value was ca. 120 mV dec −1 regardless of the pre-treatment (Figure 2b), indicating that the initial Volmer step (water dissociation: H 2 O + e − → H ad + OH − ) 58,59 could be the rate-determining step of the HER for the catalysts used in this study. From the above observations, we concluded that the change in the ferroelectric polarizability of the PVdF-HFP support slightly improved the HER activity. We here propose that the increased ferroelectric polarizability of the PVdF-HFP support lowers the water dissociation energy barrier (responsible for the rate-determining step for the HER in alkaline electrolytes) 60 by stabilizing the metal-OH-water (M-OH ad -H 2 O ad ) complex due to the increased hydrophilicity 61 (discussed further in the later paragraph). The specific OER current of Cu/PVdF-HFP stretch-anneal , Cu/ PVdF-HFP anneal , and Cu/PVdF-HFP stretch catalysts at 1.65 V RHE showed ca. 6.9-, 5.4-, and a 1.6-fold increase compared to that of the pristine Cu/PVdF-HFP catalyst, respectively ( Figure 2c). Furthermore, the OER current at a relatively large overpotential region (>1.7 V RHE ) observed for Cu/PVdF-HFP anneal and Cu/PVdF-HFP stretch-anneal showed a steeper slope compared to that of Cu/PVdF-HFP stretch and pristine Cu/ PVdF-HFP. The result indicates the enhanced diffusion of the reactant and/or the product for catalysts with stretch-anneal and anneal treatments. Although the high void volume of the fibrous structure improves the mass transfer of reactant and product molecules and partial exposure of hydrophobic PVdF-HFP substrate assists the removal of the reaction product (oxygen gas) from the surface, 47 both of which cannot be the reason for the steep slope of the LSV at the large overpotential region for catalysts with stretch-anneal and anneal treatments. We here propose that the ferroelectric polarizability of PVdF- HFP effectively anchors negatively charged OH − at the vicinity of the electrode, accelerating the OH supply to the Cu active sites. 
The hypothesis is supported by the fact that OER activation by stretch-anneal treatment was not observed for the comparable fibrous Cu/polystyrene (Cu/PS) catalyst without ferroelectricity ( Figure S5). Although the polarized PVdF-HFP surface possesses both positive and negative charges depending on the CH 2 /CF 2 orientation, 62 negatively charged PVdF-HFP (surface with CF 2 dipoles (δ − eff )) might preferentially be covered by Cu since it attracts Cu 2+ during the electroless deposition process. The electrostatic interaction between the positively charged PVdF-HFP surface by CH 2 dipoles (δ + eff ) and negatively charged OH − anchors the OH − close to the electrode surface. Linear sweep voltammograms in various KOH concentrations further support our hypothesis ( Figure S6). The OER current became more extensive along with the increase in the KOH concentration (from 1 to 2 M KOH) for pristine electrodes, suggesting the enhancement of OER by the increased amount of OH − active species. The OER current for the catalyst with stretch-anneal treatment obtained in 1 M KOH was notably more significant than that for the pristine electrode in 2 M KOH, suggesting the high local concentration (>2 M) of OH − achieved by the enhanced ferroelectric polarizability of the PVdF-HFP support. All the pre-treated Cu/PVdF-HFP catalysts exhibit superior specific OER activity compared to that of the pristine catalyst, with an overpotential (η) of 370 mV (stretch-anneal) < 380 mV (anneal) < 440 mV (stretch) < 490 mV (pristine) to reach 50 μA cm −2 ECSA (Figure 2d). In addition, the Nyquist plot of the Cu/PVdF-HFP stretch-anneal catalyst shows a smaller charge transfer resistance than that of pristine Cu/PVdF-HFP, demonstrating the enhanced charge transfer kinetics ( Figure S7). 63 Tafel analysis further confirms the activation of OER for the Cu/PVdF-HFP stretch-anneal catalyst, showing the smallest Tafel slope value of 31 mV dec −1 followed by Cu/PVdF-HFP anneal (32 mV dec −1 ), Cu/PVdF-HFP stretch (52 mV dec −1 ), and pristine (84 mV dec −1 ) catalysts. The Tafel slope value slightly decreased from ca. 84 mV dec −1 (1 M KOH) to 58 mV dec −1 (2 M KOH) with increasing the KOH concentration for the pristine catalyst ( Figure S8), suggesting that the enhanced OH supply to the Cu active sites and/or the increase in pH at the vicinity of the electrode surface can be part of the reasons for the improved specific OER activity. However, a significant decrease in the Tafel slope value, as well as the overpotential observed for the catalyst with stretch-anneal treatment, cannot be explained only by the increase in the OH − concentration. We propose that the synergetic effect between Cu and PVdF-HFP (electron transfer from Cu to PVdF-HFPField 47) at the Cu|PVdF-HFP interface varies with the pre-treatment and optimizes reaction energetics for OER. Attenuated total reflection infrared (ATR-IR) spectroscopy and ex situ XPS revealed that the electron delocalization between Cu and PVdF-HFP substrates was promoted for the catalyst with increased ferroelectric polarizability of the PVdF-HFP support. The electron transfer from Cu to PVdF-HFP alters the electronic states of Cu active sites, boosting OH binding on the Cu, especially for the Cu/PVdF-HFP stretch-anneal (Figure 3). The electron transfer from Cu to the PVdF-HFP substrate can also be suggested from Cu 2p spectra of the electrochemically cleaned Cu/PVdF-HFP catalysts (Figure 3b). 
The Cu 2p XPS spectra showed two asymmetric bands, which could be deconvoluted into two pairs of doublets assigned to Cu 0 (932.6−933.6 and 952.5−954.3 eV) 64,65 and Cu II (934.8− 935.7 and 954.6−956.2 eV). 64,66 The contribution from Cu II (934.8−935.7 and 954.6−956.2 eV) was dominant for all the catalysts tested, indicating that the Cu mainly exists as Cu II in the Cu/PVdF-HFP fiber surfaces after electrochemical cleaning. CuO formation was also confirmed by comparing the XRD patterns before and after the OER, further emphasizing the importance of Cu II on the OER ( Figure S10). The Cu II peaks shifted to a higher binding energy in line with the increase in the ferroelectric polarizability of the PVdF-HFP support: Cu/PVdF-HFP stretch-anneal (935.7, 956.2 eV) > Cu/ PVdF-HFP anneal (935.4, 955.8 eV) > Cu/PVdF-HFP stretch (935.0, 954.8 eV) = pristine Cu/PVdF-HFP (934.8, 954.6 eV). The positive shift in binding energies of Cu II peaks implies the electron deficiency of the Cu sites, which supports the existence of the electron transfer from Cu to PVdF-HFP. 40 Furthermore, the trend in the binding energy of the Cu II peak coincides with the wavenumber shift of the ν s (CF 2 ) band, strongly indicating that the electron transfer from Cu to PVdF-HFP can be accelerated by increasing the ferroelectric polarizability of PVdF-HFP. The electron transfer from Cu to PVdF-HFP affects the binding energetics of the O/OH adsorbates, which can be confirmed by comparing the onset potential of the O/OH adsorption (Figure 3c). Cyclic voltammograms showed butterfly features at ca. 0.35 V RHE , corresponding to the O/ OH adsorption/desorption on the Cu(100) facet. 67,68 The onset potential of the O/OH adsorption shifted to a lower potential by increasing the ferroelectric polarizability of the PVdF-HFP support: pristine Cu/PVdF-HFP (0.348 V RHE ) = Cu/PVdF-HFP stretch (0.348 V RHE ) > Cu/PVdF-HFP anneal (0.345 V RHE ) > Cu/PVdF-HFP stretch-anneal (0.344 V RHE ). The trend suggests the strong O/OH binding for the Cu on the highly polarized PVdF-HFP support, which is in accordance with the degree of the electron transfer from Cu to the PVdF-HFP substrate (Figure 3b). The Cu/PVdF-HFP catalyst with stretch-anneal treatment showed the best OER activity among the catalysts tested, suggesting that both the reaction kinetics and energetics of the Cu/PVdF-HFP stretch-anneal catalysts were optimal for OER. The positively charged (δ + eff ) PVDF-HFP region facilitates the transport of reactive hydroxide species, while the electron transfer from Cu to the negatively charged (δ − eff ) PVdF-HFP region at the Cu|PVdF-HFP interface accelerates the ratedetermining step of the OER (Figure 4). The highly polarized PVdF-HFP substrate with stretchanneal treatment possesses both positive and negative charges depending on the CH 2 /CF 2 orientation. Negatively charged PVDF-HFP (surface with CF 2 dipoles (δ − eff )) mostly covered by Cu owing to the electrostatic attraction between δ − eff and positively charged Cu 2+ during the electroless deposition process. The large electronegativity of the F atom effectively withdraws the electron from Cu to PVdF-HFP, creating a slightly electron-deficient Cu site. Stronger O/OH binding on the slightly electron-deficient Cu site than the normal Cu site promotes the initial hydroxide adsorption and the subsequent deprotonation of OH ad to form O ad , which agrees with the CVs in Figure 3c. 
Furthermore, the electrophilicity of the oxygen adsorbates (O ad) on the slightly electron-deficient Cu site can be increased, promoting the formation of OOH ad via nucleophilic attack by OH − from the electrolyte. 36 The fourth electron transfer reaction of the OER (deprotonation of OOH ad to form OO ad) can also be facilitated through the electron-withdrawing inductive effect, 43,71 which accelerates the overall OER activity (Figure 4). 69,70 Figure 4. Proposed reaction mechanism for the electrochemical oxygen evolution reaction on Cu under basic conditions. The positively charged (δ + eff) PVdF-HFP region formed by CH 2 dipoles electrostatically attracts hydroxyl species in the vicinity of the surface. The negatively charged (δ − eff) region of the PVdF-HFP support effectively withdraws electrons from the Cu sites, leading to the formation of electron-deficient Cu sites. The electron-deficient Cu sites promote the rate-determining step of the OER, resulting in the highest OER activity for the highly polarized catalyst with stretch-anneal treatment. Tafel analysis of the stretch-annealed Cu-deposited PVdF-HFP catalyst further supports our hypothesis (Figure 2d). The Tafel slope (b) can be expressed as eq 1, b = ∂η/∂(log i) = 2.303RT/(αF) (1), where η is the overpotential, i is the current density, R is the universal gas constant, T denotes the absolute temperature, F is the Faraday constant, and α is the transfer coefficient. The transfer coefficient (α) for a multiple-electron reaction 72 is given by eq 2, α = n b /ν + n r β (2), where n b is the number of electrons transferred back to the electrode before the rate-determining step, ν is the number of times the rate-determining step takes place in the overall reaction, n r is the number of electrons that participate in the rate-determining step, and β is the symmetry factor (β = 0.5 in this study, assuming that the overpotential is much smaller than the reorganization energy). A Tafel slope value of 31 mV dec −1 for the stretch-annealed catalyst thus suggests n b = 3 and ν = 2, which translates into the second (deprotonation of OH ad to form O ad) and fourth (deprotonation of OOH ad to form OO ad) electron transfer reactions being the sluggish (energetically unfavorable) processes (ν = 2), with the fourth reaction acting as the major rate-determining step (n b = 3). The proposed rate-determining step agrees with the proposed OER energetics on Cu, 73,74 further validating our Tafel analysis. The Tafel slope value varies between 31 and 84 mV dec −1 depending on the pre-treatment of the PVdF-HFP substrate, probably due to the change in the OER energetics and/or the existence of mixed rate-determining steps. On the other hand, positively charged PVdF-HFP (the surface with CH 2 dipoles (δ + eff)) was preferentially exposed to the electrolyte owing to the electrostatic repulsion, which prevents the reduction of Cu 2+ during the electroless deposition process. The electrostatic attraction between δ + eff of the exposed PVdF-HFP surface and negatively charged OH − in the electrolyte may promote (1) the diffusion of OH − toward the electrode and/or (2) an increase in the local pH in the vicinity of the surface. The former facilitates the diffusion kinetics of the reactant (OH −), aided by the unique fibrous structure of the substrate, while the latter improves the reaction energetics of the OER. 40,75,76 The unique interaction between Cu and PVdF-HFP induced by the stretch-anneal treatment may thus strongly influence the activity for overall water electrolysis.
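To make the link between eqs 1 and 2 and the quoted slopes concrete, here is a small sketch that computes the Tafel slope expected for a given (n_b, ν, n_r, β) combination and, conversely, the transfer coefficient implied by a measured slope. It only restates the algebra above; room temperature is assumed, and n_r = 1 is an assumption of this sketch rather than a value stated in the text.

```python
R, F, T = 8.314, 96485.0, 298.15          # J/(mol K), C/mol, K (room temperature assumed)

def transfer_coefficient(n_b: float, nu: float, n_r: float = 1.0, beta: float = 0.5) -> float:
    """Eq 2: alpha = n_b / nu + n_r * beta."""
    return n_b / nu + n_r * beta

def tafel_slope_mV_per_dec(alpha: float) -> float:
    """Eq 1: b = 2.303 R T / (alpha F), returned in mV per decade."""
    return 2.303 * R * T / (alpha * F) * 1e3

alpha_sa = transfer_coefficient(n_b=3, nu=2)          # stretch-anneal case from the text
print(f"alpha = {alpha_sa:.1f} -> b = {tafel_slope_mV_per_dec(alpha_sa):.0f} mV/dec")  # ~30

# Inverse direction: transfer coefficient implied by the measured pristine slope (84 mV/dec)
alpha_pristine = 2.303 * R * T / (F * 84e-3)
print(f"measured 84 mV/dec -> alpha = {alpha_pristine:.2f}")
```

The first line reproduces the ~30 mV dec−1 expected for the stretch-annealed catalyst, close to the measured 31 mV dec−1, while the second shows that the pristine catalyst's 84 mV dec−1 corresponds to a much smaller effective transfer coefficient.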
The overall performance of the water electrolyzer consisting of the bi-functional membrane electrode assembly (MEA), Cu/PVdF-HFP|PVdF-HFP|Cu/PVdF-HFP, with stretch-anneal treatment was significantly improved in comparison with a pristine bi-functional MEA, together with the high (electro)chemical stability for more than 24 h ( Figure 5). The bi-functional membrane electrode assembly (MEA) was synthesized by a simple two-step process, electrospinning, and subsequent Cu electrodeposition, without slurry synthesis and/ or screen printing of the catalyst, which was involved in the conventional MEA manufacturing process. The resultant bifunctional MEA is a single flexible sheet with a thickness of ca. 0.5 mm (Figure 5a,b). The cross-sectional image confirms that the top and bottom Cu-deposited layers (Cu/PVdF-HFP) are tightly attached to the PVdF-HFP layer (middle layer), and it is electrically separated from each other (Figure 5c,d). To evaluate the effect of pre-treatment on the stability and activity of the Bi-functional MEA, a water electrolyzer consisting of bifunctional MEA with and without stretch-anneal treatment was operated under a potentiostatic mode at an applied voltage of 2.5 V in 1 M KOH. As plotted in Figure 5e, both bi-functional MEAs exhibit a slight activity decay in the first 16 h of operation, subsequently representing a stable horizontal line up to 28 h. The cycling stability test suggests that the initial activity decay can be due to the partial aggregation of the Cu particles, which was confirmed by XRD, XPS, and SEM analyses ( Figure S11). A more than 10-fold increase in the current was observed for the bi-functional MEAs with stretchanneal treatment (9.63 mA cm −2 at 28 h) compared to that without pre-treatment (0.63 mA cm −2 at 28 h), demonstrating the outstanding improvement in the overall performance for the water electrolysis by the simple stretch and anneal treatment. CONCLUSIONS In this work, the oxygen evolution reaction activity on Cu in alkaline environments was significantly increased by activating the catalyst−support and reactant−support interaction via simple pre-treatment of the Cu-deposited fibrous PVdF-HFP catalysts. The ferroelectric polarizability of the PVdF-HFP support is successfully tuned by simple pre-treatment, leading to the increased population of the highly polarized β-PVdF-HFP in the following order: stretch, anneal, and stretch-anneal treatment. The electron transfer from Cu to PVdF-HFP was accelerated in line with the polarizability of the PVdF-HFP support, which was supported by the redshift of the ν s (CF 2 ) band and the positive shift in binding energies of Cu II peaks of the ATR-IR and XPS spectra, respectively. The Cu/PVdF-HFP catalyst with stretch and anneal treatment showed the best OER activity among the catalyst tested, suggesting that both the reaction kinetics and energetics of Cu/PVdF-HFP stretch-anneal catalysts were optimal for the OER. The increased OER activity for the Cu/PVdF-HFP stretch-anneal catalyst can be attributed to the (1) facile transport of reactive hydroxide species and increased local pH by the electrostatic interaction between the positively charged (δ + eff ) PVdF-HFP region and hydroxide ions and (2) the acceleration of the rate- determining step of the OER (deprotonation of OOH ad to form OO ad ) by the electron transfer from Cu to the negatively charged (δ − eff ) PVdF-HFP region at the Cu|PVdF-HFP interface. 
The performance of the prototype water electrolyzer consisting of bi-functional membrane electrode assembly was significantly increased by stretch-anneal treatment, further validating the impact of tuning the catalyst−support and reactant−support interaction on the performance of the water electrolysis. The abovementioned interactions can be adjusted by simple pre-treatment with stretch and anneal, leading to aligning the molecular structure and increasing the polarity of the polymer substrate. Furthermore, the proposed pretreatment, as well as the synthesis procedures for the flexible and durable membrane electrode assembly, is simple and scalable, which not only expands the applicability of the water electrolyzer but also opens up a new avenue to fabricate the membrane electrode assembly required for various electrochemical energy conversion/storage devices. Linear sweep voltammograms used for ECSA calculation; EDS mapping of the Cu/PVdF-HFP; schematics of the PVdF-HFP with α-, β-, γ-phases; XRD patterns of the pre-treated PVdF-HFP support; linear sweep voltammograms for Cu/polystyrene (Cu/PS) catalysts; Nyquist plots of the Cu/PVdF-HFP and Cu/PVdF-HFP stretch-anneal catalysts during OER; linear sweep voltammograms and corresponding Tafel slopes for Cu/PVdF-HFP in various KOH concentrations; XPS spectra of the C 1s photoemission lines for pristine and pre-treated Cu/PVdF-HFP catalysts; XRD patterns of Cu/PVdF-HFP before and after the OER; and comparison of the OER activity before and after the cycling test (PDF)
2022-03-16T15:18:00.243Z
2022-03-14T00:00:00.000
{ "year": 2022, "sha1": "83c82804b184cb3ac747d20093b68cd2377043a5", "oa_license": "CCBYNCND", "oa_url": "https://pubs.acs.org/doi/pdf/10.1021/acsomega.1c06128", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "dd214c596e4d75b7fbdeca5ee90e60e126785bbe", "s2fieldsofstudy": [ "Chemistry", "Materials Science", "Engineering" ], "extfieldsofstudy": [ "Medicine" ] }
230795267
pes2o/s2orc
v3-fos-license
Statistical field theory of the transmission of nerve impulses Background Stochastic processes governing voltage-gated ion channel dynamics on the nerve cell membrane are a sufficient condition to describe membrane conductance through the statistical mechanics of disordered and complex systems. Results Voltage-gated ion channels in the nerve cell membrane are described by the Ising model. Stochastic circuit elements called "Ising Neural Machines" are introduced. Action potentials are described as quasi-particles of a statistical field theory for the Ising system. Conclusions The particle description of action potentials is a new point of view and a powerful tool to describe the generation and propagation of nerve impulses, especially when classical electrophysiological models break down. The particle description of action potentials allows us to develop a new generation of devices to study neurodegenerative and demyelinating diseases such as Multiple Sclerosis and Alzheimer's disease, even integrated with connectomes. It is also suitable for the study of complex networks, quantum computing, artificial intelligence, machine and deep learning, cryptography, ultra-fast lines for entanglement experiments and many other applications of medical, physical and engineering interest. Background In 1952 the British physiologists Sir Alan Lloyd Hodgkin (1914-1998) and Sir Andrew Fielding Huxley at the University of Cambridge demonstrated the existence of selective and voltage-dependent ion channels in the nerve cell membrane in five famous pioneering works published in the Journal of Physiology. They received the Nobel Prize for Medicine in 1963 together with the Australian physiologist Sir John Carew Eccles [1][2][3][4][5]. Nowadays, after almost 68 years, the Hodgkin and Huxley models, hereinafter referred to simply as "HH models", remain successful, continue to evolve, and continuously stimulate the development of new topics and branches of physiology and neuroscience [5][6][7][8]. In 1976, the "Patch Clamp" method, developed by Erwin Neher and Bert Sakmann [9], who received the Nobel Prize in Medicine in 1991, demonstrated among other things: 1. Microcurrents. The existence of microscopic electric currents with intensities of the order of pA (picoampere) that flow through each ion channel, transporting on average thousands of ions per millisecond (Fig. 1); 2. Stochastic channels. The stochastic dynamics of opening/closing ("gating") of each ion channel. Methods The recording of the current flow through individual channels (Fig. 1) shows stochastic fluctuations between closed and open states. This is a sufficient condition to define the concept of membrane conductance through the statistical mechanics of disordered and complex systems [10][11][12][13][14][15]. We will therefore start from the basic formalism developed by Hodgkin and Huxley to describe the processes carried out by the conductances of the Na + and K + channels in order to explain the generation of action potentials. The opening and closing of voltage-gated ion channels ("gating") is a physical process that involves complex conformational changes in the structure of each channel, or in the sub-units of which it is composed. Opening a gate is generally called "conductance activation", while closing is called "conductance deactivation" [5][6][7][8]. The Ising model We define gating as an Ising spin variable.
We will therefore consider a distribution of N voltage-dependent ion channels on an elementary region (slice) of a nerve membrane (axon) made of a thin ring of radius ρ ≈ 10 μm and thickness h ≈ 1 nm (see Fig. 2). A population of N ion channels of a certain superfamily (Na+, K+, Cl−, ...) will be distributed on an axon section made of a thin ring (represented in Fig. 2 and topologically modeled in Fig. 3), formally described, for each superfamily of channels, by the Hamiltonian of the one-dimensional Ising model: H = −J Σ_{i=1}^{N} S_i S_{i+1} − ϕ Σ_{i=1}^{N} S_i (1). The border condition (see Fig. 3) is S_{N+1} = S_1, where the S_i are N Ising variables (S_i = +1 corresponds to an open channel state, while S_i = −1 corresponds to a closed channel state, for i = 1, ..., N). The energy of interaction between channels of the same superfamily is represented by the variable J > 0, which we assume isotropic and "ferromagnetic", while ϕ (mV) is the electrochemical driving force, hereinafter called "driving force", ϕ = V − E_γ, where V (mV) is the membrane potential and E_γ (mV) is the equilibrium potential of each superfamily of ion channels. The Helmholtz free energy will be [13] F = −(N/β) ln[ e^{βJ} cosh(βϕ) + ( e^{2βJ} sinh²(βϕ) + e^{−2βJ} )^{1/2} ] (2), and the magnetization will be m = sinh(βϕ) / ( sinh²(βϕ) + e^{−4βJ} )^{1/2} (3), where β = 1/(k_B T), k_B is the Boltzmann constant and T the absolute temperature. We will see shortly what the observables of the Ising model mean in our case, above all the magnetization, which will be our main observable. Before that, it is necessary to discuss a formal issue about the relationship between our stochastic model and the Hodgkin-Huxley model, which will help us to explain our choices. Ising and Hodgkin-Huxley The sigmoid shape of the spin magnetization reproduced in Fig. 4 recalls the activation and inactivation limit functions n∞, m∞, h∞ defined by the HH models (Fig. 5) and the conductances as a function of the membrane potential for the Na+ and K+ channels (Fig. 6). The sigmoid characteristic is typical of a cooperative process, as in the present case. In the HH model, the limit function for the conductance is expressed through the gating fractions α_n(V) and β_n(V) as functions of the potential [5,7,8]. In the present case, the choice of the one-dimensional Ising model follows a different methodological criterion, which we call "congruence", because it expresses a "special" link between the physics of the gating process and its mathematical law in closed form, that is, without using a "metatheory". Therefore, to interpolate the experimental data (Fig. 6), we discard function (5) of the HH model, because it is a "metatheory", and we choose instead the Ising magnetization (3). Results The Ising conductance Recall the expression (3) for the magnetization in the one-dimensional Ising model. According to our methodological constraint, here the magnetization becomes the conductance of the nerve membrane. We thus define the "Ising conductance" g_I as the magnetization, g_I ≡ m. In practice, we will consider the specific conductance (mSiemens/cm²), so that the membrane current per unit area is expressed by Ohm's law, i_γ = g_I (V − E_γ), where V (mV) is the membrane potential and E_γ (mV) is the equilibrium potential of each superfamily of channels. From our model we define a stochastic circuit element which we call, for convenience of reading, the "Ising Neural Machine" (INM), briefly "Ising Machine" (abbreviated as "Ising N-Machines", "INMs" or just "Ising Machines"), and we indicate it with a rhomboid frame icon.² We place the INMs in the single-compartment equivalent circuit of Fig. 7, defined for two superfamilies of ion channels (Na+ and K+).
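For a feel of how the magnetization (3) yields the sigmoid "Ising conductance" and how it slots into the single-compartment circuit of Fig. 7, here is a small numerical sketch. The values of J, β, the reversal potentials and the peak conductances are illustrative (the conductance scale is borrowed from standard HH-like parameters), and the mapping of the magnetization onto a 0-1 gating factor via (1 + m)/2 is an assumption of this sketch, not a prescription taken from the paper.

```python
import numpy as np

def g_ising(phi_mV, J=1.0, beta=0.25):
    """Per-spin magnetization of the 1D Ising ring in a field (eq. 3), used as a
    dimensionless gating factor; J and beta are illustrative, phi = V - E_gamma."""
    s = np.sinh(beta * phi_mV)
    return s / np.sqrt(s**2 + np.exp(-4.0 * beta * J))

# Illustrative reversal potentials (mV) and peak specific conductances (mS/cm^2)
E_Na, E_K, E_L = 55.0, -77.0, -54.4
gbar_Na, gbar_K, g_L = 120.0, 36.0, 0.3
c_m = 1.0                                   # uF/cm^2

def total_membrane_current(V):
    """Ohm's-law currents per unit area, i = g * (V - E), with the Ising gating
    factor mapped to [0, 1] as (1 + m)/2 (sketch assumption)."""
    gNa = gbar_Na * 0.5 * (1.0 + g_ising(V - E_Na))
    gK = gbar_K * 0.5 * (1.0 + g_ising(V - E_K))
    return gNa * (V - E_Na) + gK * (V - E_K) + g_L * (V - E_L)

# Forward-Euler relaxation of the single-compartment circuit of Fig. 7
V, dt = -65.0, 0.01          # mV, ms
for _ in range(2000):
    V -= dt * total_membrane_current(V) / c_m
print(f"steady-state membrane potential ~ {V:.1f} mV")
```

The sigmoid of g_ising sharpens as J grows, mimicking the cooperative HH limit functions, and the relaxation simply shows the two "Ising Machines" plus the leak branch settling the compartment near the potassium equilibrium potential, as expected for a resting membrane.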
Now we want to discuss the problem of the generation of action potentials. In the following, we will for brevity refer to action potentials as spikes. Fig. 5 Characteristic limit ratios as a function of membrane potential in the HH model. The figure on the left shows the limit functions for activating the K+ conductance (n∞) and the activation and inactivation functions for the Na+ conductance (m∞, h∞). The relative time constants (as a function of potential) are shown on the right. (Courtesy of P. Dayan et al. [8]) 1 The concept of congruence was used by the present author in the context of the unconventional statistical calculus system called "SHT", based on the theory of categories and intended for the study of complex and disordered systems (derived and patented between 1997 and 2011 [16]). From a practical point of view, the SHT calculus does not "destructively interfere" with the sample, but analyzes the system sic rebus stantibus, considering also the "junk", the environmental background and the noise. SHT treats the sample as a "dynamic system". It studies maps and transforms, looks for critical points and transitions, bifurcations and attractors. For example, if SHT finds an attractor, it will become a "category" of the experiment. This is precisely our meaning of "congruence". From a statistical point of view, a congruent model has the same properties as a probability density. Any metatheory is not a category of the experiment. In the case of very large and complex systems, the analysis is generally carried out on many logical levels (see for example some works on complex systems and particle physics [17,18]). In other cases, the analysis is conducted by arranging the data on an REM energy landscape and studying the configurations of minimum entropy [19]. 2 As a functional icon synonymous with complexity, we chose the "complex" polyhedron called the icosi-icosahedron, first described by Edmund Hess in 1876. It results from the auto-dual composition of 10 tetrahedra enclosing a dodecahedron, all intersected by an icosahedron. The compound of ten tetrahedra is one of the five regular polyhedral compounds. This polyhedron can be seen as either a stellation of the icosahedron or a compound. Nuons With reference to Fig. 8 [6], which shows the reconstruction of an action potential after Hodgkin and Huxley, 1952d [4,6], we find that an increase in the conductance of the Na+ channels triggers a spike. A flow of Na+ ions enters the nerve cell, causing the membrane potential to depolarize up to the E_Na value. The depolarization activates the (delayed) conductance of the K+ channels, which provokes the escape of K+ ions from the nerve cell, thus blocking the Na+ channels and repolarizing the membrane toward the E_K value ("refractory period"). Since the K+ conductance becomes transiently higher than its rest value, the membrane potential exceeds its negative rest value ("hyperpolarization"), so that
From our point of view, consider for a certain instant t > 0 a single carrier triggering a pair of "Ising machines" (Na + /K + ) housed into an annular section of the nerve cell membrane (See Figs. 2, 9b and e). As it will be clarified below, only one annular section is activated at a time t > 0. At this point, we remind that the activation of the voltage-dependent ion channels involves reversible conformational changes in the membrane of the nerve cell (gating) which configure a structural deformation of the membrane itself. The deformation tends to chase the carrier and to propagate along the nerve axis in the direction of the carrier itself, activating one annular section at a time. The refractory period prevents the spike from propagating backwards and, at the same time, stops the generation of further spikes, in order to fire one spike at a time. This phenomenon shows strong similarities with the concept of polaron by H. Fröhlich [22][23][24][25][26][27] which describes an electron that moves with its field of deformation (see also RP Feynman, 1954, [28,29]). In that case, the carrier together with the induced deformation can be considered as one entity: a quasiparticle called polaron. In the present case, following H. Fröhlich's concept of polaron, we define the spike wave function Ψ spike by exploiting the formalism of the "Produkt-Ansatz" by L. D. Landau (1933) [30] as the following (in kets): Where | φ(r) carrier is the carrier wave function while |field is the field of the Ising Machine and r is the position operator of the carrier along the axon axis in the direction of propagation of the carrier. The total Fröhlich Hamiltonian function of our model will be given by: Where p is the canonically conjugate momentum operator of the carrier of mass m c and H Ising the Fig. 6 Conductances as a function of the membrane potential for the Na + (left) and K + (right) channels. (Courtesy of Purves [6], after Hodgkin and Huxley [2]) Fig. 7 The "Ising Neural Machine". The equivalent single compartment circuit containing two stochastic elements called "Ising Neural Machines", respectively for the Na + channels and for the K + channel s [1]. By "inside" and "outside" is meant inside and outside the nerve cell membrane. The specific capacity of the membrane is indicated as c m while i L , g L and E L indicate respectively the "leakage current " [2] per unit area, the leakage conductance and potential. The currents leaving the two Ising machines are total currents (per unit area) Hamiltonian function (1). In this way, we can interpret the spike as a quasi-particle which represents the carrier together with the induced deformation on the nerve cell membrane. We call this quasi-particle nuon and denote it with the letter ñ. The statistical field theory that foresees the concept of nuon will for brevity be called SFT [ñ]. As a first approximation, if we consider the density of ion channels on the axonal membrane of nonmyelinated axons almost constant [6,[31][32][33][34], we can neglect the composition terms H Carrier*Ising in (9) after the trigger of the first spike, because the process is ruled only by the Ising machines. Therefore, we consider the process of generation and transmission of a spike for a time t > τ, where τ is the generation time of each spike by each pair (Na/K) of Ising Machines. 
If we indicate with s the coordinate along the axis of the axon (which coincides with the axis of the coaxial annular sections), then each section (housing a pair of Na/K Ising machines) is reached at a time t described by the coordinate s = s(t). Therefore, the velocity v = ds/dt of the nuon will be given by the limit of the difference quotient Δs/Δt with Δt ≠ 0. Our choice of the one-dimensional Ising model is thus clear. Finally, we will have

v = lim_{Δt→0} Δs/Δt = ds/dt.

Discussion

The saltatory conduction

An application case of neurological interest is that of the so-called "saltatory conduction" in myelinated axons. Multiple sclerosis (MS) is a serious pathology of the central nervous system (CNS) characterized by a complex of clinical disorders caused by the poor conduction of spikes, as a consequence of damage and/or total or partial loss of the myelin sheath (demyelination) due to inflammation of the axon pathways [6,31-35]. The study of saltatory conduction is therefore crucial to understand and deal with these serious diseases. Saltatory conduction is described by means of "cable theory". Let us now see how some relations derived from cable theory can be interpreted in the context of SFT[ñ]. We can model the myelin sheath as composed of a series of concentric thin cylindrical surfaces of length L, capacity per unit area c_m and thickness d_m, distributed from the radius a_1 of the axon core to the external radius a_2, that is, to the axon radius (see next Fig. 10). We will then have a total capacity C_m (series) given by the following relations [8]:

1/C_m = ∫ from a_1 to a_2 of da / (2π a L c_m d_m) = ln(a_2/a_1) / (2π L c_m d_m),   i.e.   C_m = 2π L c_m d_m / ln(a_2/a_1),

where the myelin sheath extends from the radius a_1 of the axon core to the outer radius a_2, that is, to the axon radius (see Fig. 10d, e). Performing the (linear) cable theory, we obtain the diffusion equation

∂V/∂t = D ∂²V/∂s².

The diffusion coefficient is

D = a_1² ln(a_2/a_1) / (2 c_m r_L d_m),

where r_L is the intracellular resistivity. The optimal value of the internal radius a_1, the one which maximizes the diffusion constant, is a_1 ≈ 0.6 a_2 [8]. In the case of a myelinated axon the propagation velocity is thus proportional to the outer radius a_2, that is, to the axon radius,

v ∼ a_2,   (14)

while for an unmyelinated axon it is proportional to the square root of the axon radius (a_2) [8].

Fig. 8 Reconstruction of an action potential after Hodgkin and Huxley, 1952d [4]. (Courtesy of Purves [6])

Let us show with an example the versatility of the particle description of nerve impulses. Here we exploit the physics of particle accelerators [39,40], because from our point of view the functionality of a myelinated axon is that of a (micro) linear particle accelerator (μLINAC). The myelinated regions behave like Faraday cages (drift tubes), while at the gaps of the nodes of Ranvier there is a non-zero electric field that provides the acceleration of the particle along the axon (see Fig. 10f). During the acceleration the velocity increases monotonically. In the i-th drift tube the velocity v_i is reached. Now, considering the effective mass m_ñ of a nuon, we thus have an energy

E_i = (1/2) m_ñ v_i².

From cable theory we deduce that the average velocity is proportional to the radius of the myelinated axon (14). In this way, we can estimate the effective mass and charge of the nuon and the modulus of the electric field at the nodes of Ranvier. This is a crucial result. To explain the biophysical mechanism of demyelinating pathologies we can use the nuon model, because it provides advantages over the "classic" electrophysiological description.
However, the model is congruent with the "classic" description because a spike is the electrophysiological trace and probe of the passage of a nuon. Demyelination, due to the interruption of the paranodal myelin circuits, causes the dispersion of all the ion channels, pumps and exchangers along the axon [6,31-36]. Sodium overload causes axonal calcium to reach toxic levels, and so on [31]. As the conduction velocity in normal conditions (up to circa 150 m/s) is much higher than the velocity in pathological conditions (about 5 or 10 m/s), we can predict that, in pathological conditions, the resultant of the field-forces on the system will contain a finite set of deterministic dissipative fields acting on the demyelinated axon, generating instabilities and losses (which we can picture as arising in the damaged drift tubes of our μLINAC). This model can also offer an operative tool characterized by self-similarity and reproducibility properties for polytype diffusion, since the etiology of the disease is presumably caused by a pathological (inflammatory) process that affects the whole body. Knowledge and measurement of these dissipative fields can therefore lead to significant progress in the study and treatment of neurodegenerative and demyelinating diseases. Furthermore, our considerations on the dissipative field model can be used to define a special circuitry intended to integrate the equivalent models (see Richardson [41]).

Fig. 10 a ... [36]. b Courtesy of Prof. Peter Brophy [37]. c Transmission electron micrograph of a myelinated axon; the concentric myelin layer surrounds the axon of a neuron, with cytoplasmic organelles visible inside [38]. d, e The nodes of Ranvier and the myelinated regions of an axon represented as an equivalent circuit within an intercompartmental model, modified after P. Dayan et al. [8]. f Schematic of our μLINAC model, in which an electric field acts at the nodes of Ranvier and accelerates the nuons, while the myelinated sections behave like drift tubes.

Conclusions and possible insights

In this work we found a particle description of action potentials, based on considerations of statistical mechanics of complex and disordered systems, independently of classic electrophysiological models such as Hodgkin-Huxley (HH). Nevertheless, as soon as we consider the action potential as the electrophysiological trace of the nuon, we have the opportunity to exploit a full dualism of points of view and formal descriptions in order to describe the generation and propagation of nerve impulses, especially when classic electrophysiological models break down. In this case, SFT[ñ] is a powerful tool that allows us to use the techniques and results of theoretical and general physics [42,43]. As we have just pointed out in the previous paragraph for the case of saltatory conduction, it is advantageous to exploit the dualism by performing both representations. But we expect the dual representation to be useful in many other cases as well. Functional Magnetic Resonance Imaging (fMRI) can be integrated with specific hardware devices and algorithms currently employed in particle physics in order to obtain real-time velocity field maps, even guided by connectomes [44]. A detailed, integrated, real-time imaging is therefore suitable to study a non-active area of the brain (i.e. in the presence of ischemia, injury, stroke, neurodegenerative pathology or tumor), by considering an "activity" tensor dependent on the nuon frequencies and fluxes defined on a dendritic density field [45].
Furthermore, the study of the activity tensor within the particle model may help to explain evolutionary puzzles related to multiple sclerosis that are difficult to solve with electrophysiological models (see [46]). Other possible applications will exploit "nuon coding" [7] to study and develop complex networks, quantum computing, artificial intelligence, machine and deep learning, cryptography, ultra-fast lines for entanglement experiments, and so on. A particle model of synaptic transmission, based on a "nuon number" conservation law, can also be derived and will be the subject of a future work.
Development of technology and methods for detecting metal inclusions in composite materials In complex multilayer structures made from PCM, metal inclusions of small sizes (from 0.1 ÷ 0.2 to 15 mm) randomly distributed throughout the material (at depths up to 100 mm), are unacceptable for normal operation, as they can penetrate into the material structure of a finished product. This paper is aimed at developing a device that provides a small error in determining the coordinates of small-sized metal inclusions in PCM when they are detected in real conditions of production and operation. Some devices capable of detecting the content of small particles in fluids or capable of detecting metal objects in various environments are known. The disadvantages of these devices are that they can only detect magnetically active particles, or large objects–more than 3–6 mm, while the location accuracy is recorded with a big error, insufficient for the detection process of small metal inclusions in PCM–from 30 mm and higher. This determines the urgency of developing a device for detecting small metal inclusions in finished products from PCM and in the technological cycle of their production. In this paper, the basic principles of the developed device are described, a block diagram of the device is presented, including the configuration of the eddy current transducer and the main processing units of the signals coming from the transducer. As confirmation of the operation of the developed device, photos of experimental studies and their results are presented in the form of the obtained dependences of the value of metal inclusions on the depth of their occurrence, from which it can be seen that the error in determining the depth of small-sized metal inclusions by the developed device did not exceed 10% and unchanged for all small-sized metal inclusions. Introduction Small-sized inclusions in multilayer products made of polymer composite materials (PCM), for example metal particles that are not included in the design of products, are not allowed, due to the specifics of their operation. However, in production there is a probability of such metal particles entering the PCM (sizes from 0.1 ÷ 0.2 to 15 mm) randomly distributed throughout the material (at a depth of up to 100 mm), which is unacceptable by technical conditions. Thus, the urgent need for the detection of small metallic inclusions in PCM products is in the development of a diagnostic device for a wide range of PCM products. As studies have shown, this problem is currently not completely solved due to a small size, shape uncertainty and physical characteristics of the particles, their random location in the material [1][2][3][4][5]. Methods The process of testing and implementing the control technology includes the following steps: modeling the control process, choosing the optimal modes, creating equipment, testing and implementing the control methodology. Due to the complexity of the experimental process of testing the technology for detecting metallic inclusions in PCM (installation of defects in it, etc.), studies using modern 3-D models reliably simulate real research on the basis of modern mathematical apparatus and powerful computing technology play an important role. 
Figure 1 shows a generalized structural diagram of a mathematical model of the process of nondestructive testing (detection of defects in the material) based on the analysis of the distortion of the physical field after its interaction with the controlled material and inclusions (defects) [6][7][8]. Here: U in is the input function of the magnetic field depending on the magnitude and frequency (f V ) of the excitation current (I V ) of the exciting coil of the eddy current transducer; W mat , W incl are transfer functions of the material and inclusions, respectively, depending on the characteristics vectors Ѳ , Ѳ of the material and inclusions, respectively; U out is the output function of the magnetic field depending on the transfer functions of the material (W mat ) and inclusions (W incl ), as well as f V . This device detects metallic inclusions and marks the coordinates of each of them (if the smallsized metallic inclusions are not too close to one another). A device [9] for determining the content of small particles in fluids is known. Its disadvantage is that it is capable of detecting only magnetic particles. A more universal device [10] is designed to detect metal objects in various environments, but it only detects the presence of large objects (more than 3-6 mm). The accuracy of the location is recorded with a large error, insufficient for the process of detecting small metal inclusions in PCM. A decrease in the error in determining the location of small-sized metal inclusions in the PCM became possible with the advent of new technologies in the field of electronics, control, and informatics [11] and the development of the corresponding software for mathematical modeling and microprocessor data processing in real time [12]. This problem can be fundamentally solved by other non-destructive testing (ND) methods, for example, X-ray, thermal or ultrasound. However, this did not give the desired results [2]. In the present study, a device that provides a small error in determining the coordinates of smallsized metal inclusions in PCM in real production and operation conditions is described. The technical result achieved by using the developed device is to increase the reliability of detection of small metallic inclusions by introducing several measuring inductors in the induction transducer and, as a result, rejecting the resonant circuits used in known devices. This is due to the fact that the induction converter system, which includes the use of resonant circuits, is not able to solve the problem due to the influence of temperature changes, noise and interference . The device ( Figure 2) contains a generator 1, the signal of which is amplified by an amplifier 2 and fed to an exciting coil 11.1 with a diameter 2Rv of an eddy current transducer. The radius R V is selected taking into account the maximum thickness of the product from composite material where Т OT is the maximum thickness of the object under study, and measuring coils (11.2, 11.3, 11.4, .., 11.N), coaxially located in the same plane with the exciting coil, the outputs of which are connected to the inputs of the switch. Figure 2. 
Block diagram of the device: 1 -generator; 2, 4, 5 -amplifiers; 3 -signal switch; 6, 7 -synchronous detectors; 8 -two-channel analog-to-digital converter; 9 -block signal processing and synchronization; 10 -indicator; 11 -eddy current transducer; 11.1 -exciting inductor of the eddy current transducer; 11.2, 11.3, 11.4 -the first, second and third measuring inductor, respectively; N -"N-th" measuring coil; 12 -object of control. The signals of the measuring coils through the switch 3 are fed to amplifiers 4, 5 and then to the synchronous detectors 6, 7 (which also receive the reference signal from the generator through a twochannel analog-to-digital converter (ADC) 8), the outputs of which are connected to the synchronization and signal processing unit 9, indicator 10 is connected to the output of block 9. The number of measuring coils of the eddy current transducer and their radii are determined using the estimated depth and size of the metal inclusions and the necessary error in determining their location. In terms of design, generator 1 is combined with block 9, the harmonic oscillation generator frequency ω is selected from the condition µ 20 25, where is the minimum possible conductivity of the supposed metal inclusions in a composite material under study ( µ is the magnetic constant of the vacuum). Experimental studies were carried out on PCM samples ( Figure 3) with artificial metal inclusions of different sizes located at different depths relative to the control surface. Figure 4 shows the dependence of the device readings on the size of metal inclusion in the PCM. The experiment was carried out in accordance with [17]. Results and discussion As can be seen from Figure 4 the proposed device can detect small inclusions up to 2 mm in size at a depth of 30 mm, which is several times smaller than the known devices. Figure 5 shows that the error in determining the depth of small-sized metal inclusions by the developed device does not exceed 10% and is constant for all small-sized metal inclusions. For prototype devices, the error in determining the depth is approximately 1.5-2.2 times greater and reaches 20%. Conclusion The developed device is capable of detecting small-sized metal inclusions with sizes from 0.1 ÷ 0.2 to 2 mm at a depth of up to 30 mm in PCM products with an error of less than 10% and, thereby, improves the quality of PCM structures due to timely and reliable location detection metal inclusions.
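A brief aside on the frequency condition quoted above: the strength of the eddy-current response of a small metal inclusion depends on how the skin depth in the inclusion material compares with the inclusion size. The standard skin-depth relation (a general electromagnetic fact, given here for orientation rather than as the paper's exact design criterion) is

\[ \delta = \sqrt{\frac{2}{\omega\, \mu_0\, \mu_r\, \sigma}}, \]

so, for a given inclusion size, the generator frequency ω has to be matched to the minimum possible conductivity σ of the expected inclusions, which is presumably why that conductivity enters the stated condition for ω.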
Preparedness among Family Caregivers of Patients with Non- Communicable Diseases in Indonesia Background: Family caregivers spend 24 hours a day looking after and assisting patients. However, they are not always adequately prepared for all the problems they face. There is a lack of evidence exploring caregivers’ preparedness among family caregivers of patients with non-communicable diseases in Indonesia. Purpose: This study aimed to identify caregivers’ preparedness among family caregivers of patients with non-communicable diseases. Methods: This cross-sectional study was conducted on 120 Indonesian family caregivers for patients with non-communicable diseases, who were selected using a purposive sampling technique. Data were collected using the Indonesian version of the Preparedness for Caregiving Scale (PCS) which had been validated before its use. The possible scores of this tool ranged from 0.00 to 4.00. The higher the score, the more prepared the family caregivers were. Data were analyzed using one way ANOVA. Results: Family caregivers reported the feeling of moderately prepared for caregiving. The score of family caregivers’ preparedness for patients with diabetes, cancer, and chronic kidney disease were 2.97±0.42, 2.83±0.40, and 2.89±0.49, respectively with a possible range from 0.00 to 4.00. There were no differences in the preparedness among family caregivers of patients with non-communicable diseases (p=0.387). Conclusion: Caregivers’ preparedness is an essential element of patient care. Nurses have to be proactive in assessing each family caregiver’s preparedness to enhance the quality of life of both the family caregivers and the patients themselves so that they can be empowered as a source of nursing care. The family caregiver is an individual who looks after patients as an extension of the health care provider, and who provides care related to the functional status of family members suffering from an illness (Given, Given, & Sherwood, 2012). They can be the spouse, parents, daughters or sons, or other relatives (Effendy et al., 2014). The studies conducted in East Java (Werdani & Silab, 2020), and Yogyakarta and Central Java (Sari, Warsini, & Effendy, 2018), Indonesia, showed that the patients have their nuclear family as their support system. Taking care of NCD patients has been transformed from curing the disease to offering comfort and a better quality of life. This situation is a challenge for family caregivers who take responsibility for caring for patients who suffer from NCDs (Rha, Park, Song, Lee, & Lee, 2015;Wolff & Jacobs, 2015). The challenge is that family caregivers spend 24 hours a day helping and assisting patients with their physical and psychological conditions, as well as financial and autonomous problems (Effendy et al., 2014;Machado, Dahdah, & Kebbe, 2018). The study conducted by Sari et al. (2018) on 178 family caregivers of advanced cancer patients in Yogyakarta and Central Java showed that the burden was higher for family caregivers who spent more time each day looking after their sick family members. The complicated problems among family caregivers are usually not balanced with their preparedness (Maheshwari & Mahal, 2016). 
Their preparedness includes how ready the family caregivers see themselves for the tasks and roles demanded from them when looking after family members who suffer from illness, including the provision of physical care and emotional support, preparing support services at home, and compensating for the burden of responsibility (Gonzales, Polansky, Lippa, Gitlin, & Zauszniewski, 2014;Petruzzo et al., 2017). It is also about dealing with the stress of the care process (Gonzales et al., 2014). Less-prepared caregivers feel anxious about the caring process, feel burdened, stressed, and have mood swings (Carter, Lyons, Stewart, Archbold, Scobee, 2010;Grant et al., 2013;Schumacher, Stewart, & Archbold, 2007). Furthermore, they have poorer health than caregivers who are better prepared (Ahn, Hochhalter, Moudouni, Smith, & Ory, 2012). In contrast, well-prepared caregivers with appropriate skills and knowledge feel happy about the care they provide; they have better hope (Henriksson, Pearson-r values higher than 0.320, and the Cronbach's alpha coefficient value was 0.933. I-PCS consisted of eight questions with five answer choices using a Likert scale ranging from 0 (not at all prepared) to 4 (very well prepared) and one open question about the specific preparedness desirable in the caregiving process. The possible score ranged from 0.00 to 4.00. The higher the score, the more prepared the family caregivers were. Data collection The family caregivers for cancer and CKD who met eligibility criteria were identified through the ward manager based on the medical record. Meanwhile, the family caregivers for diabetes were identified through data from the public health centre by cadres in that area. They were fully informed about the study's aim and signed the informed consent after they were identified as potential respondents. Then, the family caregivers completed the instruments, including the socio-demographic and caregiver preparedness questionnaires. The completed forms were corrected and clarified again to the respondents before they were processed and analyzed. Four research assistants administered the data collection. Data analysis The Statistical Package for Social Sciences (SPSS) version 21 software package (IBM SPSS, Chicago, IL, USA) was used for data entry and analysis. Descriptive statistics were used to summarize the demographic characteristics and caregivers' preparedness. The Shapiro Wilk normality test was used to describe the normality of the numerical data. The result showed that caregivers' preparedness in each group had a normal distribution (p>0.05), so a one-way ANOVA test was used to assess the differences on caregivers' preparedness for cancer, diabetes, and CKD patients. A p-value of <0.05 was considered to be significant. Ethical issues The Health Research Ethics Committee, Faculty of Health, Universitas Jenderal Achmad Yani Yogyakarta, approved all the materials and protocols used in this study (Number: SKep/05/KEPK/II/2020). Family caregivers were fully informed about the aims of the study. They signed an informed consent form and were informed that they could withdraw from the study at any time. They were also assured that all collected data would be kept confidential. Demographic characteristics of the respondents The respondents' characteristics are shown in Table 1. There were 40 consenting family caregivers for each disease included in the final analysis. 
The mean age of the family caregivers for diabetes, cancer, and CKD patients was 48.26±15.13, 39.54±12.30, and 47.95±12.17 years old, respectively. The majority of family caregivers were female for diabetes and male for cancer and CKD, Moslem, and married. Most family caregivers for diabetes and CKD were spouses, and for cancer, they were parents. Most of them had a senior high school education, and a low-income level. Only 85.0% and 80% had ever received health education about diabetes and CKD, respectively, while 82.5% had no health education for cancer. The majority of the treatment experienced by diabetes patients' caregivers was in seeking medical treatment (80.0%), while it was chemotherapy for cancer caregivers (40.0%), and hemodialysis for CKD caregivers (100%). They all had good health and had been taking care of the patients for approximately a minimum of two months up to two years. The specific desirable preparedness in the caregiving process is shown in Figure 1. From this result, it can be concluded that financial preparedness is the principal preparedness that is desirable by the family caregivers (63.0%). The comparison of caregivers' preparedness of NCD patients The comparison of caregivers' preparedness among family caregivers for diabetes, cancer, and CKD is shown in Table 3. There were no differences on the caregivers' preparedness among family caregivers for diabetes, cancer, and CKD (p=0.387). (Otto et al., 2020). The previous study in Ohio family caregivers showed a lower range of preparedness for the admission phase than this current study (2.65±0.78). However, during the post-discharge phase, the score escalated and had the same range as this current study (2.97±0.72) (Mazanec et al., 2018). In the Asian context and especially in the Indonesian culture, there is a large family structure called an extended family (Subandi, 2011) with a strong bond between each other (Subandi, 2011;Yoon, Kim, Jung, Kim, & Kim, 2014). Although NCDs require a caregiving process, it is still considered to be a "normal condition" for people in Indonesia. Looking after sick family members, such as by providing personal care, daily need, and health management (Kaye, Harrington, & LaPlante, 2010) is, in Indonesian culture, accepted as a duty that should not be questioned (Funk, Chappell, & Liu, 2011;Kristanti et al., 2017). To be a caregiver for their loved ones suffering from illness is natural. This condition makes the family caregivers feel more prepared to look after their family members, so they become more confident in doing this (Vellone et al., 2020). This study demonstrates a contrasting result with Maheswari & Mahal (2016) for 226 family caregivers of cancer patients in India. The mean of their preparedness was at a low level (13.56±2.8) with a possible range from 9.00 to 22.00. A lack of caregivers' preparedness was also an issue for Italian family caregivers who cared for heart failure patients. Their PCS score was 2.13±0.77 (Petruzzo et al., 2018) and 2.11±0.76 (Vellone et al., 2020) with a possible range of 0.00 to 4.00. Contrary to this current study, a study of Chinese family caregivers for stroke patients demonstrated a considerably low score for their preparedness (M=4.42 of 32.00), indicating that the family caregivers were not well prepared (Liu et al., 2020). The low level of preparedness occurred due to family caregivers' inadequate training for their caregiving skills and education (Maheswari & Mahal, 2016). 
The significant factors that affected the low preparedness were low educational background and caregiving experience. The low educational level could affect the family caregivers' ability to communicate effectively with the health care providers. The higher the degree of education, the greater the preparedness since they had a more excellent opportunity to improve awareness and expertise and gain more accurate caregiving information (Liu et al., 2020). Surprisingly, there were no differences on caregivers' preparedness for diabetes, cancer, and CKD patients in this current study. It means that all the family caregivers who look after family members suffering from chronic illnesses have the same moderate preparedness. The moderate level of preparedness means that the family caregivers feel prepared but, on the other hand, also need help in certain situations. This may happen because all chronic illnesses, including cancer, CKD, and diabetes, have the same problems that must be faced by a family caregiver. The problems include physical and psychological aspects (Effendy et al., 2014;Machado et al., 2018). The family caregivers must prepare for caring process, such as preparedness to provide physical care, emotional support, support services at home, and compensation for the burden of care resulting from the caring process (Petruzzo et al., 2017). Interestingly, the cancer family caregivers had the lowest preparedness compared to the others in this study. Uncertainty about cancer is considered a significant source of psychological distress (Guan, Santacroce, Chen, & Song, 2020). Besides this, the degree of severity of the disease also influences the caregivers' preparedness (Liu et al., 2020;White, Barrientos, & Dunn, 2014). The family caregivers felt severe pressure, burdened, and anxiety about their patients' disease. They could not predict whether the healthcare team would provide help, which would have a significant impact on the caregivers' preparedness (White et al., 2014). The additional question (item number 9 of the I-PCS) showed that the family caregivers want to be better prepared for the financial aspects of illness. The family caregivers in this study faced financial problems because they had low-income levels. Although they received some funding from National Insurance programmes (i.e., BPJS or KIS), there were still other expenses that the insurance could or would not cover. These expenses, such as for specific drugs, specific diagnostic procedures, accommodation and other needs, such as food, occur during the process of seeking treatment (Kristanti et al., 2017). This study has limitations such as having no data about what kind of caring the family caregivers give to their loved ones. The kind of caring would be valued data for comparing the caregivers' preparedness on each disease. The data in this study were collected at one-time period, so any dynamic changes could not be evaluated. However, this study is relatively heterogeneous because it captures three problems and has a low level of missing data indicating the accurate preparedness score. CONCLUSION In conclusion, caregiver preparedness is an essential element of care. Caregiver's preparedness in this study was in moderate level. The healthcare team needs to screen the preparedness of family caregivers because this is a critical step as they are an excellent source for optimized quality of care. As family caregivers also play an essential role in
Are E-Journals Used Effectively in NIT Goa? : National Institute of Technology (NIT) Goa is one of the premier institutes of the Government of India. It strives to provide quality education and facilities to its students. The NIT Goa Library is one of the units that maintains all these facilities for its stakeholders. It is a continuously growing library: its collection keeps expanding while it provides the best possible facilities to its users. The present research paper aims to find out what types of e-journals are available in NIT Goa, how they are accessed by its stakeholders, and whether the stakeholders of the Library face difficulties while accessing e-journals. Introduction This century is the era of electronics and information. Information is the air we breathe. This plethora of information is astounding, and one can't help but gaze at it in bewilderment. We have to capture it and make it available to our users, and this task is surmountable only via the Internet. The library is a symbol of humanity's collective memory. Information and communication technology, the Internet, and the web have resulted in a scenario in which we have more and more data on the web, but less and less information and knowledge. In the digital world, new types of information exchange, such as e-books and e-journals, are emerging. E-journals are another well-known phenomenon. Electronic journals are scholarly journals or intellectual publications that can be accessed by electronic transmission. They are also known as e-journals and electronic serials. In practice, this implies that they are generally made available on the Internet. As a result, in this current era of technology, it is rational to assume that practically everyone has access to e-journals, owing to the fact that the Internet, in its purest form, is essentially free. One only has to have access to any piece of technology that allows them to surf the web, such as personal computers, personal digital assistants (PDAs), or even mobile phones, which are now considered vital items in our daily lives. National Institute of Technology (NIT) Goa: The National Institute of Technology Goa, abbreviated as NIT Goa, is the premier technical institute in the state of Goa. NIT Goa was established in the year 2010 along with 9 other NITs established across India, and it is declared an "Institute of National Importance". NIT Goa is an autonomous institute functioning under the aegis of the Ministry of Education (MoE), Govt.
of India.NIT Goa can be rated as the best NIT among the newly established NIT"s due to its striving excellence in academic and research activities NIT Goa Library Library in NIT Goa serves as a beautiful treasure house of knowledge to the students, faculties and research scholars who are the members of this institution.It was established in the year of 2011.It has the qualitative documents and books on Science, Technology, Engineering, Economics & Finance, Management, Professional Communication and Ethics other subjects.Apart from standard textbooks, the library has a rich collection of digital magazines, international journals, newspapers and other dailies.The main features of NIT Goa Library include:  Library housekeeping activities are done by using a library automation software. Library reminds and alerts to its users on their transactions and assists them in searching database to save the time. Library has an Electronic journal database which gives access to popular journal publishers like-ACM, Springer, IEEE Xplore and Science Direct. It has a repository of previous year institution question papers, Gate question papers and Digital Magazines. Library assists researchers in obeying the copyright rules and following the ethical pattern for research publications by providing the access of anti-plagiarism tool. According to Pullinger, David and Brain, Schkel.(1990), "An E-journal is one whose input text may be entered directly by a computer or by other file transfer mechanisms in a machine-readable form, whose editorial processing is facilitated by a computer and whose articles are thus made available in the electronic form to readers" Significance of this study In this era of information explosion, an increasing number of publications are becoming Web-based.The majority of science and technology libraries have altered people's perceptions of their purposes and offerings.The goal of this study is to examine the challenges or constraints that various stakeholders at NIT Goa have while utilising e-journals, as well as to identify their recommendations for improving Ejournal use for academic purposes. Furthermore, in this era where the internet plays a pivotal role in our lives, various avenues that are available to us are often neglected, owing to the fact that we are overwhelmed by everything that is available.Hence this study also aims to highlight this negligence with respect to E-journals and also shed light on how effectively this leviathan resource is being utilised. Objectives of this study The main objectives of the present study are as follows: 1) To ascertain the level of awareness among various stakeholders about the existence of the E-journals and about E-journals subscribed by the library of NIT Goa. 2) To explore the use of electronic journals. 3) To find out the purpose and utilisation of E-journals by students, faculty and research scholars.4) To analyse the frequency of usage of E-journals by students, faculties, and research scholars.5) To find out the hindrances and problems encountered by the stakeholders while accessing and using E-journals.6) To study the satisfaction level of users about availability and coverage of online journals. 
The study explores the accessibility, availability, understand ability, sources, user friendly, 1) Accessibility Accessibility plays a vital role in determining how well E-Journals are being utilised at NIT Goa.Accessibility simply means "the quality of being able to be reached or entered".It could also be construed as geographic accessibility, which suggests how easily the client can physically reach the resource. E-Journals should be widely accessible to the students, faculty and other stakeholders.This could essentially be surmised as providing affordances to the stakeholders, such that they could make use of E-Journals with ease.The degree to which a system is operable in a given period or interval of time is called availability.Whereas accessibility speaks about the affordances and if the affordance allows an entrance into the system. Availability encompasses when the resource can be accessed.Is the system for accessing E-Journals available throughout the day?Is it available only during the hours when the institute is open?These are some of the important questions that must be answered.Another important question would be that when the system is available, who has the prerogative to access it? 2) Understandability The stakeholders may have access to E-Journals and the systems to access E-Journals may even be available, but if the users do not understand E-Journals, it is all in vain.It is like attempting to throw a rock into the abyss to see if it hits the bottom. Understandability is the ability to be understood, essentially meaning that when a stakeholder gets hold of an E-Journal, they not only blindly go through it but intently peruse through it, understanding what has been read.The various stakeholders must be provided with opportunities to equip themselves with the knowledge necessary to understand E-Journals. User Friendly A system is said to be user friendly if its human users find it easy to use.If the affordances that have been provided to the user are not user friendly, it will deter the user from accessing the resource.The convenience of the user takes top priority.Hence the affordances provided must be simple and easy to use. 3) Sources Researchers require journals of different publishers that involve international and local publishers for their research activities.There are many hurdles often faced by educational institutions to provide access to the required journals as some may allow only limited members while others may charge a hefty payment for access. 4) Membership Membership in various e-journals is extremely crucial for an academician's life.Throughout his course of study or research, getting access to the required e-journals plays a vital role. As a developing institution, providing membership to popular e-journal websites and gathering funds to provide the paid version of facilities to students and researchers have a pivotal role in their academics.More membership given to other less popular e-journal websites can ensure that the endusers get a chance to read more and thereby learn more seamlessly. 5) Awareness Every student pursuing an academic life should know about the importance of e-journals and how e-journals can be significant for their learning curve.From the start of their professional course life, they should be familiar and get accustomed and moreover, develop a habit to access ejournals in their day-to-day life. 
Faculties and research scholars can encourage their students to read more e-journals when they publish new ones, or by making them do projects/assignments based on these papers. 6) Notifications Proper notification facilities to e-journals can improve the visibility of the journal hosted and it would indirectly give a boost for the readers to read any related journals.Getting the notifications via email/push notifications has to be ensured on an institute basis. In the first stage of research, I personally approached some of the members of the college library and the labs to understand how E-journals are accessed.Some of the ideas furnished in this section is from the personal experiences after visiting.There are many avenues that can be used to access E-journals in the NIT-Goa campus, such as using college intranet, taking subscriptions, or using the openly available journals on the internet.Using college intranet requires that every student come to college and use their LAN ethernet cable.Another option would be to use the college WiFi and visit the digital library.The digital library Volume 12 Issue 9, September 2023 www.ijsr.net Licensed Under Creative Commons Attribution CC BY is a research database like IEEE Xplore Digital Library, ScienceDirect, SpringerLink, and ACM Digital Library.Each of the aforementioned digital libraries is accessible and pre-logged in when using the college intranet, allowing research scholars and students to access paid journals for free via the college intranet network.Although the IEEE Xplore digital library enables access to research articles via institute email IDs, NIT Goa has not yet subscribed and availed this service.If NIT Goa chooses to employ this service and offers institute email to every student, then every student will be able to read research papers without having to use the college intranet network, which will allow them to conduct research off campus as well. The second stage of research involved interviewing a random population of stakeholders to understand their awareness about E-journals and their utilisation.The stakeholders involved were students, faculty and research scholars.This stage served as the cornerstone, on which the questionnaire of the next stage was devised. Starting with the students, the random population that was interviewed was a mixed bag with respect to the notion of Ejournals.While some seemed complacent as they downright denied having any knowledge about E-journals, some others seemed to have misconceptions about what E-journals are.Some students had a fair idea about E-journals and some others were treading a path that would lead them to publishing their own research papers some day.It wouldn"t be a farce to state that the level of ignorance was alleviated by the fact that most students had a fair idea about Ejournals.As a matter of fact, some were also using them on a regular basis. 
On the other hand, coming as no surprise, the faculty and research scholars seemed rather adept with E-journals.They have indeed imbibed the true essence of E-journals.They not only access and use E-journals rather efficiently, but most have also published several papers on reputed Ejournals.The survey was populated among BTech, MTech and PhD students of NIT Goa.The survey was successful and highly opinionated responses were obtained through the survey.71.1% of BTech students, 25% of MTech students and 3.9% of PhD scholars participated in this survey.Regarding the usage of E-journals, different sections of people had varied opinions.Some of the people use Ejournals daily, some often, and some had never used ejournals for their day to day research activities.9.2% percentage of the participants always use E-journals in their day to day research activities and project works.40.8% of students occasionally use E-journals from the institution and 50% had never used E-journals.It is debilitating that only 9.2% of students use E-journals on a daily basis, which shows the dearth of awareness and popularity among the student community.This will in turn have an adverse effect on the research output and the quality of publications.The gist is that this would lead to an abominable use of a valuable resource. Figure 3 Ignorance and apathy seems to run deep as we pour over the fact that 65.8% of the participants in the survey have absolutely no idea what an E-Journal is.This general lack of awareness gives rise to several serious questions and the various implications that arise, need to be addressed. Paper ID: SR23918155600 DOI: 10.21275/SR23918155600Of the people who are aware of E-journals at NIT Goa, it is alarming to discover that 61.8% of students do not use the library nor the intranet facility provided by the institution.This scenario arises partly due to lack of briefing about the resources in the campus, or the participants might not have unleashed the true essence of E-journals in their academic activities.38.1% percentage of students access the library and utilise intranet facilities to access journals.This gap can be bridged by proper briefing about resources, awareness about significance of E-journals and encouraging them to do assignments and class work with the use of published top level journals. Figure 5 It is disappointing to observe that 43.4 % of the participants are not able to get the required journals in NIT Goa through library/intranet facilities for conducting their research activities.This will negatively affect the research output and acquiring funded projects to the institute.In order to produce quality research, the institute must allocate sufficient funds to provide membership in required journals, apart from the common funds for the scholars and students to expand knowledge without any inhibitions.When prompt resources are made available, it alleviates the burden of students, which will boost their productivity in actually solving the problem rather than wandering for accessibility to these resources. 
Figure 6 The participants reported that when E-journals are not available in library/intranet the following avenues were explored by them to access E-journals:  Some of the participants used pirated E-journal accessing sources like Scihub to access the journals that they required. Some of the participants used freely available journals from the internet. Some of the participants used open source journals from the internet. Some of the participants only used college intranet to access journals. When the question of how participants learned about Ejournals was probed, the answers were unexpected.The belief that the most common source would be the faculty was shattered.On the contrary, the faculty contribution was a miniscule 9.2%.It was observed that the most common resource was actually the internet itself.It is refreshing to see how self-reliant the participants are.Furthermore, it shows that the generation has come ahead leap and bounds.10.5 % learned about E-journals from the college website.Even the peers seem to have played a significant role in an individual"s acquisition of this knowledge, and their contribution was a substantial 19.7%. Figure 7 The dereliction of E-journals has several repercussions.On the other hand, usage of E-journals gives impetus to several outcomes.It is fortunate to observe the statistics, as 32.9% are using E-journals regularly for their ongoing research activities.This populace includes a higher proportion of PhD scholars and Postgraduate students.The majority of the usage was for Academic activities and Project related works. Paper ID: SR23918155600 DOI: 10.21275/SR23918155600When it is enquired if the students get access to popular Ejournal websites like IEEE, ACM, Springer etc in NIT Goa, 22.4% said they have access and the 35.5% said that they do not.The institution needs to provide access to these Ejournals as it will help to better expose the students to a plethora of subjects and different types of E-journals. Figure 9 The participants rated their ease of access to E-journals on a scale of 1-10.The distribution of the rating was highly varied.It was noted that 7.9 % rated a point of 9 for their ease of access.Around 60% percentage of the students found it difficult to access E-journals in the campus , and it showed the lack of availability.The distribution of the rating is displayed below: The students are asked if they were ever briefed about Ejournals, and the results were astounding.It was observed that 32.1% percentage of students did not receive any briefing about E-journals during their course curriculum. There is a need to bridge this gap for a better utilisation of E-journals so that it can develop research interests in students and thereby generate quality research work.Proper induction sessions need to be conducted for the freshersophomore years to imbibe the awareness about the journal facilities in the campus. Conclusion and Recommendations The survey was conducted successfully among students, scholars and the faculty community in NIT Goa and both genuine and enthusiastic responses were obtained after the survey.The need to develop E-journal resources in the campus is crucial and will eliminate a big gap between the research community and fellow academicians of tomorrow. 
The following recommendations are a culmination of responses from the participants of the survey and our own suggestions:  Provision of proper and functioning Wi-Fi facility in hostels and library were requested by a proportion of the participants, as it will help students to spend more time surfing E-journals, books, lab works, etc.  The speed of the internet should be high in the college, hostels and other places where the internet can be accessed. Journals covering new and upcoming research topics need to be made available to the students. There have been multiple suggestions from students to conduct awareness class so that all the student get to know more about E-journals  It has been suggested that the awareness of E-journals can be increased by putting up posters on notice boards and by circulating new releases and interesting articles on WhatsApp groups and social media. Another popular recommendation was the creation of social media groups solely dedicated to spread E-journals only through admin access  Creation of a mobile app to access E-journals was also an innovative suggestion by a portion of the participants Figure 1 : Figure 1: Qualification census of the participants It is observed that 46.1% percent of students are aware about E-journals but 53.9% percent of students have no awareness Figure 2 : Figure 2: E-journal awareness response from participants
Hunting a wandering black hole in M31 halo using GPU cluster In the hierarchical structure formation scenario, galaxies have experienced many mergers with less massive galaxies and have grown larger and larger. On the other hand, the observations indicate that almost all galaxies have a central massive black hole (MBH) whose mass is ~ 10−3 of its spheroidal component. Consequently, MBHs of satellite galaxies are expected to be moving in the halo of their host galaxy after a galaxy collision, although we have not found such MBHs yet. We investigate the current-plausible position of an MBH of the infalling galaxy in the halo of the Andromeda galaxy (M31). Many substructures are found in the M31 halo, and some of them are shown to be remnants of a minor merger about 1 Gyr ago based on theoretical studies using N-body simulations. We calculate possible orbits of the MBH within the progenitor dwarf galaxy using N-body simulations. Our results show that the MBH is within the halo, about 30 kpc away from the center of M31. In addition, further simulations are necessary to restrict the area in which the MBH exists, and hence to determine the observational field for the future observational detection. The most uncertainty of the current MBH position is caused by uncertainty about the infalling orbit of the progenitor dwarf galaxy. Therefore, we have performed a large (a few 104 realizations) set of parameter study to constrain the orbit in the six-dimensional phase space. For such purpose, we have already investigated in detail a few ten thousand orbit models using HA-PACS, a recently installed GPU cluster in University of Tsukuba. Astrophysical Background and Motivation In the context of hierarchical structure formation scenario under the Cold Dark Matter (CDM) universe, large galaxies, such as Milky Way or Andromeda galaxy (M31), have likely experienced many mergers with less massive galaxies and have grown larger and larger. Furthermore, a well known observed correlation between the mass of spheroidal component of galaxies and the mass of central massive black holes (MBHs) in their central region, so called the Magorrian relation [1,2], suggests the coevolution of galaxies and their central MBHs. However, little is known how the coevolution of galaxies and MBHs proceeds. In the hierarchical structure formation scenario, galaxies collide and merge with each other and subsequently less massive galaxies and their central MBHs drift around the halo region of their host galaxy. In other words, MBHs move in the halo of their host galaxy after galaxy merging events, and finally they sink towards the central region of the host galaxy due to dynamical friction. Therefore, MBHs also locate outside the nucleus of their host galaxy, not only in the central region of galaxies like active galactic nuclei. However, we have not found such MBHs yet. Thus, searching for such MBHs is very hot issue recently [3,4]. In this study, we investigate the probable position of such an MBH theoretically. Many cosmological N -body simulations of the hierarchical structure formation exhibit a wealth of merger remnants around host galaxies [5]. To test the current cosmology by verifying such theoretical predictions from the CDM scenario, many observational studies have been examined to investigate merger remnants [6,7]. In such a context, the giant stellar stream was discovered in the south region of the M31 halo [8], which is our neighbor galaxy. 
Further photometric and spectroscopic observations of spatial distribution [9,10,11,12,13,14,15], radial velocity distribution of red giant stars [16,17,18,19,20,21,22] and metallicity distribution [17,18,12,20,21,22] clearly show other substructures near M31. Calculations of the motion of test particles under the gravitational potential of M31 [16,23] and N -body simulations on the interaction between the progenitor of the stream and M31 [24,25,26,27] suggest that the stream, the northeast shell and the west shell are the tidal debris formed in the last pericentric passage of a satellite on a radial orbit. These models reproduce the observed features and successfully constrain the orbit and properties of the progenitor. The Magorrian relation suggests that the progenitor dwarf galaxy has an MBH whose mass M BH is about 10 −3 of the mass of their host galaxy's spheroidal component M sph [1,2]. The similar relation between M BH and host galaxy's velocity dispersion σ, M BH − σ relation, is held down to M BH ∼ 10 5 M [28,29]. Therefore, the relation between MBHs and their host galaxies is held down to M sph ∼ 10 8 M . Since the dynamical mass of the progenitor is estimated to be an order of 10 9 M [25, 26,27], the progenitor likely has an MBH whose mass is up to an order of 10 6 M if the progenitor consists of spheroidal stellar component alone. If this is true, then an MBH should be now moving in the merger remnants. Finding such an MBH will provide us a hint for understanding the coevolution process of galaxies and MBHs. Thus, we investigate the current position of the MBH using N -body simulations, in view of future observational detections. Numerical Modeling of the Interaction between M31 and the Infalling Satellite The basic equation of N -body simulation by direct summation is Newton's equation of motion expressed as where G is the gravitational constant, m i , x i and a i are mass, position, and acceleration of i-th particle out of N particles, respectively. The gravitational softening parameter , introduced to avoid divergence due to division by zero, eliminates self interaction when calculating gravitational force. We use the word i-particles, and j-particles to denote particles feel gravitational force, and particles cause gravitational force, respectively. We assume a fixed potential model (Hernquist bulge [30], exponential disk, and NFW halo [31]) for M31 [32,25], because Mori & Rich [26] analytically and numerically showed the dynamical response of M31's disk against the collision with the progenitor is negligible. They represented the progenitor and M31 using N -body particles and focused on the thickness of the M31's disk due to the disk heating by dynamical friction after the collision. Their results showed that the effects on the disk thickness and the disk kinematics are negligibly small as far as the dynamical mass of the progenitor is less than 5×10 9 M . In this study, we assume a King sphere of M = 3 × 10 9 M , c = 0.7, r t = 4.5 kpc, the best fit model derived by [27], as the progenitor dwarf galaxy. Finding the Current-Plausible Position of the MBH To investigate the current-plausible position of the MBH in the former satellite, we calculate orbital evolution of an MBH particle whose mass is 3 × 10 6 M and 524, 288 N -body particles which represent the satellite. We set the gravitational softening length to be 13 pc. Following .50) km s −1 , respectively. 
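For reference, with the definitions given in Section 2 (masses m_j, positions x_j, softening parameter ε), the direct-summation acceleration of Eq. (1) presumably takes the standard Plummer-softened form

\[ \mathbf{a}_i = \sum_{j=1}^{N} \frac{G\, m_j\, (\mathbf{x}_j - \mathbf{x}_i)}{\left( |\mathbf{x}_j - \mathbf{x}_i|^{2} + \epsilon^{2} \right)^{3/2}}, \]

in which the j = i term vanishes identically because the numerator is zero while the denominator remains finite, which is what is meant above by the softening eliminating self-interaction.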
We calculate the self-gravity of the N-body particles using Blade-GRAPE on the FIRST simulator at CCS, the University of Tsukuba. We use a 2nd-order Runge-Kutta integrator and a shared, adaptive time step.

Constraining the Uncertainty of the Current Probable Position of the MBH

Further simulations are necessary to restrict the area in which the MBH exists, and hence to determine the observational field for a future observational detection. The largest uncertainty in the current MBH position is caused by the uncertainty in the infalling orbit of the progenitor dwarf galaxy. Therefore, we have performed a large set of parameter studies to constrain the orbit of the infalling satellite which reproduces the observed structures in the six-dimensional phase space. Since the number of dimensions of the parameter space, six, is too large to sweep the whole parameter space, the number of dimensions is reduced as follows. First, we fix the initial distance of the infalling satellite at 7.63 kpc from the center of M31 (corresponding to the scale radius of the DM halo [25]). In addition, M31 is modeled as an axisymmetric system in this study. The resultant number of dimensions therefore becomes four; however, the parameter space is still large. To study such a wide parameter space, we have performed a large parameter survey utilizing a GPU cluster. In this parameter survey, we represent the infalling satellite with 65,536 particles and set the gravitational softening length to 50 pc. The numerical simulations of the parameter study are performed on HA-PACS at CCS, the University of Tsukuba. We use a 2nd-order leap-frog integrator and a shared, fixed time step. The algorithm and implementation of our code are explained in Section 4.

General Purpose computing on Graphics Processing Unit and HA-PACS

Since performing a four-dimensional parameter study is a challenging task, we need to accelerate the calculation and sweep the wide parameter space concurrently to complete the parameter study in a realistic time. In recent years, the GPU (Graphics Processing Unit) has become one of the most attractive accelerators due to the development of GPGPU (General Purpose computing on GPU). A C/C++ based programming environment named CUDA (Compute Unified Device Architecture), provided by NVIDIA, enables programmers to implement GPU codes that run on NVIDIA's GPUs quite easily [33]. Furthermore, many GPU clusters appear on the TOP 500 list [34], such as Titan, Tianhe-1A, Nebulae, TSUBAME 2.0, and HA-PACS. The rapid performance increase of GPUs and the development of such GPU clusters support the acceleration of numerical simulations. This preferred feature of GPU computing strongly encourages us to perform the four-dimensional parameter study of the infalling orbit of the satellite on a GPU cluster.

In this study, we have used HA-PACS (Highly Accelerated Parallel Advanced system for Computational Sciences), a newly installed GPU cluster at the University of Tsukuba [35]. HA-PACS is equipped with high-end GPUs and CPUs connected by PCI-Express generation 3.0. Each node of HA-PACS consists of two sockets of Intel Sandy Bridge-EP and four boards of NVIDIA Tesla M2090, and the CPUs support full-bandwidth connection of the GPUs without any performance bottleneck. The peak performance of HA-PACS is 1.604 PFLOPS in single precision, owing to its high-performance GPUs delivering 1.427 PFLOPS in single precision. Table 1 lists other detailed information on HA-PACS.
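To make the kernel-level comparison in the following paragraphs easier to follow, here is a minimal, hedged CUDA sketch of a shared-memory tiled direct-summation kernel of the kind discussed below; the kernel name, the assumption that the particle number is a multiple of the block size, and the variable names are illustrative and do not reproduce either the CUDA SDK sample or the actual kernels being compared.

```cuda
// Illustrative shared-memory tiled direct-summation kernel (not the actual code).
// Assumes gridDim.x * blockDim.x == n and n is a multiple of the block size.
#define NTHREADS 256

__global__ void gravity_kernel(const float4 *pos,  // x, y, z and mass of particles
                               float3 *acc,        // output accelerations
                               int n, float eps2)
{
    __shared__ float4 jbuf[NTHREADS];              // one tile of j-particles

    const int i = blockIdx.x * blockDim.x + threadIdx.x;
    const float4 xi = pos[i];
    float3 ai = make_float3(0.0f, 0.0f, 0.0f);

    for (int tile = 0; tile < n; tile += NTHREADS) {
        jbuf[threadIdx.x] = pos[tile + threadIdx.x];   // load tile into shared memory
        __syncthreads();                               // tile is complete

        for (int j = 0; j < NTHREADS; ++j) {           // innermost loop (unrolled in practice)
            const float4 xj = jbuf[j];
            const float dx = xj.x - xi.x;
            const float dy = xj.y - xi.y;
            const float dz = xj.z - xi.z;
            // r^2 + eps^2 expressed as three FMAs (cf. the discussion of Listing 1 below)
            const float r2   = fmaf(dx, dx, fmaf(dy, dy, fmaf(dz, dz, eps2)));
            const float rinv = rsqrtf(r2);
            const float mr3  = xj.w * rinv * rinv * rinv;   // m_j / (r^2 + eps^2)^{3/2}
            ai.x += mr3 * dx;
            ai.y += mr3 * dy;
            ai.z += mr3 * dz;
        }
        __syncthreads();                               // done with this tile
    }
    acc[i] = ai;   // G is assumed to be folded into the particle masses here
}
```

The single pair of synchronizations per tile shown here is the simplest arrangement; the optimizations described next (a larger unroll count, the FMA form of the r2 calculation, and separating the global-memory load from the shared-memory store) refine this basic structure.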
In both implementations, a block contains 256 threads, and the shared memory stores the position data of 256 j-particles to minimize the access time to the global memory within the innermost loop. The differences between [37] and [40] are the unroll count of the innermost loop, the cache configuration, and the number of operations needed to calculate the gravitational interaction. In our implementation, the unroll count of the innermost loop is 128, compared with 32 in the CUDA SDK, and we set "L1 cache preferred" since experiments exhibit a slight performance increase compared with "shared memory preferred" in most cases. The last difference, the most influential one, is due to the way |r_ji|^2 + ε^2 is calculated. In both implementations, a float3 variable rji, a float variable eps2, and a float variable r2 store the displacement vector r_ji ≡ x_j − x_i, the value ε^2, and the result of |r_ji|^2 + ε^2, calculated as in Listing 1, respectively. The source codes shown in Listing 1 look almost the same; however, the generated instruction sets are quite different. For the implementation of the CUDA SDK, one multiplication and two fused multiply-add (FMA) operations are performed first, and one addition follows at the next step. Thus, its computational cost corresponds to 4 clock cycles according to the CUDA C Programming Guide [33]. On the other hand, only three FMA operations, using 3 clock cycles, are performed in our implementation. Therefore, our implementation should be faster than the CUDA SDK. The most influential point of this optimization is that the innermost loop includes the calculation of r2; thus, this small care directly increases performance.

Furthermore, we have implemented one additional optimization to hide accesses to the global memory. Since the shared memory stores the position data of the j-particles, synchronization of all threads within a block is necessary before and after updating the information of the j-particles. As long as a streaming multiprocessor contains multiple blocks, the memory access time of one block can be hidden by overlapping it with the calculation of other blocks. However, such overlapping might not occur, since the CUDA schedulers determine which executable warp proceeds. Therefore, maximizing the probability of overlap between calculation and memory access helps to achieve high performance. For this purpose, we have separated the load instruction from the global memory and the store instruction to the shared memory by using two __syncthreads() calls. With this careful treatment, the overlapping of memory access and calculation becomes more effective within a block, and the peak performance of our implementation reaches 1004 GFLOPS in single precision.

Results: Where is the Wandering MBH?

The results of Section 2.1 and Section 2.2 are shown in Section 5.1 and Section 5.2, respectively.

Current-Plausible Position of the MBH

The result of the N-body simulation to investigate the current-plausible position of the MBH, when the infalling satellite follows Fardal's orbit [25], is shown in Figure 1. The MBH is close to the apocenter, where it is closer to the Milky Way than the M31 center, which means that the velocity of the MBH is relatively slow and the uncertainty of the current position is smaller than at any other position, such as the pericenter. The distance of the MBH from the center of M31 is about 30 kpc, so it is far away from the disk of M31.
Results of the Parameter Study

The first results, for 34,000 runs (out of ∼10^5 in total) of the parameter study to investigate the infalling orbit which reproduces the observed structures well, are shown in Figure 2. Black circles represent runs that reproduce the stream structure, the shell structure, and the contrast between the stream and the two shells. On the other hand, crosses are runs which failed to reproduce the observed structures. The infalling orbit assumed in earlier studies [25,26,27] has an infalling velocity of −430 km s^−1 and a specific angular momentum of 660 kpc km s^−1 (corresponding to the result shown in Figure 1). Figure 2 shows that the maximum initial infalling velocity is about −300 km s^−1. Furthermore, the escape velocity of the infalling satellite is about −560 km s^−1 at 7.63 kpc from the center of M31; therefore, the minimum initial infalling velocity is expected to be around −560 km s^−1. By completing this parameter survey, the plausible area where the wandering MBH exists will be restricted more tightly.

(Figure 1 caption, partially recovered: ... [23], and white filled circles show the edges of the observed shells [25]. The black circle shows the most probable current position of the MBH. Magenta and blue curves show the position of the MBH when the observed shells are reproduced at the 95.4% and 99.7% confidence levels, respectively. The red curve shows the orbit of the MBH from 910 Myr ago to 320 Myr in the future.)

Summary

We investigate the current position of an MBH moving in the M31 halo. The current-plausible position of the MBH is within the halo, 30 kpc away from the center of M31. To determine the observational field for a future observational detection, we study the uncertainty of the infalling orbit of the satellite by performing a parameter study. Since the required parameter space is large, we developed a highly optimized collisionless N-body code that runs on a GPU cluster, and we have performed a parameter study of the infalling orbit of the progenitor galaxy to determine the observation field for detecting the MBH. The tentative results of the parameter study begin to restrict the possible parameter space of the infalling orbit which reproduces the observed structures well.

(Figure 2 caption: Preliminary results for 34,000 runs of the parameter study for the infalling orbit of the satellite. The horizontal axis is the infalling velocity of the satellite at 7.63 kpc from the center of M31, and the vertical axis represents the specific angular momentum of the infalling satellite. Filled circles correspond to runs that reproduce the stream structure, the shell structure, and the contrast between the stream and the two shells. Crosses represent runs which failed to reproduce the observed structures.)
2019-04-18T13:10:19.195Z
2013-08-12T00:00:00.000
{ "year": 2013, "sha1": "d5126105f3a2ff1c9031069811197f2ebeb686a9", "oa_license": null, "oa_url": "https://doi.org/10.1088/1742-6596/454/1/012013", "oa_status": "GOLD", "pdf_src": "IOP", "pdf_hash": "8342075d29683c128b2660989f18d0a9726576fc", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
234361250
pes2o/s2orc
v3-fos-license
Front-line nurses' responses to organisational changes during the COVID-19 in Spain: A qualitative rapid appraisal

Abstract

Aims To identify the organisational changes faced by front-line nurses working with COVID-19 patients during the first wave and describe how they responded to these changes. Background The COVID-19 pandemic has altered the provision of care and the management of health care around the world. Evolving information about SARS-CoV-2 meant that health care facilities had to be reorganised continually, causing stress and anxiety for nurses. Methods Qualitative study based on Rapid Research Evaluation and Appraisal (RREAL). The research took place in hospital and community health settings of the Spanish national health system with a purposive sampling of 23 front-line nurses. Semi-structured interviews were conducted between May and June 2020. The duration was 30-45 min per interview. We used the Dedoose® data analysis software to perform a thematic analysis. Results Nurses responded to organisational changes using the following strategies: improvisation, adaptation and learning. Conclusion Our rapid approach allowed us to record how nurses responded to changing organisation, information that is easily lost in a disaster such as COVID-19. Implications for nursing management: Knowing about their strategies can help planning for future health disasters, including subsequent waves of COVID-19.

| INTRODUCTION

In the face of the COVID-19 pandemic, all aspects of the provision and management of health care were affected. Spain implemented measures to prevent the spread of COVID-19: quarantine, isolation, social distancing and a stay-at-home order, which were insufficient. Spain was among the countries to suffer the highest mortality in the first wave in Europe and around the world (Sánchez-Villena & de La Fuente-Figuerola, 2020). Health systems should have well-defined plans to maintain control of the situation and to ensure the ability to provide care. If the health system cannot guarantee this, nurses feel abandoned and unsafe (O'Boyle et al., 2006). Health managers should consider these concerns because they can affect pandemic response (McMullan et al., 2016). During the first wave of the pandemic, health systems were disorganised and often lacked organisational support to help nurses cope with the situation.

| BACKGROUND

The increased demand for health care and prioritization of patients resulted in a work overload for health care professionals. The complexity of care due to the lack of knowledge about the virus and its transmission pathways, the scarcity of personal protection equipment (PPE) and the lack of specific treatments for COVID-19 resulted in a marked increase in stress among health care workers (Mo et al., 2020). The need to adapt the provision of services as information on SARS-CoV-2 emerged required rapid changes in care procedures and protocols, which increased nurses' stress and anxiety (Lázaro-Pérez et al., 2020). Nurses had difficulty maintaining a work environment that was ethical and safe-both physically and psychologically-and facing the challenges of the pandemic (Ulrich et al., 2020). In previous pandemics, nurses have shown professional responsibility and ensured patient care despite limited resources (McMullan et al., 2016). Nurses acted in these health disasters despite suffering alarming psychological symptoms, sacrificed their own needs and acted selflessly (Aliakbari et al., 2015).
Despite feeling unprepared to respond to a given health disaster, nurses developed higher-than-expected emergency response skills and a high sense of ethical and professional commitment (Jeong & Lee, 2020). Personal resilience and social and institutional support are protective factors against adversity and stress during health disasters (Labrague et al., 2018). In the COVID-19 pandemic, personal resilience and social support have helped nurses handle stress and have been key to nurses' mental health. High levels of institutional support are protective against the stress and anxiety caused by health disasters such as emerging infectious diseases. Effective leadership among nursing managers helps institutions meet organisational challenges. However, this support was often lacking at the beginning of the pandemic, as health systems were overwhelmed by the flow of patients. There is little information about how front-line nurses respond to changing circumstances, both in health disasters in general and in the case of COVID-19 in particular. Given this scarcity, we investigated nurses' ability to develop and respond to changes in their work environment and the provision of care during the first wave of the pandemic. Our findings can be useful in planning for future pandemics or other health disasters, especially because our rapid approach allowed us to collect data while the crisis was still underway. Understanding the organisational changes that took place and how nurses responded to them can inform planning for future health disasters. The aim of this study was to identify the organisational changes faced by front-line nurses working with COVID-19 patients during the first wave and describe how they responded to these changes.

What is already known about the topic?
• The increased demand for health care and prioritization of COVID-19 patients resulted in a work overload for health care professionals. Effective leadership among nursing managers helps institutions meet organisational challenges. However, this support was often lacking at the beginning of the pandemic, as health systems were overwhelmed by the flow of patients.

What this paper adds?
• Understanding the organisational changes that took place during the COVID-19 pandemic and how nurses responded to them can inform planning for future health disasters.
• Front-line nurses reported developing self-management strategies to find solutions to the organisational changes they faced during the first wave: problem-solving, adaptation and learning.

| Design

A qualitative study was carried out using Rapid Research Evaluation and Appraisal (RREAL) (Vindrola-Padros et al., 2020). The RREAL model is particularly suited to studying health emergencies because it makes it possible to obtain qualitative results in a short period of time (Green & Thorogood, 2013).

| Participants and data collection

Participants were selected based on purposive sampling (Morse & Field, 1995). We used the snowball technique (Naderifar et al., 2017) to recruit nurses from hospital and community health settings who provided care during the first wave of the pandemic in Spain, which took place from March to May 2020. The inclusion criterion was being a registered nurse caring for COVID-19 patients during the first wave in Spain. The exclusion criterion was being on leave from work during this period.
We sent email messages to nurses known to the research team explaining the study objectives, inviting them to contact us by email if they were interested in participating and asking them to forward the message to other nurses. We sent further information and the informed consent document to the potential participants who responded. After they returned the signed consent document, we scheduled an interview via Skype or Zoom. We conducted continuous analysis of the data until reaching saturation at 23 participants. At this point, we considered data collection to be complete. The socio-demographic characteristics of participating nurses are summarized in Table 1. Three team researchers (1, 2 and 3) conducted semi-structured interviews with 23 nurses from different health care sectors from May to June 2020. We asked participants the following questions:

- In your opinion, how has the organisation of the health system changed since the start of the pandemic?
- In your experience, how have these organisational changes affected your tasks and roles and how nursing care is delivered?

The duration of the interviews was 30-45 min, and all interviews were recorded.

| Data analysis

We used Braun and Clarke's (2014) thematic analysis to identify the most frequent topics from the interviews that were relevant to the study objectives. Using the Dedoose® software package, we identified meaning units and grouped them into subthemes and themes. We identified patterns in the data and organised the themes systematically to meet our research objectives, following the steps proposed by Braun and Clarke as detailed in Table 2 (Colorafi & Evans, 2016).

| Rigour

This study meets the criteria of credibility, transferability, dependability and confirmability, which ensure trustworthiness in qualitative research (Polit & Beck, 2017). We took a reflexive stance, considering that three of the researchers (1, 4 and 5) are nurses involved in providing care during the COVID-19 pandemic. (However, they had had no prior contact with the participants). The interviewers took notes on their own impressions and reactions when they interacted with participants in order to take their own positionality into account during analysis. COREQ was used as reporting guidelines in line with EQUATOR (Tong et al., 2007).

| Ethical considerations

The study was approved by the institutional review board of the host university (IRB) (File 5184) and followed the principles of the Helsinki Declaration. The participants received oral and written information explaining that their participation was voluntary and that they could withdraw from the project at any time. We anonymized the interviews by substituting names with an alphanumeric code.

| RESULTS

We identified three themes in participants' reports of their responses to organisational changes and the provision of care during the first wave of the pandemic: problem-solving, adaptation and learning. Each theme contains two or three subthemes (see Table 3).

| Improvisation

Nurses had to find innovative solutions to solve problems arising from the care needs of people infected or potentially infected with COVID-19. The abrupt start of the pandemic required nurses to improvise in order to protect themselves from contagion and to work in new spaces that had been devised for caring for COVID-19 patients.

| Improvisation in the use of protective material

The participants reported using improvisation to protect themselves, given the lack of certified protective gear.
This included both making do with whatever certified equipment was available and making their own equipment out of uncertified materials.

"We've had to learn where everything was, the layout of the space. We were lost because it wasn't only a new facility that we didn't know but also the facility was upside down, since the space had to be organized differently to treat the virus. It was really hard for me to find equipment and things, and that made the work more difficult and caused frustration."

(Table 1: Socio-demographic characteristics of participants)

As these examples show, participants used improvisation to address the organisational challenges presented by the pandemic.

| Adaptation

Because this health emergency created unprecedented pressure on health services, participants had to adapt their work practices in unexpected ways. Participants reported having to adapt quickly to new departments, risks and care protocols.

| COVID-19 care protocols

New information continually emerged about the transmission and treatment of SARS-CoV-2. Participants reported difficulty in adapting so frequently to new protocols.

"We've had about 12 protocol changes, and I understand it, since we have to adapt. But of course before we could adapt to one, it was already changed to another one." (P16 nurse)

The existence of different protocols at different facilities caused a complex adaptation process as a consequence of the confusion, insecurity and lack of trust related to their reliability and applicability.

| Learning

Faced with a lack of knowledge about clinical practice, diagnostic procedures, care pathways, the use of PPE and measures to reduce the risk of contagion, participants reported that they acted proactively to find answers to their questions. They acquired this professional knowledge outside of conventional training, which was generally not available due to the crisis.

| Seeking knowledge

Although some health centres attempted to train professionals, several participants reported that they had to learn on their own. (P3 nurse)

They often shared this knowledge through social media.

"At first in a group we sent each other protocols that we found, actions that must be taken when the case becomes complicated. Even the basic things that no one explained: how to put a patient in prone position, instead of the venti-a basic mask, wearing a Monaghan [type of PPE] because it reduces the risk that you will infect others." (P10 nurse)

As we have shown, learning was a key way that participants responded to organisational changes during the pandemic.

| DISCUSSION

We identified three themes in participants' descriptions of how they responded to organisational changes during the first wave of the COVID-19 pandemic in Spain: (a) improvisation, (b) adaptation and (c) learning. Our analysis contributes to our understanding of the capacity of front-line nurses to develop professionally during health crises (Xue et al., 2020) and especially during the first wave of COVID-19, with implications for nursing management.

| Improvisation

During the first wave of COVID-19, one of the main problems nurses faced was the lack of PPEs. Participants had to maximize the available equipment and, as a result, had to limit their contact with patients, resulting in the feeling that they were offering poorer quality care, as also seen in Rushton and Grady (2020). Other studies have shown that working without the proper protection causes nurses to feel fear, stress (Mo et al., 2020) and a lack of safety (Yin & Zeng, 2020).
To compensate for the lack of PPEs, participants used improvised equipment to protect themselves. In the face of risk, participants found solutions on their own-without institutional support-so that they could keep working.

| Adaptation

Emergency care nurses in China at the onset of the pandemic reported that attitudes such as motivation and enthusiasm helped them adapt to being moved across departments, facilities and even regions to care for people with COVID-19 (Hou, Zhang, et al., 2020; Hou, Zhou, et al., 2020; Lam et al., 2019). Our participants reported being able to adapt quickly to new work environments, overcoming the uncertainty caused by being in a different department or facility or with different colleagues or on a different schedule. The scarcity of PPEs at the beginning of the pandemic was generalized around the world, and health facilities established priorities according to the risk of exposure (Hou, Zhang, et al., 2020; Hou, Zhou, et al., 2020). This lack of PPEs and its effect on patient care has been identified in previous epidemics (Lam et al., 2019). Our participants had to adapt to new protocols for using PPEs. Scarcity caused them to plan their interventions with patients according to the availability of PPEs. This had an impact on nursing interventions, because contact with patients who were infected or potentially infected with COVID-19 had to be minimized to reduce the risk of infection. Participants reported that this necessity gave them the sense that the quality of care was lower. Previous research shows the high degree of commitment and responsibility of nurses during natural disasters (Aliakbari et al., 2015) and in epidemics such as influenza (Lam & Hung, 2013) and Ebola (Pincha Baduge et al., 2017). Participants' ability to adapt to organisational changes, despite risk to their own health and lack of adequate institutional support, points to their commitment to providing patient care. In previous epidemics, emergency room nurses positively evaluated the protocols and clinical guidelines that were updated as information about the pathogen became available. The confusion caused by the lack of knowledge about the pathogen was also identified as an adverse factor at the beginning of a pandemic (Lam et al., 2019). According to Xue et al. (2020), in natural disasters, the lack of clear protocols and clinical guidelines for the everyday work of professionals affects their capacity to make decisions and prioritize care. Our results show that this finding also applies to the COVID-19 pandemic.

| Learning

When health centres could not provide training to nurses, participants learned about the virus on their own. Reinforcing strategies for individual learning is key, but systemic training could be more useful in these situations (Kackin et al., 2020; Yin & Zeng, 2020). Research shows that in previous epidemics such as Ebola, emergency service professionals reported that they had sufficient preparation to offer care to infected people (Pincha Baduge et al., 2017). In Spain, during the first wave of COVID-19, there were insufficient data about SARS-CoV-2 and its transmission pathways. The pace of formal training could not keep up with changing information about the virus. As a result, our participants shared with other professionals the information they acquired. This support and cooperation among co-workers have also emerged in other studies of COVID-19 (Hou, Zhang, et al., 2020; Hou, Zhou, et al., 2020; Sun et al., 2020).
Our results reveal the capacity of nursing teams to learn on their own, given the unavailability of formal training. We have shown that social networking is an additional way that nurses share information with colleagues both within and outside of nursing and locally and internationally. Our analysis reveals nurses' ability to develop professionally during health disasters.

| LIMITATIONS AND FUTURE DIRECTIONS

Our qualitative design means that our results cannot be generalized beyond the study population. To achieve generalizable results, a next step would be to design a mixed-method study that would allow us to examine the statistical significance of our findings. A comparative angle is also necessary to determine whether nurses outside Spain had similar experiences. We should also note that the stress and trauma experienced by some participants could have influenced their responses.

| CONCLUSIONS

Our rapid approach made it possible to capture fleeting information about how facilities were organised and how nurses worked during the first wave of the COVID-19 pandemic. Understanding nurses' ability to respond to organisational changes during the first wave of the COVID-19 pandemic can be useful for redesigning work sites and organisations and implementing the changes needed to ultimately improve staff health and patient outcomes. Participants reported developing self-management strategies to find solutions to the organisational changes they faced during the first wave: problem-solving, adaptation and learning. These results fill a gap in the literature about how nurses deal in their daily practice with organisational changes during a health disaster.

| IMPLICATIONS FOR NURSING MANAGEMENT

Nursing supervisors and administrators can use these findings to improve organisational management policies in health disasters, including subsequent waves of the COVID-19 pandemic. Understanding nurses' ability to respond to organisational changes during the first wave of the COVID-19 pandemic can be useful for motivating and encouraging nursing teams. Obviously, the most important thing health centres can do is plan adequately based on the experience of nurses during this health disaster to ensure that protective gear, spaces, communication and training are adequate.

ACKNOWLEDGEMENTS

We thank the participants who collaborated in this project. We also thank Dr. Susan Frekko for her feedback and for translating the manuscript into English from the original Spanish and Catalan.

CONFLICT OF INTEREST

We have no conflict of interest.
2021-05-12T06:16:53.978Z
2021-05-10T00:00:00.000
{ "year": 2021, "sha1": "a3de8b10c53c65c1a2bbe9c5cb80518a3603fec1", "oa_license": "CCBYNCND", "oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1111/jonm.13362", "oa_status": "HYBRID", "pdf_src": "PubMedCentral", "pdf_hash": "1f5660c60ab3ae5607d37a2b2e44caab49ecdff0", "s2fieldsofstudy": [ "Medicine", "Political Science" ], "extfieldsofstudy": [ "Medicine" ] }
23379322
pes2o/s2orc
v3-fos-license
Longitudinal study on birthweight and the incidence of endometrial cancer

From 1976 to 2004, we followed 71 751 participants of the Nurses' Health Study and identified 676 invasive endometrial cancer cases. Birthweight, assessed in 1992, was not associated with the incidence of endometrial cancer. No effect modification by menopausal status was observed, but statistical power to detect an interaction was limited.

Endometrial cancer is the commonest invasive gynecologic cancer in women in the United States, with nearly 40 000 new cases diagnosed each year (American Cancer Society, 2007). Birthweight and intrauterine exposures have been related to the risk of breast cancer (Michels and Xue, 2006), childhood leukaemia (McLaughlin et al, 2006) and testicular cancer (Michos et al, 2007). However, data are limited regarding the potential influence of early life exposures on endometrial cancer risk. Potential mechanisms for an association between high birthweight and increased breast cancer risk include exposure to elevated maternal pregnancy oestrogen levels (Petridou et al, 1990; Ekbom et al, 1992) and insulin-like growth factor-I levels (Yang and Yu, 2000; Ostlund et al, 2002), both of which also affect the development of endometrial cancer (Ayabe et al, 1997; Adami et al, 2002). Prenatal nutrition may regulate body size later in life by altering the number of adipocytes (Sayer and Cooper, 2005), reprogramming metabolism or influencing leptin resistance (Phillips et al, 1999; Breier et al, 2001). As obesity is an established risk factor of endometrial cancer for premenopausal and postmenopausal women (Adami et al, 2002), birthweight as a marker of prenatal nutrition may plausibly influence its development. Using data from 28 years of follow-up of 71 751 women participating in the Nurses' Health Study (NHS), we examined the association between birthweight and incidence of endometrial cancer later in life.

MATERIAL AND METHODS

The NHS was established in 1976, when 121 700 married registered nurses aged 30-55 years replied to a baseline questionnaire and received questionnaires biennially by mail to update information on demographic, anthropometric, and life style factors, and on newly diagnosed disease. For the current analysis, we excluded women with missing birthweight data, prevalent cases of endometrial or other cancers, and women with hysterectomy at baseline. During follow-up, women were censored if they reported recently diagnosed in situ or invasive endometrial cancer, had a hysterectomy, died or were lost to follow-up. In 1992, we asked participants to report their own birthweight with the following options: <2500, 2500 to <3182, 3182 to <3863, 3863 to <4545, and ≥4545 g. On each biennial questionnaire, participants were asked whether they had been newly diagnosed with endometrial cancer during the previous 2 years. The National Death Index was also routinely searched for deaths among women who did not respond to the questionnaires. For endometrial cancers reported by women or their next of kin for those who had died, permission was requested to review the relevant medical records. Study physicians reviewed all medical records and pathological reports to confirm their diagnosis. Cases included in this study were invasive epithelial endometrial cancers with stage greater than IA in the FIGO staging system. At baseline and during follow-up, we inquired about a variety of personal characteristics including reproductive and life style factors, many being risk factors for endometrial cancer.
Information on age, age at menarche, age at first birth and height was obtained at baseline in 1976. Other early life exposures including premature birth (2+ weeks premature) and duration of having been breast-fed were assessed in 1992, and birth order was queried in 2004. Weight at age 18 was assessed in 1980. Somatotype at ages 5 and 10 was assessed in 1988 by asking participants to choose from nine diagrams that best depicted their figure outline at each age. Maternal vital status and family history of endometrial cancer were asked in 1996. Other covariates were inquired about and updated during follow-up.

The association between birthweight and incidence of endometrial cancer was analysed using a Cox proportional hazards model. Three covariate-adjusted models were pursued. In the first model (model I), we adjusted only for family history of endometrial cancer and other early life exposures including birth order, duration of being breast-fed and premature birth, in addition to age. In the second (model II), we also included other established or potential risk factors for endometrial cancer. In the third (model III), we additionally adjusted for anthropometric factors, including somatotype at ages 5 and 10, BMI at age 18, and current BMI. Though covariate-adjusted models II and III have better goodness of fit due to additional adjustment for potential risk factors of endometrial cancer, these factors are subsequent to birthweight and could conceivably mediate the effect of birthweight on endometrial cancer risk. Therefore, adjusted model I would be the preferred model to estimate the overall effect of birthweight. As birthweight was not assessed until 1992, we conducted a sensitivity analysis restricting follow-up to 1992-2004 and compared the results with the primary analysis using the entire follow-up. We also evaluated potential effect modification by menopausal status, anthropometric factors, and other early-life exposures.

DISCUSSION

In two previous prospective cohort studies, both from Sweden, the risk of endometrial cancer has been investigated in relation to birthweight. Based on 112 cases, the incidence of endometrial cancer in women with birthweight of ≥4000 g was found to be almost half that of women with birthweight of <3000 g (HR = 0.55, 95% CI 0.36-1.17) (McCormack et al, 2005). Based on 73 cases, no significant association was found between birthweight and endometrial cancer (HR = 0.65, 95% CI 0.34-1.24, comparing the incidence in women with birthweight <2500 g to that in women >3000 g) (Lof et al, 2007). With more than triple the cases of all previous studies combined, results from the current study do not suggest an association.

Endometrial cancer involves cancerous growth of the endometrium. Unlike breast epithelium, in which terminal differentiation occurs largely during the first full-term pregnancy, the endometrial lining undergoes repeated division and differentiation throughout the reproductive life of a woman. Though birthweight has been related to a higher intrauterine exposure to oestrogen (Petridou et al, 1990) and IGF-I (Yang and Yu, 2000), the relation of birthweight to the profile of endogenous hormones later in life is less clear. In one study, birthweight was not associated with overall premenopausal sex hormone levels, but was inversely associated with luteal estrone and estrone sulphate (Tworoger et al, 2006). Similarly, birthweight was found to be only weakly associated with serum IGF-I levels in adulthood (Schernhammer et al, 2007).
The statistical power of this study would allow us to detect a modest association, if it exists. Many risk factors for endometrial cancer and other early life exposures were assessed and accounted for in the analysis. Though birthweight was queried 16 years after the start of follow-up, when compared with the analysis restricted to prospective follow-up, results including the entire follow-up of 28 years did not differ appreciably. We have previously compared self-reported birthweight with data derived from birth certificates and found self-reported birthweight to be highly reliable (r = 0.74) (Troy et al, 1996).
2014-10-01T00:00:00.000Z
2008-03-18T00:00:00.000
{ "year": 2008, "sha1": "7aec08606d24024af6a1c8e495a975f8db4083b9", "oa_license": "CCBY", "oa_url": "https://www.nature.com/articles/6604304.pdf", "oa_status": "HYBRID", "pdf_src": "PubMedCentral", "pdf_hash": "7aec08606d24024af6a1c8e495a975f8db4083b9", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
270078488
pes2o/s2orc
v3-fos-license
Exploring Marital Quality in Parents of Children with Autism: Identifying Barriers and Facilitators

The current study aims to examine the factors that facilitate or act as barriers to the marital relationships of parents of children with ASD. In total, 150 parents of children with ASD participated in this study. An online qualitative survey tool was utilized to collect data, which were subsequently subjected to thematic analysis. Through qualitative analysis, three major themes emerged: (1) Psychological and Emotional Experiences, (2) Sense of Partnership, and (3) The Rich get Richer, including sub-themes such as formal support systems, a strong marital relationship prior to ASD diagnosis, and limited family resources. The findings suggest that elements of the marital relationship can serve as valuable resources for parents of children with ASD in coping with the challenges of parenthood. Conversely, the study highlights certain factors that act as barriers to the marital relationship.

Introduction

The Diagnostic and Statistical Manual of Mental Disorders, Fifth Edition, from 2022 defines Autism Spectrum Disorder (ASD) as a neurodevelopmental disorder characterized by persistent deficits in social communication and social interaction across multiple contexts, as well as restricted, repetitive patterns of behavior, interests, or activities. These symptoms must be present in the early developmental period and cause clinically significant impairment in social, occupational, or other important areas of functioning. Additionally, the DSM-5-TR specifies that the symptoms are not better explained by intellectual disability or global developmental delay (American Psychiatric Association 2022). The World Health Organization has determined that 1% of children in the world have ASD (World Health Organization 2021).

Raising a child with autism spectrum disorder (ASD) can significantly impact family functioning, couple relationships, and parental well-being. A systematic review by Desquenne Godfrey et al. (2024) found that families of children with ASD experience more problematic general family functioning and less satisfaction compared to families with typically developing children. Parents of children with ASD typically experience higher levels of stress, anxiety, depression, and caregiver burden than other parents (Bonis 2016; Bozkurt et al. 2019; Khusaifan and El Keshky 2022). In addition, studies have shown that parents of children with ASD use more inefficient coping strategies (Vernhet et al. 2019) and report lower parental competence (Mohammadi et al. 2019).

The challenges of raising a child with ASD are not limited to the relationship between the parent and the child but also affect the relationship between the parents (Hock et al. 2012). The stress of raising a child with ASD can significantly impact couple relationships and parenting dynamics. Systematic reviews have found negative associations between relationship satisfaction and stressors, such as life events, parenting burden, and elevated parental stress levels (Sim et al. 2016). Saini et al.
(2015) conducted a scoping review focused specifically on couple relationships among parents of children and adolescents with ASD. Their findings highlighted predominant themes of strain and disruption in marital relationships, stemming from stresses associated with meeting the needs of the child with ASD. Such stresses included accommodations and lifestyle adjustments, disagreements over treatment and care plans, limited opportunities for quality time or communication between partners, differing approaches to care, and lack of access to needed services. Limited time for the couple relationship was commonly reported by participants across the reviewed studies (Saini et al. 2015).

The severity of ASD symptoms in children does not appear to be a direct predictor of couple relationship quality or marital satisfaction for parents. Research by Desquenne Godfrey et al. (2024) and Saini et al. (2015) did not find a clear link between autism severity itself and poorer family functioning or reduced marital adjustment among parents. However, associated difficulties commonly seen in ASD, such as challenging behaviors, anxiety, and intellectual disability, do correlate with lower family cohesion, adaptability, and overall poorer family functioning (Desquenne Godfrey et al. 2024). Regarding relationship satisfaction specifically, a study by Sim et al. (2016) found no significant association between cognitive functioning in children with ASD and parental relationship satisfaction. The findings on the impact of ASD symptom severity were mixed across studies, with some reporting an inverse correlation with relationship satisfaction while others did not identify a significant relationship.

Research indicates that challenging behaviors exhibited by children with ASD can significantly strain couple and marital relationships. Multiple studies have found more severe behavior problems in children with ASD to be directly related to lower marital satisfaction and relationship quality for parents (Saini et al. 2015; Sim et al. 2016). Parents reporting more intense behavior problems, particularly externalizing behaviors, in their children also tended to experience higher personal stress levels. This elevated parental stress was associated with reduced relationship satisfaction, less spousal support, and lower commitment in the relationship-an effect that was especially pronounced among mothers (Saini et al. 2015). However, some studies did not find a significant difference between relationship satisfaction and the presence of either challenging or adaptive child behaviors (Sim et al. 2016).

Research examining the role of a child's age has yielded mixed findings regarding impacts on couple relationship quality and satisfaction. While some studies found having older children with ASD correlated with lower marital happiness and feeling less closeness in the relationship (Saini et al. 2015; Sim et al. 2016), others reported higher marital satisfaction with older children (Saini et al. 2015). Marital dissatisfaction was more prevalent when children were aged 5-9 years compared to 10-12 years (Saini et al. 2015).

The couple relationship can serve as an important source of support for parents raising a child with ASD. High-quality family relationships and social support have been positively linked to better general family functioning, satisfaction, and cohesion (Desquenne Godfrey et al.
2024).Specifically examining couple dynamics, greater partner support has been shown to positively impact relationship satisfaction among these parents (Sim et al. 2016).The dyadic relationship provides a stage for spouses to support each other through the unique challenges of raising a child with ASD (Brien-Bérard and des Rivières-Pigeon 2023).This spousal support can play a protective role, as relationship satisfaction has been found to moderate the impact of emotional and behavioral difficulties in children with ASD on parental anxiety levels (Khusaifan and El Keshky 2022;Jose et al. 2021). The aim of this qualitative study was to explore the experiences of the marital relationships among parents of children with ASD.Specifically, the primary objective was to elucidate the factors that undermine the quality of marital relationships in parents of children with ASD, as well as those that promote the preservation and enhancement of spousal relationships.The study sought to gain insight into the unique challenges faced by couples in this situation and to uncover strategies that may help them maintain a strong and healthy marital bond.To the best of our knowledge, no such study has been conducted in Israel; in that sense, this study may add unique insight into the facilitators and obstacles of marital relations of parents of children with ASD.Through this research, the study aims to contribute to the existing body of knowledge on ASD and its impact on families, as well as to provide practical guidance for families and professionals who work with this population. Participants Of the 150 parents who responded to the survey questions, a majority were women (82.7%).The average age of parents was 50.19 years.The average age of their children with ASD was 11.65 years.All participants in the study identified as Jewish, with 61.3% selfidentifying as secular, 24% as traditional, and 12.7% as religious.The severity of the child's ASD symptoms was reported by their parents; more than half of the children were reported to have mild ASD (53.1%), approximately one-fifth mild-moderate symptoms (19.0%), 9.5% moderate symptoms, 9.5% had severe ASD, and 8.8% were reported to demonstrate profound symptoms. Measures In the current study, we used a self-created online qualitative survey with two questions, one an open-ended question (Braun et al. 2021) that allowed participants to describe their experiences and perspectives regarding their spousal relationship. The first question asked how having a child with ASD affected their spousal relationship, on a scale of 1-5 (1-significantly harmed the relationship; 5-significantly strengthened the relationship).The second question was an open-ended question encouraging the participants to further elaborate on their response to the first question ("Why does raising a child with ASD affect your relationship with your spouse in the manner you stated?").The items included in the survey were designed to collect information about facilitators of and barriers to the quality of the marital relationship. Procedure The study was approved by the Ethics Committee of the School of Social Work, Bar Ilan University (No. 
092003).The online qualitative survey, based on two questions on a Qualtrics platform, was distributed via social networking, such as Facebook and WhatsApp.A convenience sampling was used to recruit parents of children with ASD for participation in the current study.To enhance transparency and address sample selectivity concerns, it is important to note that while social networking platforms were utilized for recruitment, efforts were made to ensure diversity within the sample.Specifically, recruitment messages were shared across various ASD-related groups and forums on these platforms to reach a broader audience.Additionally, the inclusion criteria for participation were clearly communicated to potential respondents, emphasizing that individuals with diverse backgrounds and experiences were encouraged to participate.Furthermore, the survey introduction provided participants with essential information about the study, including its purpose, voluntary nature, and assurance of confidentiality.Participants were also informed that their responses would be anonymized and used solely for research purposes. Data Analysis Thematic analysis was employed to identify major themes in response to an openended question (Braun and Clarke 2012).Initially, the authors independently conducted a thorough reading and coding of all the answers.The generated initial codes were then organized and categorized into potential themes.In the subsequent stage, the two researchers engaged in collaborative discussions to address any coding discrepancies and achieve consensus on the themes.Concurrently, a third researcher with expertise in disability research, who was not involved in the current study, independently reviewed the codes and provided valuable insights and revisions to the coding and themes.Following deliberation between the third researcher and the authors, a consensus on the themes was reached.Finally, the authors grouped the main themes to present the findings in a clear and comprehensive manner Results The first question asked participants to rate the way having a child with ASD affected their marital relationship.A minority of participants (25.2%) stated that having a child with ASD negatively harmed their relationship, and 7.2% cited significant harm (18%).Another 34.5% maintained that having an ASD child strengthened their relationship, and 15.1% noted significant strengthening of their marital relationship.The remaining 18% of participants stated that having a child with ASD did not affect their relationship. No significant associations were observed between various personal characteristics and the perceived impact of ASD on spousal relationships.These characteristics included the age of the child and parents, parent gender, severity of the child's disability, and religiosity. The analysis of responses to the open-ended question yielded three main themes: (1) psychological and emotional experiences, (2) a sense of partnership, and (3) the rich get richer (see Figure 1).These themes and their sub-themes are presented here. Psychological and Emotional Experiences Many parents mentioned and detailed their psychological and emotional challenges as parents of children with ASD.These fall under four sub-themes: (1) growth from adversity, (2) feeling pressure, (3) feeling hopelessness, and (4) tension between spouses. 
Growth from Adversity

Participants spoke of the emotional process they underwent following the diagnosis of their child with ASD, and the subsequent personal and marital growth as a result of experiencing a new perspective to life. Participants felt that this strengthened the meaning of their marital relationship, placing it at the center as an important and significant source of support. As one participant stated: "New challenges mainly in everything that is related to children and dealing with these challenges, empowers and strengthens the relationship." (Participant 73) Another example was: "Caring for the child makes you look for solutions and strengthen the marital relationship." (Participant 68)

Feeling Pressure

Participants shared that the pressure that accompanied the diagnosis of ASD contributed to damaging the marital relationship. For example, one participant described: "The load, the pressure, the worry, the disappointment, the day-to-day difficulties damage the relationship." (Participant 22) Similarly, another participant stated: "Greater pressure creates a pressure cooker and nerves that come out on each other." (Participant 33)

Feeling of Helplessness

The third sub-theme of psychological and emotional barriers to the marital relationship referred to the sense of helplessness stemming from the child's ASD diagnosis. For example, in the context of marital relationships, one participant describes how the diagnosis affected her marital relationship: "Depression, dissatisfaction, lack of joy, lack of hope, lack of trust." (Participant 59)

Tension between Spouses

A few participants felt that the child's diagnosis with ASD weakened the marital relationship due to tensions that arose around having a child with ASD. One participant described a scenario where: "One side accepts and accommodates, and the other side still does not." (Participant 69). Another example can be seen in the following quote: "… [having a child with ASD] increased the disputes and the tension between us."

Sense of Partnership

Many participants spoke of elements of their relationship that better equipped them to handle the challenges of parenting children with ASD, as well as those elements that threatened their relationship. These included the sub-themes: (1) complementary parenting, (2) common goal, (3) breaking the marital alliance, (4) differences in attitudes, and (5) uneven distribution of responsibility.
Complementary Parenting

This theme describes couples who effectively distribute responsibilities between them for caring for the child with ASD. These couples aim to fulfill the many roles required of parents of children with ASD; each spouse takes on a role, and together they try to meet all the requirements of raising their child. As one participant remarked: "I learned to trust him (the husband) more because there are things in which he has more emotional abilities than me and also physical and vice versa-it's quite mutual." (Participant 56)

Common Goal

The participants contended that a mutual motivation to take care of their ASD child strengthened the marital relationship: "Difficulty makes us stronger, we do everything together in order to give the child as many tools as possible." (Participant 83) Another participant stated: "We both understood that we needed to act on a common front so that our child would be better, ready to accept him as he is and work together to promote him in cooperation." (Participant 100)

The first two sub-themes of the sense of partnership theme addressed positive factors that facilitated better marital relationships. The following three sub-themes address barriers to the marital relationship addressed by parents.

Breaking the Marital Alliance

Some participants felt that raising and caring for a child with ASD had created a rift that weakened the marital alliance. This may occur when the child with ASD takes the place of the spouse within the relationship and makes it difficult for a marital relationship to exist. For example: "Because of his lack of independence (the child with ASD) and the need to sleep with him, my husband and I do not spend evenings together." (Participant 22) As well as: "I mediated the world to my daughter as much as was needed, in an instinctive manner. My husband felt that a symbiosis was created between us that left him and the other children on the outside." (Participant 55)

Differences in Attitudes

Participants expressed how differences in attitudes and opinions created tension between spouses. As one participant stated: "The differences of opinion, the differences in views of life, different education methods, the feeling of the burden of taking responsibility and worrying about the child's advancement, zero free time for myself and of course for our relationship, more taking control, less trusting that the other party will do it, and do it the way I believe it should be done. These revealed the gaps and reduced the cohesion that existed before." (Participant 32) Similarly, one participant described how the marital relationship was damaged due to the diagnosis of the child with ASD: "Because of the differences of opinion, the being ignored and the denial." (Participant 44)

Uneven Distribution of Responsibility

Participants discussed the strain in the marital relationship caused by a lack of partnership in tasks related to raising and caring for the child with ASD. As one participant shared: "We sometimes had fights because only I take him (the child with ASD) to treatments and my husband is not willing to get involved." (Participant 4) Another said: "Because most of the burden is on me, there is no consideration of me or understanding that the child has special needs." (Participant 1)

The Rich Get Richer

The theme "The Rich Get Richer" highlights how the presence or absence of resources, such as formal support systems, a strong pre-existing marital bond, and adequate family resources, can significantly impact the way couples cope with the challenges of
parenting a child with Autism Spectrum Disorder (ASD). The availability of these resources can provide couples with the necessary tools and support to effectively navigate the demands of raising a child with ASD, potentially strengthening their marital relationship. Conversely, a lack of resources can strain the relationship, as couples struggle to meet their child's needs while neglecting their own as a couple. This theme comprises three sub-themes: (1) formal support systems, (2) a strong marital relationship prior to ASD diagnosis, and (3) limited family resources.

Formal Support Systems

The importance of a formal support system as a resource that contributes to the strengthening of the marital relationship was presented by some of the participants. Many study participants recalled the multitude of formal support systems that assisted them after their child's diagnosis and have provided support in dealing with complex situations that arise from caring for a child with ASD. One participant said: "[In] Special education there is someone to talk to, there is support, there is availability, there is more coordination between the school and the home-therefore it is also easier to be in a healthy relationship because everything is clear." (Participant 49)

A Strong Marital Relationship Prior to ASD Diagnosis

This sub-theme addressed couples who had a strong initial marital foundation, that is, those who enjoyed a prosperous and meaningful relationship before their child was diagnosed with ASD and maintained these qualities after the ASD diagnosis. One participant noted: "Our partnership was excellent and strong before, so the child with ASD strengthened it to a certain extent because the relationship was already strong before he was born. Our child affected our partnership in that we now share struggles that are only ours, and only we feel and understand. Even if the environment accommodates and supports us, at the end of the day it's our pain and coping together and we can really share it only with one another." (Participant 54) Another example of this was the following observation from a participant: "Our relationship is strong, and even if at first it (the relationship) took a slight hit, later on we got stronger and talked more openly about taking care of what is needed." (Participant 53)

Limited Family Resources

Some couples shared that lack of resources in the face of the growing demands required in raising a child with ASD adversely affected the marital relationship. These participants stated that they gave preference to the needs of the child before the needs of the marital relationship. For example, one participant stated: "Since it requires a lot of physical and mental resources, time and dedication to allow the child to become an independent adult who can reach his potential and find his place socially. There is not much room left for anything else." (Participant 8) This is also evident from the words of another participant: "We think about the needs of the child and less about ours as a couple, and most of the resources are directed to him, including mental strength." (Participant 19)

Discussion

This study identified and explored the factors that may facilitate or act as barriers to the marital relationships of parents of children with ASD. The examination of the marital relationship holds significance due to its potential as a valuable resource in coping with the challenges associated with parenting a child diagnosed with ASD (Brien-Bérard and des Rivières-Pigeon 2023; Brown et al. 2020).
As noted by Saini et al. (2015), there is a need for comprehensive investigation pertaining to the marital relationship among parents of children diagnosed with ASD, encompassing diverse populations characterized by variations in cultural backgrounds, socioeconomic status, ethno-racial composition, and structural factors. In this context, our study contributes to the existing body of research by offering cultural insights specific to the Israeli population.

The study's findings can be understood through the lens of the Conservation of Resources (COR) theory (Hobfoll 1989). The theory offers a comprehensive framework for understanding the relationship between stress and managing personal resources. The central premise of this theory is that individuals strive to acquire, maintain, and protect their valued resources. Stress occurs when individuals experience or anticipate a threat to their resources, an actual loss of resources, or a lack of resource gain following an investment of resources. Conversely, well-being is achieved when individuals have access to sufficient resources and can effectively manage these resources.

One interesting finding in our study was that almost 50% of participants stated that having a child with ASD strengthened their marital relationship. This finding might appear unexpected given that some studies suggest higher divorce rates among parents of children with ASD compared to the general population. Other research suggests that parents of children with ASD were more likely to be married compared to parents of children with other disabilities, and children with multiple disabilities were more likely to live in single-parent families (Saini et al. 2015). However, this finding of the potential for a positive effect on the quality of the relationship supports previous articles in the academic literature indicating that having a child with ASD can improve parental growth, resiliency, enrichment, compassion, and emotional maturity (King et al. 2012; Meleady et al. 2020).

The first theme in the current study concerned the psychological and emotional experience of parents of children with ASD. Negative experiences expressed by participants, including feeling pressure, hopelessness, and tension between the spouses, were countered by expressions of positive effects on the marital relationships, such as growth from adversity. Previous studies have also shown that parenting children with ASD can produce elements of growth, including empowerment and personal strength, existential perspective, spiritual-emotional experience, interpersonal growth, and professional growth (Phelps et al. 2009; Waizbard-Bartov et al. 2019). These findings align with COR theory, which posits that during challenging life circumstances, individuals can experience concurrent processes of resource loss and resource gain (Hobfoll 1989, 2011).
The second theme found in this study was that a sense of partnership can be a key factor in the quality of the marital relationship. When parenthood is complementary and parents have a common goal, the sense of partnership can facilitate a higher quality of marital relationship. On the other hand, when spouses struggle to maintain the marital alliance, when they exhibit differing attitudes, and when there is an uneven distribution of responsibilities between them, the lack of a shared sense of partnership can potentially damage the quality of the relationship between parents of ASD children. This supports previous studies on parents of children with ASD that reported the communication between the couple to be a key factor in maintaining a marriage (Gupta et al. 2023). Likewise, dyadic coping of parents of children with disabilities, which includes facing the challenges as a team, can have an important role in preserving the marital relationship (Brien-Bérard and des Rivières-Pigeon 2023).

The findings from this theme highlight how the presence or absence of a unifying parental partnership acts as either a resource protective factor or a risk factor for resource depletion, respectively, aligning with the COR theory's principles regarding the conservation and investment of key resources during challenging life circumstances (Hobfoll 2011).

A strong marital partnership with shared goals and responsibilities can be viewed as a vital interpersonal resource for parents navigating the challenges of raising a child with ASD. When spouses exhibited a complementary parenting approach with common objectives and an equitable division of responsibilities, this fostered a sense of partnership, a dynamic the theory suggests represents a crucial resource gain that can help offset resource losses experienced by these parents.

Conversely, when parents struggled to maintain a cohesive marital alliance due to differing attitudes or an imbalance in caregiver duties, this undermined their interpersonal resource of partnership. The theory posits that such resource loss begets further depletion, potentially exacerbating other losses and hampering parents' ability to cope effectively.

The final theme maintained that formal support systems and having a strong relationship before the child's ASD diagnosis can be the facilitator of a higher quality of marital relationship. On the other hand, having limited mental, emotional, and/or physical resources can be a barrier to the marital relationship. Other studies have also found that formal support systems, such as support from professionals, can help preserve a positive marital relationship (Brien-Bérard and des Rivières-Pigeon 2023; Solomon and Chung 2012), and that good communication and having a previously strong marital foundation based on common expectations can help parents of children with ASD keep their marriage strong (Ramisch et al. 2014).

This theme exemplifies the gain spiral and loss spiral concepts central to COR theory (Hobfoll 2011). The finding that formal support systems and strong pre-existing relationships facilitated marital quality represents initial resource gains that enabled further resource accrual, the gain spiral dynamic. Conversely, limited personal resources acting as a barrier to marital quality reflects how initial resource deficits can precipitate cascading loss spirals across life domains like relationships.
It is surprising that social support was not mentioned as a facilitator for marriage quality in the examined studies, given the substantial impact it has been shown to have on parents of children with ASD. For instance, Marsack and Samuel (2017) found that informal social support partially mediated the relationship between caregiver burden and parents' quality of life. Similarly, He et al. (2022) identified perceived family support as a significant predictor of relationship satisfaction among parents of children with ASD.

Limitations and Directions for Future Research

The current study has several limitations. First, as with all qualitative research, there is always the possibility of human error in the data analysis; such errors may result from fatigue, erroneous interpretation, and personal bias (Bengtsson 2016). While the qualitative data offer valuable insights, they may lack the robustness and generalizability of quantitative approaches. A follow-up study utilizing quantitative data is recommended to supplement these findings.

In the current study, we used an online qualitative survey, and participants were asked to answer two questions, one of them in written detail. This data collection method, although useful in the sense that it allows the potential for a rich amount of data, has several limitations. For example, this method does not allow for follow-up questions. In addition, this platform may create a bias against people who find it difficult to express themselves through written text and may, therefore, opt not to participate in the study (Braun et al. 2021).

Another limitation is that a high percentage of participants in this study were female. Fathers are often underrepresented in studies that examine parents of children with ASD (Desmarais et al. 2018; Gerow et al. 2018; Ilias et al. 2018; Martin et al. 2019); this is also true specifically in the context of the marital relationship (Sim et al. 2016). Our study faced the same limitation; therefore, we suggest that future research focus on marital relationships from the perspective of fathers.

The Israeli population is characterized by many cultural differences. Consequently, the dynamics of parenthood are expected to be perceived and practiced divergently across these distinct groups. Thus, while this study of the general population allows its findings to be projected onto the diverse populations of other countries, it does not delve deeply into any one specific culture. Hence, it is recommended that forthcoming research endeavors in Israel direct their attention towards investigating particular subpopulations, specifically the Arab and ultra-Orthodox communities.
Our study did not specifically address the involvement or impact of stepparents, which may influence the dynamics of caregiving and familial relationships in parents of children with ASD. Further research considering the role of stepparents is needed to fully explore the factors affecting marital quality in these families. Additionally, future research should explore the longitudinal trajectory of marital satisfaction over time, particularly in comparison to families with children with other disabilities or without disabilities. Longitudinal studies would offer valuable insights into how marital satisfaction evolves in response to various factors over the course of family life. Lastly, future research should more closely examine the nuanced associations among parents' age and gender, the child's age and gender, and the severity of ASD symptoms in relation to relationship quality and divorce risk, as well as how having both affected and unaffected children within the same family impacts marital dynamics.

Practical Implications

The implications of this study point to the need for policymakers to adopt a family-centered approach when attempting to assist parents of children with ASD. The family-centered approach is based on the premise that all family members, and the dynamic between those family members, are affected by the child's situation. For that reason, all family members should be taken into account when offering services and interventions (Franck and O'Brien 2019; Kokorelias et al. 2019). In the context of our study, in addition to granting services to parents of children with disabilities as individuals, social support should be offered to help parents in preserving and strengthening their marital relationship. As shown in this study and others, the marital relationship can act as a resource in itself for handling the challenges that come with parenthood to children with ASD (Brien-Bérard and des Rivières-Pigeon 2023).

Within the focus on parents of children with ASD as couples, we suggest that interventions focus on their communication abilities with each other. As shown in the current study, discussion of many issues, such as mutual expectations and the distribution of responsibility, may resolve miscommunication and thus benefit these parents.

Conclusions

Parenting children with ASD can have unexpected effects on the quality of marital relationships. Many factors may help parents leverage their relationship as a tool to better handle the challenges faced by parents of children with ASD, while other elements can adversely affect the marital relationship. Knowing this, we can better work to enhance the quality of the marital relationship for parents of children with ASD.
Antibody-drug conjugates combinations in cancer treatment

Antibody-drug conjugates (ADCs) have emerged as a promising class of anticancer agents. Currently, the Food and Drug Administration has granted approval to 12 compounds, with 2 later undergoing withdrawal. Moreover, several other compounds are currently under clinical development at different stages. Despite substantial antitumoral activity observed among different tumor types, adverse events and the development of resistance represent significant challenges in their use. Over the last years, an increasing number of clinical trials have been testing these drugs in different combinations with other anticancer agents, such as traditional chemotherapy, immune checkpoint inhibitors, monoclonal antibodies, and small targeted agents, reporting promising results based on possible synergistic effects and a potential for improved treatment outcomes among different tumor types. Here we will review combinations of ADCs with other antitumor agents, aiming at describing the current state of the art and future directions.

Introduction

Antibody-drug conjugates (ADCs) represent one of the most rapidly expanding classes of anticancer drugs. Over the last few years, several ADCs have been approved as monotherapy for cancer treatment (Tables 1 and 2) and many others are currently in clinical development [1].

ADCs consist of three main components: a monoclonal antibody (mAb), a linker, and a cytotoxic drug (also known as the payload) (Figure 1). The payload is connected to the mAb through the linker. Once the mAb component binds to its target antigen, the antigen-ADC complex is internalized in the tumor cell, and the payload is delivered and released at the tumor site. Linkers are classified as cleavable or non-cleavable. Cleavable linkers vary in stability: less stable linkers may trigger the bystander effect when cleaved, releasing the payload away from the targeted tumor cells and causing the destruction of neighboring cells. Non-cleavable linkers are stable in circulation and release the payload after internalization in response to lysosomal enzymes. Considering that the bystander effect is recognized as a significant component of ADC activity, optimizing linker stability is crucial for ADC effectiveness. The proportions of the three components circulating in the bloodstream differ based on the type of linker used and the overall integrity of the molecule [2].
Despite the initial activity, tumor cells eventually develop resistance to ADCs, limiting their use [3,4]. Several mechanisms of resistance have been described, including changes at the antigen level (such as altered expression or mutations), changes in endocytosis mechanisms and vesicular trafficking, defects of lysosomal activity (pH, proteolytic enzymes), imbalance in proapoptotic and antiapoptotic factors, alteration of signaling pathways, and increased activity of drug efflux pumps [3,4]. To overcome resistance, several strategies are being actively investigated, including improvements of the ADC components such as modifications of the cytotoxic agent in order to reduce the affinity of efflux pumps, modifications of the linker, use of bispecific or biparatopic ADCs, and the development of combination strategies [3]. Combination strategies with other anticancer agents have been regarded as a potential approach to enhance the efficacy of ADCs and overcome resistance, ultimately improving treatment outcomes [3-5]. The ideal combination should be with a drug that contributes to the antitumor effect in a synergistic way while having minimal overlapping toxicities [6].

As of today, combinations of ADCs with other drugs have been approved mainly in the field of hematologic malignancies (Table 3). Among them, brentuximab vedotin (BV) and polatuzumab vedotin (PV) have been developed in combinations with chemotherapeutic agents and with rituximab for the treatment of various types of B-cell and T-cell lymphoma [7-10]. Additionally, gemtuzumab ozogamicin (GO) in combination with chemotherapy has been approved for the treatment of acute myeloid leukemia (AML) [11]. More recently, the combination of enfortumab vedotin (EV) and pembrolizumab has received Food and Drug Administration (FDA) approval for treating locally advanced or metastatic urothelial carcinoma (la/mUC), a combination that demonstrated improved overall survival (OS) compared to the standard of care [12]. Despite these approvals, there is still much room for improvement, and many ongoing trials are evaluating ADCs in combination with various anticancer agents. Among the most studied combinations, those with chemotherapy often face the challenge of toxicities, which may depend on off-target effects but also on characteristics of the ADC such as the presence of cleavable linkers and a high drug-to-antibody ratio [6]. On the other hand, despite many clinical trials evaluating combinations with immune checkpoint inhibitors (ICIs), only the combination mentioned above of EV with
pembrolizumab demonstrated an OS benefit compared to the previous standard therapy [6,12]. Finally, the hypothesis that a dual ADC-targeted agent blockade could improve therapeutic efficacy remains intriguing, in particular with the advancement of new-generation ADCs [6]. Here, we will review the main clinical results of combinations of ADCs with other anticancer drugs.

Combinations in clinical development: ADCs combined with chemotherapy

The synergy between the cytotoxic payload delivered by the ADC and the chemotherapeutic agent arises from the dual cytotoxic impact on the tumor cells, similar to traditional chemotherapy combinations using drugs with distinct mechanisms of action. This approach aims at preventing the development of resistance typically associated with single-agent therapy. However, combining ADCs with cytotoxic agents poses challenges, primarily due to the risk of overlapping toxicities. The ideal chemotherapeutic partner should thus have the ability to enhance effectiveness without increasing toxicities. A deeper understanding of the cell cycle and the phases in which the different agents act, together with knowledge of how chemotherapy influences the modulation of surface antigen expression, may help in deciding which combinations to develop [6]. Numerous combinations of ADCs with cytotoxic agents are in development for both solid tumors and hematologic malignancies. Certain combinations have already received approval for treating lymphoma and leukemia (Table 3). In the following paragraphs, we will outline the main clinical findings of combinations of ADCs with chemotherapy (Table S1).

Gemtuzumab ozogamicin

GO is a recombinant mAb targeting the CD33 antigen, conjugated via a cleavable linker to a cytotoxic antibiotic derivative of calicheamicin [13]. In May 2000, the FDA granted accelerated approval to GO for patients with CD33-positive relapsed AML who were not suitable for conventional chemotherapy [13,14]. However, the phase III trial SWOG S0106 found no statistically significant difference in outcomes from the addition of GO to chemotherapy [daunorubicin and cytarabine (DA)] compared to chemotherapy alone, with a higher mortality rate in the combination arm. Based on these results, in June 2010, Pfizer voluntarily withdrew GO from the market [15]. Nonetheless, GO was further evaluated with DA using alternative fractionated dosing schedules in the phase III ALFA-0701 trial and as monotherapy in the phase II AML-19 trial [16,17]. Administration of lower fractionated doses of GO in combination with chemotherapy in the ALFA-0701 trial resulted in significant improvements in event-free survival (EFS) and OS, although with a higher frequency of grade 3 (G3) or higher adverse events (AEs) in the GO group (predominantly infections and skin toxicities) [16]. Based on these results, GO was re-approved by the FDA in 2017 with DA or as a monotherapy for the treatment of patients with CD33-positive newly diagnosed AML. In June 2020 the approval was extended to the pediatric population based on the results of the AAML0531 trial, which demonstrated better outcomes for patients receiving the combination of GO and chemotherapy compared to chemotherapy alone [18]. Other studies have explored the combination of GO and other chemotherapy regimens in AML patients, such as cytarabine and mitoxantrone [19] and high-dose cytarabine, mitoxantrone, and all-trans retinoic acid [20]. Additionally, ongoing research is exploring the association of GO, mitoxantrone, and etoposide (NCT03839446) and a liposomal
cytarabine-daunorubicin (NCT05558124) for the same patient population.

Brentuximab vedotin

BV is an anti-CD30 mAb conjugated through a protease-cleavable linker to the anti-mitotic cytotoxic agent monomethyl auristatin-E (MMAE) [21]. Based on its demonstrated clinical efficacy and its approval for treating patients with Hodgkin lymphoma (HL) and anaplastic large cell lymphoma [7-9], BV was further investigated in combination with other treatments. In a phase I trial involving treatment-naive HL patients, BV was assessed in combination with the standard ABVD (doxorubicin-bleomycin-vinblastine-dacarbazine) regimen or the modified AVD (ABVD without bleomycin) regimen [22]. Results showed a comparable rate of complete response (CR) but a significantly higher rate of G3 AEs, in particular pulmonary toxicities, in the ABVD arm. Based on these results, it was concluded that BV should not be used in bleomycin-containing regimens like ABVD [22]. The phase III ECHELON-1 study compared BV plus AVD to the standard ABVD regimen in patients with previously untreated stage III or IV HL. The experimental arm resulted in improved progression-free survival (PFS) and OS rates compared to the ABVD group [23,24], with a manageable safety profile. Based on these results, in March 2018 the FDA approved the combination of BV with AVD for the treatment of previously untreated stage III/IV HL [25]. The superiority of BV plus chemotherapy, compared to chemotherapy alone, was also demonstrated in patients with previously untreated peripheral T-cell lymphoma (PTCL). In the phase III ECHELON-2 trial, the standard CHOP (cyclophosphamide-doxorubicin-vincristine-prednisone) regimen was compared to the experimental arm of BV with CHP (a modified CHOP with the omission of vincristine due to overlapping neurotoxicity with BV), showing an improvement in both PFS and OS in the experimental arm, with a similar rate of G3 or higher AEs [26,27]. This combination was approved by the FDA in November 2018. In November 2022, a significant advancement in treatment emerged when the FDA approved a third combination involving brentuximab with chemotherapy. This approval represents a notable addition to the available therapeutic options for pediatric patients aged 2 years and older with previously untreated high-risk classical HL (cHL). The phase III study AHOD1331 demonstrated superior outcomes with the combination of BV alongside doxorubicin, vincristine, etoposide, prednisone, and cyclophosphamide in comparison to the standard ABVE-PC (doxorubicin-bleomycin-vincristine-etoposide-prednisone-cyclophosphamide) arm [28]. Numerous other trials are currently ongoing, exploring combinations with different chemotherapeutic regimens in hematologic malignancies. A large phase III trial including 1,500 patients has compared the remodeled combination regimen BrECADD (BV added to etoposide, cyclophosphamide, doxorubicin, dacarbazine, dexamethasone) versus the escalated BEACOPP (bleomycin-etoposide-doxorubicin-cyclophosphamide-vincristine-procarbazine-prednisone) regimen in patients with newly diagnosed advanced-risk HL. Preliminary results showed non-inferiority of the new regimen [29].
Polatuzumab vedotin

PV consists of an anti-CD79b mAb conjugated to MMAE through a cleavable linker [10]. PV, which was not approved as a single agent, in June 2019 received FDA approval for the treatment of patients with relapsed or refractory (R/R) diffuse large B-cell lymphoma (DLBCL) in combination with bendamustine-rituximab (BR), based on the results of a phase Ib/II trial that compared its efficacy and safety to bendamustine and rituximab [30]. Among other combinations with chemotherapy in different lymphoma subtypes, the most significant has been the phase III POLARIX trial, which compared the combination with rituximab-CHP (R-CHP) to the standard rituximab-CHOP (R-CHOP) regimen in patients with untreated DLBCL. The experimental arm resulted in PFS benefits with similar OS rates [31]. This study led to the FDA approval of the combination PV-R-CHP in April 2023 for the treatment of untreated DLBCL, not otherwise specified (NOS), or high-grade B-cell lymphoma (HGBL).

Inotuzumab ozogamicin

Inotuzumab ozogamicin (INO), an anti-CD22 antibody conjugated to a calicheamicin payload via a cleavable linker [32], received FDA approval for use as a single agent in patients with R/R B-cell precursor acute lymphoblastic leukemia (ALL), based on the phase III INO-VATE trial [32]. A single-arm phase II trial investigated the efficacy of INO with mini-hyper-CVD (cyclophosphamide-vincristine-methotrexate-cytarabine), with or without blinatumomab, in patients with B-cell ALL. It showed promising efficacy in terms of OS and even more favorable survival outcomes in the blinatumomab arm [33]. In the subgroup of older patients (≥ 60 years) with Philadelphia chromosome-negative B-cell ALL, more than 70% of the patients experienced G3-4 hematologic toxicity. Consequently, there is a need to further adjust and refine the combination regimen to enhance tolerability [34]. Other studies evaluated the efficacy and safety of INO in combination with various chemotherapy agents.
Trastuzumab emtansine

Trastuzumab emtansine (T-DM1) is an ADC that combines the human epidermal growth factor receptor 2 (HER2)-targeting humanized mAb trastuzumab with a maytansinoid toxin, DM1, through a non-cleavable linker [35]. It became the first ADC approved for the treatment of a solid malignancy, based on the phase III EMILIA trial, which included patients with advanced breast cancer (BC) and resulted in a benefit in PFS and OS compared to lapatinib plus capecitabine [36,37]. Following the results of the KATHERINE study, T-DM1 was also approved for patients with HER2-positive BC with residual disease after neoadjuvant therapy [38] and has been evaluated in other HER2-positive solid tumors [39,40]. Combination therapies of T-DM1 with chemotherapy regimens (such as docetaxel and capecitabine) did not result in any improvements and were associated with increased toxicity [41,42]. The phase II TRAXHER2 trial evaluated the efficacy and safety of T-DM1 in combination with capecitabine compared to T-DM1 alone in patients with metastatic BC (mBC). Patients in the combination arm experienced a higher rate of G3-4 AEs, without any significant benefit in clinical outcomes [41]. An increased rate of AEs resulting from overlapping toxicities was also demonstrated in two phase Ib/IIa studies that evaluated T-DM1 in combination with docetaxel and paclitaxel, with or without pertuzumab, in patients with mBC or locally advanced BC (LABC). While the combination showed significant clinical activity, its clinical use is limited due to the occurrence of AEs, leading to frequent dose reductions and interruptions [42,43]. Therefore, there is a need to seek a different partner that could be safely combined with T-DM1. A potential combination with promising preclinical data could involve gemcitabine, which has been shown to upregulate the expression of HER2 in pancreatic ductal adenocarcinoma cells and BC cells [44,45]. Presently, no ongoing clinical trials are evaluating this combination.

Trastuzumab deruxtecan

Trastuzumab deruxtecan (T-DXd) is an ADC consisting of an anti-HER2 mAb and a topoisomerase I inhibitor, the exatecan derivative DXd [46]. T-DXd was approved by the FDA for the treatment of HER2-positive and HER2-low mBC patients [47,48] and HER2-positive gastric adenocarcinomas [49]. Additionally, the FDA has granted accelerated approval for T-DXd in the treatment of metastatic HER2-mutant non-small cell lung cancer (NSCLC) [50] and breakthrough therapy designations for treating patients with HER2-positive metastatic colorectal cancer (mCRC) and advanced HER2-positive solid tumors [51,52]. T-DXd combinations are currently being investigated in ongoing clinical trials. The phase I/IIb DESTINY-Breast07 trial is exploring various regimens, including combinations of T-DXd with paclitaxel for patients with HER2-positive mBC [53]. Additionally, another phase Ib study, DESTINY-Breast08, will assess five different regimens, incorporating combinations of T-DXd with capecitabine, anastrozole, and fulvestrant in HER2-low mBC patients [54]. The investigation of treatment combinations involving T-DXd and chemotherapy extends beyond BC. In advanced HER2-positive gastric cancer, the phase Ib/II DESTINY-Gastric03 trial is currently assessing T-DXd in combination with cytotoxic chemotherapy agents [5-fluorouracil (5-FU), capecitabine, oxaliplatin] and/or immunotherapy agents [55].
Mirvetuximab soravtansine

Mirvetuximab soravtansine (MIRV) is an ADC consisting of a humanized folate receptor alpha (FRα)-targeting mAb connected to the maytansinoid DM4, which induces mitotic arrest by suppressing microtubule dynamics [56]. Based on the results of the phase III SORAYA trial, the FDA granted MIRV priority review and subsequently accelerated approval in November 2022 for patients with platinum-resistant epithelial ovarian cancer expressing FRα [57]. MIRV has been studied in a phase Ib trial in platinum-sensitive, relapsed ovarian cancer patients in combination with carboplatin. The combination demonstrated clinical activity and a manageable safety profile [58].

Anetumab ravtansine

Anetumab ravtansine (AR), an ADC composed of a fully human IgG1 anti-mesothelin mAb linked to the tubulin inhibitor DM4 via a cleavable linker, has demonstrated high cytotoxic activity in preclinical studies against mesothelin-expressing tumors such as mesothelioma, pancreatic cancer, NSCLC, and ovarian cancer [59]. Encouraging clinical activity was demonstrated in patients with advanced or metastatic solid tumors, particularly in mesothelioma patients [60]. Results from a phase Ib trial showed that the combination with pegylated liposomal doxorubicin exhibited clinical activity and tolerability in patients with platinum-resistant ovarian cancer [61].

Depatuxizumab mafodotin

Depatuxizumab mafodotin (Depatux-M) is an ADC that targets the epidermal growth factor receptor (EGFR). It consists of the humanized recombinant mAb ABT-806, which is linked via a non-cleavable linker to the anti-microtubule agent monomethyl auristatin-F (MMAF) [62]. In the phase II trial INTELLANCE 2, the combination of Depatux-M and temozolomide (TMZ) was investigated in patients with recurrent EGFR-amplified glioblastoma. This study compared Depatux-M alone or in combination with TMZ versus lomustine or TMZ [63]. The combination arm showed improved OS compared to the control arm, suggesting a potential clinical benefit. The most common AE in the Depatux-M arms was reversible G3-4 corneal epitheliopathy [63]. A multicenter study conducted by the Italian Association of Neuro-Oncology further investigated this combination treatment in patients with recurrent glioblastoma and reported similar results. However, larger prospective studies would be necessary to confirm its efficacy and further explore its safety [64]. There are currently no ongoing studies.

Lorvotuzumab mertansine

Lorvotuzumab mertansine (LM) is a humanized anti-CD56 mAb linked via a cleavable linker to the maytansinoid DM1 [65]. It was evaluated in a phase I/II trial in combination with carboplatin and etoposide, in comparison to carboplatin and etoposide alone. This study involved patients with untreated extensive-stage small-cell lung cancer but yielded disappointing results both in terms of safety and efficacy [66]. The drug is no longer being developed, and there are no ongoing studies with LM.
Combinations in clinical development: ADCs combined with ICIs

The rationale for the development of combinations with ICIs lies in their complementary immunomodulatory effects. ADCs target specific tumor antigens and may enhance tumor antigen presentation and T-cell infiltration, an effect that can be complemented by ICIs [67]. Numerous ADC combinations with ICIs have been explored in preclinical and early clinical studies. The recent FDA approval of EV in combination with pembrolizumab for patients with la/mUC marks a significant milestone in the development of new combinations [12]. Here we will review clinical trials evaluating combination therapies of ADCs with ICIs (Table S2).

Enfortumab vedotin

EV is an ADC directed against Nectin-4 and comprises a fully human mAb linked to MMAE [6]. It demonstrated survival benefits as monotherapy in pretreated patients with la/mUC [6] and received FDA approval in December 2023 in combination with pembrolizumab [5]. The effectiveness of the combination relies on EV's ability to trigger immunogenic cell death and boost the infiltration of T-cells. Pembrolizumab further enhances the anti-tumor immune response, complementing EV's actions [7]. The approval was based on the results of the EV-302/KN-A39 trial, which demonstrated significant improvements in PFS and OS for patients with la/mUC treated with EV and pembrolizumab compared to platinum-based chemotherapy, confirming EV with pembrolizumab as the new standard of care for first-line la/mUC [5].

Further expanding the clinical exploration of EV combinations, the VOLGA trial (NCT04960709) assesses its combination with durvalumab and tremelimumab in neoadjuvant and adjuvant settings in patients with muscle-invasive bladder cancer (MIBC). This trial targets a patient population ineligible for cisplatin-based chemotherapy, addressing a significant unmet need in MIBC management. The rationale relies on using EV's capability to induce immunogenic cell death in conjunction with the immune-modulating effects of two ICIs. This aims to improve disease control before surgery and delay recurrence [8].
Brentuximab vedotin

The combination of BV and ICIs has been a focus of several clinical trials. The phase I/II trial CheckMate 436 evaluated the combination of BV and nivolumab in patients with R/R primary mediastinal B-cell lymphoma (PMBL). This trial showed significant anti-tumor activity and a manageable safety profile, emphasizing the efficacy and safety of BV with nivolumab [68]. Furthermore, BV was also examined in another phase I/II study involving patients with R/R HL in combination with ipilimumab, nivolumab, or both. These combinations demonstrated high activity and maintained generally favorable safety profiles, with follow-up reports indicating benefits in PFS [9,69,70]. Currently, ongoing phase II and phase III trials (NCT04561206, NCT03138499) aim to further assess the combination of nivolumab with BV. Additionally, a small cohort study involving BV and pembrolizumab was conducted as a single-center retrospective analysis of 10 patients with R/R HL. The study revealed impressive results in objective response rate (ORR) and complete metabolic response rate, along with a rapid median time to best response [71]. An ongoing phase II clinical trial (NCT04609566) is set to evaluate the efficacy and safety of this combination in patients with metastatic solid tumors after progression on prior programmed cell death 1 (PD-1) inhibitors [72]. Other studies are also underway, assessing the combination of BV and pembrolizumab in R/R HL, R/R T-cell lymphoma, and recurrent PTCL (NCT05180097, NCT05313243, NCT04795869).

Trastuzumab emtansine

Based on evidence suggesting that T-DM1 could elicit antitumor immunity and render tumor cells sensitive to ICIs [73], the drug has been evaluated in combination with atezolizumab and pembrolizumab in various clinical trials. The phase II KATE2 trial evaluated T-DM1 with atezolizumab in patients with previously treated HER2-positive advanced BC. Although it did not show a significant improvement in PFS for the overall population, subgroup analysis indicated a PFS advantage for patients with programmed cell death ligand 1 (PD-L1)-positive tumors [74]. These findings have led to the initiation of the phase III KATE3 trial (NCT04740918), focusing on patients with HER2-positive and PD-L1-positive LABC/mBC [75]. Furthermore, a phase Ib trial investigated atezolizumab with T-DM1 in HER2-positive early BC (eBC), LABC, or mBC, showing an acceptable safety profile, along with an enhanced adaptive immune response in eBC tumors compared to those with mBC [76]. In another phase I study, investigating the combination of T-DM1 and pembrolizumab in patients with HER2-positive mBC, the regimen exhibited clinical activity and was well tolerated. However, biomarker analyses were constrained due to the small sample size of the cohort, highlighting the need for larger studies to determine predictive markers of response [77,78].
Trastuzumab deruxtecan

Following the results of the phase II DESTINY-Breast01 and phase III DESTINY-Breast04 trials in BC patients, along with data from preclinical models [79], new combination strategies are being investigated, incorporating T-DXd and ICIs in HER2-expressing tumors. A phase Ib study assessed T-DXd in combination with nivolumab for HER2-expressing advanced breast or urothelial cancers. This study reported promising results, with a disease control rate (DCR) of 90.6% in HER2-positive patients and 75% in those with HER2-low BC, an acceptable safety profile, and a benefit in PFS [80]. The phase Ib/II BEGONIA trial delved deeper into T-DXd, this time combining it with durvalumab for untreated HER2-low expressing triple-negative BC (TNBC). Preliminary results were impressive, demonstrating a 100% ORR. Further data are anticipated to elucidate the impact of PD-L1 expression on these outcomes [81]. Additionally, the combination of T-DXd with pembrolizumab is currently under investigation in an ongoing phase Ib trial, targeting patients with HER2-expressing advanced/mBC or NSCLC [82]. Another study in the pipeline is the phase Ib/II trial in gastric cancer, DESTINY-Gastric03 (NCT04379596). In addition, ongoing clinical trials are evaluating the safety and antitumor activity of T-DXd, durvalumab, and pertuzumab for HER2-positive mBC (NCT04538742, NCT04784715).

Sacituzumab govitecan

Sacituzumab govitecan (SG) is composed of an anti-Trop-2 mAb linked to the active metabolite of irinotecan, SN-38 [83,84]. The drug is FDA-approved as a single-agent treatment for breast and urothelial cancer. The TROPHY-U-01 Cohort 3 evaluated the combination of SG with pembrolizumab in patients with metastatic urothelial cancer who progressed after platinum-based regimens, showing an encouraging ORR and clinical benefit rate, as well as a manageable safety profile [85]. In addition to this, ongoing research is evaluating the activity of SG in various other clinical contexts and at earlier stages of treatment. For instance, the EVOKE-02 phase II trial is assessing SG in combination with chemotherapy and ICIs as a first-line treatment for patients with non-oncogene-addicted NSCLC (NCT05186974). Similarly, a phase I/II study explored the potential of SG when combined with ipilimumab and nivolumab as a first-line therapy for cisplatin-ineligible advanced urothelial carcinoma (UC), demonstrating antitumor activity for this patient population representing an unmet medical need [86].

Tisotumab vedotin

Based on the findings from the innovaTV 204 trial, the FDA granted accelerated approval to tisotumab vedotin (TV), which targets tissue factor and is linked to MMAE, for patients with recurrent or metastatic cervical cancer (r/mCC) [87]. The dose-expansion arms of the phase Ib/II trial innovaTV 205/GOG-3024/ENGOT-cx8 evaluated TV with carboplatin as first-line treatment or with pembrolizumab as first- or second-/third-line treatment in patients with r/mCC. The study met its primary endpoint, demonstrating promising anti-tumor activity and acceptable safety profiles [88].
Mirvetuximab soravtansine

Preclinical data suggest that MIRV may activate monocytes and upregulate immunogenic cell death markers in ovarian cancer cells [89]. Building on these findings, the phase Ib/II FORWARD II study delved further into the potential of MIRV in combination with pembrolizumab and bevacizumab, focusing on patients with platinum-resistant ovarian cancer. The combination of MIRV with pembrolizumab was generally well tolerated, with few G3 AEs [90]. Complementing these findings, additional research is being conducted in patients with endometrial cancer (NCT03835819).

Ladiratuzumab vedotin

Ladiratuzumab vedotin (LV) is a novel ADC that combines an anti-LIV-1 mAb with MMAE via a protease-cleavable linker [91]. LIV-1 is a transmembrane protein with zinc transporter and metalloproteinase activity, primarily expressed in melanoma, breast, and prostate cancers, while having limited expression in normal tissues [91]. Early-phase studies have shown promising antitumor activity, particularly in heavily treated metastatic TNBC [92]. The combination of LV with pembrolizumab has been evaluated in the first-line therapy of patients with TNBC, demonstrating a good tolerability profile and clinical activity [93]. Ongoing research is currently exploring LV in combination with atezolizumab for locally advanced and metastatic TNBC (NCT03424005).

Disitamab vedotin

Disitamab vedotin (RC48-ADC) is an anti-HER2 ADC composed of a novel anti-HER2 mAb (hertuzumab) coupled with MMAE by a cleavable linker [94]. Promising data have been observed in both HER2-positive and HER2-negative populations with la/mUC [95]. In a phase Ib/II trial, RC48-ADC was studied in combination with toripalimab, an anti-PD-1 antibody known for its clinical activity in UC [96,97]. The combination showed an ORR of 75% in patients with la/mUC. The ORR was even higher for patients who were HER2-positive and PD-L1-positive. However, antitumor activity was also observed in patients with HER2 2+, 1+, 0, and in those with a PD-L1 level below 1 [96]. The same combination was explored in patients with HER2-expressing advanced gastric or gastroesophageal junction cancer, with similar, positive findings [98].

Anetumab ravtansine

A study with AR combined with pembrolizumab in pleural mesothelioma patients showed a higher stable disease rate and median PFS than pembrolizumab alone, although these were not statistically significant, possibly due to a small sample size [99]. Furthermore, a phase Ib study in pancreatic cancer showed a good DCR and tolerability for AR combined with immunotherapy or chemotherapy [100].
Belantamab mafodotin

Belantamab mafodotin (BM) is a novel ADC developed using a B-cell maturation antigen (BCMA)-targeted mAb. BCMA, a member of the tumor necrosis factor (TNF) receptor superfamily, is expressed on both normal and malignant plasma cells, as well as late B-cells [101,102]. The antibody component is linked to MMAF through a protease-resistant linker [103]. BM's efficacy in treating R/R multiple myeloma (MM) has been evaluated in several clinical studies, demonstrating benefits in PFS, OS, and a manageable safety profile [104]. These results initially led to its FDA approval for monotherapy in R/R MM patients who had undergone four or more lines of therapy. However, in November 2022, this approval was withdrawn following the outcomes of the DREAMM-3 study, which did not meet the FDA's accelerated approval guidelines (NCT04162210). In experimental studies, combining BM with an OX40 agonist has been shown to enhance anti-cancer effects, resulting in increased activity of T cells and dendritic cells within tumors [105]. Clinical trials such as the DREAMM-5 study are exploring this approach, investigating the combination of BM with various immune therapies, including anti-PD-1 and anti-inducible T-cell costimulator (ICOS) antibodies, and a γ-secretase inhibitor [106]. A preliminary analysis of 23 patients in this trial indicated that BM combined with anti-ICOS displayed encouraging clinical activity and a manageable safety profile through dose modifications [106]. Additionally, the DREAMM-4 study, which investigated the combination of BM and pembrolizumab, concluded that this combination yielded a favorable ORR and had a safety profile comparable to BM monotherapy [107].

Datopotamab deruxtecan

Datopotamab deruxtecan (Dato-DXd) is a novel ADC comprising a humanized anti-TROP2 IgG1 mAb linked to a potent DNA topoisomerase I inhibitor via a cleavable linker [108]. Early-phase trials from the TROPION series evaluated the efficacy and safety of Dato-DXd in multiple tumors at different stages, revealing promising clinical activity in both NSCLC and TNBC [109]. Encouraging results from early trials have led to further exploration of Dato-DXd in combination with ICIs. For instance, the TROPION-Lung02 trial investigated Dato-DXd with pembrolizumab ± chemotherapy in metastatic NSCLC patients, reporting an acceptable safety profile and clinical activity. This has led to ongoing studies like TROPION-Lung07 and TROPION-Lung08, which aim to explore Dato-DXd in combination with ICIs, with or without chemotherapy, potentially as first-line treatments [110]. In metastatic TNBC, the phase Ib/II BEGONIA trial evaluated the combination of Dato-DXd and durvalumab, demonstrating a highly encouraging ORR of 79% regardless of PD-L1 expression level, with a safety profile consistent with the known profiles of both agents [111]. Additionally, other studies are assessing the same combination in different stages of BC, ranging from perioperative treatment to therapy of advanced disease (NCT06112379, NCT05629585, NCT06103864). Finally, TROPION-PanTumor03 is set to evaluate Dato-DXd both as monotherapy and in combination with other antitumor agents across various solid cancer types (NCT05489211).
Combinations in clinical development: ADCs combined with targeted therapy (mAbs and small molecules)

Combinations of ADCs with small targeted agents, such as tyrosine kinase inhibitors (TKIs), or with mAbs hold substantial promise, as they may offer increased selectivity, potentially enhancing the therapeutic effectiveness of the treatment. Here we will review clinical trials evaluating combination therapies of ADCs with naked mAbs and small targeted agents (Table S3).

Brentuximab vedotin

The ECHELON-3 study evaluated a novel combination therapy of BV, lenalidomide, and rituximab for R/R DLBCL in patients ineligible for hematopoietic stem cell transplantation (HSCT) or CAR-T therapy. The study involved 10 patients, revealing a 70% ORR with a manageable safety profile, indicating the promising efficacy of this triplet regimen in R/R DLBCL, with the randomized study phase currently ongoing [112].

Polatuzumab vedotin

PV is currently being evaluated in combination with rituximab and bispecific antibodies [113-116]. The phase Ib/II study combining PV with mosunetuzumab, a bispecific antibody targeting CD20 and CD3, in relapsed/refractory B-cell non-HL demonstrated promising safety and efficacy, especially for elderly patients with limited treatment options [116]. Additionally, a phase II study evaluating rituximab with either PV or pinatuzumab vedotin in a similar patient population showed efficacy, with a preference for rituximab-PV due to longer response duration and a better safety profile [115]. A phase Ib/II study evaluated PV combined with obinutuzumab and lenalidomide in patients with heavily pre-treated refractory follicular lymphoma [69]. Additionally, PV was studied in combination with the bcl-2 inhibitor venetoclax, and as part of a triplet therapy with both venetoclax and rituximab [117,118]. The phase Ib study investigated the combination of PV with venetoclax and rituximab in R/R DLBCL, showing promising activity and a favorable safety profile [117]. The same combination was explored in patients with R/R follicular lymphoma, replacing rituximab with obinutuzumab, also yielding encouraging results [119,120].

Inotuzumab ozogamicin

The combination of INO and rituximab was explored in a phase I/II trial in patients with DLBCL or follicular lymphoma, showing high antitumor activity and a manageable safety profile [121]. A phase III trial failed to demonstrate the superiority of the experimental arm compared to the standard [122]. A phase I trial evaluated the combination of INO with temsirolimus in patients with R/R CD22-positive B-cell non-HL. Due to the high rate of toxicities at therapeutic doses, it was concluded that further development of this drug combination was not feasible, despite demonstrating clinical activity [123]. Another early-phase trial explored the combination of INO with bosutinib for R/R Philadelphia chromosome-positive ALL or the lymphoid blast phase of chronic myeloid leukemia, demonstrating clinical activity in terms of ORR and a good tolerability profile [124].
Loncastuximab tesirine

Loncastuximab tesirine (LT) is an anti-CD19 ADC linked to a pyrrolobenzodiazepine dimer cytotoxin, SG3199 [125]. The results of the LOTIS-2 trial led to the FDA's approval of LT as a single agent for treating patients with R/R large B-cell lymphoma (DLBCL, transformed DLBCL, and HGBL) [125]. Results from a phase I/II study exploring the combination of LT and ibrutinib in patients with DLBCL and mantle cell lymphoma demonstrated antitumor activity and manageable toxicity [126]. Current evaluations include its combination with rituximab in R/R follicular lymphoma and various DLBCL settings (NCT04998669, NCT05144009, NCT04384484).

Trastuzumab emtansine

In the phase III trials KAITLIN, MARIANNE, and KRISTINE, the combination of T-DM1 with pertuzumab, whether used in early or advanced HER2-positive BC, did not show improved clinical activity compared to the standard of care [127-129]. The phase II trial TEAL explored the combination of T-DM1, lapatinib, and nab-paclitaxel versus trastuzumab, pertuzumab, and paclitaxel in HER2-positive BC in the neoadjuvant setting. The experimental arm was associated with higher activity compared to the standard arm [130]. The phase III study HER2CLIMB-02 investigated the combination of T-DM1 and tucatinib in advanced HER2-positive BC, presenting results at the San Antonio Breast Cancer Symposium 2023. This combination significantly improved PFS compared to the control arm, also showing responses in patients with brain metastasis. However, it was associated with a higher rate of AEs, although generally manageable [131]. Neratinib, an irreversible pan-HER inhibitor, has the potential to overcome trastuzumab resistance by inhibiting downstream pathways [132]. In a small cohort of patients with HER2-positive mBC, the combination of T-DM1 and neratinib yielded an ORR of 63% with an acceptable safety profile [133]. Other studies are exploring the combination of T-DM1 with ribociclib or alpelisib in patients with HER2-positive mBC, demonstrating good tolerability and promising activity [134,135]. The combination of T-DM1 and pertuzumab was explored in the HERACLES-B trial for patients with HER2-positive advanced colorectal cancer, but the trial failed to meet its primary endpoint (ORR ≥ 30%) [136]. Another study investigating the combination of osimertinib plus T-DM1 in patients with advanced EGFR-mutant and HER2-positive NSCLC exhibited limited efficacy [137].

Enfortumab vedotin

A phase I trial evaluated EV with SG in mUC, demonstrating significant clinical activity with evidence of complete responses [138]. EV with erdafitinib is under evaluation in a phase I study involving patients with metastatic urothelial cancer (NCT04963153). Another trial is investigating EV in combination with cabozantinib in subjects with locally advanced or metastatic urothelial cancer (NCT04878029).
Sacituzumab govitecan

Preclinical evidence suggests a potential benefit of combining SG with poly(adenosine diphosphate-ribose) polymerase (PARP) inhibitors in models of TNBC [139]. The combination of SG and rucaparib has been evaluated in the phase Ib SEASTAR study in patients with advanced TNBC, advanced platinum-resistant ovarian cancer, and solid tumors with mutations in homologous recombination repair genes. Despite signs of activity, further investigation is required due to safety concerns, particularly the high rate of myelosuppression [140]. Several studies are investigating SG plus talazoparib in metastatic TNBC (mTNBC) [141], and berzosertib [a potent and selective small-molecule inhibitor of ataxia telangiectasia and Rad3-related (ATR) kinase] in SCLC [142] and in homologous recombination-deficient neoplasms progressing on PARP inhibitors (NCT04826341). Preliminary results from the phase I trial investigating SG and berzosertib have recently been published: objective responses were observed in 3 of 12 evaluable patients, and the ongoing phase II expansion cohorts are currently evaluating efficacy [143].

Mirvetuximab soravtansine

The phase Ib/II FORWARD II study evaluated MIRV in combination with pembrolizumab and bevacizumab in patients with platinum-resistant ovarian cancer. The combination with bevacizumab is supported by evidence indicating enhanced antitumor activity, attributed to bevacizumab's capacity to facilitate tumor penetration and exposure to the ADC [144]. It demonstrated notable effectiveness, yielding improved responses in patients regardless of their platinum sensitivity status. The combination was particularly effective in patients with high FRα-expressing tumors and in those who had not previously received bevacizumab [145]. These findings suggest that MIRV, in combination with bevacizumab, could represent a promising alternative to standard therapies for ovarian cancer, even for patients who have received prior treatments. A phase I study is currently assessing the combination of MIRV with rucaparib in patients with recurrent endometrial, ovarian, fallopian tube, or primary peritoneal cancer (NCT03552471). Another phase Ib trial is evaluating MIRV alongside SL-172154, a fusion protein consisting of human signal-regulatory protein alpha (SIRPα) and CD40L linked via a human Fc, in patients with platinum-resistant ovarian cancer (NCT05483933).

Belantamab mafodotin

The combination of BM with lenalidomide and dexamethasone has been evaluated in two clinical trials: the BelaRd study for treatment-naive MM patients and the DREAMM-6 study for R/R MM patients. Both studies indicated a rate of G3 AEs up to 94% across various dose levels. However, these AEs were generally manageable with dose modifications, and there were notable signs of clinical activity [146,147]. The phase III trial DREAMM-8 is currently exploring BM with dexamethasone and pomalidomide in R/R MM (NCT04484623). Other ongoing combination regimens include BM with lenalidomide and daratumumab in relapsed or newly diagnosed MM (NCT04892264), and BM plus bortezomib and dexamethasone (NCT04246047), among others.

Anetumab ravtansine

A phase II trial assessing the combination of AR with bevacizumab, compared to paclitaxel with bevacizumab, in patients with platinum R/R ovarian cancer reported poorer outcomes with the AR and bevacizumab combination, leading to the study's termination [148].
Patritumab deruxtecan

Patritumab deruxtecan consists of an anti-HER3 mAb attached to a topoisomerase I inhibitor via a cleavable linker [149]. Preclinical findings showed that therapy with an EGFR-TKI increases HER3 expression, thus improving the anticancer activity of patritumab deruxtecan [150] and providing a rationale for an ongoing study evaluating patritumab deruxtecan plus osimertinib in patients with advanced EGFR-mutated NSCLC (NCT04676477).

Coltuximab ravtansine

Coltuximab ravtansine (SAR3419) is an ADC consisting of an anti-CD19 mAb conjugated via a cleavable linker to DM4 [151]. It has shown promising activity as a single agent in a phase II study in R/R DLBCL, with benefits in PFS and OS [152]. A phase II trial was conducted in combination with rituximab in subjects with R/R DLBCL; the primary goal of ORR was not met, and there are currently no ongoing studies exploring this drug [153].

Moxetumomab pasudotox

Moxetumomab pasudotox (MOXE) is an ADC composed of an anti-CD22 mAb linked to Pseudomonas exotoxin A (PE38). The drug received FDA approval in 2018 for the treatment of patients with pretreated hairy-cell leukemia (HCL) [154]. In July 2023, AstraZeneca decided to withdraw the drug from the market due to lack of use and the availability of other treatment options [155].

Discussion and future directions

Over the last years there has been a significant increase in the number of ADCs entering preclinical and clinical development. In addition to the approved single-agent compounds, some have also been approved in combination with other anticancer agents, while many others are being tested in different combinations and phases of clinical development. Combination therapies have been considered as a way to increase the efficacy of ADCs. The most significant results have been achieved by combining ADCs with chemotherapy and, more recently, with ICIs. The approved combinations with chemotherapy have been developed in the field of hematological malignancies. As reported above, BV has been approved in combination with traditional chemotherapy in HL and T-cell lymphoma [23,27], added to an established chemotherapy regimen by replacing one of the chemotherapy drugs (due to overlapping toxicity). While successful, combinations of ADCs with standard chemotherapy also present challenges, in particular the definition of the correct dose and treatment schedule, and thus the balance between toxicity and efficacy. In cases like the reapproval of GO for the treatment of AML with DA, lower fractionated dosing schedules were necessary [16]. The latest drug approved in combination for hematologic malignancies is PV. This drug has not received approval as monotherapy but was directly approved in combination with rituximab and bendamustine or with R-CHP for the treatment of relapsed and treatment-naive DLBCL, respectively, the latter based on an improvement in PFS only. Thus, when combining ADCs with chemotherapy, particular attention to toxicity and careful dose-escalation schemes should be adopted. In addition, clear clinical benefits and superiority over the standard chemotherapy regimen should be demonstrated in randomized trials.
Beyond combinations with chemotherapy, preclinical evidence supporting the combination of ADCs with ICIs has prompted several clinical trials aimed at evaluating the safety and efficacy of such combinations. The combination of EV and pembrolizumab was approved by the FDA in December 2023 for the treatment of urothelial cancer, based on results from a phase III trial that demonstrated improved OS [12]. Results from other combinations are awaited.

Despite their potent antitumor activity observed across tumor types, the use of ADCs still presents several challenges, including safety and patient selection, two factors that may become even more relevant when considering combination strategies. With regard to safety, there is a notable difference between ADCs and naked mAbs: ADCs have dose-limiting toxicities related to the chemotherapy agents they are linked to, the composition of the ADC, and target expression in normal tissues. On the other hand, selecting the patients most likely to benefit remains largely an open question. Indeed, although antigen expression is required, it has not been clearly associated with antitumor activity in most cases [156,157].

Many trials are currently ongoing (Figure 2) that may better define, in the near future, the role of this class of compounds in the treatment of cancer and their incorporation into combination regimens. The emergence of novel constructs may also open possibilities for innovative strategies: bispecific ADCs, which allow simultaneous targeting of multiple antigens, potentially enhancing specificity and efficacy [158,159], and immunostimulatory antibody conjugates (ISACs) or immune checkpoint-targeted drug conjugates (IDCs), which aim to fuse the cytotoxicity of ADCs with immune-stimulatory properties, thereby amplifying the antitumor immune response [6,160]. Investigating predictive biomarkers and developing innovative preclinical models that address the complexities of the tumor microenvironment could facilitate the translation of findings into clinically relevant strategies. Exploring these innovative modalities and their integration into combination strategies holds the potential to change the landscape of cancer therapy.

Conclusions

Over the past decade, considerable progress has been made in the development of ADCs. A growing number of clinical trials are now exploring novel ADCs and their combinations with other therapies. Among these combinations, those involving chemotherapy were among the first to result in approvals for hematologic malignancies; however, they require special consideration due to associated toxicity. On the other hand, combinations with ICIs may present fewer overlapping toxicities. Future trials will need to address the optimal selection criteria for patients most likely to benefit from these combinations. Recently, the FDA approved the first combination of an ADC with an ICI for patients with urothelial cancer. Meanwhile, ongoing trials investigating combinations with small targeted agents and mAbs across various tumor types have produced limited results thus far.

To ensure the successful development of treatment combinations based on ADCs in the future, it is crucial to establish preclinical rationale, conduct careful early clinical trials, and define clear efficacy endpoints for evaluation in phase II and III clinical trials.
Figure 1. Antibody-drug conjugate (ADC) specificity. Graphic representation of the different components and specificity of an ADC. Each ADC differs from the others depending on the target, payload, and linker, and can be combined with different drugs. BCMA: B cell maturation antigen; EGFR: epidermal growth factor receptor; HER2: human EGFR 2; FRα: folate receptor alpha; Trop-2: trophoblast cell surface antigen-2; MMAE: monomethyl auristatin-E; MMAF: monomethyl auristatin-F.

Figure 2. Ongoing trials. Graphic representation of ongoing studies: active not recruiting, recruiting, and not yet recruiting. Data obtained from ClinicalTrials.gov, updated as of January 2024. CT: chemotherapy; mAb: monoclonal antibody; Mono: monotherapy; Ph: phase; TA: targeted agent; MIX: combination of three or more drugs.

Table 1. The first approval of ADCs as single agents by the Food and Drug Administration (FDA) and/or European Medicines Agency (EMA).
Transcriptome Analysis and Single-Cell Sequencing Analysis Constructed the Ubiquitination-Related Signature in Glioma and Identified USP4 as a Novel Biomarker

Background: Glioma, the most frequent malignant tumor of the neurological system, has a poor prognosis and poses treatment challenges, and its tumor microenvironment remains poorly understood.

Methods: We downloaded glioma data from the TCGA database and split the patients into a training cohort and a validation cohort. Ubiquitination-related genes were then evaluated in glioma using Cox and Lasso regression to create a ubiquitination-related signature. After the signature was generated, we assessed its predictive usefulness and its role in the immune microenvironment. Finally, in vitro experiments were used to verify the expression and function of the signature's key gene, USP4.

Results: This signature can be used to categorize glioma patients. In both the training and validation cohorts, glioma patients could be separated into high-risk and low-risk groups, with the high-risk group having a significantly worse prognosis (P<0.05). Further investigation of the immune microenvironment showed that this risk grouping could serve as a guide for glioma immunotherapy. After knockdown of the key signature gene USP4, the viability, invasion and migration capacity, and colony formation ability of the U87-MG and LN229 cell lines were drastically reduced. Conversely, overexpression of USP4 in the A172 cell line greatly enhanced colony formation, viability, invasion, and migration.

Conclusions: Our research establishes a foundation for understanding the role of ubiquitination genes in gliomas and identifies USP4 as a possible glioma biomarker.

INTRODUCTION

Glioma is the most frequent primary malignant tumor of the nervous system, accounting for 80% of all malignant tumors in the central nervous system and having a very poor prognosis (1). Gliomas are classified into four grades by the World Health Organization (WHO), with the first two grades being low-grade gliomas (LGG) and the last two being high-grade gliomas (HGG) (2)(3)(4). Current conventional treatment options such as surgery, chemotherapy (temozolomide, etc.), and radiotherapy remain very limited in glioma (5). It is worth mentioning that the presence of the blood-brain barrier (BBB) has long been considered a challenge for the drug treatment of gliomas, to the extent that the FDA has approved only a few medications for this indication (6)(7)(8). Glioma is also regarded as an immunosuppressive tumor, with the tumor microenvironment expressing and secreting a large number of immunosuppressive factors such as programmed cell death ligand-1 (PD-L1), cytotoxic T lymphocyte-associated protein 4 (CTLA-4), and indoleamine 2,3-dioxygenase (IDO), among others (9)(10)(11). The question of how to stimulate antitumor immunity in glioma is still being researched. Exploring the tumor microenvironment of glioma and developing new biomarkers to aid prognostic assessment and therapy is therefore critical.

PTM (post-translational modification) is a covalent process in which proteins are modified by the addition of modifying groups, or hydrolyzed to remove them, thereby affecting their properties (12). The main forms of PTM include phosphorylation, glycosylation, acetylation, ubiquitination, carboxylation, ribosylation, and the pairing of disulfide bonds (13)(14)(15).
Among them, ubiquitination is a widespread PTM mode that is considered to be highly correlated with autophagy (16,17). During ubiquitin modification, the E1 ubiquitin-activating enzyme activates the C-terminal glycine residue of ubiquitin in an ATP-dependent way, after which the E2 ubiquitin-conjugating enzyme and an E3 ubiquitin ligase covalently attach ubiquitin to a lysine (Lys) residue of the substrate protein (18)(19)(20). Substrates labeled by ubiquitin molecules are then recognized by the autophagy system, and proteasome-mediated degradation or autophagy further occurs (21). Ubiquitination is a protein modification process that exists widely in organisms and is involved in homeostasis regulation and a series of pathophysiological processes (22). Ubiquitination, in particular, is expected to play a crucial role in cancer, as it regulates a variety of pathways and alterations in the microenvironment (23). Moreover, several key proteins involved in ubiquitination have been identified as promising targets for cancer therapy. Hence, it is time to explore the role of ubiquitination in glioma.

Bioinformatics analysis has provided new insights into cancer transcriptome changes (24). Through bioinformatics analysis, we can carry out cancer survival analysis and immune microenvironment analysis, thus providing new biological markers for the precise treatment of cancer. The most widely used resources are the TCGA and GEO databases, which are broadly utilized for cancer bioinformatics analysis. In this study, we downloaded glioma data from the TCGA database and the GSE162631 dataset from the GEO database. GSE162631 is a single-cell sequencing dataset of glioma published in 2021, consisting of 4 tumor samples and 4 paired tumor-adjacent controls (25). In that study, Xie et al. revealed different states of brain endothelial cell (EC) activation and blood-brain barrier (BBB) impairment in gliomas. In the present study, we investigated the involvement of ubiquitination-related genes in glioma using bioinformatics analysis of these data. A ubiquitination-related prognostic signature was developed to stratify glioma patients, with the high-risk group having a much worse prognosis. Furthermore, the ubiquitination signature can be used to identify changes in immune infiltration and immune checkpoints in glioma. Our research will aid in the evaluation of glioma prognosis and in treatment development.

Datasets Downloading and Filtering

We obtained RNA-seq data for glioblastoma (GBM) and lower-grade glioma (LGG) from The Cancer Genome Atlas (TCGA) database (https://portal.gdc.cancer.gov/). The following criteria were used to choose participants: (1) patients with a pathological diagnosis of lower-grade glioma or glioblastoma; (2) gene expression and clinical data available for each patient. A total of 692 patients were included in the analysis after screening. Half of the patients were randomly assigned to the training cohort, while the other half were assigned to the validation cohort.

Identification of Genes Associated With Ubiquitination

The GeneCards database (https://www.genecards.org/) was used to find genes relevant to ubiquitination. Ubiquitination-related genes were found by searching for "ubiquitination", and the top 100 most relevant genes were extracted for further investigation.
Identification of Prognostic Ubiquitination-Related Genes

Univariate Cox regression was used to identify genes linked with patient survival in gliomas in order to investigate the prognostic significance of these ubiquitination-related genes. The analysis platform was R software (version 4.1.0), and the "survival" R package was utilized for Cox regression analysis.

Construction of the Prognostic Model

To build the ubiquitination prognostic model, Least Absolute Shrinkage and Selection Operator (LASSO) regression was applied after identifying ubiquitination genes with prognostic value. After the optimal lambda value was obtained, each model gene was assigned its corresponding coefficient, allowing the risk score of each patient to be calculated:

Risk score = Σ_{i=1}^{n} β_i × (expression of ubiquitination-associated gene i)

Based on the median risk score as a cut-off, patients in the different cohorts were separated into high-risk and low-risk categories. The model's prognostic usefulness was then investigated using survival analysis to measure the prognostic difference between the two groups. ROC curves at 1, 3, and 5 years were also generated to assess the model's accuracy and robustness.

Clinical Prediction Value of the Established Prognostic Model

To avoid bias, univariate and multivariate Cox regression analyses were performed to further examine the model's prognostic efficacy. To discover independent prognostic markers, risk scores and other clinical characteristics (age, sex, and Karnofsky performance score) were included in the analysis.

Single-Cell Analysis of the Immune-Related Cellular Location of the Prognostic-Associated Genes

To analyze the single-cell data acquired from the GEO database, we utilized the "Seurat" package (version 1.3.1). The PCA dimension-reduction approach, as well as t-distributed stochastic neighbor embedding (tSNE), were used to identify cell subclusters in dataset GSE162631. The cells were re-clustered and annotated using feature genes and the "SingleR" package, and the expression of the signature genes in the different cell types was then visualized.

Immune Microenvironment Analysis

The tumor microenvironment and GBM/LGG-infiltrating immune cells were then assessed in silico. ESTIMATE is an algorithm for predicting the presence of infiltrating stromal/immune cells in the tumor microenvironment based on bulk RNA-seq data. Using single-sample Gene Set Enrichment Analysis (ssGSEA), ESTIMATE generates three scores: a stromal score, an immune score, and an ESTIMATE score. CIBERSORT is a deconvolution technique that quantifies the proportions of distinct cell types by predicting the cellular composition of complex tissues from gene expression data. A total of seven methods were used to investigate the link between risk score and tumor immune infiltration.

In Silico Prediction of Potential Antitumor/Cytotoxic Drugs

The R package "pRRophetic" was used to predict clinical chemotherapeutic response from tumor gene expression levels. pRRophetic can forecast potentially suitable, sensitive drugs for patients based on a large body of data on the responses of various tumor cell lines to anticancer agents. We looked for medications that may be more effective in treating high-risk patients by using pRRophetic to predict the IC50 of selected anticancer agents.
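A minimal R sketch of the screening-and-modeling workflow described in the two sections above (univariate Cox screening, LASSO Cox selection, risk-score computation, and median split) is shown below. The expression matrix `expr` (genes × patients, restricted to the candidate ubiquitination genes) and the clinical data frame `clin` with `time` and `status` columns are assumptions for illustration, not the study's actual objects.

```r
library(survival)
library(glmnet)

# expr: genes x patients matrix of ubiquitination-related gene expression (assumed)
# clin: data.frame with columns time (days) and status (1 = death), rows = patients (assumed)

## 1) Univariate Cox screening of each candidate gene
uni_p <- apply(expr, 1, function(g) {
  fit <- coxph(Surv(clin$time, clin$status) ~ g)
  summary(fit)$coefficients[1, "Pr(>|z|)"]
})
prognostic_genes <- names(uni_p)[uni_p < 0.05]

## 2) LASSO Cox regression on the prognostic genes (10-fold CV to choose lambda)
x <- t(expr[prognostic_genes, ])
y <- cbind(time = clin$time, status = clin$status)
cvfit <- cv.glmnet(x, y, family = "cox", alpha = 1, nfolds = 10)
coefs <- as.matrix(coef(cvfit, s = "lambda.min"))
keep  <- coefs[, 1] != 0
model_genes <- rownames(coefs)[keep]
beta        <- coefs[keep, 1]

## 3) Risk score = sum(beta_i * expression_i), then split at the median
risk_score <- as.numeric(x[, model_genes] %*% beta)
risk_group <- ifelse(risk_score > median(risk_score), "high", "low")

## 4) Compare survival between the two groups (log-rank test)
survdiff(Surv(clin$time, clin$status) ~ risk_group)
```

The same risk-score vector can then be carried into the downstream ROC, immune-infiltration, and drug-sensitivity analyses described above.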
Construction of the Nomogram

Based on the results of multivariate Cox regression, a nomogram is a versatile approach for merging several risk factors into a single plot. Using the nomogram created with the R package "DynNom", we were able to visually forecast a patient's survival probability.

Cell Culture and Antibodies

U87-MG and LN229 cells were provided by the American Type Culture Collection (ATCC). U251 and A172 cells were provided by the Shanghai Institutes for Biological Sciences (Shanghai, China). Dulbecco's Modified Eagle Medium (DMEM; Gibco, CA, USA) with 10% fetal bovine serum (FBS) and 1% penicillin-streptomycin solution was utilized for culture of the four glioma cell lines and for in vitro investigations. Normal human astrocytes (NHAs) were provided by Lonza and cultivated in astrocyte growth medium containing rhEGF, insulin, ascorbic acid, GA-1000, L-glutamine, and 5% FBS. All cells were grown at 37°C with 5% CO2. Antibodies against USP4, E-cadherin, and N-cadherin were provided by Abcam; the β-actin antibody was provided by Cell Signaling Technology.

Quantitative Real-Time Polymerase Chain Reaction (qRT-PCR)

Total RNA was extracted from cell lines with TRIzol reagent (Invitrogen, CA, USA) according to the manufacturer's protocol. cDNA was synthesized with the PrimeScript RT Reagent Kit (Takara, Nanjing, China). qRT-PCR was performed using AceQ Universal SYBR qPCR Master Mix (Vazyme, Nanjing, China) on an ABI StepOnePlus PCR system (Applied Biosystems, Foster City, CA, USA). Primers used in this study were listed as follows: USP4 (

Western Blotting

RIPA buffer with protease inhibitors (Roche) was used to lyse cellular proteins, and equal amounts of protein were electrotransferred onto a polyvinylidene difluoride membrane (Millipore). The membrane was incubated with primary and secondary antibodies, and proteins were detected using enhanced chemiluminescence.

CCK-8 Assay

A Cell Counting Kit-8 (CCK-8) assay was used to assess the proliferation of GBM cells (U87-MG, LN229, and A172) according to the manufacturer's instructions. Transfected cells were seeded in 96-well plates. At 24, 48, 72, and 96 hours after transfection, 10 μL of CCK-8 reagent was added to each test well and incubated for 2 hours at 37°C protected from light. Absorbance was measured at a wavelength of 450 nm.

Colony Formation Analysis

U87-MG, LN229, and A172 cells were transfected and maintained in 6-well plates for about 12 days. The cells were then stained with 0.1% crystal violet for 30 minutes before being rinsed with PBS. Colonies larger than 1 mm in diameter were counted.

Migration and Invasion Assays

Cell migration and invasion were measured using transwell assays. For the migration assay, 2×10^4 cells were cultured in 200 μL of serum-free medium in the upper chamber, while 600 μL of complete medium was added to the lower chamber. According to the manufacturer's protocols, Matrigel was additionally employed for the invasion experiments (BD Biosciences, Bedford, MA, USA). After 24 hours of incubation at 37°C with 5% CO2, cells were fixed with 4% PFA and stained with 0.1% crystal violet solution.

Wound Healing

Cells were grown in 6-well plates for 24 hours before being scratched with a sterile pipette tip (20 μL). After rinsing the cells with PBS to remove cellular debris, each wound was examined by inverted microscopy (Olympus, Japan) at 0 and 24 hours. To examine cell migration capacity, the total wound area was analyzed using ImageJ software.

RESULTS

Our flow chart is shown in Figure 1.
Lasso Regression Was Performed to Construct a Ubiquitination-Related Signature in Glioma

A total of 72 ubiquitination-related genes with prognostic value in glioma were identified by univariate Cox regression of the 100 ubiquitination-related genes obtained from the GeneCards database. Through Lasso regression of these 72 genes, we obtained a risk score formula consisting of 12 genes:

Risk score = 0.0137868801576863 × UBE2D3 + 0.00798080087559059 × UBE2D2 + (−0.00918452991096172) × USP7 + 0.0065644500970919 × GRN + 0.00751875352337762 × UBE2S + 0.000192089285965977 × UBB + (−0.00497853632444545) × UBE2G2 + (−0.0968360751270892) × BTRC + 0.00754166909940448 × CUL1 + 0.104123805908217 × USP4 + 0.0289487761185436 × SIAH2 + 0.0271749540402458 × UBE2Z (Figures 2A, B).

Among them, UBE2D3, UBE2D2, GRN, UBE2S, UBB, CUL1, USP4, SIAH2, and UBE2Z were associated with poor prognosis of glioma (HR>1, P<0.001, Figure 2C), whereas USP7, UBE2G2, and BTRC were associated with better prognosis (HR<1, P<0.001, Figure 2C). Using this formula, a risk score can be calculated for each patient, and patients in the different cohorts can be separated into high-risk and low-risk groups based on the median value. Figure 3A shows the survival status, risk score curve, and expression of the model genes in the high-risk and low-risk groups of the training cohort; Figure 3B shows the corresponding results for the validation cohort. The dot plots of survival status, in both the training and validation cohorts, indicate that as the risk score grows, patients' survival times cluster toward shorter values, indicating a worse prognosis (Figures 3A, B). Survival analysis of the training cohort showed that the high-risk group has a significantly worse prognosis than the low-risk group (Figure 3C), and the same result was found in the validation cohort (Figure 3D). Subgroup survival analysis of the training cohort showed that a high risk score is associated with poor prognosis across sexes and age groups (Figure 3E), and this was verified in the validation cohort (Figure 3F).

Univariate and Multivariate Cox Regression Were Used to Evaluate the Independent Prognostic Value of Risk Scores in Gliomas

To determine the independent prognostic usefulness of the risk score, we used univariate and multivariate Cox regression. Univariate Cox regression in the training cohort revealed that age and risk score are independent prognostic predictors of glioma (Figure 4A), and the same was found in the validation cohort (Figure 4B). Multivariate Cox regression confirmed age and risk score as independent prognostic predictors in the training cohort (Figure 4C), while in the validation cohort gender, age, and risk score were confirmed as independent prognostic predictors (Figure 4D). We then created ROC curves for this signature in both cohorts to assess its accuracy. In the training cohort, the area under the curve (AUC) at 1, 3, and 5 years was 0.869, 0.925, and 0.868, respectively (Figure 4E). In the validation cohort, the AUC at 1, 3, and 5 years was 0.854, 0.867, and 0.796, respectively, demonstrating that the signature can accurately determine the prognosis of patients with glioma (Figure 4F).
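As a worked illustration of how the published 12-gene formula can be applied to an expression matrix and then evaluated with time-dependent ROC curves (as in Figures 4E-F), a hedged R sketch follows. Only the coefficients are taken from the formula above; the matrix `expr12` and the clinical columns are assumptions for illustration.

```r
library(timeROC)
library(survival)

# expr12: patients x 12 matrix of expression for the signature genes (assumed);
# column names must match the coefficient vector (coefficients from the formula above)
coef12 <- c(UBE2D3 =  0.0137868801576863, UBE2D2 =  0.00798080087559059,
            USP7   = -0.00918452991096172, GRN    =  0.0065644500970919,
            UBE2S  =  0.00751875352337762, UBB    =  0.000192089285965977,
            UBE2G2 = -0.00497853632444545, BTRC   = -0.0968360751270892,
            CUL1   =  0.00754166909940448, USP4   =  0.104123805908217,
            SIAH2  =  0.0289487761185436,  UBE2Z  =  0.0271749540402458)

risk_score <- as.numeric(expr12[, names(coef12)] %*% coef12)

# Time-dependent ROC at 1, 3, and 5 years (times in days; clin assumed as before)
roc <- timeROC(T = clin$time, delta = clin$status, marker = risk_score,
               cause = 1, weighting = "marginal",
               times = c(1, 3, 5) * 365, iid = FALSE)
roc$AUC  # AUC at each time point
```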
Immune Infiltration Analysis in High-Risk and Low-Risk Groups

Tumor formation and progression are influenced by the immune microenvironment, and understanding its effects on tumor prognosis and treatment is beneficial. We therefore examined how immune infiltration differed between the high-risk and low-risk groups. First, we used multiple algorithms to create an immune infiltration heat map for the high-risk and low-risk groups, with red representing high infiltration levels and blue representing low infiltration levels (Figure 5A). We then conducted a correlation analysis between immune cells and risk scores, finding that many immune cell types were substantially correlated with the risk score (Figures 5B-I).

Analysis of Immune Checkpoints (ICP) and Microsatellite Instability (MSI)

Tumor immunotherapy is a promising treatment option, and immune checkpoint-related gene expression and microsatellite instability are crucial indicators for evaluating its effectiveness. We therefore compared immune checkpoint-related genes and microsatellite instability between the high-risk and low-risk groups. The high-risk group showed higher expression of immune checkpoint genes (Figure 6A) and less microsatellite instability (Figure 6B). Correlation analysis showed that microsatellite instability decreased as the risk score grew (Figure 6C).

Single-Cell Sequencing Analysis Based on Public Databases

We used the Seurat package to process the single-cell transcriptome data. Raw data from GSE162631 were downloaded, and a total of 4 GBM samples were used for further analysis. Cells with a mitochondrial RNA percentage larger than 0.10 were filtered out, and we eventually acquired 51,449 cells meeting the standard. The PCA reduction plot showed no significant differences in cell cycle (Figure 7A). Meanwhile, we selected the top 3000 variable features, which are labeled in red, with the top 10 variable features annotated (Figure 7B). In Figure 7C, the principal component analysis shows the distribution of the samples, with no significant batch effects. After dimension reduction and tSNE clustering, the immune cells were identified using their feature genes (Figure 8A). The cells could be clearly divided into approximately 6 cell types, namely endothelial cells (EC), neutrophils, T cells/B cells, mural cells, tumor-associated macrophages (TAMs), dendritic cells, and microglia (Figure 8B). The genes involved in the signature were mapped onto this single-cell tumor microenvironment atlas (Figure 8C). At a glance, the expression of UBE2D3, UBE2D2, GRN, and UBB was ubiquitous in the immune microenvironment of the GBM patients, while UBE2G2, CUL1, and USP4 showed moderate expression in immune cells. In addition, almost all ubiquitination-associated genes were invariably expressed in TAMs and dendritic cells, indicating a strong correlation of ubiquitination with these immune cells.

Drug Sensitivity Analysis

According to the findings, high-risk patients have a worse prognosis. Therefore, in order to enable precise intervention in high-risk patients, we performed drug sensitivity analysis to identify drugs that might be effective. Several agents showed a lower half-maximal inhibitory concentration (IC50) in the high-risk group, meaning that the high-risk group was more sensitive to these drugs (Figure 9).
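A minimal R sketch of the single-cell QC and clustering steps described in the single-cell analysis above (mitochondrial filtering at 10%, top 3000 variable features, PCA, tSNE, and reference-based annotation with SingleR) is given below for one sample. The count matrix `counts`, the clustering parameters not stated in the text, and the choice of SingleR reference are illustrative assumptions.

```r
library(Seurat)
library(SingleR)
library(celldex)   # provides reference data sets for SingleR (assumed choice)

# counts: gene x cell raw count matrix for one GSE162631 GBM sample (assumed)
obj <- CreateSeuratObject(counts = counts, project = "GBM",
                          min.cells = 3, min.features = 200)

# QC: remove cells with >10% mitochondrial reads, as stated in the text
obj[["percent.mt"]] <- PercentageFeatureSet(obj, pattern = "^MT-")
obj <- subset(obj, subset = percent.mt < 10)

# Normalization, top 3000 variable features, scaling, PCA, clustering, tSNE
obj <- NormalizeData(obj)
obj <- FindVariableFeatures(obj, selection.method = "vst", nfeatures = 3000)
obj <- ScaleData(obj)
obj <- RunPCA(obj)
obj <- FindNeighbors(obj, dims = 1:20)
obj <- FindClusters(obj, resolution = 0.5)
obj <- RunTSNE(obj, dims = 1:20)

# Reference-based annotation (illustrative reference; the study combined feature genes and SingleR)
ref  <- celldex::HumanPrimaryCellAtlasData()
pred <- SingleR(test = GetAssayData(obj, slot = "data"),
                ref = ref, labels = ref$label.main)
obj$cell_type <- pred$labels[match(colnames(obj), rownames(pred))]

# Visualize signature genes across the tSNE map, e.g. USP4, UBB, CUL1
FeaturePlot(obj, features = c("USP4", "UBB", "CUL1"), reduction = "tsne")
```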
The Nomogram Was Constructed to Further Evaluate the Prognosis of Glioma Patients

By merging the clinical parameters of glioma patients, a nomogram was created to evaluate the prognosis of glioma patients at 1, 3, and 5 years (Figure 10A).

In Vitro Experiments Were Performed to Verify the Function of the Key Gene USP4

Because USP4 had the highest HR value in the signature, its role in glioma was examined further. First, a survival analysis using the GEPIA database revealed that increased USP4 expression in glioma patients was linked to poor outcomes (Figure 10B). In immunological analysis, USP4 was found to be linked to a variety of immune cells (Figure 10C). GSVA analysis of USP4 and ubiquitination-related pathways is presented in Supplemental Figure S1. In vitro tests were then carried out to confirm USP4's function. qRT-PCR revealed that USP4 expression was up-regulated in all four glioma cell lines compared with the normal control NHA cell line (Figure 11A), with the highest expression in the U87-MG and LN229 cell lines, so gene knockdown was performed in these two cell lines (Figure 11B, *P<0.05, **P<0.01). In the U87-MG and LN229 cell lines, both siRNAs drastically reduced USP4 expression (Figure 11C). USP4 knockdown significantly reduced the viability of the U87-MG and LN229 cell lines in the CCK-8 experiment (Figure 11D, **P<0.01). The colony-forming ability of the U87-MG and LN229 cell lines was dramatically reduced following USP4 knockdown (Figure 11E, **P<0.01), as were their migratory and invasive capacities (Figure 11F, **P<0.01) and wound-healing ability (Figure 11G, **P<0.01). Subsequently, a plasmid was used to overexpress USP4 in the A172 cell line (Figure 12A). The colony-forming ability of the A172 cell line was greatly improved after overexpression of USP4 (Figure 12B, **P<0.01), as was its viability in the CCK-8 experiment (Figure 12C, **P<0.01). In transwell experiments, USP4 overexpression greatly improved the A172 cell line's migration and invasion capacity (Figure 12D, **P<0.01), and in wound healing assays it greatly improved migratory ability (Figure 12E, **P<0.01). The knockdown and overexpression efficiencies of USP4 were confirmed by Western blotting, and the link between USP4 and the EMT-related proteins N-cadherin and E-cadherin was investigated. A statistically significant association was established between USP4 and the EMT proteins E-cadherin and N-cadherin (Figure 12F). When the USP4 gene was knocked down in the U87-MG cell line, N-cadherin expression was dramatically reduced in the siUSP4-1 and siUSP4-2 groups, whereas E-cadherin expression was significantly raised. Likewise, when USP4 was knocked down in the LN229 cell line, N-cadherin expression was greatly reduced, whereas E-cadherin expression was significantly increased. After USP4 was overexpressed in A172 cells, N-cadherin expression was drastically increased, while E-cadherin expression was significantly decreased.

DISCUSSION

Glioma, the most frequent and difficult-to-treat malignant tumor of the central nervous system, has a significant impact on patients' quality of life and places a significant burden on human health (26).
Of these, IDH-wild-type glioblastoma is the most malignant subtype and carries a high mortality rate once diagnosed (27). Existing conventional treatments seem to have limited benefits in glioma (28), and postoperative recurrence and drug resistance remain major problems in its clinical management (29). The high heterogeneity and complex immune microenvironment of gliomas are considered the main reasons for the poor prognosis and poor therapeutic effect (30). In the glioma microenvironment, crosstalk among multiple signaling pathways and biological mechanisms drives continuous growth and development (31).

Ubiquitination, a frequent kind of post-translational protein modification, has been linked to cancer development (32). Ubiquitination alters intracellular protein interactions by targeting substrate proteins for degradation by the proteasome (33). Since substrate proteins may be oncogenic or tumor-suppressive, ubiquitination plays a dual role in cancer (34). At present, many key enzymes in the ubiquitination process are considered promising targets for cancer therapy (35). In addition, the importance of ubiquitination in glioma has been hypothesized. Chen et al. discovered that RNF139, an E3 ligase, plays a tumor suppressor role in glioma by modulating the PI3K/AKT signaling pathway and encouraging glioma cell apoptosis (36). Liang et al., on the other hand, discovered through cell studies that ubiquitin-specific protease 22 (USP22) increased glioma cell proliferation, migration, and invasion, promoting glioma growth and development (37). Thus, different members of the ubiquitination system may be foes or friends in gliomas, and detailed analysis of these members is needed to determine their role in glioma.

The role of ubiquitination-related genes in gliomas was investigated in this study. Using univariate Cox regression of ubiquitination-related genes retrieved from the GeneCards database, 72 genes with prognostic significance were identified; Lasso regression of these genes then yielded a 12-gene prognostic signature for glioma. A risk score for each patient can be determined using this signature. Based on the median risk value, glioma patients in each cohort can be split into high-risk and low-risk groups, with the high-risk group having a much worse prognosis than the low-risk group. This serves as a guide for glioma prognosis and risk assessment. Immune analysis revealed that the high-risk and low-risk groups had different levels of immune infiltration. Furthermore, the high-risk group showed a higher expression trend of immune checkpoint-associated genes but lower microsatellite instability. In addition, we mapped the expression of the signature genes in distinct cell types using single-cell analysis. Finally, cell studies were utilized to confirm that USP4, the gene with the highest HR in the signature, is expressed and functional in gliomas.

The GSE162631 dataset consists of single-cell data only. In the original study of this dataset, the authors investigated the activation status of distinct brain endothelial cells (EC) in gliomas, as well as the status of blood-brain barrier disruption. We used this dataset to investigate the expression of the 12 model genes at the single-cell level, which serves as a guide to understanding the function and heterogeneity of this prognostic model across different cell types.
Although immunotherapy has achieved initial success in many solid tumors and is considered a landmark advance in cancer treatment, its application in gliomas is still limited (38), and our understanding of the glioma immune microenvironment is still insufficient. The presence of the blood-brain barrier (BBB) is thought to hinder drug action in intracranial tumors, attenuating their efficacy (39). It should also be mentioned that gliomas have long been considered "cold" tumors with a high degree of immunosuppression (40). As a result, more research into the immune microenvironment of glioma is required to provide a foundation for immunotherapy. Our research discovered that high-risk glioma patients exhibited a higher expression trend of immune checkpoint-related genes and less microsatellite instability. This serves as a reference for glioma immune stratification and can help guide glioma immunotherapy.

Ubiquitin-specific protease 4 (USP4) is the gene with the highest HR in our constructed signature and is associated with poor prognosis in gliomas. Our cell tests revealed that USP4 was highly expressed in glioma, and that knocking down USP4 expression dramatically reduced the activity, invasion, and migratory ability of glioma cells. This adds to the evidence that USP4 has a function in gliomas. USP4, a cysteine protease of the DUB family, is involved in cellular deubiquitination. Many prior studies have suggested that USP4 has a function in malignancies. Geng et al. reported that the PAK5-DNPEP-USP4 axis increases the growth and progression of breast cancer, and that overexpression of USP4 is linked to a poor prognosis in breast cancer (41). USP4 expression was similarly linked to increased breast cancer invasiveness, according to Cao et al. (42). Yang et al. discovered that the USP4/SMAD4/CK2 axis increases esophageal cancer progression (43). USP4 was also identified as a potential target in gliomas in our research.

In conclusion, the ubiquitination-related prognostic signature in gliomas allows patients to be adequately classified and immunologically assessed. Our research could lead to new approaches to glioma detection and therapy.

DATA AVAILABILITY STATEMENT

The original contributions presented in the study are included in the article/Supplementary Material. Further inquiries can be directed to the corresponding authors.

AUTHOR CONTRIBUTIONS

QT, JX, and HW contributed to the conception and design of the study; CM, JX, and QT collected the data; JX and QT performed the statistical analysis; QT, WW, and HW wrote the first draft of the manuscript. All authors contributed to the manuscript and approved the submitted version.

Figure 12 (legend, continued): (E) Wound healing experiments showed that the migration ability of the A172 cell line was significantly enhanced after USP4 overexpression (**P < 0.01). (F) Western blotting was performed to verify the knockdown and overexpression efficiency of USP4 and to explore the relationship between USP4 and EMT-related proteins.
Fine needle biopsy versus fine needle aspiration in the diagnosis of immunohistochemistry-required lesions: A multicenter study with prospective evaluation

ABSTRACT

Objectives: The superiority of EUS-guided fine-needle biopsy (EUS-FNB) over fine-needle aspiration (FNA) remains controversial. This study aimed to compare the efficacy of FNB and FNA in immunohistochemistry (IHC)-required lesions, including type 1 autoimmune pancreatitis (AIP), neuroendocrine tumor (NET), mesenchymal tumor, and lymphoma.

Methods: In this multicenter study, specimens from all eligible patients who underwent EUS-FNB/FNA for these specific lesions were prospectively evaluated. Demographics, adequacy of specimens for IHC, diagnostic accuracy, and integrity of tissue were analyzed. Subgroup analysis and multivariate logistic regression were also performed to control for confounders.

Results: A total of 439 patients were included for analysis. The most common lesion type was type 1 AIP (41.69%), followed by NET, mesenchymal tumor, and lymphoma. FNB yielded specimens with better adequacy for IHC (82.41% vs. 66.67%, P < 0.001) and higher diagnostic accuracy (74.37% vs. 55.42%, P < 0.001). The superiority of FNB over FNA in adequacy for IHC (odds ratio, 2.786 [1.515-5.291]) and diagnostic accuracy (odds ratio, 2.793 [1.645-4.808]) remained significant after controlling for confounders including needle size, lesion site, lesion size, and endoscopist. In subgroup analysis, FNB showed higher diagnostic accuracy in AIP and mesenchymal tumor, whereas no statistically significant difference was observed in NET and lymphoma.

Conclusions: FNB was superior to FNA in obtaining tissues with better adequacy and integrity. These results suggest that FNB should be considered a first-line modality in the diagnosis of IHC-required lesions, especially AIP and mesenchymal tumor. However, a randomized controlled trial with a larger sample size is needed to further confirm our findings.

INTRODUCTION

EUS-FNA has been widely used to diagnose lesions in and around the gastrointestinal (GI) tract. [1] Although EUS-FNA is the preferred sampling method and performs well in acquiring cytological specimens, FNA needles are less capable of obtaining core tissue for histological assessment, especially in the absence of rapid on-site evaluation (ROSE). [2] However, for certain neoplasms such as neuroendocrine tumor (NET) or chronic inflammation, procurement of core tissue is essential both for cytological evaluation and for the performance of immunohistochemistry (IHC) to establish a diagnosis. In the effort to acquire more core tissue for detailed examination and immunostaining, multiple techniques of EUS-FNA were adopted to improve the diagnostic yield, but with little success. [3][9][10][11][12][13][14][15] Whether FNB is superior to FNA remains highly controversial. For pancreatic adenocarcinoma and lymph node metastasis, cytology is often adequate for diagnosis, and no significant difference in diagnostic efficiency between FNB and FNA has been observed. [7,15][21] Even for pancreatic cancers, a larger quantity of tissue enables molecular profiling and next-generation sequencing, which are vital for risk stratification and targeted therapy or immunotherapy.
[14,22] Current published research comparing FNB needles with FNA exclusively in NET or AIP has already explored the advantages of FNB in core tissue acquisition and diagnostic yield. However, most studies have focused mainly on diagnostic accuracy and core tissue length, with sample quality and tissue adequacy treated as secondary outcomes. [16,17,20,21] Also, the significance of their conclusions was limited by sample size. In this context, we aimed to investigate lesions where IHC is necessary to confirm the diagnosis; especially in cases of uncertain cytological diagnosis, IHC is essential to make a definitive diagnosis. The authors conducted a real-world study with prospective sample evaluation to determine the difference in histologic yield between FNA and FNB needles in 439 patients with IHC-required lesions including AIP, NET, mesenchymal tumor, and lymphoma.

Study design

This study was a real-world, multicenter, single-blinded study comparing the efficacy of FNB and FNA needles in obtaining adequate tissue specimens, with prospective sample evaluation. ROSE was not available during the procedures. The trial was conducted from April 2015 to July 2022 at Peking Union Medical College Hospital and Tongji Hospital, Tongji Medical College affiliated to Huazhong University of Science and Technology, 2 major tertiary care centers in China. The study was performed in compliance with the Declaration of Helsinki. The protocol was approved by the institutional review boards of each participating center and registered at ClinicalTrials.gov (NCT05565066).

Patients and interventions

Patients who underwent EUS-guided sampling at the 2 centers and were finally diagnosed with (1) type 1 AIP, (2) NET, (3) mesenchymal tumor, or (4) lymphoma were considered eligible according to the inclusion criteria. All patients with definitive or probable type 1 AIP were diagnosed based on the International Consensus Diagnostic Criteria (ICDC). [23] Patients with NET, mesenchymal tumor, or lymphoma required a diagnosis according to histopathological findings (surgical or provided by EUS-FNB/FNA). Exclusion criteria were patient age younger than 18 years, pregnancy, uncorrectable coagulopathy (platelet count <50,000/mm3, international normalized ratio >1.5), acute pancreatitis in the preceding 2 weeks, severe cardiorespiratory dysfunction precluding endoscopy, and failure to provide informed consent. All consecutive patients provided informed consent.

Participating endoscopists were required to meet the following criteria: (1) have performed more than 100 EUS-guided tissue sampling procedures to date, or at least 50 in the last 12 months; and (2) be willing to comply with the study requirements, including presenting the possibility of participating in the study to all eligible subjects. After confirming that the eligibility criteria were fulfilled, investigators selected the puncture needle type according to the needles available at the time and the lesion characteristics. Needles used in this study included 19G, 22G, and 25G FNA needles (either EchoTip Ultra from Cook or Expect from Boston Scientific) and 19G, 20G, 22G, and 25G FNB needles (EchoTip ProCore from Cook or Acquire from Boston Scientific). A more detailed description of the intervention procedure is provided in Appendix 1.
Specimen evaluation

The aspirated samples from each pass were expelled onto separate slides with a stylet. After this, 0.1 mL of sterile saline was flushed through the needle, followed by 5 mL of air. The macroscopically visible core tissue was transferred into Eppendorf tubes containing 10% formalin for histological examination and subsequently embedded in paraffin. Specimen sections were cut and stained with hematoxylin and eosin. Sections of suspected AIP were further stained with IgG4, CD38, and CD138; sections of NET were stained with CgA, Syn, and CD56; sections of suspected mesenchymal tumor were stained with c-kit, CD34, DOG-1, α-SMA, desmin, and S-100; sections of suspected lymphoma were stained with CD3, CD5, CD19, CD20, CD22, CD30, CD45RO, CD79a, PAX5, and BCL2 (Supplementary Figure 1, http://links.lww.com/ENUS/A348). Additional IHC markers were stained as needed. Two pathologists, blinded to the type of needle used and to clinical information, independently assessed all tissue samples obtained. When the 2 experts made different diagnoses, agreement was reached by consulting a third pathologist and carefully discussing the findings.

Tissue integrity for histological analysis was scored from 0 to 5 as follows [Figure 1]: score 5, sufficient material for adequate histological interpretation (core tissue length > 1 × 10 high-power field [HPF]); score 4, sufficient material for adequate histological interpretation (core tissue length < 1 × 10 HPF); score 3, sufficient material for limited histological interpretation; score 2, sufficient material for adequate cytological diagnosis; score 1, sufficient material for limited cytological diagnosis (no representativeness); and score 0, inadequate for diagnosis, based on a previously reported system. [24] Cases with histological characteristics resembling AIP, NET, mesenchymal tumor, or lymphoma but without IHC evaluation were excluded from the final diagnosis (not including failed IHC cases).

Outcomes

The 2 primary outcomes of this study were the IHC success rate and the diagnostic accuracy of specimens obtained with FNB versus FNA needles. An adequate histological core for IHC was defined according to the following criteria: (1) adequacy to provide a histological diagnosis, and (2) after cutting the sections stained with hematoxylin and eosin, a remaining tissue thickness > (4 × n) μm, where n is the number of markers necessary to diagnose the specific disease and each section requires a minimum thickness of 4 μm. Because specificity was not involved in this study, diagnostic accuracy was defined as the number of true positives divided by the total number of samples. The secondary outcomes were specimen quality, namely core tissue length and tissue integrity scores, in the samples obtained by FNB and FNA needles. To seek the potential merits of FNB or FNA needles in different situations, we further compared the efficacy of the 2 types of needles in subgroup analyses (lesion type and lesion size).
Statistical analysis

The demographic and clinical characteristics of the patients were summarized with mean and SD, and ranked data were expressed as median and interquartile range. Categorical parameters including sex, lesion type, lesion site, adverse events, adequacy for IHC, and diagnostic accuracy were expressed as number of cases and percentage. Qualitative variables were compared using the χ2 test or Fisher exact test, whereas the Student t test and the Mann-Whitney U test were used for quantitative variables. The effect of FNB or FNA on IHC success rate and diagnostic accuracy was determined using multivariate logistic regression to control for potential confounders, including needle size, lesion site, procedures in different time spans, and different endosonographers. Statistical significance was defined as P < 0.05 (2-tailed). All statistical analyses were performed using SPSS V.26.0.

Patient and lesion characteristics

From April 2015 to July 2022, 458 patients were enrolled in this study; 19 patients were excluded because of suspected pancreatic cancer or lack of a definitive diagnosis. Therefore, the remaining 439 patients were analyzed: 199 in the FNB group and 240 in the FNA group. Technical success occurred in all cases [Figure 2]. Table 1 illustrates the baseline clinical characteristics of the recruited patients. There were no significant differences in age, sex ratio, tumor size, or tumor location between groups. Of the 439 patients, 163 (37.13%) were classified as type 1 AIP in the final diagnosis, according to the ICDC. [23] Two hundred seventy-six patients (99 NET, 99 mesenchymal tumor, 58 lymphoma) were diagnosed based on surgical or EUS-guided tissue sampling histology. Adverse events were minimal (1 minor upper GI hemorrhage after puncture in each group, treated with hemostatic clip placement, and 1 mild pancreatitis in the FNB group) and were not statistically different between the FNB and FNA groups (1.00% vs. 0.42%, P = 0.592).

Neuroendocrine tumor

There were no statistical differences in the number of CgA-positive and Syn-positive specimens between the 2 groups. However, the number of CD56-positive specimens from the FNB group was significantly higher than from the FNA group (74.29% vs. 51.56%, P = 0.028).

Multivariate logistic regression

Multivariate logistic regression analysis was then performed to control for needle type, needle size, lesion site, FNA/FNB procedures in different time spans, and different endosonographers as confounding factors. Based on the results of multivariate logistic regression and controlling for the aforementioned variables, needle type remained a significant predictor of a higher IHC success rate (OR, 2.786 [1.515-5.291]; P = 0.001) and accurate diagnosis (OR, 2.793 [1.645-4.808]; P ≤ 0.001; Table 4). However, only in the AIP and mesenchymal tumor subgroups did FNB still result in higher diagnostic accuracy after adjustment for all confounding factors (AIP: OR 3.861 [1.471-10.870], P = 0.008; mesenchymal tumor: OR 3.802 [1.239-12.500], P = 0.021; Table 5), whereas the differences were no longer remarkable in NET and lymphoma.

DISCUSSION

As highlighted in the background, published studies comparing the diagnostic yields of FNB and FNA needles have produced conflicting results. [2,7,9,25,26] The current guidelines on endoscopic tissue sampling endorse no particular needle type to improve diagnostic accuracy. [27] However, the guidelines still indicate the advantages of FNB in obtaining more tissue for diagnosis and genetic profiling, especially when ROSE is not available. Histological evaluation in combination with IHC is essential for the diagnosis of AIP, NET, mesenchymal tumor, and lymphoma. IHC provides important information to differentiate neoplastic and nonneoplastic lesions and to identify the tumor subtype. Thus, we conducted this real-world, multicenter study to compare the efficacy of FNB and FNA in diagnosing these IHC-required lesions. The results of our analysis demonstrated that FNB needles yielded specimens with better adequacy and quality for IHC. A higher diagnostic accuracy of FNB was also observed, potentially due to the superiority of specimen quality.
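As a rough illustration of the multivariate adjustment described in the Statistical analysis section above (logistic regression of IHC adequacy on needle type while controlling for needle size, lesion site, lesion size, study period, and endosonographer), a hedged R sketch follows. The data frame `dat` and its column names are assumptions for illustration, not the study's actual dataset, which was analyzed in SPSS.

```r
# dat: one row per patient, with (assumed) columns:
#   ihc_adequate (0/1), needle ("FNA"/"FNB"), needle_size, lesion_site,
#   lesion_size_mm, year, endoscopist
dat$needle <- relevel(factor(dat$needle), ref = "FNA")  # FNA as reference group

fit <- glm(ihc_adequate ~ needle + needle_size + lesion_site +
             lesion_size_mm + year + endoscopist,
           data = dat, family = binomial)

# Odds ratios with 95% Wald confidence intervals for each term;
# the same model form can be refit with diagnostic accuracy as the outcome
or_table <- cbind(OR = exp(coef(fit)), exp(confint.default(fit)))
round(or_table, 3)
```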
To date, this multicenter trial remains the largest real-world study to compare the efficacy of EUS-FNB and FNA sampling in IHC-required lesions. Unlike previous studies, we collected a large series of patients with multiple types of lesions. Type 1 AIP was the most frequent lesion in this study, followed by NET, mesenchymal tumor, and finally lymphoma. In terms of location, pancreatic lesions (60.55%) accounted for the majority, with the rest being other retroperitoneal sites, the GI tract, the mediastinum, and the pelvic cavity, in that order. Twenty-gauge needles are not available for FNA, and 19-gauge needles are much rarer among FNB needles, which partially explains the different compositions of needle sizes. To make our comparison more reliable, we further performed multivariate logistic regression to eliminate potential baseline bias, which was unique to our study. Among the 198 NET and mesenchymal tumor patients recruited in this study, 27 underwent post-EUS surgical resection based on the histological findings, while 107 patients did not proceed to surgery because there was no evidence of malignancy. This again emphasizes the importance of EUS for medical decision making. EUS-FNA seems to be accurate for diagnosing pancreatic cancer or making a preliminary diagnosis of AIP and NET. [17,20] However, it has not been satisfactory enough to support personalized management of specific lesions. Especially in cases of AIP, FNA is often incapable of acquiring enough densely fibrotic tissue. [21] For spindle cell lesions, a definitive diagnosis is impossible in the absence of IHC staining. [28] In our study, FNB yielded specimen cores with better adequacy and quality, which is pivotal for diagnosing the above diseases. Overall, 82.41% of specimens from FNB were adequate for IHC staining, significantly higher than the rate of 66.67% in the FNA group (P < 0.001). Similar to other studies, we observed that fewer passes were required to achieve a diagnosis in the FNB group (FNA vs. FNB: 4 [3-4] vs. 3 [3-4], P < 0.001). [9] More importantly, the tissue acquired by FNB had a better integrity score (FNB vs. FNA: 4 [3-4] vs. 3 [0.5-4], P < 0.001) and a longer core tissue length (FNA vs. FNB: 0.5 [0.4-0.8] vs. 0.7 [0.5-1.0], P < 0.001). [2,9,18] As our previous study indicated, the architecture of tissue was better preserved in FNB-obtained specimens. [9] Conversely, the paucicellular nature of FNA led to the collected tissue being distorted or consumed during IHC sectioning. [19,28] As highlighted in previous studies, the major advantage of FNB in AIP was a decrease in cytologically inconclusive cases. [21,29] In our study, based on the ICDC, 37.23% of FNB cases and 59.55% of FNA cases were uninformative (P = 0.003). A higher success rate of IgG4 IHC staining accounted for the improvement. We also found IgG4-positive cells >10/HPF in 30.85% of FNB cases compared with 12.36% of FNA cases (P = 0.002), a rate reported as 16% to 78% in previous studies.
[21,30] The relatively higher rate of uninformative cases could be related to the long time span of the study and to the fact that staining of elastic fibers, the marker used to identify obliterative phlebitis, is not routinely performed in China. A high density of fibrosis, a common feature of AIP tissue, along with the lack of cellular constituents including plasma cells, often leads to an ambiguous diagnosis. Larger tissue cores obtained by FNB are more likely to contain more IgG4-positive plasma cells, and a sufficient amount of tissue provides more detail for an accurate diagnosis. After eliminating all confounding factors, including needle size and different endoscopists, the differences in IHC rate and diagnostic rate remained significant, indicating that FNB may be a better choice than conventional FNA for AIP. Regarding mesenchymal tumors, similar to the rates of 69.30% to 100% in previous studies, [28,31] the IHC success rate in our FNB group was 86.00%, higher than the rate of 63.72% with FNA (P = 0.009). Notably, the diagnostic accuracy of FNB was also significantly higher than that of FNA (82.00% vs. 55.10%, P = 0.004). GIST was the most common mesenchymal tumor type in our study. In the diagnosis of GIST, c-kit and DOG-1 are commonly used as indicative markers, and we observed a higher rate of c-kit or DOG-1 positivity in the FNB group. Although FNA may be accurate for detecting spindle cell lesions, IHC staining is essential to differentiate GIST from leiomyoma/leiomyosarcoma. Specimens obtained by FNB needles could also provide a more accurate evaluation of mitotic activity for risk classification, [31] which is important for determining the next step in clinical management. Besides GIST, 5 leiomyomas/leiomyosarcomas, 12 schwannomas, and 11 other types of mesenchymal tumors were also included in this study.

Possibly limited by sample size, the rate of α-SMA- or desmin-positive specimens in leiomyoma and the rate of S-100-positive specimens in schwannoma were not statistically different between the 2 groups. However, a tendency indicating the superiority of FNB in schwannoma (OR, 21.000; P = 0.067) was observed. To eliminate the influence of possible confounding factors, multivariate logistic regression was also performed in the analysis of mesenchymal tumors. FNB needles demonstrated a significantly higher diagnostic accuracy than FNA in the mesenchymal tumor subgroup; however, no difference in IHC staining was noted after the regression. As illustrated in previous studies, even when IHC was performed on FNA specimens, the limited tissue could not ensure an accurate diagnosis. [28] In the subgroup analysis of NET and lymphoma, no statistical differences were found in adequacy for IHC between FNB and FNA. Only for NET was the diagnostic accuracy of FNB higher than that of FNA (91.43% vs. 73.43%, P = 0.038). However, after controlling for confounding factors, the differences were not statistically significant, probably because cellular components are much more abundant in NET and lymphoma than in AIP and mesenchymal tumors. Cell crushing was commonly observed in IHC staining of FNA/FNB samples, which may be the cause of insufficient immunostaining in AIP.
In contrast, although cell crushing also occurred in NET and lymphoma cases, the abundance of cellular components ensured that a certain number of tumor cells were stained properly. In our study, FNA also yielded a relatively satisfactory diagnostic rate for NET. Although a previous study concluded that FNB yielded higher sensitivity in the diagnosis of NET, [16] that result was not validated by a rigorous statistical comparison but simply by listing the sensitivity of the 2 methods. Also, in the stratification analysis according to lesion size, FNB yielded a higher diagnostic rate in the subgroup of lesions ≥20 mm, whereas no statistical difference was observed in the smaller lesion subgroup. Unfortunately, most of the benign NETs were <20 mm, [32] which is not favorable evidence for FNB application. As for lymphoma, although a tendency indicating the superiority of FNB in diagnostic rate was present (OR, 2.609; P = 0.155), we did not observe a statistical difference, possibly due to the small sample size. Despite being the largest study exclusively evaluating IHC-required lesions, we recognize some major limitations. First, although specimens in this study were prospectively and uniformly evaluated, this is a real-world retrospective study lacking randomization and therefore inevitably subject to selection bias and confounding factors. Over the long time span of this study, EUS techniques and endoscopists' skills were constantly being refined, which also gave rise to unquantifiable effects. Similarly, multiple needle sizes were used, as this is a real-world study. In this study, ProCore accounted for the majority of the FNB needles. Given the limited use of other needles, we did not perform a comparative interclass analysis of different products. To eliminate these heterogeneities as much as possible, multivariate logistic regression was performed. In summary, FNB needles yielded specimens with better adequacy and quality compared with FNA for IHC-required lesions without ROSE. These results strongly suggest that FNB should be preferred in the diagnosis of IHC-required lesions, especially AIP, mesenchymal tumor, and NET with a size of ≥20 mm. However, larger randomized controlled trials are needed to confirm these findings. The improvement in diagnostic accuracy and classification of IHC-required lesions will certainly help gastroenterologists and surgeons manage challenging situations with more confidence.
Clinical Trial Registration: The protocol was approved by the institutional review boards of each participating center and registered at ClinicalTrials.gov (NCT05565066).
Figure 1. The tissue integrity assessments of specimens (hematoxylin and eosin stained). Example of (A) score 5, sufficient material for adequate histological interpretation (core tissue length > 1 × 10 HPF, original magnification ×100); (B) score 4, sufficient material for adequate histological interpretation (core tissue length < 1 × 10 HPF, original magnification ×100); (C) score 3, sufficient material for limited histological interpretation (original magnification ×40); and (D) score 0, inadequate for diagnosis, based on a previously reported system (original magnification ×40). Scores 1 and 2 are measurements of cytological results and are thus not exhibited here.
Figure 3. Comparison of specimen quality between FNB and FNA in terms of (A) percentage of adequate specimens, (B) number of passes to acquire adequate specimens, and (C) length of core tissues for all lesions.
Multivariate logistic regression analysis was then performed to control for needle type, needle size, lesion site, FNA/FNB procedures in different time spans, and different endosonographers as confounding factors. Based on the results of the multivariate logistic regression, controlled for the aforementioned variables, needle type was still a significant predictor of a higher success rate of IHC (OR, 2.786 [1.515-5.291]; P = 0.001) and of accurate diagnosis (OR, 2.793 [1.645-4.808]; P ≤ 0.001; Table 4). However, only in the AIP and mesenchymal subgroups did FNB still result in higher diagnostic accuracy after adjustment for all the confounding factors (AIP: OR, 3.861 [1.471-10.870], P = 0.008; mesenchymal tumor: OR, 3.802 [1.239-12.500], P = 0.021; Table 5).
Table 5. Multivariate logistic regression analysis according to lesion types. Model 1 was adjusted for needle size, endosonographer, and operation year. Model 2 was adjusted for needle size, lesion site, lesion size, endosonographer, and operation year.
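The adjusted odds ratios reported above come from a multivariate logistic regression with needle type as the exposure and the listed covariates entered as confounders. Purely as an illustration of that kind of adjustment (the data frame, column names, and file below are hypothetical, not the authors' data or code), adjusted ORs and their confidence intervals could be obtained in Python with statsmodels roughly as follows:

# Hedged sketch: hypothetical column names and file; not the authors' actual code or data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# df is assumed to hold one row per patient with the outcome and covariates.
df = pd.read_csv("eus_cases.csv")  # hypothetical file

# Outcome: 1 if the specimen was adequate for IHC, 0 otherwise.
model = smf.logit(
    "ihc_adequate ~ C(needle_type) + C(needle_size) + C(lesion_site)"
    " + C(endosonographer) + C(op_year)",
    data=df,
).fit(disp=False)

# Exponentiated coefficients give adjusted odds ratios with 95% CIs.
odds_ratios = np.exp(model.params)
conf_int = np.exp(model.conf_int())
print(pd.concat([odds_ratios, conf_int], axis=1))

Exponentiating the fitted coefficients converts log-odds into odds ratios, which is how an adjusted needle-type OR such as 2.786 with its 95% CI would be read off the model output.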
2024-01-08T16:50:51.558Z
2023-11-01T00:00:00.000
{ "year": 2023, "sha1": "59ba593261e3c3ed8f9181233edafa6330ba764d", "oa_license": "CCBYNCSA", "oa_url": "https://journals.lww.com/eusjournal/fulltext/2023/11000/fine_needle_biopsy_versus_fine_needle_aspiration.3.aspx", "oa_status": "HYBRID", "pdf_src": "PubMedCentral", "pdf_hash": "f797724c985adaedc3415e4a9433269cd549f69a", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
265424439
pes2o/s2orc
v3-fos-license
Development, validation and a GAPI greenness assessment for the determination of 103 pesticides in mango fruit drink using LC-MS/MS A robust method was developed using LC-ESI-MS/MS-based identification and quantification of 103 fortified pesticides in a mango fruit drink. Variations in QuEChERS extraction (without buffer, citrate, and/or acetate buffered) coupled with dispersive clean-up combinations were evaluated. Results showed 5 mL dilution and citrate buffered QuEChERS extraction with anhydrous (anhy) MgSO4 clean-up gave acceptable recovery for 100 pesticides @ 1 μg mL−1 fortification. The method was validated as per SANTE guidelines (SANTE/11813/2021). 95, 91, and 77 pesticides were satisfactorily recovered at 0.1, 0.05, and 0.01 μg mL−1 fortification with HorRat values ranging from 0.2–0.8 for the majority. The method showed matrix enhancement for 77 pesticides with a global uncertainty of 4.72%–23.89%. The reliability of the method was confirmed by real sample analysis of different brands of mango drinks available in the market. The greenness assessment by GAPI (Green Analytical Procedure Index) indicated the method was much greener than other contemporary methods. Introduction Mango (Mangifera indica), the king of Indian fruits and a member of the Anacardiaceae family, is one of the most significant and commonly grown fruits in India and other tropical nations.A rich profile of vitamins and minerals, good amounts of carbs, proteins, fats, and dietary fiber make mango a nutrient-dense and satiating choice for a balanced diet.It is a rich source of a plethora of phytochemicals like quercetin, isoquercitrin, astragalin, fisetin, gallic acid, and abundant enzymes (Siddiq et al., 2017). Considering its aesthetic values, strong aroma, delicious taste, high nutritive values, and antioxidant properties, the fruit is served as whole fruit, fruit juice, smoothies, ice cream, chutney, etc., and highly impacts on domestic and international trade.The most popular and globally consumed product of processed mango is mango fruit drink.Mangoes are infested by many pests thus vastly affecting the trade (Pena et al., 1998).To manage the losses by pests and diseases, numerous pesticides of different classes like insecticides and plant growth regulators are in use on mango (CIBRC, 2022).But, their unscientific use in agriculture has engraved the problem of residues in mango fruits (Mukherjee et al., 2007). Consequently, mangoes are no longer regarded as the king of tropical fruits in much of Europe; instead, they are now considered to be a prohibited fruit based on the fact that 207 consignments were returned by the European Union (EU) in 2014 (Business standards, 2014).With technological upliftment and increased socio-economic status of the people, food safety concerns in terms of pesticide residues are nowadays attaining wide focus (Nougadere et al., 2020).Therefore, it is crucial to keep an eye on pesticide residues in processed products like mango fruit drinks, especially in light of their consumption by the most vulnerable section of society, i.e., infants, children, and old and infirm persons, for whom any detectable pesticide residue raises the question about safety. 
The low concentration of analytes and the abundance of additives and interfering compounds that might be coextracted with analytes pose a challenge in detecting pesticide residues in food matrices, which in most cases negatively impacts the analytical results (Wilkowska and Biziuk, 2011;Tang et al., 2004 used liquid-liquid extraction followed by SPE for clean-up and GC analysis for quantifying four pyrethroid pesticides in apple juices.Zang et al., in 2014 used the QuEChERS-DLLME method for fruit juices of complex matrices (orange, lemon, kiwi, and mango) and found its suitability for the quantification of 10 pyrethroid insecticides.Rizzetti et al., in 2016 had developed a UHPLC-MS/ MS method for multi-residue determination of 74 pesticides in orange juice.Sivaperumal et al. (2017) used the UHPLC-Q-TOF/MS method for the determination of 68 pesticides in the mango fruit matrix.A UHPLC-MS/MS method was developed for quantification of 113 pesticides in green and ripe mangoes by Li et al., in 2018.However, methods for multi-residue pesticide analysis in processed foods are scant in number, and in the case of mango fruit drink, the Multiple reaction monitoring for most of the most commonly used pesticides in the Indian Scenario is not available so far.Zambonin et al. (2004) demonstrated varied recoveries for eight organophosphorus pesticides (diazinon, ethyl-parathion, fenitrothion, fenthion, malathion, methylparathion, methidathion, and phorate) in orange, grapefruit, and lemon due to significant variation in sample matrices, even though all three samples represent citrus fruits and belong to the Rutaceae family.In the multi-residue study for 22 GC-amenable and 21 LC-amenable pesticides made by Damale et al. (2023) using GC-MS/MS and LC-MS/MS on four different Indian pomegranate cultivars, resulted in a unique matrix effect and thus acute variation for each pesticide.Sarkar et al. (2022) also identified huge variations in the composition of citrus fruits (kinnow, mosambi, and orange) for phenolic compounds, flavonoids, and antioxidant potency.Therefore a method solely for mango fruit drink is needed for the identification and quantification of multi-residues with utmost importance. Hence, in this study, QuEChERS-based d-SPE extraction-cumclean-up coupled with advanced liquid chromatography tandem mass spectroscopy (UPLC-MS/MS) method has been developed for trace level determination of 103 pesticides in the mango fruit drink matrix.The approach offers excellent selectivity, high sensitivity, and a broad range of applications for the determination of multiple residues in mango fruit drinks.The evaluation of 103 pesticide residues in mango fruit drinks prevailing in the local market was also performed using the suggested approach. To evaluate the greenness of the developed method, the GAPI green chemistry tool was employed in the study starting from sample collection, extraction, and cleanup to final determination by the instrument. Standards Sigma-Aldrich Chemie GmbH, Germany provided Certified Reference Materials (CRM) for 103 regularly used pesticides in the Indian context, including acaricides, fungicides, herbicides, insecticides, plant growth regulators, and rodenticides.A list of the pesticides and their intended purpose, molecular weight, purity percentage of CRM, and MRL of pesticides recommended in mango are listed in Supplementary Table S1. 
Chemicals, solvents, and apparatus Ammonium formate, NH 4 HCO 2 [98% pure], was obtained from Sisco Research Laboratories Pvt.Ld., Mumbai, India.Anhydrous magnesium sulphate of >98% purity (used after heating at 600 °C for 6 h for removal of phthalates and traces of moisture) employed in the extraction process was procured from Thermo Fisher Scientific, India.Anhydrous sodium chloride of AR Grade (Merck, India), used for extraction, was pre-washed with acetone and activated at 600 °C for 6 h in a muffle furnace before use. Preparation of standard stock solution A primary stock solution of 1,000 μg mL −1 concentration for each pesticide was prepared in acetonitrile in an A-grade 10 mL volumetric flask (Borosil ® , India).An intermediate standard mixture of 103 pesticides of conc. 100 and 10 μg mL −1 and their working solutions of lower concentrations (1, 0.5, 0.1, 0.05, 0.01, 0.005, and 0.001 μg mL −1 ) were prepared from primary stock solution by serial dilution technique and volume made up using acetonitrile. Spiking of mango fruit drink with pesticides and sample processing A 200 mL Mango fruit drink (Pusa Mango drink) prepared by using organically grown pesticide-free mangoes, was procured from the Division of Post-Harvest Technology, IACR-IARI, New-Delhi, 110012. Mango fruit drink prepared as per recommended procedure (Sethi et al., 2006) from organically grown mango, was taken in a 50 mL Oakridge centrifuge tube and added with a standard mixture of 103 pesticides to attain the desired fortification level.After shaking the tube, the material was kept for 2 h in ambient condition (27 °C ± 1 °C), subsequently homogenized using a handheld homogenizer, and placed in an ultrasonic bath for 5 minutes before extraction. Optimization of sample preparation by QuEChERS extraction (original QuEChERS, modified buffered QuEChERS using citrate and acetate buffers) and clean-up procedures (using combinations of anhydrous MgSO 4 , PSA, C-18) were tried and are depicted in Figure 1.Once the QuEChERS extraction method is optimized, the effect of dilution on extraction/clean-up performance using varied combinations of clean-up agents was evaluated by diluting the mango drink at different levels (0, 2, 4, 5 mL) using milli Q water prior to extraction. Liquid Chromatography-⁻Tandem mass spectroscopy (LC-MS/MS) and method development Quantification of the target pesticides was done using Shimadzu LC-MS/MS-8030 (UHPLC model-Nexera, LC-30AD Liquid Chromatography, SIL-30AC auto-injector (Shimadzu Corporation, Kyoto, Japan) coupled with Triple Quadrupole Mass Detector.Zorbax Eclipse Plus C-18 column, 3 mm i. 
d., 10 cm length with 3.5 µm column particle size (Agilent Technologies, United States make) column was used.Optimization of LC-MS/MS parameters is a prerequisite to identifying and quantifying the residues of multiclass pesticides.In LC, the mobile phase was a mixture of A (80:20 5 mM ammonium formate buffer dissolved in water: methanol) and B (10: 90 5 mM ammonium formate buffer dissolved in water: methanol) used at a flow rate of 0.2 mL min −1 under gradient programming for 22 min runtime.Initially, mobile phases A and B were used in 45% and 55% proportion respectively for 1 min and gradually increased to 100% of mobile phase B within 13 min and maintained until 16.5 min.After 16.5 min, they were brought to the initial proportion of 45% (A) and 55% (B) and maintained until 22 min.A 2 μL sample volume was injected in each run.The Mass Spectrophotometer was operated under Electrospray Ionization (DUIS-ESI interface) in both positive and negative modes for optimization of unique multiple reaction monitoring (MRM) transitions for each pesticide separately. Nitrogen was used as nebulizing gas and drying gas at 3.0 L min −1 and 15 L min −1 flow rates respectively.Ultrapure Argon was used as Collision-induced dissociation (CAD) gas.Desolvation line temperature (DL) and heat block temperatures were maintained at 120 °C and 300 °C respectively.For each pesticide, retention time, Q1 pre-bias, Q3 pre-bias, and collision energy were optimized individually and are mentioned in Supplementary Table S1.Software Lab Solutions Version 5.86, was exercised in data acquisition and analysis. Single laboratory validation of the developed method The suitability and applicability of the developed multi residues analysis method were assessed by single laboratory validation as per the SANTE guidelines (2021).The parameters considered as per the guidelines were linearity, specificity, limit of detection (LOD), limit of quantification (LOQ), accuracy, precession, and uncertainty measurement. Linearity The calibration curve (concentration-response) for a mixture of 103 pesticides injected under optimized method parameters was accomplished using 7 different concentration levels of 0.001, 0.005, 0.01, 0.05, 0.1, 0.5, and 1 μg mL −1 .Correlation coefficients and regression equations for all the pesticides are given in Supplementary Table S1. Specificity To achieve the specificity of identification, the reagent blank was compared with the fortified sample.Detection of the target greater than the detection limit is considered to be the specificity criterion (Banerjee et al., 2019). Sensitivity The sensitivity of the developed method was measured in terms of the detection limit (LOD) and the quantification limit (LOQ) for 103 pesticides in a mango fruit drink.Method LOD was obtained by spiking the blank sample at different fortification levels.LOD and LOQ are considered the concentrations at which the S/N (signal-tonoise ratio) are ≥3/1 and ≥10/1, respectively (Banerjeee et al., 2019).LOQ was based on pre-determined acceptance criteria of 70%-120% recovery and ≤20% RSD.At each analysis, the signal-to-noise ratio of the quantifier transition peak was calculated using the Lab Solution software. 
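The linearity check and the S/N-based detection limits described here follow a standard pattern: fit a straight line through the seven calibration levels and find the concentrations at which the quantifier-transition S/N first exceeds 3 (LOD) and 10 (LOQ). A rough sketch is given below; the calibration concentrations match the levels listed above, but the peak areas and noise value are invented for illustration and are not data from this study.

# Illustration of the linearity / S/N logic only; peak areas and noise are invented.
import numpy as np

conc = np.array([0.001, 0.005, 0.01, 0.05, 0.1, 0.5, 1.0])     # calibration levels, ug/mL
area = np.array([120, 610, 1190, 6050, 12100, 60300, 121500])  # hypothetical peak areas

slope, intercept = np.polyfit(conc, area, 1)
r = np.corrcoef(conc, area)[0, 1]
print(f"y = {slope:.1f}x + {intercept:.1f}, r^2 = {r**2:.4f}")  # linearity check

# S/N-based limits: LOD at S/N >= 3, LOQ at S/N >= 10 for the quantifier transition.
noise = 35.0  # hypothetical baseline noise

def signal_at(c):
    return slope * c + intercept

for c in conc:
    print(f"{c} ug/mL -> S/N = {signal_at(c) / noise:.1f}")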
Accuracy
Accuracy in terms of recovery was studied in triplicate at 0.01, 0.5, 0.1, and 1 μg mL−1. Recoveries lying between 70% and 120% were considered acceptable as per the SANTE 2021 guidelines. Recoveries of the fortified pesticides in mango fruit drink were calculated against the solvent standard (standard solution prepared in acetonitrile) (Eq. 1) as well as against the matrix-matched standard (prepared through post-extraction spiking of blank samples) (Eq. 2), and corrected recoveries were determined as per the following equations:
Recovery against solvent standard (%) = (peak area of the extracted spiked sample / peak area of the solvent standard) × 100 (Eq. 1)
Recovery against matrix-matched standard (%) = (peak area of the pre-extraction spiked sample / peak area of the post-extraction spiked sample) × 100 (Eq. 2)
where recovery <70% = not acceptable, 70%-120% = acceptable, and >120% = not acceptable.
Precision-repeatability
The precision of the protocol was confirmed in terms of intra-laboratory repeatability, which was assessed independently at each level of fortification (0.1, 0.05, and 0.01 µg mL−1) using the Horwitz ratio (HorRat) (Horwitz and Albert, 2006). The ratio (Eq. 3) is determined for each pesticide to decide whether the procedure is acceptable in terms of precision:
HorRat = RSD / PRSD (Eq. 3)
where RSD stands for relative standard deviation and PRSD is the predicted relative standard deviation, computed as PRSD = 2C^(−0.15), where C is the concentration expressed as a mass fraction (1 ng/mL = 1 × 10−9). The analytical approach is suspected to perform worse than expected if the HorRat is more than 1; if the HorRat is <<1, it is suspected that the collaborative trial was improperly conducted and produced overly optimistic precision values; and if the HorRat is between 0.3 and 1, the method precision in terms of reproducibility is close to the predicted value.
Estimation of uncertainty
A fishbone diagram was created for potential contributors to the uncertainty after the potential causes of uncertainty were defined at the outset (Supplementary Figure S1). For all 103 pesticides in mango fruit drink, the uncertainties related to purity of the CRM (Uc), analytical balances (Um), volumetric flask (Uf), micropipettes (Ug and Uh) (Ud and Ue), recovery (Ub), and instrument (Ua) were assessed in terms of the combined (total) standard uncertainty and subsequently the expanded or global uncertainty (Banerjee et al., 2019). The global uncertainty was determined as GU = Ue = k × uc, where GU is the global uncertainty, Ue the expanded uncertainty, uc the combined standard uncertainty, and k the coverage factor (k = 2).
Matrix effect
The matrix effect is represented as peak enhancement (+ve) or suppression (−ve) and was studied by comparing calibration curves prepared in solvent (solvent standard) and in blank matrix (matrix-matched standard), as per the IUPAC technical report (Thompson et al., 2002; Banerjee et al., 2007; Jadhav et al., 2017; Shinde et al., 2021). The matrix effect was calculated using the following formula:
Matrix effect, ME (%) = [(peak area of the matrix-matched standard − peak area of the solvent standard) / peak area of the solvent standard] × 100 (Eq. 5)
If ME is positive (+), there is matrix enhancement; if negative (−), matrix suppression.
Method validation in real samples
To validate the method, the recommended multi-residue approach was used to quantify any residues present in real-market mango fruit drink samples from 10 different brands, which were bought and kept in their original packaging until analysis. Utilising the newly developed modified QuEChERS (citrate) method, extraction and clean-up were carried out, and LC-MS/MS analysis was performed.
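To make the arithmetic of the recovery, HorRat, and matrix-effect expressions above concrete, the short sketch below evaluates them for a single hypothetical pesticide; the peak areas, RSD, and concentration are invented for illustration only and are not taken from this study.

# Illustrative only: the peak areas, RSD and concentration below are invented,
# not data from this study; the formulas follow the equations described above.

def recovery_pct(area_spiked_extracted: float, area_reference: float) -> float:
    # Eq. 1 uses the solvent standard, Eq. 2 the post-extraction (matrix-matched) spike.
    return area_spiked_extracted / area_reference * 100.0

def horrat(rsd_pct: float, conc_mass_fraction: float) -> float:
    # Eq. 3: HorRat = RSD / PRSD, with PRSD = 2 * C**(-0.15), C as a mass fraction.
    prsd = 2.0 * conc_mass_fraction ** (-0.15)
    return rsd_pct / prsd

def matrix_effect_pct(area_matrix_std: float, area_solvent_std: float) -> float:
    # Eq. 5: positive = matrix enhancement, negative = matrix suppression.
    return (area_matrix_std - area_solvent_std) / area_solvent_std * 100.0

# Example numbers (hypothetical): a pesticide fortified at 0.01 ug/mL (= 1e-8 mass fraction).
print(recovery_pct(9_500, 10_000))        # ~95% recovery
print(horrat(12.0, 1e-8))                 # HorRat for 12% RSD at 0.01 ug/mL
print(matrix_effect_pct(11_800, 10_000))  # +18% matrix enhancement

At 0.01 μg mL−1 (a mass fraction of 1 × 10−8), the Horwitz expression predicts a PRSD of about 32%, so an observed RSD of 12% gives a HorRat near 0.4, inside the 0.3-1 band described above.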
Assessment of the developed method as per green chemistry
Analytical methods with a green perspective, including multi-residue methods, are being developed so that a variety of analytes can be recognised in a single analytical run. The challenge, however, is that the molecules to be identified are present at very low concentrations and have varied physical and chemical properties depending on their chemical makeup. One emerging idea in sustainable development is "green analytical chemistry." Accordingly, newly evolving analytical techniques ought to satisfy the requirements of green chemistry. Green analytical techniques are designed to use safe ingredients, consume as little energy as possible, and produce as little waste as possible while still being effective. As a result, the goal of most analytical techniques is to use environmentally friendly solvents and a smaller, more straightforward sample preparation stage (Soltani and Sereshti, 2022). To evaluate the greenness of the study, the Green Analytical Procedure Index (GAPI) tool was employed. GAPI is a semi-quantitative tool consisting of five pentagrams representing 1) the sampling process, 2) sample preparation, 3) reagents and chemicals, 4) instrumentation, and 5) the general method; it provides sufficient data to assess and measure the environmental impact associated with each step of an analytical approach, from sampling through the final instrumental analysis. The three major colours of the symbol (green, yellow, and red) denote low, medium, and high impact, respectively (Płotka-Wasylka, 2018). The analytical process in GAPI thus comprises these five primary steps. In the green assessment, 15 parameters were considered (Figure 2), and the greenness of the developed method (M.IV.) was compared with that of three existing methods (M.I., M.II., and M.III.).
3 Results and discussion
Optimization of the LC-MS/MS system
For the identification and quantification of 103 pesticides, the instrumental method was optimized using ultra-performance liquid chromatography-tandem mass spectrometry [Shimadzu LC-MS/MS-8030]. For ionization, electrospray ionization operating in both positive and negative modes was employed. Method optimization was done by a sequential molecular ion scan for the selection of the most abundant precursor ion, which was isolated in the first quadrupole. Different collision energies were optimized to obtain the corresponding product ions and thereby optimize the MRM transitions (Supplementary Table S1). ESI (+) ionization achieved the best results for most of the pesticides, while pesticides like bentazone, fipronil, flubendiamide, metaflumizone, and propanil exhibited higher abundance in ESI (−) mode. As in Cabrera et al. (2016), synthetic pyrethroids [alpha-cypermethrin (14.90)] were eluted after 17 min with a mobile phase mixture of 45% A and 55% B. Using a higher percentage of methanol to improve the separation also improved the sensitivity (both ESI +/−) for many of the phenoxy acid and OP pesticides.
To improve analyte signals and to obtain better reproducibility and chromatographic response, 5 mM ammonium formate buffer was used as a mobile phase modifier. Ammonium ions formed from the ammonium formate buffer suppress sodium adduct formation during ionization, which was quite common under acidic conditions. Thus, most pesticides predominantly formed [M + H]+ ions, while [M + NH4]+ molecular ions were formed by most of the synthetic pyrethroids (alpha-cypermethrin, bifenthrin, lambda-cyhalothrin, cyphenothrin, fenvalerate, flucythrinate, permethrin), carfentrazone-ethyl, cyhalofop-butyl, diclofop-methyl, and lactofen. Similar results were noticed by Hiemstra and de Kok (2007), Riedel et al. (2010), and Stotcheva (2011), where pyrethroids, diclofop-methyl, etc. showed much higher sensitivity, better reproducibility, and response due to [M + NH4]+ ionization when mobile phase buffers like ammonium formate or acetate were used. The optimised LC-MS/MS conditions of the above-mentioned method provided excellent separation for the target analytes, 100 pesticides in the mango fruit drink. Pesticides along with their retention times during elution are given in Supplementary Table S1. Total ion chromatograms in overlay, with the retention times of all detected pesticides, are presented in Figure 3.
Investigation of the QuEChERS method
Among all the extraction and clean-up combinations evaluated (Figure 1), buffered citrate QuEChERS extraction (ME2), carried out using 2 g anhydrous magnesium sulphate (MgSO4), 0.75 g sodium chloride (NaCl), and 0.5 g trisodium citrate dihydrate, gave acceptable recovery (70%-120%) for most of the pesticides in most of the clean-up combinations. The number of pesticides recovered using all three QuEChERS methods is given in Figure 4, and the recovery percentages of all the pesticides are given in Supplementary Table S1. With the use of citrate buffers, the pH of the extract rose from 4.05 (the pH of the juice) to 5.29, thus facilitating more efficient extraction of low-pH-sensitive pesticides by improving the selectivity from the co-extractives, which yielded good recoveries for most of the acidic pesticides like alpha-cypermethrin, flucythrinate, etc. Similar results were observed by Prestes et al. (2009), where they used acetate and citrate buffers to extract low-pH-susceptible compounds, such as thiabendazole and imazalil, from the food matrix. Since mango fruit drink typically contains 80%-95% water, separation of the analytes from water is a critical step in extraction. Acetonitrile, as an extracting solvent, provides extraction of a wide range of pesticides with variable polarities, and it can be easily separated from water. Once the QuEChERS extraction method was optimized, the effect of dilution using milli-Q water at varied levels (0, 2, 4, 5 mL) in combination with four clean-up combinations was studied to maximize the number of pesticides recovered (Supplementary Table S3). In ME2-MC-A, ME2-MC-B, ME2-MC-C, and ME2-MC-D, dilution had a considerable impact on acceptable recovery. With the increase in dilution volume from 0 mL to 5 mL, the number of pesticides recovered also increased in all the combinations (Figure 5). At 5 mL dilution, the treatment combination ME2-MC-D recovered the highest number of pesticides (100) in the acceptable range with <20% RSD compared with all other treatment combinations.
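The choice of ME2-MC-D with 5 mL dilution rests on a simple counting criterion: for each extraction, clean-up, and dilution combination, count how many pesticides fall inside the 70%-120% recovery window with RSD below 20%, and keep the combination with the highest count. A minimal sketch of that selection logic is shown below; the few recovery rows included are hypothetical placeholders, since the full recovery data live in Supplementary Tables S2-S3.

# Sketch of the selection logic only; recovery values here would come from
# Supplementary Tables S2/S3, which are not reproduced in this example.
from collections import defaultdict

# records: (extraction, cleanup, dilution_mL, pesticide, recovery_pct, rsd_pct)
records = [
    ("ME2", "MC-D", 5, "azoxystrobin", 96.0, 8.1),   # hypothetical rows
    ("ME2", "MC-B", 5, "azoxystrobin", 65.0, 14.0),
    # ... one row per pesticide per combination
]

counts = defaultdict(int)
for extraction, cleanup, dilution, _pesticide, recovery, rsd in records:
    if 70.0 <= recovery <= 120.0 and rsd < 20.0:     # SANTE acceptability window
        counts[(extraction, cleanup, dilution)] += 1

best = max(counts, key=counts.get)
print(best, counts[best])  # e.g. ('ME2', 'MC-D', 5) with the highest pesticide count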
Though mango fruit drink has a high water content, the antioxidants, sugars, and other compounds present in mango, together with the preservatives used in the fruit drink, might get in the way of extraction and of the instrumental identification and quantification of pesticides. Hence, dilution of this drink prior to extraction was effective in reducing the interfering matrix components. In the LC-MS/MS method, optimization using unique mass-based quantifier and qualifier MRM transitions for each pesticide ensured targeted detection and quantification in an acceptable range even in the diluted sample. Hence, 5 mL dilution and ME2-MC-D (anhy MgSO4 alone as the clean-up agent) were considered best for the maximum number of pesticides. Since anhy MgSO4 was used alone in this clean-up treatment, it gave acceptable recovery for the highest number of pesticides: anhy MgSO4 did not adsorb any pesticides onto it, thus ensuring good clean-up while recovering the maximum number of pesticides. The RSD of most of the pesticides was less than 20%, which shows the good precision of the method. Anhy MgSO4, when used in extraction, increased the ionic strength of the aqueous mixture and helped in binding large amounts of water; it also absorbed traces of water left in the clean-up step. Sodium chloride in the extraction helped in increasing the ionic strength of the aqueous phase and also aided phase separation. In the d-SPE (clean-up) step for the removal of matrix co-extractives, C-18, being hydrophobic, retained many non-polar fatty compounds. PSA (primary and secondary amine) exchange material, which has a bidentate structure with a strong chelating effect and was used as the base sorbent for d-SPE clean-up, caused retention of many interfering substances like organic acids, fatty acids, sugars, and other polar compounds; it also retained some acidic sulfonylurea herbicides (azimsulfuron, bensulfuron-methyl, ethoxysulfuron, halosulfuron-methyl, pyrazosulfuron-ethyl, triasulfuron), bentazone, bispyribac sodium, bromadiolone, and imazamox, thus resulting in lower recoveries (<70%). It also adsorbed polar pesticides (fipronil, lactofen, propanil, and metaflumizone), resulting in <70% recovery (Supplementary Tables S2, S3). Here, PSA probably formed ionic interactions with the negatively charged analytes and was thus responsible for the loss of acidic pesticides. Hence, the QuEChERS citrate extraction (ME2) with 150 mg of anhydrous MgSO4 (MC-D) clean-up combination was chosen to further validate the method for other parameters like recovery and repeatability at the 0.1 μg mL−1, 0.05 μg mL−1, and 0.01 μg mL−1 fortification levels. Similar observations were made by He and Liu (2007) and Lu et al. (2012), where primary secondary amine (PSA) absorbed acidic pesticides like chlorpyrifos in apples and cucumbers, resulting in poor recovery and false negative results.
This secondary clean-up also serves to eliminate any residual water that remains from step one and also allows extraction salts to diffuse homogenously throughout the entire sample.The end result is a more thorough, overall extraction when compared to traditional SPE protocols.Fillion et al., 1995, quantified 199 pesticides in banana, carrot, and pear samples by employing GC/MS.Smallscale charcoal-celite column clean-up is used to get rid of coextractives.This method is tedious and time-consuming and requires a larger sample size and a lot of acetonitrile (>50 mL) per sample, and some pesticides had a large coefficient of variation due to large sample injection.In contrast, our method used QuEChERS extraction and cleanup, where only 5 mL of Albero et al., 2003, quantified nine organophosphorus pesticides in fruit juices using matrix solid-phase dispersion (MSPD) of juice samples on florisil, followed by the extraction of ethyl acetate with the aid of sonication, and analysis was performed in the Gas chromatography with nitrogenphosphorus detection.In contrast, our method has wider applicability by covering multiclass pesticides (103 pesticides) with triple quadrupole mass confirmation and sample preparation was much easier with the aid of QuEChERS. Specificity As per the SANTE guidelines (2021) to achieve specificity of any analyte, the peak response in reagent blank and blank control samples should be ≤30% of the fortified sample at LOQ (SANTE/ 11813/2021).Variations in QuEChERS extraction and clean-up combinations and different levels of dilutions were tried to ensure efficient extraction of all the fortified pesticides in the presence of undesirable interfering matrix components to ensure selective quantification.Optimization of the quantifier (Q1) and qualifier (Q2) MRM transitions, which unambiguously extracted the requisite pesticides in the presence of other pesticides and matrix interferences, allowed specificity of the pesticide for trace level identification and quantification in mango fruit drink matrix.MRM transitions for the specified pesticides under the study are given in Supplementary Table S1.The specificity of all the pesticides calculated from the peak in the reagent blank and the peak in the fortified sample at LOQ is given in Supplementary Table S4.The specificity of azoxystrobin is given in Supplementary Figure S2. Accuracy-recovery against the solvent standard and matrix-matched standard Accuracy was measured in terms of recovery by fortifying different concentrations of 103 pesticidal mixtures at 0.1, 0.05, and 0.01 μg mL −1 (Supplementary Table S4). Precision By calculating the HorRat ratio derived from the percentage of relative standard deviation (%RSD), the intra-laboratory repeatability for each pesticide at three fortification levels in mango fruit drink was assessed.With some exceptions, the majority of the pesticides had HorRat values between 0.2 and 0.8 (Supplementary Table S4), indicating the method's acceptable repeatability and robustness (Horwitz et al., 1980;Horwitz and Albert, 2006).In order to extract 74 pesticides from orange juice, Rizzetti et al., 2016 developed a buffered QuEChERS extraction process employing Ultra-high-performance liquid chromatography linked to tandem mass spectroscopy (UHPLC-MS/MS).The validation findings showed the recoveries in the range of 70%-118% with an accuracy of less than 19% RSD. 
Determination of uncertainty ISO/IEC 17025 mandates that the measurement uncertainty (U) must be established.Additionally, it must be shown that the laboratory's own uncertainty does not go above the default value of 50% used by regulatory bodies when making enforcement decisions.The uncertainty contributors like the purity of the CRM, analytical balances, the volumetric flask used to prepare standards, micropipettes, and recovery results for all the 103 pesticides were represented in fishbone Supplementary Figure S1.The total % uncertainty of the developed method ranged from 4.72% to 23.89% where bensulfuron-methyl had the lowest (4.72%) and carfentrazone-ethyl (23.89%) had the highest % uncertainty (Supplementary Table S4).Out of 103 pesticides, 24 pesticides had % uncertainty of <10%, 64 pesticides had shown 10%-20% and 15 pesticides had uncertainty in the range of 20%-24%.As per the SANTE document (SANTE/11813/2021), when the mean bias is less than 20% and the default expanded measurement uncertainty is up to 50% it is considered acceptable at the LOQ level.In our method also all 103 pesticides had a percentage uncertainty of <24% as per SANTE recommendation, whereas 88 pesticides had a percentage uncertainty of <20%, and 15 pesticides (carbaxin, carfentrazone-ethyl, clomazone, cyphenothrin, diflubenzuron, fenamidone, flufenoxuron, fenvelerate, hexythiazox, imidacloprid, isopropalin, phosalone, profenophos, tebuconaole, and thiaclorpid) had shown <24% uncertainty of 20%-23.89%was mainly due to large variation in sample recovery, that is 10%-20% of relative standard deviation (%).This large range of uncertainty is mainly attributed to recoveries, while the rest of the parameters [uncertainty related to purity of CRM (Uc), analytical balances (Um), volumetric flask (Uf), micropipettes (Ug and Uh) (Ud and Ue), recovery (Ub), and instrument (Ua)] considered for uncertainty have not caused significant variation.The developed method is best suited for the quantification of 24 pesticides that had <10% uncertainties, and for 64 pesticides for which % uncertainty ranged from 10% to 20%, the method provides moderate performance and for the rest of the 15 pesticides, the method has a poor performance.But considering the other benefits of the developed method, special emphasis needs to be given while performing recovery studies.Similarly, Banerjee et al., 2007;Jadhav et al., 2017 reported uncertainty of up to 20% in grapes and cardamom respectively.There are no reports available so far on the determination of the method's uncertainty in the previously established methods quoted in the manuscript (Fillion et al., 1995;Albero et al., 2003;Zhang et al., 2014;Deme and Upadhyayula, 2015;Rizzetti et al., 2016;Naz et al., 2021).Hence the present method is useful in determining the uncertainty, which is a practical strategy that encompasses trueness (bias) and reproducibility. 
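The combined and global uncertainties discussed here are, in the usual approach, obtained by combining the individual relative contributions in quadrature and multiplying by the coverage factor k = 2. The short sketch below assumes that convention; the component values are invented for illustration and are not the study's actual uncertainty budget.

# Illustration only: component values are invented, not the study's uncertainty budget.
import math

# Relative standard uncertainties (%) for the contributors named in the fishbone diagram:
components = {
    "CRM purity (Uc)": 0.5,
    "balance (Um)": 0.3,
    "volumetric flask (Uf)": 0.4,
    "micropipettes (Ud, Ue, Ug, Uh)": 1.0,
    "recovery (Ub)": 6.0,
    "instrument (Ua)": 2.0,
}

u_combined = math.sqrt(sum(u ** 2 for u in components.values()))  # combined standard uncertainty
k = 2                                                             # coverage factor
global_uncertainty = k * u_combined                               # expanded (global) uncertainty

print(f"combined = {u_combined:.2f}%, global (k=2) = {global_uncertainty:.2f}%")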
Matrix effect In QuEChERS combined with d-SPE, the matrix effect is the major hindrance in analysing pesticide residues resulting from the matrix interference during ionization, identification, and quantification thus causing suppression or augmentation of the analytical signal.The matrix effect was prominent in the test sample, mango fruit drink, where signal enhancement was seen for most of the pesticides.Out of 103 pesticides, 77 pesticides had shown matrix enhancement where, matrix effect values were positive while 20 pesticides had shown matrix suppression of < -10% (some of the triazoles, synthetic pyrethroids, etc.).It was found that 21 pesticides had a matrix effect of <10% and 40 pesticides had matrix enhancement or suppression of 10%-20%.The matrix effect at LOQ for all the detected pesticides is given in Supplementary Table S4.In all the clean-up combinations, we could see that dilution had a considerable impact on producing acceptable recovery for numerous pesticides (Supplementary Table S4 and Figure 5), which might be due to the lowering of the matrix interference because of dilution.Banerjee et al., 2007 also found prominent matrix suppression of more than 30% for a greater number of pesticides, mostly organophosphates in grapes, and signal suppression of 20% was seen for the triazole group of pesticides.While our method had shown Matrix enhancement of >30% for 16 pesticides for some of the synthetic pyrethroids, triazoles, etc. Rajski et al. (2013) found that the matrix effect in almonds and avocado was eliminated by two and four times dilutions respectively and by the use of various sorbents such as PSA and C-18.Similar findings were reported by Ferrer et al., 2011, who found that the dilution strategy effectively eliminated the matrix effect for numerous analytes in juices like orange, leek, and tomato.However, the matrix impact was more pronounced in the presence of the matrix for particular pesticides, such as carbofuran. Market sample analysis The newly developed, single laboratory-validated Multiple reaction monitoring was employed for the estimation of pesticide residues in commercially available 10 mango drink samples in the Delhi (Indian) market.It was revealed that chlorpyrifos was detected in all the market samples, while bitertenol, tebuconazole, and tricyclazole were detected in some of the market samples of mango drinks.(Table 1).In the study, the detected pesticide residues of tebuconazole were less (<0.2 mg/kg) than the MRL values of raw mango fruit and no MRL values are available for the rest of the detected pesticides.Though CIBRC has recommended 36 pesticides including fungicides (12), insecticides (17), and plant growth regulators (7) in mango, MRL has been fixed for only 23 pesticides including a few heavy metals as per FSSAI, 2011.In the case of the mango fruit drink, neither any MRL values exist nor any systemic study is available so far in India or at the international level. 
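The matrix-effect summary above groups pesticides into bands (within ±10%, 10%-20%, and stronger effects beyond that, with >30% called out for a few compounds), the sign distinguishing enhancement from suppression. A trivial sketch of that banding, with hypothetical ME values, is given below.

# Sketch of the banding used above to summarise matrix effects; ME values are hypothetical.
def me_band(me_pct: float) -> str:
    if abs(me_pct) < 10:
        return "negligible (<10%)"
    if abs(me_pct) <= 20:
        return "moderate (10%-20%)"
    return "strong (>20%)"

for pesticide, me in {"pesticide_A": 4.2, "pesticide_B": -14.5, "pesticide_C": 33.0}.items():
    kind = "enhancement" if me > 0 else "suppression"
    print(pesticide, kind, me_band(me))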
GAPI (Green Analytical Procedure Index) assessment
Many issues have been solved by new approaches, which also increase accuracy, repeatability, throughput, and economic benefit. The ability to analyse samples of reduced initial size, even at the trace level, is also essential. In the present study, the GAPI (Green Analytical Procedure Index) tool, comprising pictograms of 15 parameters, was used for the green assessment of the developed multi-residue method for mango fruit drink (M.IV.). These parameters were applied from sample collection, extraction, and clean-up to the final determination by the instrument, and the method was compared with three existing methods in raw mango fruit (M.I., M.II., M.III.). The GAPI-assisted comparative assessment of the green profile of the proposed method against the existing methods for the analysis of residues in mango fruit drink is presented in Figure 6 and Table 2. The developed method analysed 103 pesticides in a single 22 min run, whereas M.I. analysed only one pesticide (cypermethrin) in a 10 min run, M.II. quantified 10 synthetic pyrethroids in a 10 min run time, and M.III. analysed 41 pesticides (organochlorines, organophosphates, carbamates, and synthetic pyrethroids) in 10 min (Table 3). From this analysis, it can be concluded that our developed multi-residue LC-MS/MS method encompassing QuEChERS extraction and clean-up (M.IV.) is safer and much greener with respect to sample preparation, solvent and reagent usage, and instrumentation than the other methods quoted in the study.
Conclusion
The developed method using citrate QuEChERS extraction coupled with triple quadrupole LC-MS/MS for 103 pesticides was effective in identifying and quantifying most of the pesticides fortified in mango fruit drink samples. ESI (+/−) ionization operating in MRM mode improved the selectivity and sensitivity for the pesticides. Since extraction using citrate QuEChERS buffers gave the maximum number of pesticide recoveries, this extraction method was chosen for further analysis. Dilution of the mango fruit drink at different volumes prior to extraction gave good recovery for the adsorbent combinations, but across all dilution volumes and clean-up combinations, anhy MgSO4 used alone as the clean-up agent with 5 mL dilution recovered the highest number of pesticides. Matrix-matched calibration helped to compensate for the matrix effect, thus ensuring efficient recovery of the targeted pesticides. A single analyst can analyze roughly 20 samples in a 24-h cycle (8 h work/day), and the instrumental method can acquire 40-42 samples per day including the runs of calibration standards for quantification. The validation proved the fitness of the method as per the SANTE guidelines (2021), and it can be used for the intended and future purposes. The proposed method is very green in comparison with the other methods as per the GAPI index parameters. In real sample analysis, mango fruit drink samples of different brands collected from the market and analyzed for residues using the developed method gave residues of bitertanol, chlorpyrifos, tricyclazole, and tebuconazole, and the quantified residues of tebuconazole were less than the MRL values in raw mango fruit. However, information on MRL fixation in mango juice or mango fruit drinks is not available in either the Indian or the international scenario. Hence, more work needs to be done in the future to calculate the processing factor at various stages during the processing of mango into processed drinks or any other commodity, which is a
crucial step in the fixation of MRLs in processed mango fruit drinks and in ensuring safety for human consumption.
FIGURE 1 Flow diagram of the optimization of modified QuEChERS extraction and clean-up methods in mango fruit drink.
The developed (M.IV.) multiresidue LC-MS/MS method for 103 pesticides in mango fruit drink was compared with three other existing methods (M.I., M.II., and M.III.) in mango drinks for residue/multi-residue analysis. M.I. = Naz et al., Application of High-Performance Liquid Chromatography to the Analysis of Pesticides in mango juice. M.II. = Zhang et al., 2014, Determination of ten pyrethroids in various fruit juices: comparison of dispersive liquid-liquid
FIGURE 3 Total ion chromatograms of detected pesticides.
FIGURE 4 Recovery of pesticides by different QuEChERS extraction and clean-up combinations.
FIGURE 5 Effect of dilution in the citrate QuEChERS extraction and clean-up technique on the recovery of pesticides.
FIGURE 6 GAPI-assisted comparative assessment of the green profile of the proposed method with the existing methods for residue analysis in mango fruit drink.
TABLE 1 MRL values fixed by FSSAI in raw mango and pesticide residues detected in mango fruit drink samples using the developed LC-MS/MS method.
TABLE 2 Green Analytical Procedure Index (GAPI) parameters and comparison between the existing methods and the developed method for residue analysis in mango juice (index parameters for M.I., M.II., M.III., and M.IV.).
TABLE 3 List of parameters used in the comparative study of the developed method with the existing methods in mango fruit drink for residue analysis.
2023-11-25T16:14:08.171Z
2023-11-21T00:00:00.000
{ "year": 2023, "sha1": "ff622ea5164c3ce3c859acb7f5dd9a858569b9a5", "oa_license": "CCBY", "oa_url": "https://www.frontiersin.org/articles/10.3389/fchem.2023.1283895/pdf?isPublishedV2=False", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "795974cd55d1ef47a1250cd6cd0ba681a1cb37ca", "s2fieldsofstudy": [ "Environmental Science", "Chemistry" ], "extfieldsofstudy": [] }
256609048
pes2o/s2orc
v3-fos-license
Τhe teacher’s key role in the challenge of the effective classroom management Classroom management includes the teacher’s actions in order to create and sustain a supportive and stimulating learning environment, through building up authentic relationships of interaction and cooperation with his/her students. The aim of this paper is to highlight the teacher’s key role in the challenge of the effective classroom management, as s/he has to cope with a highly heterogenous students’ population, his/her multifaced tasks and the dynamic changes in the educational field. The results showed that a successful classroom management is facilitated by the teacher’s professional readiness and his/her developed communication skills, who planning and organizing the educational process, according to students’ diverse educational needs and interests, providing them a high-quality educational work in a well-structured engaging and reflective learning environment and cultivating meaningful relationships of reciprocity, contributes, decisively, to their academic learning and socio-emotional development. Introduction Effective classroom management is considered, worldwide, as one of the most important indicators for assessing the teachers' effectiveness and should be perceived as part of school administration system, which includes participatory processes, student-centred teaching approaches, commitment to common educational goals, continuous quality evaluation of educational work, aiming at the improvement of education procedures and structures. This revised understanding of classroom management emphasizes on the creation of a learning-centred environment, redefining its structural function, as well as the processes of teaching, learning and outcomes' assessment (Rijal, 2014). At the same time, as the educational reform has placed particular emphasis on students' metacognitive skills, self-regulated learning strategies and team-collaborative learning, it has delineated a framework of higher requirements and skills for effective classroom management from teachers (Korpershoek, et al., 2016). Furthermore, given that technology and digital media overwhelm students' everyday life, they need to be used in new interactive teaching approaches, in order to renewal the educational processes, to motivate students in the construction and reflective management of the new knowledge and in an individual way of learning, by cultivating new literacies and skills that link their learning to real life (Delceva-Dizdarevik, 2014) In this context, the teacher, with his substantial presence, is the pillar of the classroom ecosystem, guiding a heterogenous students' population in a self-regulated learning, their self-evaluation and the development of social skills (Kumar, & Liu, 2019). Besides s/he motivates them to become active parts of the educational process, within a well-structured and flexible learning environment, that is permeated by freedom of expression, acceptance of diversity, individualized interest, and a sense of belonging, but also to commit themselves to compliance with the mutually agreed rules, which ensure an unhindered educational procedure (Önder, 2019). 
In recent years, however, in the light of providing equal educational opportunities, classroom management is one of the greatest challenges for the teachers, as their professional skills are tested by the inclusion of students with special educational needs, in the mainstream schools and the co-education of students with different ethnic / racial / cultural identities (Flower, et al., 2017;Thangarajathi, & Joel, 2010). The present literature study aims to deal with the ever-present issue of classroom management, attempting to highlights its challenging requires in the modern school and the teacher's key role, who as the main administrator of the highdynamic classroom environment, fosters the development of the necessary interaction with his/her students, through his/her updated teaching and pedagogical work, guiding them to reach clearly identified learning and development goals. Moreover, it analyzes how teacher's professional readiness and communication skills are the catalysts of the effective classroom management, as s/he organizes a collaborative learning environment of qualitive interactions, preventing any undesirable behavior or action, which may disrupt its orderly pace, hindering the learning and development of his/her students. Thus, the developing of flexible preventive strategies and implementing of inclusive practices aligning to the students' learning style and educational needs are presented, but also some inhibiting factors, making it clear that a successful classroom management requires, primarily, a successful communication, in order to be developed authentic relationships of reciprocity among all members of the classroom (Shakir, 2014;Emmer, & Stough, 2003). So, studying this paper, the reader will realize that the cultivation of meaningful relationships with his/her students, facilitates the teacher's effective classroom management. Methodology It is crucial to stress the teacher's key role in the challenge of the effective classroom management, which ensures his/her students' academic learning and socioemotional development, but also to draw attention to the significance of the teacher's professional readiness, abilities and primarily communicative skills to create a well-structured engaging and reflective learning environment with authentic relationships, which prevents disruptive behaviors, facilitating the classroom management and the well-being of all its members. Using the keywords "classroom management", "preventive strategies", "professional readiness", "communication skills" and "authentic relationships" scientific articles were chosen from Google Scholar and ResearchGate. We excluded articles that were earlier than the last decade, as the majority of our articles have a time horizon of publication from 2013 to 2022. Nevertheless, we also selected some scientific articles from previous years, because we considered them important for highlighting the main points of our research to achieve the results and reach conclusions. The purpose of our systematic review of the relevant literature was to focus on the study of the teacher's key role in the classroom management and the preventive strategies s/he uses to achieve this challenging and multidimensional goal, based on his/her specific characteristics, abilities and skills, but also the ways s/he can cope to deal with the factors that make it difficult a successful classroom management to maximize educational outcomes for all his/her students. 
The classroom The classroom is a structured framework for learning and development, which includes the physical space with the necessary logistical equipment, but also a dynamic climate of emotional, social and environmental interactions among teacher and students, as well as their specific characteristic and beliefs about the value of learning and school (Önder, 2019;Shakir, 2014;Erden, et al., 2016;Azubuike, 2012). At the same time, is a motivative ecosystem, characterized by immediacy, publicity, complex and often unpredictable structure, but also synchronisation, as many processes are carried out simultaneously within it (Djigic, & Stojiljkovic, 2011). In order to be an open learning community, it is required a safe, supportive and engaging environment, with open channels of communication, coherent educational goals, mutual relationships and cooperative practices. The teacher, with his/her students, creates the "ethos" of the classroom and guides them, through well-planned, engaging and reflective teaching and learning procedures, to reach predetermined educational goals and achieve their self-actualization (Adeyemo, 2012). The classroom management The term "management" was incorporated into the field of education from industry and encompasses the concepts of programming, organizing, decision-making, coordinating, controlling, communicating and directing the acts and actions of an organization's members by its leadership, with a view to rational use of its resources and the achievement of clear and structured performance objectives (Delceva-Dizdarevik, 2014). In particular, the concept of "classroom management" refers to the organization process by the teacher of the necessary academic tasks for effective teaching and learning within a specific context, in other words it means that the teacher, clearly, communicates to the students, within a collaborative learning environment, the academic expectations and desired behaviors (Adeyemo, 2012). Thus, while in the past, this term has been referring exclusively to discipline practices and behavioral interventions, over the last decades, it has changed and describes, according to Evertson & Harris (1999), in a holistic way, the teacher's actions and processes, aimed at organizing a supportive physical and psychological environment, which encourages the Research, Society andDevelopment, v. 12, n. 2, e20412240054, 2023 (CC BY 4.0) | ISSN 2525-3409 | DOI: http://dx.doi.org/10.33448/rsd-v12i2.40054 creation of positive climate, dynamic interactions, the sense of community and a meaningful educational procedure in the classroom, promoting students' active learning and socio-emotional development (Thangarajathi, & Joel, 2010;Erden, et al., 2016). Similarly, Evertson and Weinstein (2006) explain that classroom management aims at creating and stabilizing an organized environment conductive to students' ethical and social development and their unhindered engaging in academic learning (Korpershoek, et al., 2016;Chandra, 2015). Therefore, classroom management is a multidimensional construct that includes three sub-variables: a. the teaching management b. the human resources management and c. the behavior management (Martin, & Shoho, 2000;Martin, et al., 2006). 
Preventive strategies for the effective classroom management Successful classroom management, which maximizes the quality and the quantity of teaching time and the educational work, requires the programming, coordination, control, monitor, organization of physical space, logistics, teaching and learning activities. As Grieser (2007) points out, critical thinking, fruitful questioning and inquisitiveness are motivated within a favorable learning environment, which has a positive impact on the quality of relationships that are developed between teachers and their students (Babadjanova, 2020). Therefore, the teacher's essential presence in the classroom, getting to know his/her students, his/her personal interest and the creation of a deep relationship of mutual understanding, provides the basis on which effective classroom management is founded (Kumar, & Liu, 2019). In this light, classroom management strategies aim to cultivate students' prosocial behavior and encourage their academic engagement and commitment, taking into account their new psychosocial reality (Niculescu, & Franţ, 2016). In primary education, these strategies focus on pedagogic methods and students' behavioral expectations, while in secondary education, on their orientation in a self-directed learning (Hans, & Hans, 2017). In more detail, Evertson and Weinstein (2006) stressed that, through the performance of their multidimensional educational work, teachers pursue to fulfill two purposes: a. the creation of a positive, motivating, supportive, safe and inclusive learning environment, which fosters students to become partners and participants in a meaningful learning, through structured educational processes and well-planned teaching and learning activities, with the selection of appropriate digital resources and means and b. the challenge of their moral-emotional development and self-regulation, which minimizes the likelihood of an inappropriate behavior, contributing to their academic and social achieving (Korpershoek, et al., 2016;Chandra, 2015). In order to maximize the benefits of educational processes for his/her students, teacher should act proactively, focusing, primarily, on the ways of creation of an interactive learning environment, which promotes their self-control, selfregulation, self-evaluation, through the awareness of their co-responsibility for their learning, encouraging students' initiatives and gaining intrinsic incentives. Additionally, s/he must take into account the cultural diversity of the classroom, the principals of differentiated teaching, the necessity to update his/her knowledge and practices, as well as the dialectic relationship of the learning provided in the classroom with modern society and real life. Very important, in this direction, is the quality and quantity of the teacher's professional experiences, which s/he gets in the daily classroom practice, his/her personal beliefs and attitudes, but also, his/her ability to effectively communicate, to exchange views with his/her colleagues, as well as his/her need for lifelong training, enabling him/her to model appropriate behaviors and set reasonable expectations of what his/her students can achieve, through the specific educational process (Korpershoek, et al., 2016;Thangarajathi, & Joel, 2010). 
Such educational practices, which balance the teacher's control with the students' cooperation, and the teacher's demand for student effort with his/her empathy and knowledge of their learning readiness, treating classroom management as a creative challenge, are (Niculescu, & Franţ, 2016; Marzano, & Marzano, 2003): ➢ The development of a structured, predictable and motivating learning environment, with stable procedures and routines, which makes the "classroom organization plan" feasible. This plan consists of the teacher's decisions about his/her students' learning, his/her cognitive adequacy, the creation and grouping of educational material, appropriate management of teaching time, the use of alternative and innovative teaching strategies, and the readjustment of the sequence of flexible educational activities at every stage of the teaching and assessment process, according to the students' individual learning pace, stimulating their interest and attracting their attention, while also enhancing the teacher's sense of self-efficacy and the students' esteem. ➢ The assignment to students of active roles in carefully structured learning activities, under the discreet monitoring and guidance of an enthusiastic teacher, who informs them from the outset of the goals and direction of the teaching. Determining the estimated time for each activity and task allows students to organize their learning better and prevents negative emotions of stress, failure and frustration, while the operational transitions between activities ensure the coherence of the educational process, preventing disruptive behaviors. In addition, giving more chances for responses, the selection of desired activities, assessment procedures that permeate daily teaching practice as learning opportunities, expressed confidence in students' abilities, the avoidance of criticism, the demonstration and rewarding of consistency, the necessary clarifications, the encouragement of self-action, the development of intrinsic incentives, team-collaborative teaching methods, peer learning, the teaching of metacognitive skills, the differentiation of educational activities according to the students' diverse educational needs, and the equal acceptance of every ethnic identity, language and culture as important are decisive enhancers of the classroom's positive climate. ➢ The posting, from the beginning of the school year, in a prominent place in the classroom, of specific, positively worded, succinct rules that clearly describe observable and measurable behaviors. Of key importance is negotiating them with the students, in order to provide a fair but also binding regulatory framework. The students' contribution and cooperation are essential for upgrading the quality characteristics of the learning environment, taking their self-respect as the starting point and seeking their compliance with behavior and learning performance expectations. In addition, cooperation with parents, the school principal, other classroom teachers, and the school's coordinators and psychologists is considered necessary.
➢ Proximity, the observation of every detail or change in classroom conditions, vigilance, continuous monitoring of the space, attempts to halt the escalation of an undesired behavior, immediate intervention (without verbal attacks) in inappropriate behavior, attempts to reframe it or the choice of alternative strategies to improve it, as well as the consistent provision of necessary sanctions (Kumar, & Liu, 2019; Flower, et al., 2017; Adeyemo, 2012; Chandra, 2015; Babadjanova, 2020; Hans, & Hans, 2017; Cevallos, & Soto, 2020; Little, & Akin-Little, 2008). ➢ The integration of ICTs as integral tools in the educational process, which reinforce students' enthusiasm, inquisitiveness and motivation and increase their levels of concentration, engagement, interaction and understanding. ➢ The use of multimedia resources, which offer enriched instruction through multisensory interactive presentation, enhancing learning with auditory and visual stimuli. ➢ The management, from an early stage, of students' anger and stress, due to their demanding schedule, the deadlines for completing assignments, failures to communicate with a teacher, or conflicts with classmates or family members, so that they are guided toward self-regulation and self-control. The same management is required to deal with teachers' elevated levels of stress and anxiety, due to the increased demands of their multidimensional educational work or the problems that arise in managing students' behavior or even entire classes (Kapur, 2018). ➢ The implementation in the field of education of the "entrepreneurial approach", with an anthropocentric orientation, in which the effectiveness of the teacher's professional skills, as well as the efficiency of educational structures and processes in the classroom, is evaluated reflectively, so that the classroom becomes a creative and resourceful learning lab that fosters new knowledge, skills and attitudes (Rijal, 2014).
Inhibitory factors
According to Brophy (2006), approaching classroom management through the creation of a supportive and collaborative environment, with clear learning and behavior expectations, rather than through the imposition of discipline, has proved more effective (Adeyemo, 2012). In this regard, Lewis et al. (2008) observed that teachers' aggression and punishment had a negative impact on students' behaviours and that disciplinary actions were corrosive to the classroom climate. By contrast, Tartwijk et al. (2009), studying 12 teachers in the Netherlands, reported that they were very competent in classroom management, guiding their students with clear rules that they had proposed in common, using humour and reasonable arguments to prevent unaccepted behaviours, and investing in the cultivation of authentic relationships with them. In the same vein, they gave students continuous feedback, adapted their teaching style to the students' learning background and interests, justified the necessity of each educational activity, and strengthened their commitment to learning (Postholm, 2013). These findings show that the teachers' authoritarian enforcement of rules and punishment provokes students' resistance and negative reactions.
In contrast, teachers who show empathy and genuine interest in what concerns their students, express confidence in them and share with them the responsibility for their learning, highlight their strengths, define realistic performance expectations, and reward them for their academic and extra-curricular achievements contribute to the development of endogenous incentives, which direct students toward self-control and voluntary commitment to the educational goals. On the other hand, classroom management is often an area of concern for new teachers, and sometimes also for experienced ones, as it is marked by inconsistencies: they are called upon to cope with a highly heterogeneous student population and constant educational reform, often without the support they need from the school principal. A common inhibitory factor is the manifestation of inappropriate attention-seeking behaviours, related to the developmental characteristics of adolescence, the family context and the school climate. Such behaviours include annoying talking, distraction of attention, hyperactivity, disruption of the classroom lesson, excessive delay in entering or leaving the classroom without the teacher's permission, failure to comply with classroom rules and the teacher's instructions, and verbal and/or physical violence toward classmates or even teachers (Adeyemo, 2012; Little, & Akin-Little, 2008). Other factors that trigger disruptive student behaviours are low levels of learning readiness, reduced motivation to achieve specific goals, lack of interest in course content, difficulty concentrating, the absence of self-imposed limits and the disregard of any rule, indifference, as well as the parents' lack of cooperation and failure to take responsibility (Emmer, & Stough, 2003). Another restrictive parameter is the inclusive orientation of general classes, which requires teachers to know differentiated instruction strategies. In particular, Adelman & Taylor (2002) report that today's classrooms include students with emotional and behavioral disorders at a rate of 12%-20%, while students with special educational needs amount to 18%, and the resources necessary to address their needs are often missing. Nevertheless, they note that teachers who communicate effectively with their students, differentiate their instruction, adapting it to the students' individual needs and learning styles, and reward their efforts manage to respond satisfactorily to the demands of classroom management (Martin, et al., 1998). Furthermore, Rademacher, Schumaker & Deshler (1999) observed that teachers who improved the quality and the level of difficulty of the tasks assigned to their students with mild disabilities were able to increase their engagement and to minimize disorderly behaviours (Hans, & Hans, 2017; Marzano, & Marzano, 2003). Besides, the family culture, which implies a different way of interpreting acceptable behaviours, may affect children's behavioral and learning performance expectations, and conflicts may sometimes arise with students of distinct cultural, racial or ethnic identities. In their research, conducted in the US, Gregory et al. (2010) showed that teachers are less friendly toward, and tend to remove from their classrooms more often, students of Latino and Indigenous backgrounds, as well as students of color.
However, Gregory & Weinstein (2008) pointed out that when teachers showed attention to their students of color and set academic performance expectations for them, they developed relationships of confidence and cooperation with them (Emmer, & Stough, 2003). Hence, Weinstein, Clarke & Curran (2013) emphasized the need for teachers to be sufficiently aware and respectful of their students' cultural backgrounds, to understand the difficulties they face due to the disparity between their family and school contexts, and to create a welcoming classroom in which they can develop a sense of community (Postholm, 2013). Large classes are an additional challenge, since it has been shown that smaller classes are more easily controlled and favor a climate of high interaction, intimacy and connectedness, individualized support for students, and higher engagement in educational activities, so that students in them tend to have higher academic performance and more appropriate behaviors (Rijal, 2014). With regard to teachers, some inhibiting factors are reduced communication skills, which make meaningful contact with students difficult, a shortage of adequate training and practice in classroom management, ignorance of preventive approaches, and the lack of professional experience of newly appointed teachers, who may be unable to explain their teaching goals to students adequately and connect them with real life (Flower, et al., 2017). Ignoring how they influence their students' behaviours with their own behavior in the classroom, instead of motivating them to take co-responsibility, such teachers implement reactive strategies, punishing students for unruly behaviours and undermining the positive climate of the classroom (Postholm, 2013). Thus, as Oliver & Reschly (2010) argue, they focus on coping with rather than preventing behavioral problems, overlooking their own position at the core of these problems due to their failure in classroom management, which results in a sense of incompetence, intense stress and insecurity, low levels of job satisfaction and, sometimes, symptoms of burnout (Önder, 2019; Flower, et al., 2017; Little, & Akin-Little, 2008). Characteristically, a meta-analysis showed that teachers who had developed substantial relationships with their students from the beginning of the school year experienced, over the course of the school year, 31% fewer behavioral problems compared with colleagues who had typical relationships with their students (Marzano, & Marzano, 2003). Furthermore, Browers & Tomic (2003) observed that new teachers, considering that they lack effective means, are reluctant to deal with an undesired behavior, either because they prefer to ignore it rather than confront it, or because they do not know how to react, or because they feel that it reflects their professional inadequacy; as a result the behavior is sometimes consolidated as a norm, hindering effective classroom management (Thangarajathi, & Joel, 2010).
In this light, according to Santrock (2004), teachers should investigate the cause that triggers an inappropriate behavior and highlight the positive aspects of their students' personality, considering whether the physical and psychological climate of the classroom, along with their own behavior, promotes students' self-discipline and self-esteem and strongly motivates their engagement in educational activities. In the same direction, the teachers' acquisition of self-awareness and cognitive adequacy, the development of a strong professional identity, communication skills, emotional intelligence and support networks, and the knowledge and use of modern teaching and pedagogical strategies, case studies and practices in a variety of educational contexts drive the gradual strengthening of their classroom management skills (Adeyemo, 2012; Emmer, & Stough, 2003).
Discussion
There is no doubt that nowadays, when the quality of educational processes is reflected in the learning environment, a holistic approach to "classroom management" is attempted, with the challenge of transforming the classroom into an authentic experiential laboratory, in which the teacher's actions relate not only to the management of its physical infrastructure but mainly to the management of relationships and cooperation among its members, the setting of clear norms and expectations, the integration of principles and values, and the sharing of control and responsibilities (Rijal, 2014). Effective classroom management is driven by teachers' self-awareness and socio-emotional maturity, which make them able to create a supportive environment of quality learning and development (Djigic & Stojiljkovic, 2011). This self-awareness enables them to be consciously present and interested in their students' educational needs, emotions, concerns and interests, moving from "you" to "we", providing positive reinforcement, focusing on what needs to be done, being fully aware of what is happening in the classroom, and acting proactively for the benefit of the entire class (Postholm, 2013). It is therefore clear that there is no single strategy for solving all problems, but rather approaches that draw on a variety of strategies, and that the teachers' cognitive equipment alone is not sufficient to guarantee their students' academic achievement without developed communication capabilities and effective classroom management skills. The teachers' enthusiasm and motivation are reflected in the classroom climate, in the outcomes of the educational process and in its self-evaluation, with the aim of optimizing it. As teachers' self-awareness is enhanced through professional experience gained in classroom practice, they set more realistic academic and behavioral expectations and adopt more effective classroom management strategies, aligning educational goals with their students' diverse educational needs and interests (Martin & Shoho, 2000). Moreover, in recent decades significant social changes have been observed, related to the role of A.I. and technology in people's daily lives. The most important of these concern communication, the diffusion and management of information, and the ability to assimilate and utilize the new knowledge produced.
We must underline that Digital Technologies, in the education domain as well as in all aspects of everyday life, are very productive and successful: they facilitate and improve assessment, intervention, decision making, educational procedures and scientific and productive procedures, via mobile devices (Stathopoulou, et al., 2019, 2020; Vlachou et al., 2017; Papoutsi et al., 2018; Karabatzaki et al., 2018), various ICT applications (Papanastasiou, et al., 2018, 2020; Alexopoulou, et al., 2019; Kontostavlou, E., et al., 2019; Charami et al., 2014; Bakola et al., 2019), AI, Robotics & STEM (Vrettaros, et al., 2009; Anagnostopoulou, et al., 2020; Lytra, et al., 2021; Pappas et al., 2016; Mitsea et al., 2020; Chaidi et al., 2021), and games (Kokkalia, et al., 2017). The New Technologies (NT), and more specifically Digital Technologies, provide the tools for access to, analysis and transfer of information and for the management and utilization of new knowledge. Information and Communication Technologies (ICT), as unprecedented technological capabilities of humankind, have a catalytic effect: they create the new social reality and shape the Information Society (Drigas, & Koukiannakis, 2004; Drigas, A., & Kontopoulou, M., 2016; Theodorou, & Drigas, 2017; Drigas, & Kostas, 2014; Bakola, et al., 2019, 2022; Drigas, & Politi-Georgousi, 2019; Karyotaki, et al., 2022). Moreover, games and gamification techniques and practices within general and special education improve the educational procedures and environment, making them more friendly and enjoyable (Papanastasiou et al., 2017; Kokkalia et al., 2017; Doulou et al., 2022).
Conclusions
To sum up, the cornerstone of effective classroom management is the quality of teacher-student relationships. The creation and sustainability of a dialectic learning environment, which takes into account the teachers' different mindsets and the classroom's emotional load, allows genuine communication links to develop and promotes the mental well-being and welfare of all its members, facilitating successful management. In this direction, a fundamental goal of classroom management, which should be in line with the objectives of the Curriculum, is students' self-management, through the teachers' support, guidance and encouragement and the development of a dynamic learning environment of quality interactions, projecting patterns of proper behaviour and cooperation and using proactive practices to minimize any classroom problem. Thus, teachers who build modernized educational strategies that balance control of students with cooperation, demonstrate empathy, have developed communication skills, and plan every aspect of their educational work on the basis of their students' learning readiness are able to cultivate in their classroom a strong momentum of high academic achievement and proper behaviour, in order to mold the citizens of the future, with critical thinking, responsibility, social sensitivity and active participation in community life.
Future research will examine modern educational techniques and methods, as well as pioneering teaching styles that teachers can use in the classroom, based on emerging technologies, new pedagogical approaches and learning theories. These may contribute to the creation of a learning environment that focuses on experiential learning and builds on students' experiences and interests, facilitating effective classroom management and maximizing educational outcomes, so as to equip students with up-to-date competences and technologies that reflect the globalized community and job market.
Developmentally Dictated Expression of Heat Shock Factors: Exclusive Expression of HSF4 in the Postnatal Lens and Its Specific Interaction with αB-crystallin Heat Shock Promoter*
The molecular cascade of stress response in higher eukaryotes commences in the cytoplasm with the trimerization of the heat shock factor 1 (HSF1), followed by its transport to the nucleus, where it binds to the heat shock element, leading to the activation of transcription from the downstream gene(s). This well-established paradigm has been mostly studied in cultured cells. The developmental and tissue-specific control of the heat shock transcription factors (HSFs) and their interactions with heat shock promoters remain unexplored. We report here that in the rat lens, among the three mammalian HSFs, expression of HSF1 and HSF2 is largely fetal, whereas the expression of HSF4 is predominantly postnatal. A similar pattern of expression of HSF1 and HSF4 is seen in fetal and adult human lenses. This stage-specific inverse relationship between the expression of HSF1/2 and HSF4 suggests tissue-specific management of stress depending on the presence or absence of specific HSF(s). In addition to real-time PCR and immunoblotting, gel mobility shift assays, coupled with specific antibodies and HSE probes derived from three different heat shock promoters, establish that there is no HSF1 or HSF2 binding activity in the postnatal lens nuclear extracts. Using this unique, developmentally modulated in vivo system, we demonstrate specific patterns of HSF4 binding to heat shock elements derived from the αB-crystallin, Hsp70, and Hsp82 heat shock promoters.
Induced transcription from heat shock promoters is mediated by the activation of trans-acting HSFs (1, 2). There are four known HSFs (HSF1, HSF2, HSF3, and HSF4). HSF3 is an avian HSF (3, 4). Although yeast and Drosophila melanogaster have a single gene that encodes an HSF, higher eukaryotes, animals, and plants have multiple genes that code for HSFs (4-6). HSF1 and HSF2 transcription factors have almost identical gene structures (4). The heat shock response starts with the cytoplasmic HSF and its trimerization and transport to the nucleus, where it binds to the heat shock element (HSE) in the heat shock promoter, activating transcription of the downstream heat shock gene(s) (1, 4). Both HSF1 and HSF2 contain three hydrophobic repeats, HR-A, -B, and -C. HR-A and -B are involved in trimerization upon reception of the stress signal. HR-C, located at the carboxyl terminus, has been suggested to inhibit trimerization in the uninduced state. HSF4, on the other hand, does not contain the HR-C domain; it therefore exists as a trimeric unit and binds to the DNA constitutively (for review, see Ref. 4). HSF1 is considered to be the universal HSF and mediates expression of heat shock genes upon reception of a stress signal such as high temperature, whereas HSF2 is associated with developmental control. Although it has not been experimentally established, the assumption in this generalization is that all tissues and cells contain HSF1 as a pre-existing HSF in the cytoplasm to enable a cell or a tissue to mount a response to heat shock or stress. Furthermore, it is not yet clear whether each HSF activates a distinct set of target genes.
The mechanism of the activation of the heat shock promoter in response to stress, mostly studied in cultured cells, has been well elucidated; however, the developmental control of the heat shock promoter is not yet understood. It is clear, however, that the heat shock factors (7) and heat shock proteins do have developmental roles (8-11). The only heat shock factor in the fission yeast is known to be required for growth at normal temperatures as well (12). We have shown previously that the increased expression of the small heat shock protein gene αB-crystallin in the postnatal (PN) rat ocular lens coincides with the appearance of a trimeric HSF-HSE complex that is formed between HSFs and the heat shock element (HSE-αB) present in the heat shock promoter of the αB-crystallin gene (13, 14). This trimeric complex appears in a developmentally (temporally) controlled fashion with highest efficiency around PN day 10, complementing the increased αB-crystallin expression at this stage. It is noteworthy that HSE-αB is a canonical heat shock element (2, 13), yet the appearance of the trimeric complex, as assessed by gel mobility shift assays, is developmentally dictated (absent in the fetal lens and appearing only in the postnatal lens) and tissue-specific (13). To understand the developmental control of the heat shock promoter of the αB-crystallin gene and identify the HSF that interacts with this promoter, we studied the expression of three members of the mammalian HSFs, HSF1, HSF2, and HSF4, in the rat lens. We show that there is very little HSF1 or HSF2 in the postnatal adult lens and that HSF4 predominates. Further, gel mobility binding assays done with [32P]HSE-αB and nuclear extracts of the fetal as well as postnatal lens tissues reveal that the heat shock promoter of the αB-crystallin gene selectively binds to HSF4 and not to HSF1 or HSF2. We further compared HSF4 binding activity in the PN day 10 nuclear extracts using three heat shock elements derived from three different heat shock promoters. These data revealed promoter-specific HSF4 binding characteristics.
EXPERIMENTAL PROCEDURES
Animals-Harlan Sprague-Dawley rats were purchased from Charles River Laboratories (Wilmington, MA). For fetal tissues, rats pregnant for 18 days were sacrificed under ether, and the fetuses were dissected out and the organs harvested. Newborn rat pups of different ages were similarly sacrificed, and different organs were harvested for the preparation of nuclear and cytoplasmic extracts. Human lenses were procured from the local eye banks (15). All the animal experiments were done in accordance with the guidelines set by the Animal Research Committee, UCLA. Human lens materials were used according to the guidelines of the Institutional Review Board, UCLA.
Antibodies-Antibodies for HSF1 and HSF2 were purchased from Chemicon International (Temecula, CA). These are rat anti-HSF1 and anti-HSF2 monoclonal antibodies raised against recombinant mouse full-length HSF1 and HSF2. The previously used (13) anti-HSF1 antibody (Affinity Bioreagents Inc., Golden, CO) cross-reacts with HSF4 (data not shown). Antibodies for HSF4 (total) and for HSF4b (specific for the HSF4b isoform) were custom made (Sigma-Genosys). For HSF4 (total), polyclonal antiserum was raised against a 15-amino acid peptide, YNVTESNASYLDPGA (residues 473-487, GenBank accession number AB029349), from the C terminus of the mouse protein.
For HSF4b-specific polyclonal antiserum, the peptide CRRVKGLALLKEEPA (residues 283-297; GenBank accession number AB029349) from the mouse HSF4 protein was used. The antibodies were characterized by immunoblotting and competition with the authentic original peptide and a nonspecific peptide.
Preparation of Nuclear and Cytoplasmic Extracts-Rat tissues were processed using the NE-PER nuclear and cytoplasmic extraction reagents from Pierce Biotechnology with minor modifications, including washing of the nuclear pellets three times with cytoplasmic extraction buffer to avoid residual cytoplasmic proteins contaminating the nuclear fraction. Approximately 250-300 mg of tissue was used for each extraction. Proteins were estimated using the micro BCA reagents (Pierce Biotechnology). For total cell extracts, cortical scrapings of human lenses were homogenized in 2% SDS and stored at -80°C.
SDS-PAGE and Immunoblotting-Electrophoresis of the various extracts was performed under denaturing, reducing conditions using 12% Nu-PAGE Bis-Tris gels with MES-SDS running buffer (Invitrogen). About 40-50 μg of protein was used per lane for analyses as per the manufacturer's instructions. The electrophoresis was run for about 90 min at 200 V. After electrophoresis, the proteins were transferred to BA83 PROTRAN membranes (Schleicher & Schüll) using the Surelock Xcell II transfer system (Invitrogen). The blotted membranes were washed briefly with Tris-buffered saline (0.1 M Tris/0.9% saline) containing Tween 20 (0.1%) and used for probing with antibodies or stored at 4°C. The immunoblotting was performed using the West-Dura chemiluminescence detection system (Pierce Biotechnology). The blots were blocked with SuperBlock in Tris-buffered saline for 90 min, followed by incubation with the primary antibody (1:5000 for HSF1 and HSF2 and 1:9000 for HSF4) in the same buffer for 90 min, followed by five washes (5 min each) using Tris-buffered saline with 0.1% Tween 20. This was followed by incubation with the secondary antibody (1:250,000). For HSF1 and HSF2, goat anti-rat IgG and, for HSF4, goat anti-rabbit IgG conjugated with horseradish peroxidase was used (Pierce Biotechnology). The blots were washed in Tris-buffered saline-Tween 20 as above, incubated for 5 min with the substrate, and exposed to x-ray film. Many of the antisera used showed nonspecific binding with crystallins, present in very high concentrations in the lens extracts. These are not shown in the immunoblots; in most cases, the gels were run longer to exclude them from immunoblotting.
RNA Extraction, RT-PCR, and Real-time PCR-Rat tissues (200-300 mg) were homogenized using the TRIzol reagent (Invitrogen), and RNA was isolated by following the manufacturer's protocol. The quality of the RNA was checked on a 0.8% E-gel (Invitrogen). RT-PCR was performed using the RT-PCR core kit (Applied Biosystems) with random hexamer primers. In general, 1 μg of total RNA was reverse-transcribed in a 20-μl reaction, followed by PCR in a 50-μl reaction containing 2.5 μl of the reverse transcription (RT) mix, 0.2 μM concentrations of each primer, and 1.5 units of Platinum Taq DNA polymerase (Invitrogen) in the reverse transcription buffer. Optimal concentrations of MgCl2 used in each of these reactions were determined for each primer set.
The real-time PCRs were performed using the Finnzymes-DyNamo SYBR green qPCR kit and Opticon 2 (MJ Research, Boston, MA) in a 20-μl reaction containing 1 μl of the RT mix, 0.25 μM concentrations of primer, and 10 μl of the 2× qPCR reaction mix. The annealing temperature for the experimental samples was set at 58°C; for β-actin reactions, it was set at 55°C. The fluorescence was measured at 80°C. The melting curve analyses were performed at the end of the reaction (after 40 cycles) between 50°C and 90°C to assess the quality of the final PCR products. The threshold cycles, C(t) values, were calculated by fixing the basal fluorescence at 0.05 units. Five replicate reactions were performed for each sample and the average C(t) was calculated. ΔC(t) values were calculated as ΔC(t) = C(t)sample - C(t)β-actin. The N-fold increase or decrease in expression was calculated by the ΔΔC(t) method, with the fetal C(t) value as the reference point. The N-fold difference was determined as 2^-(ΔC(t)sample - ΔC(t)fetal) (16).
Gel Mobility Shift Assay-Gel mobility shift assays were performed as described previously (13). Nuclear or cytoplasmic extracts (~20-30 μg of protein) and approximately 20-30 fmol of 32P-labeled oligonucleotides (double-stranded) were used in a typical assay. The reactions were carried out at 30°C for 15 min. For super-shift assays, 1 μl of the antisera (anti-HSF1, anti-HSF2, or anti-HSF4) was added at the end of the reaction, and incubation continued for another 15 min at the same temperature. The entire volume of the reaction was electrophoresed on an 8% acrylamide gel in a buffer containing 50 mM Tris, pH 7.9, 40 mM glycine, and 1 mM EDTA. The gels were run at 150 V for ~180 min, longer than usual, to facilitate the entry of the super-shifted complex into the gel matrix. Under these conditions, no free (unbound) probe is retained in the gel. Complexes were detected by autoradiography.
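The ΔΔC(t) quantification just described is compact enough to spell out as code. The sketch below, with invented threshold-cycle values (not data from this study), shows how a fold change of the kind reported in the Results would be computed; the function name and numbers are purely illustrative.

```python
# Minimal sketch of the 2^-(delta-delta-Ct) fold-change calculation described above.
# All Ct values below are invented for illustration; they are not data from this study.

def fold_change(ct_gene_sample, ct_actin_sample, ct_gene_ref, ct_actin_ref):
    """Relative expression of a target gene in 'sample' vs. the reference stage
    (here, the fetal stage), normalized to beta-actin."""
    d_ct_sample = ct_gene_sample - ct_actin_sample   # delta-Ct, sample
    d_ct_ref = ct_gene_ref - ct_actin_ref            # delta-Ct, reference (fetal)
    dd_ct = d_ct_sample - d_ct_ref                    # delta-delta-Ct
    return 2 ** (-dd_ct)                              # N-fold difference

# Hypothetical example: a transcript whose Ct drops by ~4.6 cycles relative to
# beta-actin between the fetal stage and PN day 10 corresponds to a roughly
# 25-fold increase in expression.
print(fold_change(ct_gene_sample=22.0, ct_actin_sample=18.0,
                  ct_gene_ref=26.6, ct_actin_ref=18.0))   # ~24.3-fold
```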
HSF1 and HSF2 Are Mostly Expressed in the Fetal Lens-Fig. 1 shows that HSF1 and HSF2 are predominantly expressed in the fetal lens in the rat. The RT-PCR gel analyses (Fig. 1a) show a gradual decrease in the level of HSF1 as well as HSF2 RNA transcripts from the fetal to postnatal stages. The real-time PCR (Fig. 1b) shows a large, dramatic decrease from fetal to PN day 20 in the content of these transcripts. Whereas HSF1 shows a 25-fold decrease in transcripts, the change in HSF2 transcripts is even larger, a 175-fold decrease over the same period (Fig. 2b). The data in Fig. 1b are complemented by the immunoblotting data (Fig. 1c). Although it is difficult to be quantitative on these immunoblots, between the fetal and the PN day 3 stages a significant decrease is seen in both HSF1 and HSF2 protein levels. Beyond PN day 3, neither of these proteins can be detected at appreciable levels (Fig. 1c).
HSF4 Is the Predominant HSF of the Postnatal Lens-The pattern of HSF1 and HSF2 presence stands in stark contrast to that seen with HSF4 (Fig. 2). The RT-PCR gel analyses (Fig. 2a) suggest only a gradual increase in the intensity of the stained HSF4 band from the fetal to postnatal stages; however, the data obtained with real-time PCR (Fig. 2b) indicate that there is a substantial (about 25-fold) increase in HSF4 transcripts from the fetal stage to PN day 10. Again, this observation is complemented by the immunoblotting data presented in Fig. 2c, which was done using two different polyclonal antibodies specific for two different isoforms of HSF4. These two isoforms (HSF4a and HSF4b) are produced by alternative splicing involving exons 8 and 9 of the HSF4 gene (17). HSF4b is known to constitutively bind to DNA and activate transcription. HSF4a, on the other hand, has been suggested to repress transcription (17, 18). In the data presented in Fig. 2c, there is very little difference in the mobility of the HSF4 protein band detected by anti-HSF4 (total) (an antibody that will detect all HSF4 polypeptides that contain an unaltered C terminus) and the mobility of the protein detected by the HSF4b-specific antibody, suggesting that the predominant reactive band on these immunoblots represents HSF4b (Fig. 2c). HSF4a is of smaller molecular mass and is made in very low amounts (17) and is thus undetectable in these immunoblots.
Presence of HSF4 in the Human Lens Extracts-The data presented in Fig. 2 clearly establish that HSF4 expression is minimal in the fetal lens and predominant in the postnatal lens. Because of the reported association between HSF4 mutations and juvenile cataractogenesis (19), we sought to examine the status of HSF4 in the human lens (Fig. 3). Although these data are derived from experiments done with single lens extracts for each age, the pattern of postnatal expression of HSF4 seems to be more or less recapitulated in the human lens. HSF1 is seen mostly in the fetal lens extracts (Fig. 3a), whereas HSF4 detectability is retained even in lens extracts made from older human lenses (62 years was the oldest lens examined; Fig. 3b).
HSF4 Is Maximally Expressed in the Ocular Lens-Considering that HSF4 is the predominant HSF in the postnatal lens, it was of interest to assess the distribution of HSF4 in different tissues in the rat. The data presented in Fig. 4 demonstrate that among the tissues examined in the PN day 10 rat, the ocular lens shows the highest expression of HSF4, both at the RNA (Fig. 4a) and protein (Fig. 4b) levels. Although appreciable amounts of RNA transcripts are found in the lung, muscle, and small intestines (Fig. 4a), very little protein was seen in these tissues (Fig. 4b).
HSF4, Not HSF1 or HSF2, Interacts with HSE-αB-The pattern of maximal postnatal expression of HSF4 around PN day 10-15 (Fig. 2c) is temporally consistent with the high expression of the αB-crystallin gene in the rat lens at around PN day 10 (there is a 10-fold difference in the number of αB-crystallin transcripts between the fetal and the PN day 10 stages) and the appearance of the HSE-αB/HSF trimeric complex (13). But this only provides circumstantial evidence that the HSF in the HSE-αB/HSF complex is probably HSF4. We identified the HSF in the HSE-αB/HSF complex by super-shift analyses using specific antibodies such as anti-HSF1, anti-HSF2, and anti-HSF4, the same antibodies that were used for immunoblotting in Figs. 1 and 2. In these experiments, we used nuclear extracts rather than whole-cell extracts, as was done previously (13). [Figure legend: c, immunoblot analyses of lens nuclear extracts using anti-HSF4 (total) and anti-HSF4b antibodies. The numbers 62 and 49 indicate the molecular mass (kDa) markers. Note that the size of the reactive band (arrows) in the two blots is identical. In these immunoblots, only nuclear extracts were used because HSF4 is constitutively trimeric and present in the nucleus, as opposed to HSF1 and HSF2, which are also cytoplasmic (4). F, fetal; lanes marked 3, 5, 10, and 20 refer to age in postnatal days.]
In so doing, we can detect trimeric complexes between [32P]HSE-αB and the HSFs even in the fetal lens extracts, albeit at very low levels (see Fig. 5a, complex III). Fig. 5a shows that even when there is very little trimeric complex (Fig. 5a, control lane), only anti-HSF4 results in the super-shift of complex III. Fig. 5b shows the data obtained from experiments in which PN day 10 lens nuclear extracts were used. In these experiments, no HSF1- or HSF2-related activity was detected in the trimeric complexes, as ascertained by the lack of super-shifted complexes. Super-shift was seen only when an anti-HSF4 antibody was used (Fig. 5b). [Figure legend (Fig. 5), partly recovered: assays were performed as described previously (13) and with the specific antibodies as indicated above the lanes. The trimeric complex (III) in the fetal lens nuclear extracts is extremely weak (a), whereas it is very robust in the day 10 lens nuclear extracts (b). At PN day 10, very little of complex I or II is seen (see Ref. 13). The anti-HSF1 positive control was done with recombinant HSF1, and for HSF2, the antibody activity was ascertained with cytoplasmic extracts containing HSF2 (data not shown). Free, free probe; Control, complete binding assay without antibodies. Super-shifted complexes are shown by arrows.]
HSF4 Interacts with HSE-Hsp70 and HSE-Hsp82-The experiments done with HSE-αB (Fig. 5) were repeated with heat shock promoter sequences derived from rat Hsp70 (20) and D. melanogaster Hsp82 (21). Fig. 6, A and B, show that in the PN day 10 nuclear extracts, complexes obtained with [32P]HSE-Hsp70 and [32P]HSE-Hsp82 contained only HSF4, as indicated by super-shift with anti-HSF4 (Fig. 6, A and B). In the fetal lens nuclear extracts, again it is anti-HSF4 that produces a prominent super-shift (in both HSE-Hsp70 and HSE-Hsp82 assays) (Fig. 6, C and D). It must be noted that in the fetal lens nuclear extracts, super-shifted complexes, although much weaker in intensity, were also obtained with anti-HSF1 (Fig. 6, C and D). It was not clear from these experiments with fetal lens nuclear extracts whether anti-HSF2 produced any super-shifted complexes.
Comparative Binding Profiles of HSF4 with Three Different HSEs-The data in Fig. 6 indicated that the HSF4 in PN day 10 lens nuclear extracts binds the HSEs in the αB-crystallin, Hsp70, and Hsp82 promoters. It was of interest to assess whether HSF4 bound to all the HSE sequences with similar or differential efficiencies. The data presented in Fig. 7 show a plot of the relative binding of HSF4 to [32P]HSE-αB, [32P]HSE-Hsp70, and [32P]HSE-Hsp82 as a function of the concentration of the protein in the nuclear extracts made from PN day 10 lens. Although it seems that the binding efficiency is much higher with HSE-Hsp70, the binding of the HSE-αB sequence stands out in that it becomes saturated very early, at lower concentrations; the binding of HSE-Hsp70 and HSE-Hsp82 does not show saturation at lower concentrations. The novel binding characteristics of HSE-αB clearly stand out, particularly the early saturation and inhibition at higher concentrations. This distinct pattern of interaction between HSF4 and HSE-αB compared with HSE-Hsp70 and HSE-Hsp82 is further supported by the data obtained in competition studies (Fig. 8). These assays show that the binding of [32P]HSE-Hsp70 to HSF4 is competed equally well by its homologous competitor, HSE-Hsp70, and by HSE-Hsp82, but not as efficiently by HSE-αB (Fig. 8).
DISCUSSION
This investigation reports on the exclusive expression of HSF4 in the postnatal ocular lens and its specific interaction with the heat shock promoter of the αB-crystallin gene. Using real-time PCR, immunoblotting, and gel mobility shift assays coupled with the use of specific antibodies, we showed that there are no HSF1 and HSF2 binding activities in the postnatal ocular lens. The data presented in this article support two important observations: 1) all tissues do not contain all the HSFs, and tissues such as the postnatal ocular lens express only one, HSF4, and 2) heat shock promoters, although canonical, show differential binding to HSF4. The observation of the singular presence of HSF4 in the postnatal ocular lens affects a number of perceptions about the stress response and its universality. Our data point to a developmental stage- and tissue-specific response to stress based on the differential presence or absence of specific heat shock factor(s) and their specific interactions with heat shock promoters. This is well exemplified by the situation in the ocular lens, wherein the small heat shock protein αB-crystallin gene is highly expressed (22). The heat shock promoter of the αB-crystallin gene has been shown to respond to heat and chemical stress in cells in culture (23-25). However, in experiments in which the intact rat lens, in organ culture, was exposed to heat stress, no induction of the αB-crystallin gene was observed (26). Considering that the response of HSF4 to stress activation is poorer compared with that seen with HSF1 (17), the present data (indicating HSF4 as the predominant HSF of the postnatal ocular lens) may explain why lenses, when exposed to heat stress, do not show appreciable change (induction) in the concentration of αB-crystallin (26, 27). Three important corollaries follow from the observation of the inverse relationship of the expression of HSF1/2 and HSF4 in fetal and postnatal stages, respectively (Figs. 1-3). First, the simultaneous presence of HSF1 as well as HSF2 in the fetal stages may compensate for the absence of either one of these HSFs during early developmental stages. This may be the reason why no ocular lens abnormalities have been reported in mice null for HSF1 or HSF2 (28-30). Considering that in the fetal nuclear extracts there is almost non-existent or very weak binding of HSF1 or HSF2 to various HSEs (Figs. 5 and 6), it is also possible that these transcription factors may be inactive at this stage. Second, mutations in HSF4 have been recently reported to be associated with the most prevalent form of early childhood cataracts (lamellar cataracts) (19). The almost exclusive expression of HSF4 in the postnatal ocular lens reported here corresponds remarkably to the timing of the appearance of this disease phenotype (juvenile cataractogenesis). Coupled with the demonstration that HSF4 and not HSF1 can be detected in the adult human lens extracts (Fig. 4), these data also provide a molecular basis for the association of late-onset cataract, such as Marner's cataract, with a mutation in the HSF4 DNA binding domain (19). How these mutations alter HSF4 DNA binding abilities remains to be investigated. Third, a knockout of the HSF4 gene would have no repercussions on the development of the lens but might have severe physiological consequences for the postnatal lens. The data presented in Fig. 5 lead to the conclusion that αB-crystallin is downstream of HSF4.
We already know that mutations in αB-crystallin lead to cataractogenesis in the human lens (31). αB-crystallin has chaperone-like activities (22, 32). [Figure legend (Fig. 6), partly recovered: Control represents the complete binding assay without any antibody. In assays done with fetal lens extract using HSE-Hsp82, two super-shifted complexes were observed (D, lanes Anti-HSF1 and Anti-HSF4). It is not possible to assess whether super-shifts were obtained with anti-HSF2 in the fetal extracts. The autoradiographs are a little overexposed to allow detection of super-shifted complexes of weaker intensities. Note that no super-shifted complexes are obtained with anti-HSF1 and anti-HSF2 in day 10 lens extracts, suggesting that there are no active HSF1 and HSF2 proteins in these extracts, as also indicated by the immunoblotting data (Figs. 1 and 2).] Thus, a malfunctioning HSF4 could result in the lack of appropriate concentrations of the αB-crystallin gene product, which could impair important physiological activities. Depending on the expression of the HSF4 gene, this could have both temporal and spatial consequences in the generation and maintenance of the transparent phenotype of the ocular lens, as seen in the lamellar and Marner's cataracts. In addition to αB-crystallin, HSF4 may activate a number of genes in the postnatal and the adult lens, as suggested by the presence of HSF4 in the adult human lens extracts (Fig. 4; see also Figs. 6 and 7). The observation that the heat shock element of the αB-crystallin heat shock promoter selectively binds to HSF4 (Fig. 5) suggests that different HSFs may activate different downstream targets, leading to differential gene activity. HSF1 and HSF2 have been recently reported to activate various heat shock genes differentially (33). Differential binding of HSFs to HSEs in vitro has been reported (34, 35). The data presented in Figs. 5 and 6 demonstrate that there are no HSF1 or HSF2 binding activities in the PN day 10 lens nuclear extracts. This presents a unique in vivo system that contains only one HSF, HSF4. The binding profiles of HSF4 to various HSEs are presented in Fig. 7. Although binding with HSE-Hsp70 seems more efficient (Fig. 7), it is very similar to binding by HSE-Hsp82 in its initial pattern. Both HSE-Hsp70 and HSE-Hsp82 show sustained binding of HSF4 over a long range of protein concentrations. On the other hand, the interaction with HSE-αB seems very different; saturation of the binding is reached at much lower protein concentrations. The fact that the binding of HSF4 with HSE-αB is inhibited at higher concentrations (compare binding with HSE-Hsp70 and HSE-Hsp82; see also inset, Fig. 7) may suggest the presence of a sequence-specific inhibitor in the lens nuclear extracts, but that remains a speculation at this time. The uniqueness of the binding of HSE-αB to HSF4 in the day 10 nuclear extracts is further supported by the competition experiment indicating that HSE-αB is not an efficient competitor for HSE-Hsp70 (Fig. 8). A clear insight into the significance of these binding profiles must await characterization of the expression profiles of HSP70 in the ocular lens. The data presented in this manuscript indicate that differential activation of downstream genes (e.g. αB-crystallin) may be brought about both by the timing of the expression of the HSFs and the specificity of the promoter-HSF interactions. Further studies will illuminate whether the specific patterns of binding affinity (Figs.
5-8), at various developmental stages, are sequence dictated (genetic) or epigenetic, involving protein modifications (36).
[Figure legend (Fig. 7), partly recovered: The complexes obtained were quantitated by densitometry measurements of the autoradiographs (Alpha-Innotech Corp., San Leandro, CA), and relative intensities were plotted against the amount of extract (micrograms of protein) used. Inset, a representation of the autoradiograph of the shifted complexes for each probe, although a much lighter exposure was used for the densitometry analyses. The numbers above the lanes in the inset indicate the amount of protein (in micrograms) used in each assay. These autoradiographic patterns were also analyzed by exposure of the dried gel-shift gels to Storage Phosphor screens (Cyclone; PerkinElmer Life and Analytical Sciences); complexes were located, excised, and counted in a scintillation counter, and the number of femtomoles of each probe bound was determined (data not shown). These data further confirmed the patterns presented above. This experiment was repeated two to three times.]
[Figure legend (Fig. 8): Competition of [32P]HSE-Hsp70 binding to HSF4 by unlabeled HSE-αB and HSE-Hsp82. Gel shift assays using PN day 10 nuclear extracts (~20 μg of protein) were performed as above. The intensities of the bound complexes were measured as in Fig. 7 and plotted against the concentration of the unlabeled competitors. Molar excess amounts of unlabeled competitors used were 1, 5, 10, and 50, added simultaneously with the labeled probe.]
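The binding-profile comparison described for Fig. 7 amounts to plotting band intensity against the amount of nuclear extract and asking whether the signal saturates. The sketch below illustrates one way such profiles might be summarized; the intensity values are invented and the hyperbolic model is an assumed idealization, not the authors' fitting procedure.

```python
# Illustrative only: fits a simple hyperbolic saturation model to hypothetical
# densitometry readings (arbitrary units) taken at increasing amounts of nuclear
# extract, mimicking the kind of binding-profile comparison plotted in Fig. 7.
import numpy as np
from scipy.optimize import curve_fit

def saturation(protein_ug, b_max, k_half):
    """Hyperbolic binding curve: signal rises and saturates as extract increases."""
    return b_max * protein_ug / (k_half + protein_ug)

# Hypothetical data points (micrograms of extract, band intensity).
protein = np.array([2.5, 5.0, 10.0, 20.0, 30.0, 40.0])
hse_alphaB = np.array([8.0, 9.5, 10.0, 9.8, 8.5, 7.0])    # saturates early, then declines
hse_hsp70 = np.array([3.0, 6.0, 11.0, 18.0, 23.0, 26.0])  # keeps rising over the range

params, _ = curve_fit(saturation, protein, hse_hsp70, p0=[30.0, 15.0])
print("HSE-Hsp70 fit: Bmax ~ %.1f, half-saturation ~ %.1f ug" % tuple(params))
# The hse_alphaB series would not be well described by this monotonic model,
# echoing its early saturation and apparent inhibition at higher protein amounts.
```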
Genomic Responses to Abnormal Gene Dosage: The X Chromosome Improved on a Common Strategy
This new primer, which discusses a study by Zhang et al., provides an overview of the process by which chromosomes achieve dose compensation and the mechanisms underlying this phenomenon in Drosophila S2 cells.
Mechanisms to guard genomic integrity are critical to ensuring the welfare and survival of an organism. Disruptions of genomic integrity can result in aneuploidy, a large-scale genomic imbalance caused by either extra or missing whole chromosomes (chromosomal aneuploidy) or chromosome segments (segmental aneuploidy). A change in dosage of a single gene may not compromise the well-being of an organism, but the combined altered dosage of many genes due to aneuploidy disturbs the overall balance of gene expression networks, resulting in decreased fitness and mortality [1,2]. Chromosomal aneuploidy is a common cause of birth defects: Down syndrome is caused by an extra copy of Chromosome 21, and Turner syndrome by a single copy of the X chromosome in females. Furthermore, methods that detect segmental aneuploidy have uncovered small deletions or duplications of the genome in association with many disorders, such as mental retardation. Chromosomal and segmental aneuploidies are also frequent in cancer cells, in which changes in copy number paradoxically increase cell fitness but are unfavorable to survival of the organism. A fundamental issue in biology and medicine is to understand the effects of aneuploidy on gene expression and the mechanisms that alleviate aneuploidy-induced imbalance of the genome. Chromosomal aneuploidy is caused by non-disjunction of chromosomes in meiosis or mitosis, while segmental aneuploidy involves breakage and ligation of DNA. In contrast, the sex chromosomes provide an example of a naturally occurring "aneuploidy" caused by the evolution of a specific set of chromosomes for sex determination that often differ in their copy number between males and females. For example, in mammals and in flies, females have two X chromosomes and males have one X chromosome and a Y chromosome, resulting in X monosomy in males. How does a cell or an organism respond to such different types of aneuploidy, abnormal or natural? It turns out that the overall expression level of a given gene is not necessarily in direct relation to the copy number. Unique strategies have evolved to deal with abnormal gene dosage to alleviate the effects of aneuploidy by dampening changes in expression levels. What's more, the X chromosome has evolved sophisticated mechanisms to achieve complete dosage compensation, not surprisingly, since the copy number difference between males and females has been evolving for a long time.
Gene Expression Responses to Altered Dosage in Aneuploidy
There are two main outcomes from altered gene dosage in aneuploidy in terms of transcript levels: either levels directly correlate with gene dosage (primary dosage effect) or they are unchanged/partially changed with gene dosage (complete or partial dosage compensation) [3]. In the first scenario, a reduction of the normal gene dosage in a wild-type (WT) diploid cell from a symbolic dose value of 2 to a value of 1 after a chromosomal loss or deletion would produce half as many gene products, while an increase in gene dosage from 2 to 3, due to a chromosomal gain or duplication, would produce 1.5-fold more products (Figure 1). In the second scenario, the amount of products from altered gene dosage would either equal or nearly equal that in WT cells, due to complete or partial compensation (Figure 1). Gene expression analyses of aneuploid cells or tissues in human, mouse, fly, yeast, and plant provide examples of both primary dosage effects and dosage compensation. Hence, changes in expression levels due to chromosomal aneuploidy do not affect all genes in the same manner. For example, in Down syndrome, 29% of transcripts from human Chromosome 21 are overexpressed (22% in proportion to gene dosage and 7% with higher expression), while the rest of the genes are either partially compensated (56%) or highly variable among individuals (15%) [4]. Interestingly, dosage-sensitive genes, such as genes encoding transcription factors or ribosomal proteins, are more likely to be compensated to avoid harmful network imbalances [1,5]. This basal dynamic dosage compensation could be due to buffering, feedback regulation, or both, depending on the gene and the organism [4,6-9]. Buffering, a passive process of absorption of gene dose perturbations, is due to inherent non-linear properties of the transcription system. In contrast, feedback regulation is an active mechanism that detects abnormal transcript abundance and adjusts transcription levels.
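The dose-response scenarios above reduce to simple arithmetic. The snippet below is a schematic of the "no compensation", "partial compensation", and "complete compensation" outcomes sketched in this section (and in the Figure 1 legend reproduced later in the article); treating partial compensation as the midpoint between the two extremes is an assumption for illustration, not a value from the primer.

```python
# Schematic of the three dose-response scenarios: expression output for a locus
# at dose 1 (monosomy), 2 (normal), or 3 (trisomy), with WT diploid output = 2.
# The "partial" mode is an illustrative midpoint between no and complete compensation.

def expected_output(dose, mode="none"):
    wt_dose, wt_output = 2, 2.0
    if mode == "none":        # primary dosage effect: output tracks copy number
        return wt_output * dose / wt_dose
    if mode == "complete":    # complete compensation: output restored to WT level
        return wt_output
    if mode == "partial":     # halfway between the two extremes, for illustration
        return (expected_output(dose, "none") + wt_output) / 2.0
    raise ValueError(mode)

for dose in (1, 2, 3):
    print(dose, [round(expected_output(dose, m), 2) for m in ("none", "partial", "complete")])
# dose 1 -> [1.0, 1.5, 2.0]; dose 2 -> [2.0, 2.0, 2.0]; dose 3 -> [3.0, 2.5, 2.0]
```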
Sex Chromosome-Specific Dosage Compensation
Sex chromosome-specific dosage compensation evolved in response to the dose imbalance between autosomes and sex chromosomes in the heterogametic sex, due to the different number of sex chromosomes between the sexes: for example, a single X chromosome and a gene-poor Y chromosome in males and two X chromosomes in females. Compensatory mechanisms that restore balance both between the sex chromosomes and autosomes and between the sexes vary among species [10,11]. In Drosophila melanogaster (fruit fly), expression from the single X chromosome is specifically enhanced two-fold in males, while no such upregulation occurs in females. X upregulation also occurs in Caenorhabditis elegans (round worm) and in mammals, but in both sexes [6,12]. Silencing of one X chromosome in mammalian females and partial repression of both X chromosomes in C. elegans hermaphrodites have been adapted to avoid too high an expression level of X-linked genes in the homogametic sex. A unified theme in these diverse mechanisms of sex chromosome dosage compensation is coordinated upregulation of most X-linked genes approximately two-fold to balance their expression with that of autosomal genes present in two copies. This process utilizes both genetic and epigenetic mechanisms to increase expression of an X-linked gene once it has lost its Y-linked partner during evolution. While the mechanisms of X upregulation in mammals and worms are not clear, Drosophila X upregulation is mediated by the male-specific lethal (MSL) complex [10,13]. The MSL complex binds hundreds of sites along the male X chromosome and modifies its chromatin structure by MOF (males absent on the first)-mediated acetylation of histone H4 at lysine 16. Other histone modifications and chromatin-associated proteins, including both activating and silencing factors, are also involved in the two-fold upregulation of the Drosophila male X chromosome [14]. How these modifications coordinately work to fine-tune a doubling of gene expression is still not well understood.
Moreover, the basal dynamic dosage compensation response observed in studies of autosomal aneuploidy could also play a role in Drosophila X upregulation [3]. An important question is how much this basal response to the onset of aneuploidy contributes to sex chromosome-specific dosage compensation.
Fine-Tuning of the Drosophila X Chromosome Adds a Special Layer of Regulation above a Genome-Wide Response to Aneuploidy
In this issue of PLoS Biology, Zhang et al. [15] report that the exquisitely precise X chromosome upregulation in Drosophila utilizes both a basal response to aneuploidy and an X chromosome-specific mechanism. The beauty of their experimental system, the S2 cell line derived from a male fly, is that it has a defined genome with numerous segmental aneuploid regions, both autosomal and X-linked. Thus, genomic responses to aneuploidy could be queried both on autosomes and on the X chromosome, the latter being associated with the MSL complex. Using second-generation DNA- and RNA-sequencing, the authors carefully examined the relationship between gene copy number and gene expression in S2 cells before and after induced depletion of the MSL complex. By this approach, the effects of the MSL complex on the genome have effectively been separated from those triggered by a basal response to aneuploidy. What Zhang et al. have found is that partial dosage compensation of both autosomal and X-linked regions occurs even in the absence of the MSL complex. This provides strong evidence that basal dosage compensation mediated by buffering and feedback pathways allows dosage compensation across the whole genome. In the presence of the MSL complex, X-linked genes, but not autosomal genes, become subject to an additional level of regulation, which increases expression independent of gene copy or expression levels. This feed-forward regulation of the X chromosome by the MSL complex ensures a highly stable doubling of expression specific to this chromosome. Note that this feed-forward regulation results in precise dosage compensation only when the X dose is half of the autosome dose, while insufficient or excessive X-linked gene expression occurs at lower or higher X dose. Excessive X expression has also been reported when ectopic expression of MSL2 is induced in Drosophila females, which leads to binding of the MSL complex to both X chromosomes and lethality [16]. The new findings by Zhang et al. implicate two levels of regulation of the X chromosome: one basal mechanism that can regulate both the X and the autosomes in the event of aneuploidy; and a second feed-forward mechanism specific to the X and regulated by the MSL complex to ensure doubling of X-linked gene expression (Figure 2). The new study proposes that the basal compensation mechanism provides a 1.5-fold increase in gene expression and the feed-forward mechanism another 1.35-fold, resulting in a precise two-fold increase in expression of X-linked genes. The specificity of the MSL-mediated mechanism to double X-linked gene expression is ensured by the existence of DNA sequence motifs specifically enriched on the X chromosome to recruit the MSL complex only to this chromosome [14]. Autosomal aneuploidy would only trigger a response of the basal dosage compensation pathway, which would result in a 1.5-fold increase in expression of genes located within a monosomic segment (Figure 2).
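The two-layer model proposed by Zhang et al. composes multiplicatively, and the figures quoted above can be checked in a couple of lines. The snippet below simply works through that arithmetic for a single-copy (male) X-linked gene; it adds nothing beyond the numbers already stated.

```python
# Working through the multiplicative two-layer model described above for a
# single-copy X-linked gene, relative to the output of two autosomal copies.
basal_factor = 1.5          # buffering/feedback response to monosomy (quoted above)
feed_forward_factor = 1.35  # additional MSL-mediated boost on the X (quoted above)

single_x_output = 1 * basal_factor * feed_forward_factor   # one X copy after both boosts
two_autosome_output = 2 * 1.0                               # two autosomal copies, unboosted
print(single_x_output, two_autosome_output)                 # ~2.03 vs 2.0: near-perfect balance
```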
It should be noted that since gene expression levels were measured relative to whole genome expression (due to normalization), a fold change in expression of genes in an aneuploid segment could also be interpreted as a fold change in expression of the rest of the genome. How did such a precise mechanism evolve to ensure appropriate expression of sex-linked genes? The feed-forward process mediated by the MSL complex is a highly stable epigenetic modification selected and maintained during the evolution of heteromorphic sex chromosomes (Figure 2). Heteromorphic sex chromosomes have arisen from an ancestral pair of autosomes, following inhibition of recombination between the proto-Y chromosome that carries the male determinant and its counterpart, the proto-X chromosome [13]. Gradual loss of Y-linked genes due to lack of recombination could have happened gene-by-gene or on a chromosomal segment-by-segment basis. The human Y chromosome apparently evolved by a series of large inversions leading to a rapid loss of large chromosomal segments [17]. If the lost Y segments contained dosage-sensitive genes, this would probably have triggered a basal dosage compensation response as observed in autosomal aneuploidy (Figure 2). However, this type of dosage compensation is dynamic and incomplete, as it is probably mediated by buffering or feedback mechanisms. An organism might tolerate partial imbalances as long as those were small, but extensive gene loss from the Y chromosome would eventually have caused a deleterious collective imbalance for multiple X-linked genes. A progressive increase in the size of aneuploid X regions could have reached a threshold of unsustainable stress on the basal dosage compensation process. To relieve this stress and survive X aneuploidy, specific mechanisms of dosage compensation targeted to the X chromosome would be desirable. Such mechanisms probably derived by recruiting pre-existing regulatory complexes, for example in the making of the MSL complex in Drosophila. Indeed, one of the components of this complex is MOF, a histone acetyltransferase also involved in autosomal gene regulation [10,13]. Homologues of Drosophila MSL proteins also exist in other organisms where they are involved in gene regulation and DNA replication and repair but do not appear to associate with the X chromosome, suggesting that the components of X chromosome-specific complexes may differ between organisms [18]. In conclusion, two mechanisms apparently collaborate to achieve the approximate two-fold upregulation of the Drosophila X chromosome: a dynamic basal dosage compensation mechanism probably mediated by buffering and feedback processes; and a feed-forward, sex chromosome-specific regulation chiefly mediated by the MSL complex.

Figure 1. Expression levels change in response to altered gene dose in aneuploidy. The transcript output from a given pair of chromosomes in normal WT diploid cells is set as a value of 2. In case of aneuploidy (monosomy or trisomy), the amount of transcript would be strictly correlated with gene dose in the absence of a dosage compensation mechanism (No DC). In the presence of partial DC, the expression level per copy would be partially increased in monosomy or partially decreased in trisomy, relative to the diploid level. In the presence of complete DC, expression levels would be adjusted so that the amount of transcripts is the same in monosomic or trisomic cells compared to diploid cells. doi:10.1371/journal.pbio.1000318.g001
In mammals, upregulation of the X chromosome may also result from a combination of more than one mechanism, some applicable to aneuploidy that may arise anywhere in the genome and others that evolved to control the X chromosome. High X-linked gene expression in mammalian cells with two active X chromosomes (undifferentiated female embryonic stem (ES) cells [19] and human triploid cells [20]) suggests that X upregulation does not default in these cells. Thus, in mammals, X upregulation may also be mediated by a highly stable feed-forward mechanism that acts on top of a basal aneuploidy response. In contrast, the sex chromosomes of birds and silkworms, ZZ in males and ZW in females, seem to lack a precise dosage compensation mechanism of the Z chromosome, possibly due to the absence of a feed-forward process [21,22]. The Z chromosome could have a biased paucity of dosage-sensitive regulatory genes, or else selection for sexual traits may have favored the retention of gene expression imbalances between males and females. Male and female mammals display significant expression differences of a subset of genes that escape X inactivation and thus have higher expression in females [23]. Whether such genes play a role in female-specific functions is unknown. Future work to uncover the actual molecular mechanisms underlying the basal and feed-forward regulatory pathways should help to fully understand the role of these processes in different organisms, both in response to the acute onset of aneuploidy and in evolution of sex-specific traits. Loss or dysregulation of dosage compensation mechanisms could be important in birth defects and in diseases, such as cancer, where aneuploidy is common; exploring approaches to enhance dosage compensation may be useful to relieve aneuploidy-related diseases.

Figure 2. After the proto-Y chromosome evolved a gene with a male-determining function (green bar), it became subject to gradual gene loss on a gene-by-gene or segment-by-segment basis due to lack of recombination between the proto-sex chromosomes. If the lost region on the proto-Y chromosome contained dosage-sensitive genes such as those that encode transcriptional factors (yellow bars), this would have triggered a basal dosage compensation response (yellow faucet) on the proto-X chromosome and led to a partial (1.5-fold) increase of expression (small arrows). The same basal dosage compensation process would also modify a deleted region on an autosome (A) in an abnormal cell. Dosage-insensitive genes (black bars) may escape this process. When broader regions were lost on the proto-Y chromosome, the collective imbalance effects of multiple aneuploid genes would have become highly deleterious and the increased load of aneuploidy could have stressed the basal mechanism of dosage compensation. Survival was achieved by recruiting regulatory complexes such as the MSL complex (red faucet) to aneuploid X segments (red regions), to further increase gene expression (big arrows) and rescue the X monosomy. This feed-forward sex chromosome-specific regulation would provide a 1.35-fold increase in expression, which together with the basal dosage compensation (1.5-fold increase) would achieve the approximate two-fold upregulation of most genes on the present-day X chromosome. In contrast, large-scale deleterious autosomal aneuploidy would be lost due to lack of a specific sex-driven compensatory mechanism. doi:10.1371/journal.pbio.1000318.g002
2014-10-01T00:00:00.000Z
2010-02-01T00:00:00.000
{ "year": 2010, "sha1": "8886d4d2358821521660e8030cdd5c89c2fc1703", "oa_license": "CCBY", "oa_url": "https://journals.plos.org/plosbiology/article/file?id=10.1371/journal.pbio.1000318&type=printable", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "8886d4d2358821521660e8030cdd5c89c2fc1703", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
7763849
pes2o/s2orc
v3-fos-license
Characterization of the human cyclin-dependent kinase 2 gene. Promoter analysis and gene structure. Cyclin-dependent kinase 2 is a serine/threonine protein kinase essential for progression of the mammalian cell cycle from G1 to S phase. CDK2 mRNA has been shown to be induced by serum in several cultured cell types. Therefore, we set out to identify elements that regulate the transcription of the human CDK2 gene and to characterize its structure. This paper describes the cloning of a ~2.4-kilobase pair genomic DNA fragment from the upstream region of the human CDK2 gene. This fragment contains five transcription initiation sites within a 72-nucleotide stretch. A 200-base pair sub-fragment that confers 70% of maximal basal promoter activity was shown to contain two synergistically acting Sp1 sites. However, a much larger DNA fragment containing ~1.7 kilobase pairs of upstream sequence is required for induction of promoter activity following serum stimulation. The intron/exon boundaries of seven exons in this gene were also identified, and this information will be useful for analyzing genomic abnormalities associated with CDK2. Cyclin-dependent kinases (CDKs) are the catalytic subunits of a family of serine/threonine protein kinase complexes that are also composed of a cyclin regulatory subunit (1-3). Most members of the CDK family are involved in regulating the progression of the eukaryotic cell cycle at various stages throughout G1, S, G2, and M phases (4). Other CDKs are involved in regulation of other processes in the cell, including phosphate metabolism (5) and transcription (6,7). CDK2 is a member of the CDK family whose activity is restricted to the G1/S phase of the cell cycle. Several experiments demonstrated that CDK2 is essential for the mammalian cell cycle progression; micro-injection of antibodies directed against CDK2 blocked the progression of human diploid fibroblasts into S phase (8,9), and overexpression of a CDK2 dominant negative mutant in human osteosarcoma cells had a similar effect (10). CDK2 is subject to an elaborate series of post-translational modifications. Although it has no kinase activity itself, kinase activity is conferred by association of CDK2 with a regulatory subunit, cyclin A or cyclin E, and by phosphorylation of Thr-160. Conversely, CDK2 activity is repressed by phosphorylation of Thr-14 or Tyr-15.
Another layer of complexity is added to the regulatory scheme by CDK inhibitory proteins that can bind to CDK2 and inhibit the activity of the cyclin-kinase complex (4). While much attention has been given to the post-translational regulation of CDK2, we and others have found that CDK2 is also regulated at the transcriptional level. Horiguchi-Yamada et al. (11) reported a 3-fold increase in CDK2 mRNA in HL60 cells following stimulation with the phorbol ester 12-O-tetradecanoylphorbol-13-acetate. Other groups (12) had similar findings with serum-stimulated human keratinocytes and human lung fibroblasts. Tanguay et al. (13) found induction of CDK2 expression in primary B lymphocytes following anti-IgM stimulation. These data suggest that transcriptional regulation of CDK2 could be important in the transition of cells from G1 to S phase. Our interest in CDK2 transcriptional regulation originated from our observation that CDK2 protein is undetectable by immunohistochemistry in sections of normal rat carotid arteries but is rapidly induced in smooth muscle cells of rat carotid arteries after balloon injury (14). This manuscript reports the cloning of the human genomic DNA upstream of the coding region of CDK2. Most (70%) of the basal transcriptional activity of this promoter was localized to a 210-base pair (bp) fragment. Two Sp1 sites in this region were shown to contribute cooperatively to this transcriptional activity. The serum-induced activity of the promoter is located in a ~1.7-kilobase pair (kb) region starting 680 bp upstream of the most proximal transcription initiation site. MATERIALS AND METHODS Plasmids and Constructs-pGL2-Basic (Promega) was used to generate luciferase reporter gene constructs. pCMV/SEAP (Tropix), which contains the secreted alkaline phosphatase (SEAP) gene driven by the cytomegalovirus (CMV) promoter, was used in cotransfection experiments. pBR-β-Puromycin, a plasmid expressing the puromycin resistance gene driven by the β-actin promoter, was a kind gift of L. Lee of the S. N. Cohen Lab (Stanford University) and was used to generate stably transfected cell lines by cotransfection. DSC34 was generated by cloning a ~2.4-kb AvrII-PstI fragment (Fig. 1, fragment B) from inverse PCR-amplified fragment A into pUC19 (New England Biolabs) digested with PstI and XbaI. DSC36 was generated by cloning a blunt-ended ~2.4-kb PstI-Asp718 fragment of DSC34 into HindIII-digested, blunt-ended pGL2-Basic, such that the CDK2 promoter directs transcription away from the luciferase gene. DSC37 was constructed the same way as DSC36, except that the CDK2 promoter directs transcription toward the luciferase gene. DSC40 was generated from DSC37 by deleting from the BamHI site in the insert to a BglII site in the polycloning region of pGL2-Basic. DSC40Δ4-1, DSC40Δ6-3, DSC40Δ9-17, DSC40Δ10-10, and DSC40Δ10-16 were generated by exonuclease III/mung bean nuclease deletions (15) using NheI/SacI-digested DSC40. The end points of the deletions were determined by sequencing. DSC42 was generated from DSC37 by deletion of a BglII-Eco47III fragment. DSC51 was generated from DSC40 by deleting an Eco47III-Bsp120I fragment. DSC67 and DSC68 were generated from DSC40Δ9-17 by site-directed mutagenesis (see below). PCR Amplifications-The positions of the 5′-end of all primers are
The nucleotide sequence(s) reported in this paper has been submitted to the GenBank™/EMBL Data Bank with accession number(s) U50730. RNase Protection Assays-The 250-bp Eco47III-PstI fragment from DSC34 was cloned into pT7T318U (Pharmacia Biotech Inc.) digested with HincII and PstI. The resulting plasmid was linearized with EcoRI and transcribed in vitro with T3 RNA polymerase in the presence of [α-32P]ATP to generate an antisense probe. Ribonuclease protection was performed as described (19) using RNA isolated from human umbilical vein cells (ATCC), according to Chirgwin et al. (20). Yeast RNA (Sigma) was used as a negative control. The size of the protected products was determined from a sequencing ladder run alongside the samples. DNase I Protection Assays-Protection assays were carried out as described (21), using purified Sp1 protein (Promega). An XmaI fragment from DSC40 (for Fig. 5, panel A) and a Bsp120I fragment from DSC68 and DSC69 (for Fig. 5, panels B and C) were radiolabeled using [γ-32P]ATP and T4 polynucleotide kinase. These labeled fragments were digested with PvuII (for a fragment derived from DSC40) or BglI (for fragments derived from DSC68 or DSC69) to obtain fragments exclusively labeled at the 5′-end of the bottom strand. A primer (5′-CCGGGTCGGGATGGAACG-3′) starting at the 5′-end of the XmaI fragment was used to generate a parallel sequencing ladder. Site-directed Mutagenesis-DSC40Δ9-17 was mutagenized using the U. S. E. Mutagenesis Kit from Pharmacia with the following oligonucleotides: 5′-TTTCCCTGGCTCCGAACCAGGC-3′ and 5′-CACCAGAGGCCCCGAACTGCTTCCCGCGTTT-3′, which are the Sp1 mutagenic oligonucleotides, and 5′-CATCGGTCGATGGATCCAGAC-3′, which was used to mutate the SalI site in the vector (mutated nucleotides are underlined). The SalI site change was used to enrich and screen for mutated plasmids. Mutagenesis was verified by sequence analysis. Cell Culture Methods-All tissue culture reagents were purchased from Life Technologies, Inc., except where indicated. NIH3T3 cells were grown in Dulbecco's modified Eagle's medium containing 10% calf serum, 100 units/ml penicillin G, and 100 µg/ml streptomycin. For serum stimulation experiments, cells were serum starved by growing them for 72 h in Dulbecco's modified Eagle's medium containing 0.5% calf serum. Cells were stimulated with growth medium containing 10 ng/ml basic fibroblast growth factor and 1 ng/ml epidermal growth factor. Cells were transfected using LipofectAMINE (Life Technologies, Inc.) according to manufacturer's instructions. For transient assays 1.5 µg of the luciferase-expressing plasmids were cotransfected with 0.5 µg of pCMV/SEAP. Conditioned media were collected 24 h after transfection and assayed for SEAP using the Phospha-Light kit (Tropix). Cells were harvested and assayed for luciferase activity using the Luciferase Assay System (Promega). Transient transfections were repeated at least two independent times in duplicates. Stable cell lines were generated by cotransfecting 1.8 µg of a luciferase-expressing construct and 0.2 µg of pBR-β-Puromycin, a plasmid expressing the puromycin resistance gene. Resistant cells were selected with puromycin (2 µg/ml, Sigma) 24 h after transfection. After 5-10 days of selection, single resistant colonies were isolated and expanded. Cloning Genomic Upstream Sequences of the Human CDK2 Gene-Inverse PCR (17) was employed to clone a 4.2-kb genomic DNA fragment upstream of the known cDNA sequence corresponding to a BglII-BclI fragment (Fig. 1, fragment A).
Sequence analysis revealed that this fragment contains an intron, located just upstream of the first BglII site in the coding region. A 2.4-kb AvrII-PstI subclone (fragment B), containing only exon sequences upstream of the translational start site, was used for subsequent transcription analyses. The cloned inverse PCR product was shown to be part of the CDK2 gene by identifying the sequence junction of the published CDK2 cDNA and the upstream sequence. In addition, in situ hybridization (data not shown) mapped the cloned CDK2 upstream sequence to the chromosomal locus 12q13, corresponding to a previously published report (22). Sequencing the Upstream Region of the Human CDK2 Gene and Mapping the 5′-End of the mRNA-The nucleotide sequence 1.1 kb upstream of the ATG translation initiation codon was determined from both strands as shown in Fig. 2. To determine the transcription start site, a ribonuclease protection assay was performed using RNA isolated from human umbilical vein cells and an in vitro transcribed RNA probe extending from the PstI site just upstream of the translation initiation codon to the Eco47III site 250 bp upstream (Fig. 3). Five transcription start sites were identified. The most downstream site was designated as nucleotide +1 in Fig. 2. Three transcription start sites are clustered at positions +1, −5, and −9. Two additional sites are located at positions −33 and −71. The −33 site maps close to the 5′-end of the longest published human CDK2 cDNA (23), which was isolated from HeLa cells, whereas the −9 start site maps close to the 5′-end of a different cDNA clone (15) that was also isolated from HeLa cells. No consensus TATA box was identified upstream to any of the transcription start sites nor was one identified anywhere else in the sequenced upstream region. Putative transcription factor binding sites were identified using manual scanning and the TFD data base (24) in conjunction with the MacPattern program (25) (see Fig. 2). Two consensus Sp1 elements were found to lie in proximity to the two upstream transcription start sites. Sp1 is known to guide initiation in some TATA-less promoters (26), and so we hypothesized that these elements might be functionally important in the human CDK2 gene. A binding site for YY1, another factor also known to determine the sites of initiation in some TATA-less promoters (26), was also identified upstream of the three transcription start sites clustered at positions +1, −5, and −9. Other putative transcription factor binding sites identified in the upstream region of the human CDK2 gene include multiple AP-2, E2F, and p53 binding sites as well as single sites for AP-1, c-myb, oct, HiNF-A, and NFY/CTF, a CCAAT box binding factor. Functional Analysis of the Basal Activity of the CDK2 Promoter-The CDK2 promoter region was analyzed by transient transfection of luciferase reporter gene constructs into NIH3T3 cells. Luciferase activity was corrected for differences in transfection efficiency by cotransfection with a plasmid expressing the SEAP gene driven by the CMV promoter (see "Materials and Methods"). Deletion analysis of the CDK2 promoter (Fig. 4) revealed that a 210-bp fragment containing 100 bp upstream of the most proximal transcription start site (DSC40Δ9-17) contains the required elements for approximately 70% of the promoter activity that is generated by a full-length construct (DSC37).
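To make the correction just described concrete, here is a minimal sketch of the reporter-assay arithmetic: raw luciferase counts are divided by the SEAP activity of the cotransfected control, and each construct is then expressed as a percentage of the full-length DSC37 value. The numeric readings below are invented placeholders chosen only to illustrate the calculation; they are not data from this study.

```python
# SEAP-normalized luciferase activity, expressed as a percentage of the
# full-length construct (DSC37), as used for the deletion analysis in Fig. 4.
# All raw readings here are made-up placeholders for illustration only.

raw_readings = {
    # construct: (luciferase activity, SEAP activity of cotransfected control)
    "DSC37":      (120000.0, 1.00),
    "DSC40Δ9-17": ( 90000.0, 1.05),
    "DSC67":      ( 30000.0, 0.95),
    "pGL2-Basic": (  3000.0, 1.00),
}

def percent_of_reference(readings, reference="DSC37"):
    """Return luciferase/SEAP ratios rescaled to the reference construct (%)."""
    normalized = {name: luc / seap for name, (luc, seap) in readings.items()}
    ref = normalized[reference]
    return {name: 100.0 * value / ref for name, value in normalized.items()}

for name, pct in percent_of_reference(raw_readings).items():
    print(f"{name:12s} {pct:6.1f}% of DSC37")
```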
A further deletion to nucleotide −15 (DSC40Δ10-10) reduced the activity to less than 3% of that generated by the full-length construct (DSC37). This activity was similar to the background activity generated by the vector alone (pGL2-Basic). An internal deletion that removes all the transcriptional start sites (DSC51) also had no promoter activity above background as did a reporter construct containing the full-length sequence in the reverse orientation (DSC36). DNase I protection analysis of the region contained in DSC40 using HeLa nuclear extracts identified two protected regions, each of which contained Sp1-like binding sequences (data not shown). To test the importance of these Sp1 sites, a conserved GG sequence in each of the Sp1 sites was independently mutated to AA. A DNase I protection assay (Fig. 5A) demonstrated that the wild type DNA fragment was protected by purified Sp1 protein from DNase I digestion at two distinct regions (I and II); these regions were the same as those detected with HeLa nuclear extract. Mutating each of these Sp1 sites individually resulted in loss of protection in the mutated Sp1 site but did not affect Sp1 binding to the remaining wild type Sp1 site (Fig. 5, B and C). Transient transfection of NIH3T3 cells with constructs analogous to DSC40Δ9-17, except containing mutations in either one of the Sp1 sites (Fig. 6, DSC67 and DSC68), generated luciferase activity that was less than 25% of the activity generated by the full-length CDK2 promoter construct (DSC37), or approximately 30% of that generated by DSC40Δ9-17. These results indicate that each of these Sp1 sites contributes to the observed transcription activity. Moreover, it also suggests that these sites act synergistically to generate transcriptional activity that is greater than the sum of activities each site can generate by itself. Analysis of Serum-induced Activity of CDK2 Promoter-To analyze the serum inducibility of the cloned CDK2 promoter region, stably transfected NIH3T3 cell lines expressing luciferase from CDK2 promoter deletion derivatives were established. Cells were serum starved for 72 h prior to being exposed to serum and growth factors (Fig. 7). Luciferase activity increased 3-fold 12 h after serum stimulation of cells stably expressing the full-length construct (DSC37). In contrast, no induction by serum was observed with cells stably expressing DSC40, which exhibits full basal activity, but is about 1.7 kb shorter than DSC37. The same results were obtained with two independently isolated cell lines stably expressing the same constructs (data not shown). CDK2 Gene Structure-PCR amplifications with pairs of primers that overlap most of the published human CDK2 cDNA sequence were used to determine the intron/exon junctions of this gene. Human genomic DNA and total human RNA were used as amplification substrates. Fragments from DNA amplification that were larger than the respective fragments amplified from RNA were cloned. Each cloned fragment was sequenced from both ends until an exon/intron boundary was reached. Seven exons were identified, and their positions are indicated in Fig. 8.

FIG. 4. Deletion analysis of CDK2 promoter activity. Luciferase constructs are depicted on the left side. The 5′-end of each construct relative to the proximal transcription start site is indicated. Luciferase activity was divided by the activity of the cotransfected SEAP-expressing construct to correct for differences in transfection efficiency (see "Materials and Methods") and is expressed as a percentage of DSC37 activity. Bars represent standard errors of the mean.

DISCUSSION In this study, we have cloned and sequenced the upstream region of the human CDK2 gene and determined the transcription start sites for this gene by ribonuclease protection assay. Five transcription start sites spread over a 72-bp region were identified (Fig. 3). No consensus TATA box was identified in the entire upstream sequence. Thus, this promoter falls into the category of TATA-less promoters similar to all other cell cycle genes analyzed to date including: cdc2 (27), cyclin A (28), cyclin D1 (29,30), cyclin D2 and cyclin D3 (31), as well as Xenopus laevis cdk2 (32). A YY1 box, which in some TATA-less promoters is responsible for determining the transcription start site (26), is present just upstream from the three start sites located at positions +1, −5, and −9. An Sp1 site was identified upstream of each of the remaining transcription start sites (−33 and −71), suggesting that these Sp1 regions may be responsible for localizing the start of transcription at these sites (26). Other putative transcription factor binding sites were also identified (Fig. 2). The presence of a c-Myb binding site is intriguing since c-myb was shown to transactivate the closely related human CDC2 gene (33). This could indicate that a transcription factor that positively regulates a G2 event, like CDC2 induction, might also regulate a G1 event such as CDK2 induction. Two putative p53 binding sites were identified within 200 bp of the 3′ or most proximal transcription start site. p53 is a known tumor suppressor gene that has been postulated to be involved in induction of cell cycle arrest. It is perplexing to assume that p53 would induce CDK2 since this induction would most likely result in an accelerated cell cycle rather than a cell cycle arrest. Interestingly, a p53 site was also identified in the promoter region of the cyclin A gene, a regulatory partner of CDK2 (28). Further investigation of the possible involvement of p53 in CDK2 regulation is required. Functional analysis of the promoter region revealed that a construct (DSC40Δ9-17) that contains DNA extending from nucleotide −100 to +108 is sufficient for strong basal promoter activity (about 30% of the SV40 early promoter, data not shown). DNase I footprint analysis of the CDK2 upstream region with HeLa nuclear extract (data not shown) revealed only two protected regions, both of which are Sp1-like sites, contained within the DSC40Δ9-17 clone. Further analysis indicated that these sites in fact bind purified Sp1 protein (Fig. 5). Furthermore, individually mutating each site abolished the DNase I protection only in the mutated site but not in the adjacent wild type site. This information indicates that Sp1 can bind to each of these sites in an independent fashion. The transcriptional activity of reporter gene constructs equivalent to DSC40Δ9-17, but with individually mutated Sp1 sites (Fig. 6, DSC67 and DSC68), was less than 25% of the activity generated by the full-length CDK2 promoter construct (DSC37) and approximately 30% of that generated by DSC40Δ9-17. This suggests that each of these Sp1 sites contributes to the basal activity of the CDK2 promoter. It also suggests that their combined effect is synergistic, since both sites generate transcriptional activity that is greater than the sum of the activities generated by each site independently.
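The Sp1 elements discussed above were mapped experimentally by footprinting and mutagenesis; the binding-site search itself was done with the TFD database and MacPattern. Purely as an illustration of that kind of sequence scan, the short sketch below looks for the commonly cited Sp1 GC-box core (GGGCGG, or CCGCCC on the opposite strand) in an arbitrary example sequence. The example sequence and the naive string match are ours; they are not the CDK2 promoter and do not reproduce the footprinted sites.

```python
# Toy motif scan in the spirit of the TFD/MacPattern search described in the
# text. GGGCGG / CCGCCC is the textbook Sp1 GC-box core; the input sequence
# below is an arbitrary example, not the CDK2 promoter.

SP1_CORES = ("GGGCGG", "CCGCCC")

def find_sp1_cores(seq):
    """Return (position, motif) pairs for every GC-box core found in seq."""
    seq = seq.upper()
    hits = []
    for motif in SP1_CORES:
        start = seq.find(motif)
        while start != -1:
            hits.append((start, motif))
            start = seq.find(motif, start + 1)
    return sorted(hits)

example_promoter = "ATGCCCGCCCTTAAGGGCGGTACGTAGGGCGGATCGCGCCCGCC"
for pos, motif in find_sp1_cores(example_promoter):
    print(f"GC-box core {motif} at position {pos}")
```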
The level of CDK2 mRNA induction following stimulation of quiescent cells was reported to be 2-3-fold (11-13). Our attempts to detect this low level of serum-induced promoter activity using a transient transfection cell culture system produced ambiguous results, presumably because there is plasmid loss over time, and this loss masks the serum-induced promoter activity of the retained plasmids. To overcome this problem, NIH3T3 cell lines stably expressing the luciferase enzyme driven by various CDK2 promoter constructs were established. The basal luciferase activity of the cell lines in this study was comparable; however, only cells which contained about 2.4 kb of the upstream region of the CDK2 gene (DSC37) were induced by serum. The level of induction following serum starvation and maximal growth factor stimulation was about 3-fold, as was expected from the published literature and our own unpublished observations. The next longest deletion derivative, DSC40, which expressed full basal promoter activity in a transient transfection assay, was not induced by serum and growth factor stimulation. These data suggest that the information needed for serum induction resides in a ~1.7-kb segment, which starts 682 nucleotides upstream of the most proximal transcription start site. We found that the human CDK2 gene is made up of at least seven exons. However, our characterization would not detect exons located 3′ to position 1295 in the published cDNA sequence (15). All the intervening sequences that were identified are contained within the coding region of the gene. Exon I is longer in CDK2 than in the characterized CDC2 genes (27,34) and is conserved in X. laevis cdk2 (32). Other differences between the CDK2 and the CDC2 gene structure include two additional introns located at amino acids 105 and 196 of the human CDK2 gene that are not present in the Schizosaccharomyces pombe CDC2 gene. The CDK2 gene structure and sequence information published here may be useful for designing primers to investigate possible CDK2 gene mutations and rearrangements. Although CDK2 has not been implicated in oncogenic transformation, one of its regulatory partners, cyclin A, has been implicated in human hepatocellular carcinoma (35), and its other regulatory partner, cyclin E, has been shown to accelerate G1 progression if overexpressed (36). It is thus plausible to assume that CDK2 mutations might play a role in malignancy and may prove worthwhile targets for exploration of genetic instability in tumors. In summary, the elements required for basal expression and serum induction of the human CDK2 promoter were localized to a ~2.4-kb fragment. Basal level expression of the CDK2 promoter is fully contained within 290 bp upstream of the most proximal transcription start site (DSC40Δ6-3), and approximately 70% of the activity can be generated by a 200-bp fragment containing only 100 bp upstream of the most proximal transcription start site. Two Sp1 DNA binding sites identified in this region synergistically contribute most of the basal promoter activity of this region. The elements required for serum inducibility lie about 700 bp further upstream and are contained in a ~1.7-kb fragment. Multiple sites with homology to known transcription factor binding sites are located in the promoter region of the human CDK2 gene. Further analysis of these sites and their corresponding transcription factors is necessary for a more complete understanding of the transcriptional regulation of this gene. FIG. 8.
Exon map of the human CDK2 gene. Boxes correspond in length to the exon size. The end of exon VII was not determined. Nucleotides are numbered relative to the most proximal transcription initiation site. Below the map, the exon/intron boundaries are aligned with each other and with the consensus splice acceptor and splice donor sequences. 100% conserved nucleotides are underlined.
2018-04-03T03:39:38.945Z
1996-05-24T00:00:00.000
{ "year": 1996, "sha1": "2e35101bbd5ab206faa02f69b51b77929633ab39", "oa_license": "CCBY", "oa_url": "http://www.jbc.org/content/271/21/12199.full.pdf", "oa_status": "HYBRID", "pdf_src": "Highwire", "pdf_hash": "23da76e89ec641bd50322e5fecced49660e71421", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
253026686
pes2o/s2orc
v3-fos-license
Relationship between Aesthetic Experience and Anxiety Level of Middle School Students . In the contemporary society of aesthetic daily life, people not only examine commodities, environment and life with a stricter "aesthetic" vision, but also observe others and themselves, so that modern people's attention to appearance is beyond the past, which brings a series of problems of aesthetic value orientation. As middle school students, they are in the peak of psychological fluctuation in life, so the relationship between aesthetic experience and anxiety level is more worthy of study. This paper takes middle school students as the main research object to explore the relationship between their aesthetic experience and anxiety level. Introduction In recent years, hot words such as "facial anxiety" have repeatedly appeared in various forms on social media, which has a negative impact on the aesthetic experience of high school students [1]. "Appearance anxiety" refers to a lack of confidence in one's appearance in an environment that amplifies "appearance level." In the current discourse logic of "appearance level is justice", "appearance anxiety" has gradually derived into a social focus problem [2], which makes high school students in the new era unconsciously fall into the aesthetic dilemma. In 2021, Chinese media surveyed 2,063 high school students nationwide on the topic of "face anxiety." The results showed that 59.03% of high school students had some degree of facial anxiety, and the proportion of severe facial anxiety in boys was higher than that in girls, while the proportion of moderate anxiety in girls was higher than that in boys [3]. Psychological studies have found that women suffer more from facial anxiety than men. At the same time, face anxiety presents a certain degree of age structure stratification, with face anxiety problems often in the young people in their 20s. As the silhouette of high school students' aesthetic perception, the "appearance anxiety" caused by excessive attention to "aesthetic experience" reflects the existing problems of contemporary high school students' aesthetic perception [4]. Therefore, it is worth studying how to solve the problems mentioned above. The Anxiety Caused by Aesthetic Experience Excessive pursuit of "perfectionism", falling into the dilemma of inferiority. High school students are in the self-concept of the conscious development period, by "Yan Under the influence of the public opinion of "value first" and the pursuit of the image of "high appearance level", high school students form and strengthen the self-cognition of "I am not good-looking" [5]. The distorted selfcognition leads high school students to excessively pursue the perfection of their appearance. On this basis, a major contradiction between "idealized self" and "realistic self" has gradually emerged in high school students' appearance cognition. "Idealized self" is the subjective shaping of selfappearance, and "realistic self" is the objective presentation of self-appearance. The collision between ideal and reality makes high school students have a sense of difference. High school students can't accept this difference and then accept themselves negatively. They take the "ideal standard" as the basis to judge their "defects", and then fall into "appearance anxiety". 
The excessive pursuit of self-appearance makes these high school students' psychological quality level lower, lack of confidence, eager to be recognized by others but afraid of not being respected by others, and ultimately suffer from inner inferiority, causing irreparable psychological damage [6]. Desire to achieve "ideal self", blurred self-positioning. When high school students have a sense of difference between "ideal self" and "real self", they will further form the motivation to eliminate this sense of difference, which motivates high school students to embark on the road of "redemption" in pursuit of "ideal self". More and more high school students choose to have plastic surgery in order to have the so-called "standard beauty", not hesitate to take pain and risks, and even carry heavy debts because of their lack of spending power; because of the extreme aesthetics of "either thin or die" Impulsive consumption when applying for a card, using unscientific methods such as excessive fitness to manage your body, causing serious physical and mental health problems; trying your best to cover up your "defects" with "popular makeup", deviating from your true self, and unable to face yourself after makeup removal [7]. The external image that high school students aspire to have is influenced by external pressure, and it is a manifestation of looking at themselves with the standards of others. The excessively "idealized" aesthetic pursuit causes high school students to view themselves with a deformed aesthetic mentality, blurring their true and objective positioning, and finally falling into a whirlpool of anxiety. Widely recognized "appearance level first", narrow aesthetic vision. The aesthetic orientation of "appearance level first" and the worship of simple aesthetic standards lead to the deviation of the aesthetic emphasis of high school students, who pay too much attention to the external, ignore the improvement of their intrinsic value, and superficial aesthetic pursuit. Encouraged by this trend, "appearance level is justice" has become the life creed and unremitting pursuit of countless high school students. This superficial pursuit gradually makes high school students fall into self-trouble and bear the pressure of imprisonment. For example, high school students gain self-satisfaction by "upgrading" their appearance. Post the beautified photos on social platforms to seek recognition from others with the "perfect persona" presented; When I see a topic, picture or video about my appearance level, I subconsciously compare it with others, which leads me to think that I am inferior to others, and I fall into a spiral of jealousy. Finally, the theory of "only appearance level" has impacted the mainstream aesthetic discourse, diluted or even eliminated the positive aesthetic value orientation of high school students, narrowed the aesthetic vision of high school students, resulting in a one-sided view of social problems, and lost the ability to objectively judge problems. Analysis of the Causes of Anxiety Caused by Aesthetic Experience Maslow believes that the need for beauty has to do with the image of the person, and that beauty helps people become healthier. However, the cultural values of "appearance first" are making high school students lose in the enjoyment of sensory beauty gradually, and the paralysis of utilitarian beauty and superficial beauty is eroding the aesthetics of high school students. 
As Baudrillard puts it in The Consumer Society: "Just as the Wolf child becomes a Wolf by living with a Wolf, so we ourselves gradually become a functional human being." Only by revealing the driving force behind "appearance anxiety" can we seek targeted relief measures and help high school students get out of the aesthetic dilemma [8]. On the one hand, in the "virtual world", "filter" survival has become the norm. When the interactive social behaviors such as like, comment and forwarding become the necessary social activities for high school students, they also put the "beautiful photos" carefully polished by the beauty camera into the "information cocoon room" where high school students live. High school students gain a virtual sense of security in the social world beautified by the filter [9]. Recording the real self is no longer the only way for high school students to find, but more is to be immersed in the "virtual world" to restore the "ideal self" carnival. The sensory image of being white, beautiful, handsome and fresh has been constructed as a kind of body political discourse. High school students compare the polished and modified "beautiful photos" with their true self-image, and take the "appearance level" as the standard to judge others. As a result, the filtered way of social survival blurs the boundary between reality and ideal. When high school students jump out of the "safety barrier" created by them and find that there is a difference between the real self and the ideal self and the "perfect image" presented by Internet bloggers, "appearance anxiety" is easily amplified instantly. On the other hand, high school students lack of experience, aesthetic cognition is intuitive and simple. High school students have a keen perception of beauty [10]. After the baptism of heavy learning tasks, high school students are eager to show their personality through dressing up, which shows that they are eager to develop themselves, realize their own value and pursue aesthetic appreciation, which is worthy of praise. But as the matrix of mass culture, all kinds of the aesthetic elements based on the public identity has had a great impact on their aesthetic perception, lack of experience, high school students lack of discrimination of all kinds of aesthetic elements, vulnerable to the negative information, some high school students too much pay attention to their appearance and neglect the internal value of self to ascend, for "what is real beauty?" Lack of rational and profound thinking, without their own depth of thought, aesthetic and weathering phenomenon is prominent, constantly produce "if only my nose is higher" and other problems. The aesthetic perception of high school students is gradually deformed, and finally the sensibility oversteps the rationality, and it is difficult to resist the tide of "appearance anxiety", so that the active aesthetic cognitive activities are gradually added with invisible "shackles". In addition, unscrupulous media weakens positive aesthetic values and promotes the spread of deformed aesthetics. The new media era has provided various media with a more free and broad space for development. However, in the face of fierce market competition, some unscrupulous media are shifting their positioning from authoritative information propagandists to appearance in order to win more clicks. Supremacy inducers make the high school students exposed to them the victims of misguided aesthetics. 
For example, they tend to attract people's attention with titles with words like "little freshman" or "first love face", and when this aesthetic is over-hyped as an ideal aesthetic standard, high school students gradually become mentally and physically Passive acceptance begins to fall into self-doubt, resulting in "appearance anxiety", which makes the aesthetic standards of high school students present a trend of homogenization. At the same time, unscrupulous media is also based on big data analysis, and continuously pushes sensory "perfect images" to users through fragmented forms. Seeing the long legs and handsome appearance on the screen is easy to compare with oneself, and use this as a standard to judge self "defects", blindly call one's unique characteristics "defects", and finally sensibility surpasses rationality, which makes some high school students downplay or even dissolve the aesthetic education they originally received. Conclusion From the perspective of the main body of education, first, we must correctly view the emerging aesthetics of high school students. High school students, as a new force of the new generation, have some emerging concepts that are worthy of recognition. This is not only an inevitable need for the continuous updating and development of aesthetic education, but also an inexhaustible driving force for the continuous updating and development of aesthetic education. Second, it is necessary to reasonably meet the aesthetic needs of high school students. "Everyone has the love of beauty", but the "principle of moderation" is very important in aesthetic education. Excessive pursuit of beauty will generalize the subculture of "beauty is justice" and make the pursuit of beauty a superficial and vulgar behavior. Only by following the "principle of moderation" and reasonably satisfying the aesthetic needs of high school students can aesthetic education be properly played. effect. Third, we must consciously improve our aesthetic pursuits and update our aesthetic concepts in a timely manner. Mastering a solid theoretical basis is the basic requirement of the subject of education. Educators should take a solid theoretical basis as the premise to carry out aesthetic education activities and continuously improve their aesthetic quality. It is necessary to keep pace with the times and actively absorb the mainstream aesthetic culture of the society, and update the aesthetic concept in time. Use timely aesthetics and their own personality charm to guide high school students to pay attention to both internal and external cultivation, use the correct "beauty" to understand the world, help high school students to enrich their spiritual world, and improve the effectiveness of aesthetic education. From the perspective of educational objects, first, high school students should view themselves from a comprehensive and objective perspective and correct their aesthetic attitude. The deviation of self-awareness is one of the causes of "appearance anxiety". High school students need to be based on themselves and face up to themselves comprehensively. Inadequate correction, continuous improvement. Second, high school students should improve their aesthetic discrimination ability and treat aesthetic trends rationally. The current aesthetic trend reflects the narrowness of aesthetic orientation. 
High school students need to be treated rationally and not be led by vulgar and one-sided trends; on the basis of fully absorbing the aesthetic education content that conforms to the mainstream values of society, improve the ability of aesthetic discrimination, to overcome blindly following trends and comparisons, and adhere to the correct aesthetic orientation. In short, high school students should enhance their self-confidence and understand that appearance does not mean everything. Elegant demeanor, confident conversation, and rich knowledge content are far more important than appearance, and can bring a lasting sense of happiness and achievement. Don't fall into the predicament of inferiority because of a temporary deformed aesthetic value orientation and prevent yourself from moving forward footsteps.
2022-10-21T15:11:30.016Z
2022-10-18T00:00:00.000
{ "year": 2022, "sha1": "0f3a9a2a1e0120ceb5c1691ef771cc42f6fbc33e", "oa_license": "CCBY", "oa_url": "https://bcpublication.org/index.php/SSH/article/download/2419/2395", "oa_status": "HYBRID", "pdf_src": "Anansi", "pdf_hash": "78cb7a0f44ac716fe2ea242399a63ada809d7759", "s2fieldsofstudy": [ "Psychology", "Education" ], "extfieldsofstudy": [] }
17645430
pes2o/s2orc
v3-fos-license
Extensive unidirectional introgression between two salamander lineages of ancient divergence and its evolutionary implications Hybridization and introgression, contrary to previous beliefs, are now considered to be widespread processes even among animal species. Nonetheless, the range of their possible outcomes and roles in moulding biodiversity patterns are still far from being fully appraised. Here we investigated the pattern of hybridization and introgression between Salamandrina perspicillata and S. terdigitata, two salamanders endemic to the Italian peninsula. Using a set of diagnostic or differentiated genetic markers (9 nuclear and 1 mitochondrial), we documented extensive unidirectional introgression of S. terdigitata alleles into the S. perspicillata gene pool in central Italy, indicating that barriers against hybridization were permeable when they came into secondary contact, and despite their ancient divergence. Nonetheless, purebred S. terdigitata, as well as F1, F2, and backcrosses were not found within the hybrid zone. Moreover, Bayesian analyses of population structure identified admixed populations belonging to a differentiated gene pool with respect to both parental populations. Overall, the observed genetic structure, together with their geographic pattern of distribution, suggests that Salamandrina populations in central Italy could have entered a distinct evolutionary pathway. How far they have gone along this pathway will deserve future investigation. Introgression, which is the invasion of foreign genetic material into a genome 16 , is a frequent albeit long underappreciated 21 outcome of hybridization, and a main driver for many of its major evolutionary consequences 4,12 . The nature and extent of introgression has been shown to substantially vary across interacting lineages. Introgression can be neutral or adaptive, geographically restricted to the contact zone or widespread, and transient or persistent; this process can reverse or accelerate the course of speciation events, and can also drive populations to follow independent evolutionary pathways 12,21 . Hybridization and introgression have been classically studied in natural hybrid zones where two previously allopatric lineages come into secondary contact 22 , although substantial contributions in this direction have recently emerged from the study of biological invasions 23 . A major realization coming from the extensive studies of hybrid zones is that substantial differences often occur in the extent of introgression among genomic regions. First, organelle and nuclear genomes commonly differ in the extent of introgression 24 , often as a consequence of 'Haldane's rule', which predict that heterogametic offspring (either XY or ZW) are less viable 25,26 . Second, there is growing evidence that variation occurs in the extent of introgression even among distinct regions of the nuclear genomes 27 . These observations provided strong support for the genic view of speciation 28 , which suggests that reproductive isolation is a consequence of the divergent selection acting on a few loci that are important for fitness and adaptation ('barrier loci') rather than to incompatibility between interacting genomes as a whole 12,27,28 . 
Accordingly, most of the genome can undergo substantial introgression, whereas genomic regions that are responsible or linked to reproductive or adaptive differences will experience little introgression and will show substantial divergence among hybridizing lineages (the so called 'islands of genomic divergence' 12,27 ; but see also 29 ). In turn, this view explains why several species remain cohesive evolutionary units while showing clear evidence of extensive introgression among them. In this paper, we investigated the hybridization dynamics between the only two extant species of salamanders of the genus Salamandrina, the Northern spectacled salamander S. perspicillata and the Southern spectacled salamander S. terdigitata. These species are endemic to the northern and central portion, and to the southern portion, of the Italian peninsula, respectively. There is a limited area of close contiguity in-between (see Figure 1). They are lungless and are mainly found in the undergrowth close to various slow running or small lenthic water bodies at altitudes ranging 200-700 m above sea level along the Apennine chain and some adjacent hilly areas 30 . Once regarded as a single species (S. terdigitata), they were recently identified as two deeply divergent species based on both nuclear and mitochondrial genetic data [31][32][33] and their divergence time was estimated to largely predate the onset of the Pleistocene epoch 32,33 (but see also 34 ). Recently, a small area was found where both the highly divergent mitochondrial DNA (mtDNA) lineages come into sympatry 35 , and a preliminary analysis of the hybridization was carried out (based on one sampling location, one mtDNA marker, and one diagnostic and one uninformative nuclear markers) 36 . The purpose of this study was to determine the extent, outcome, and consequences of hybridization between S. perspicillata and S. terdigitata. We characterized patterns of hybridization and introgression within the putative area of secondary contact, using 9 nuclear (allozyme loci) and 1 mitochondrial markers, whose patterns of variation among allopatric populations have been assessed in previous studies 31,32 . Results Allele frequencies at the 9 loci analysed in the 10 population samples are shown in Table 1. Alleles previously identified to be of diagnostic value for S. perspicillata were not observed in samples 9 and 10, whereas they were frequently prevalent among samples 1-8. Among the five fully diagnostic loci (Pgm-2, Gapdh, Aat-1 PepD-2, and Mdhp-1), alleles of both species were observed at high frequencies in samples 4-8, although S. persipicillata alleles were prevalent in most cases. Significant deviations from the expected Hardy-Weinberg (HW) and genotypic linkage equilibria (at the 5% nominal level) were not observed. Estimates of population genetic variability are presented in Table 1. At all the estimated parameters (H E , H O , and A R ) samples 4-8 were those showing higher values, whereas samples 9 and 10 were by far the least variable. The analysis of the genotypes simulated using HYBRIDLAB indicated 0.90 as the threshold value maximizing the confidence in identifying an individual as admixed using STRUCTURE, and 0.80 as the best threshold in assigning an individual to a hybrid class using NEWHYBRID. In both cases, the model performance was 0.95 (see Figure 2). 
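For readers less familiar with the variability estimates reported in Table 1, the sketch below shows how observed heterozygosity (the proportion of heterozygous individuals) and Nei's unbiased expected heterozygosity are computed for a single locus from codominant genotypes. The genotypes are invented for illustration, not Salamandrina data, and allelic richness (which requires rarefaction to a common sample size) is not covered.

```python
# Per-locus observed (HO) and expected (HE) heterozygosity from codominant
# genotypes, the kind of summary reported in Table 1. The genotypes below are
# an invented example, not data from this study.
from collections import Counter

genotypes = [("a", "a"), ("a", "b"), ("a", "b"), ("b", "b"),
             ("a", "a"), ("a", "b"), ("b", "b"), ("a", "b")]

def observed_heterozygosity(genos):
    """Proportion of individuals carrying two different alleles."""
    return sum(1 for a1, a2 in genos if a1 != a2) / len(genos)

def expected_heterozygosity(genos, unbiased=True):
    """Nei's gene diversity, 1 - sum(p_i^2), with a small-sample correction."""
    n_copies = 2 * len(genos)
    counts = Counter(allele for pair in genos for allele in pair)
    he = 1.0 - sum((c / n_copies) ** 2 for c in counts.values())
    if unbiased:
        he *= n_copies / (n_copies - 1)   # Nei (1978) correction
    return he

print(f"HO = {observed_heterozygosity(genotypes):.3f}")
print(f"HE = {expected_heterozygosity(genotypes):.3f}")
```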
The Bayesian clustering algorithm implemented in STRUCTURE suggested K = 3 as the best clustering option for our data when the highest ln-probability is used as optimality criterion, while K = 2 was indicated as the best option under Evanno's ΔK optimality criterion (see Supplementary Information). As shown in Figure 3A, with K = 2 all individuals from samples 9 and 10 were attributed to the southern species S. terdigitata, whereas individuals from samples 1-8 were attributed to S. perspicillata. Nevertheless, among samples 4-8 several individuals were identified as significantly admixed with S. terdigitata, with the average Q-value of these samples ranging between 0.8 and 0.15 (see Table 1). When K = 3 was used, samples 9-10 and 1-3 were still assigned to separate clusters, with no evidence of admixture, whereas individuals from samples 4-8 were all assigned to a third cluster (Figure 3B). Among the latter, all but one individual from samples 5, 6, and 7 appeared admixed with the northern cluster (see also Table 1), whereas no such evidence was observed for individuals from sample 8 and for all but two from sample 4. When populations assigned to each cluster were grouped, the expected heterozygosity (HE) of the northern (samples 1-3), central (samples 4-8) and southern (samples 9-10) groups. The analysis of individual genotypic data using NEWHYBRID indicated, with high confidence, that most of the individuals analysed were 'pure' S. perspicillata or S. terdigitata; it also showed the lack of F1 and F2 hybrids or backcrosses with S. terdigitata (Figure 3C). However, several individuals from samples 4-8, although showing higher probability of assignment to pure S. perspicillata, did not reach the threshold value (0.80) suggested for an assignment with high confidence. Finally, the analysis of the mtDNA restriction profiles revealed the occurrence of only two composite haplotypes among the studied individuals, one specific to S. perspicillata and one specific to S. terdigitata (Figure 3D and Table 1). The former was the only one observed among individuals from samples 1-6, and it was also carried by 2 individuals from sample 8, whereas the latter was found fixed in samples 7, 9 and 10 and was prevalent in sample 8 (19 out of 21 individuals analysed). Discussion Our analysis of the putative secondary contact zone between S. terdigitata and S. perspicillata highlights the importance of using multiple diagnostic markers in resolving evolutionary processes within such zones, even when studying deeply and anciently divergent species, such as those investigated in the present study. Indeed, the use of the sole mitochondrial genome (as is usual in many barcoding efforts; see e.g. 37) would have misleadingly suggested a more northern location for the contact zone, and the assignment of most of the individuals from the southern edge of the range of S. perspicillata to S. terdigitata. Furthermore, the high frequencies of several S. terdigitata diagnostic alleles within putatively S. perspicillata samples, together with the occurrence of both HW and linkage equilibria within samples, could have resulted in the misassignment of several individuals to pure S. terdigitata or to a recent hybrid progeny, if each locus had been analysed separately. This could explain, at least in part, the striking discordance between our results, and those of previous reports 36 that suggested extensive ongoing gene exchange and syntopy between both species based on mtDNA, and a single nuclear locus of diagnostic value.
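To make explicit how the simulation-derived threshold translates into the admixture calls summarized above, the sketch below flags an individual as admixed when its STRUCTURE membership coefficient (Q) for its own species' cluster falls below 0.90, the value estimated from the HYBRIDLAB simulations; this is our reading of the procedure, and the Q-values shown are invented examples rather than output from the study.

```python
# Illustrative admixture call from two-cluster (K = 2) STRUCTURE output,
# using the 0.90 threshold estimated from simulated genotypes. The individual
# IDs and Q-values are invented examples, not results from this study.

Q_THRESHOLD = 0.90

individuals = {
    # id: (Q for S. perspicillata cluster, Q for S. terdigitata cluster)
    "pop3_01": (0.99, 0.01),
    "pop5_07": (0.62, 0.38),
    "pop8_04": (0.83, 0.17),
    "pop9_11": (0.02, 0.98),
}

def classify(q_persp, q_terd, threshold=Q_THRESHOLD):
    """Call an individual pure for either species or admixed."""
    if q_persp >= threshold:
        return "pure S. perspicillata"
    if q_terd >= threshold:
        return "pure S. terdigitata"
    return "admixed"

for ind, (qp, qt) in individuals.items():
    print(f"{ind}: Q_persp = {qp:.2f} -> {classify(qp, qt)}")
```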
Our data provided evidence of extensive, unidirectional introgression of the southern species into the northern one. Nevertheless, they supported neither current syntopy between the species nor ongoing gene exchange, suggesting a more complicated evolutionary scenario for the interaction between the two spectacled salamanders than previously thought.
Diagnostic alleles of S. terdigitata included in the present study were observed to various extents within the southern S. perspicillata populations, whereas the opposite was never observed. Such a pattern of extensive and asymmetric allele sharing among species could be the outcome either of secondary contact between the species, followed by hybridization and introgression of one species' genes into the other species' gene pool, or of incomplete sorting of ancestral alleles. Nevertheless, we can be fairly confident in discarding incomplete lineage sorting and favouring secondary contact as the causal process behind the observed pattern. Indeed, allele sharing was geographically limited to the area of contiguity between the two species, rather than randomly distributed across the species' ranges as expected in the case of incomplete lineage sorting 38. The co-presence of both S. perspicillata and S. terdigitata diagnostic alleles within the southern S. perspicillata populations thus serves as evidence of extensive and unidirectional introgression of the southern species' alleles into the northern one's genome.
The frequency of introgressed S. terdigitata alleles within S. perspicillata populations varied conspicuously, ranging from <5% to 55%, with several cases at ≥40% (see Table 1). Additionally, the geographic area where they are found is relatively wide considering the limited dispersal abilities of the studied species 30. This is an interesting pattern, suggesting that the various alleles experienced distinct selection regimes once within the heterospecific genomic background 27. Nonetheless, this hypothesis deserves further investigation. At the moment, it should be considered speculative for at least two reasons: a) the scattered distribution of both species in the study area and the low number of samples investigated prevented us from comparing clines at each locus with the average extent of introgression (i.e., from undertaking a formal genetic cline analysis 39,40) and from studying the role of selection in shaping variation at the studied loci; b) without a more extensive sampling allowing us to draw geographic trends, we could not discriminate between selection and genetic drift acting on single populations and loci in driving the variation of exogenous allele frequencies over space and time following secondary contact.
Despite these limitations of our data (which are mostly due to the actual species' distributions, see below), the observed frequencies of several introgressed alleles, as well as the average contribution of S. terdigitata to the genetic diversity of admixed S. perspicillata populations, are conspicuous. They appear to exceed what would usually be expected for two anciently divergent species whose barriers to gene exchange were almost complete at the time of secondary contact 41. Our data suggest that such barriers were still leaky and largely permeable when the species came into contact, and may have been completed only later. We found no evidence for the occurrence of pure S. terdigitata individuals or of recent hybrids (two generations) within southern S. perspicillata populations (see Figure 3C).
At least three scenarios could account for such an absence: 1) pure S. terdigitata individuals are present but rare within the study area, and our dataset lacks the resolution to reliably identify recent hybrids; 2) our sampling area did not cover the core of the hybrid zone, where both pure parentals and hybrids occur; or 3) pure S. terdigitata are no longer present within the range of S. perspicillata. The analysis of model performance based on simulated hybrid genotypes, using both STRUCTURE and NEWHYBRID, indicated that our data provide the resolution necessary to identify recent hybrids, leading us to discount the first scenario as the least probable. The key question for disentangling scenarios 2 and 3 is whether the hybrid zone (and its centre) could extend farther to the south and east of our sample 8, towards the area where pure S. terdigitata populations thrive. Unfortunately, this question does not have a simple answer. Currently, the distribution of Salamandrina populations is not continuous along the north-west to south-east axis, and the geographic gap between samples 8 and 9 largely reflects a discontinuity in the distribution of the populations. However, this area has been intensively modified by past and present anthropogenic activities, and it is not implausible that the structure of the hybrid zone has been modified as well. Therefore, while scenario 3 appears the most plausible at present, we cannot exclude that scenario 2 occurred at some point in the past.
Completion of reproductive isolation barriers driven by the production of unfit hybrids (i.e., by reinforcement 9,18), followed by the exclusion of S. terdigitata from the sympatric area (i.e., scenario 3) on the one hand, and the recent disappearance of the part of the hybrid zone where the two species met and mated (i.e., scenario 2) on the other hand, could be tested experimentally. Indeed, under scenario 3, experimental investigations of mate choice using S. perspicillata and S. terdigitata individuals from the study area should reveal the occurrence of pre-zygotic barriers (through a strong deficit of heterospecific matings), whereas such barriers should not intervene when individuals from largely allopatric populations are tested 42,43. The same outcome would not be expected if scenario 2 were true. Therefore, such an experimental design, based on the theory of reinforcement of reproductive isolation, would use the expected geographic structure of reinforcing selection and its outcomes to generate testable hypotheses 12, and to shed light on the history of interactions between the two Salamandrina species. We are currently exploring this research direction.
On a distinct but related note, results obtained with STRUCTURE using K = 3 as the clustering option identified the southernmost populations of S. perspicillata as belonging to a gene pool differentiated from those located farther to the north. Interestingly, under this clustering option, the contribution of S. terdigitata to the gene pool of the southern S. perspicillata populations appeared negligible, whereas gene flow from the northern populations was indicated. This pattern supports the idea that the S. terdigitata alleles have become integral to the gene pool of the southern S. perspicillata populations, and it also suggests that these populations could have achieved some degree of evolutionary 'independence' from conspecific populations to the north.
Further support for this interpretation comes from the lack of HW and linkage disequilibria, as well as from previous findings 35 indicating that the southern S. perspicillata populations belong to a distinct, albeit weakly differentiated, mtDNA haplogroup. To what extent this group of populations has entered its own evolutionary pathway will certainly deserve future investigation based on a deeper genome scan, as well as on a thorough analysis of variation in ecological and morphological traits, both among and within populations. Nonetheless, it is worth noting that the genetic pattern described above shows striking parallels with patterns previously used in support of the hybrid origin of recently originated lineages 44-46. Regardless of how far these populations have progressed along this pathway, the observed genetic structure, together with their patchy distribution within a heterogeneous and recently human-disturbed area, renders Salamandrina populations in central Italy particularly well suited to investigations of introgressive hybridization, especially in terms of its range of outcomes 47. In times of resurgent and growing interest in the role of reticulate evolution in shaping current patterns of biodiversity, these populations appear to offer intriguing opportunities for future insights.
Methods
Sampling and laboratory procedures. Population samples were collected at 10 sites (157 individuals) from the area of close contiguity between the ranges of Salamandrina perspicillata and S. terdigitata. The geographic locations of the population samples and the sample sizes are shown in Table 2 and Figure 1. For each individual analysed, a tissue sample was obtained through tail-clipping, and the individual was released at its collection site. Tissue samples were then transported to the laboratory and stored at -80°C. Sampling activities and the tail-clipping procedure for tissue collection were approved by the Italian Ministry of Environment (permit number: DPN-2009-0026530).
Standard horizontal starch gel (10%) electrophoresis was conducted to screen for variation at nine allozyme loci previously identified as showing diagnostic or differentiated electrophoretic patterns between the two species 31. Visualization and allele-calling procedures were carried out following previously published protocols 31. Genomic DNA was extracted using the cetyltrimethyl ammonium bromide (CTAB) procedure 48. A fragment of the mitochondrial DNA (mtDNA) gene encoding cytochrome b was amplified through polymerase chain reaction (PCR) and sequenced. The PCR mixture and cycling conditions strictly followed 32. PCR products of two individuals per population sample were purified and sequenced by Macrogen Inc. (www.macrogen.com). These sequences were then checked and aligned using the software GeneStudio Pro and used to identify two restriction endonucleases of diagnostic value between S. terdigitata and S. perspicillata. Candidate restriction enzymes were further assessed for their diagnostic value using previously published sequences of both species available in the GenBank database. The enzymes SspI and AluI were selected for the assessment of restriction fragment length polymorphisms (RFLP) among all the individuals used in the present study. For this purpose, 10 µl of each PCR product was digested overnight with five units of enzyme, following the manufacturer's instructions (Promega Corporation).
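The in-silico step of selecting endonucleases whose cut patterns differ between the two species' cytochrome b sequences can be illustrated with a short sketch based on Biopython's Restriction module. The two sequences below are invented stand-ins used only to show the screening logic; the actual screening relied on the published Salamandrina sequences retrieved from GenBank.

```python
from Bio.Seq import Seq
from Bio.Restriction import SspI, AluI

# Invented cytochrome-b fragments standing in for the two species.
cytb_perspicillata = Seq("ATGACCAATATTCGAAAAAGGTCACCCACTA")
cytb_terdigitata   = Seq("ATGACCAATACTCGAAAAAGCTAGCCCACTA")

for enzyme in (SspI, AluI):
    cuts_north = enzyme.search(cytb_perspicillata)  # list of cut positions
    cuts_south = enzyme.search(cytb_terdigitata)
    diagnostic = len(cuts_north) != len(cuts_south)
    print(f"{enzyme}: north {cuts_north}, south {cuts_south} -> "
          f"{'diagnostic' if diagnostic else 'not diagnostic'}")
```

In this toy example SspI cuts only the first fragment and AluI only the second, so both enzymes would be flagged as producing species-specific restriction profiles; with real sequences, any enzyme recognizing a fixed inter-specific difference would be a candidate.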
Restriction fragments were separated on 3% agarose gels, stained with GelRed (Biotium), and visualized under UV light.
Data analysis. Basic descriptive statistics for the allozyme dataset were computed using the software packages FSTAT 2.9.3 and BIOSYS-2. These included population allele frequencies, observed (H_O) and unbiased expected (H_E) heterozygosity, and allelic richness (A_R, an estimate of the average number of alleles per locus corrected for sample size). FSTAT was also used to test for departures from the expected Hardy-Weinberg (HW) equilibrium and from genotypic linkage equilibrium between pairs of loci in each population sample.
The occurrence and extent of admixture between the two spectacled salamanders within their putative area of secondary contact were analysed using two methodological approaches: the Bayesian clustering algorithm implemented in the software STRUCTURE 2.3.4 49, and the Bayesian analysis of genotypic classes (pure, F1, F2, and backcrosses) as implemented in NEWHYBRID 50. The analysis with STRUCTURE was conducted using a model allowing for admixture and independent allele frequencies among populations. Given the main purpose of this study, we were particularly interested in a model with two clusters (i.e., K = 2), to assess the occurrence of individuals of mixed ancestry in our sample. Nevertheless, to explore the occurrence of further population structure within both species, we ran STRUCTURE with K ranging from 1 to 10, and we analysed the results both with K = 2 and with the best clustering option suggested by the post-processing of the STRUCTURE output. For each value of K we carried out 10 replicates of the analysis, with 100,000 Markov chain Monte Carlo (MCMC) iterations following a burn-in of 50,000 iterations, as these settings guaranteed convergence of the Markov chains to a stationary distribution (see Supplementary Information). The results of the STRUCTURE analysis were summarized and analysed using STRUCTURE HARVESTER 51. The assignment of individuals to the various hybrid classes with NEWHYBRID was performed by computing 100,000 MCMC iterations following 20,000 iterations discarded as burn-in (after checking for stationarity). The analysis was run with two population samples (which received q > 0.95 during previous STRUCTURE runs) pre-assigned as parental (i.e., the 'z' option in use), following the suggestions of 50.
The best threshold values for confidently identifying an individual as admixed in the STRUCTURE analyses, or for assigning it to a particular hybrid class in NEWHYBRID, were identified using the approach of 52. We selected 30 individuals receiving q > 0.95 during preliminary STRUCTURE runs and used them to simulate 100 individuals of each hybrid class (pure S. terdigitata, pure S. perspicillata, F1, F2, backcross to S. terdigitata, and backcross to S. perspicillata) using the program HYBRIDLAB 1.0 53. This program generates hybrid genotypes by randomly sampling alleles at each locus as a function of their respective frequencies, assuming random mating, linkage equilibrium, and marker neutrality; no further parameter settings are allowed by HYBRIDLAB. We repeated this procedure 10 times and ran the analyses with both STRUCTURE and NEWHYBRID using the same settings employed for the real dataset.
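As an illustration of the kind of simulation HYBRIDLAB performs, the sketch below builds multilocus genotypes for pure, F1, F2, and backcross classes by sampling alleles from parental allele-frequency distributions under the same assumptions of random mating, linkage equilibrium, and marker neutrality. The locus names and allele frequencies are illustrative placeholders and are not estimates from the real dataset.

```python
import random

# Placeholder allele frequencies at two loci for each parental gene pool.
FREQS = {
    "perspicillata": {"Pgm-2": {"a": 1.0}, "Gapdh": {"a": 0.95, "b": 0.05}},
    "terdigitata":   {"Pgm-2": {"c": 1.0}, "Gapdh": {"c": 1.0}},
}
LOCI = list(FREQS["perspicillata"])

def draw_allele(freqs):
    """Sample one allele according to its frequency in a parental pool."""
    alleles, weights = zip(*freqs.items())
    return random.choices(alleles, weights=weights)[0]

def pure(species):
    """One 'pure' individual: two independent allele draws per locus."""
    return {loc: (draw_allele(FREQS[species][loc]),
                  draw_allele(FREQS[species][loc])) for loc in LOCI}

def cross(parent_1, parent_2):
    """One offspring: one allele chosen at random from each parent per locus."""
    return {loc: (random.choice(parent_1[loc]),
                  random.choice(parent_2[loc])) for loc in LOCI}

def simulate(n=100):
    """Generate n simulated individuals for each of three hybrid classes."""
    classes = {"F1": [], "F2": [], "backcross_perspicillata": []}
    for _ in range(n):
        f1_a = cross(pure("perspicillata"), pure("terdigitata"))
        f1_b = cross(pure("perspicillata"), pure("terdigitata"))
        classes["F1"].append(f1_a)
        classes["F2"].append(cross(f1_a, f1_b))
        classes["backcross_perspicillata"].append(cross(f1_a, pure("perspicillata")))
    return classes

print(simulate(3)["F1"][0])
```

Backcrosses to S. terdigitata and the two pure classes can be generated in exactly the same way; the simulated genotypes are then analysed with STRUCTURE and NEWHYBRID to calibrate the assignment thresholds.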
Results based on simulated genotypes were used to estimate efficiency (the proportion of individuals in a group that were correctly identified), accuracy (the proportion of an identified group that truly belongs to that category), and performance (the product of efficiency and accuracy, varying from 0 [min] to 1 [max]) of the two methods under the threshold values 0.95, 0.90, 0.85, 0.80, and 0.75. Finally, for each method, the threshold value maximizing the overall performance of the model was retained and used to analyse the real dataset.
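For completeness, a minimal sketch of how efficiency, accuracy, and their product (performance) could be computed for one hybrid class across candidate thresholds is given below. It assumes each simulated individual is stored with its true class and the posterior probability assigned to the class being evaluated; the records listed are invented toy values rather than output from the actual analyses.

```python
def class_performance(records, target_class, threshold):
    """records: list of (true_class, posterior_probability_for_target_class)."""
    assigned = [r for r in records if r[1] >= threshold]
    truly_target = [r for r in records if r[0] == target_class]

    # Efficiency: fraction of true members of the class that are recovered.
    efficiency = (sum(1 for r in truly_target if r[1] >= threshold) / len(truly_target)
                  if truly_target else 0.0)
    # Accuracy: fraction of assigned individuals that truly belong to the class.
    accuracy = (sum(1 for r in assigned if r[0] == target_class) / len(assigned)
                if assigned else 0.0)
    return efficiency, accuracy, efficiency * accuracy

# Invented posterior probabilities for the 'F1' class.
records = [("F1", 0.92), ("F1", 0.85), ("F1", 0.60),
           ("pure_perspicillata", 0.10), ("backcross", 0.83)]

for t in (0.95, 0.90, 0.85, 0.80, 0.75):
    print(t, class_performance(records, "F1", t))
```

The threshold retained for the real data would simply be the one maximizing the third value (performance) across the simulated classes.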